Commit message | Author | Age | Files | Lines
* fs: Make unload_nls() NULL pointer safe (Thomas Gleixner, 2009-09-24, 12 files, -69/+27)
  Most call sites of unload_nls() do: if (nls) unload_nls(nls); Check the pointer inside unload_nls() like we do in kfree() and simplify the call sites. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Steve French <sfrench@us.ibm.com> Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Cc: Petr Vandrovec <vandrove@vc.cvut.cz> Cc: Anton Altaparmakov <aia21@cantab.net> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
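  A minimal sketch of the resulting pattern (not the exact diff): the NULL check moves into the helper, mirroring kfree() semantics, so call sites can drop their guards.

      /* fs/nls/nls_base.c (sketch): tolerate NULL, like kfree() does */
      void unload_nls(struct nls_table *nls)
      {
              if (nls)
                      module_put(nls->owner);
      }

      /* call sites shrink from "if (nls) unload_nls(nls);" to: */
      unload_nls(nls);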
* freeze_bdev: grab active reference to frozen superblocks (Christoph Hellwig, 2009-09-24, 3 files, -61/+120)
  Currently we hold s_umount while a filesystem is frozen, even though we might return to userspace and unlock it from a different process. Instead, grab an active reference to keep the filesystem busy, add an explicit check for frozen filesystems in remount, and reject the remount instead of blocking on s_umount. Add a new get_active_super helper to super.c for use by freeze_bdev that grabs an active reference to a superblock from a given block device. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* freeze_bdev: kill bd_mount_sem (Christoph Hellwig, 2009-09-24, 3 files, -10/+8)
  Now that we have the freeze count there is not much reason for bd_mount_sem anymore. The actual freeze/thaw operations are serialized using the bd_fsfreeze_mutex, and the only other place we take bd_mount_sem is get_sb_bdev, which tries to prevent mounting a filesystem while the block device is frozen. Instead, add a check for bd_fsfreeze_count there and return -EBUSY if a filesystem is frozen. While that is a change in user-visible behaviour, a failing mount is much better in this case than having the mount process stuck uninterruptibly for a long time. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
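  A hedged sketch of the new get_sb_bdev() behaviour described above (field, helper, and label names assumed from the surrounding fs/super.c code):

      mutex_lock(&bdev->bd_fsfreeze_mutex);
      if (bdev->bd_fsfreeze_count > 0) {
              /* the device is frozen: fail the mount instead of sleeping */
              mutex_unlock(&bdev->bd_fsfreeze_mutex);
              error = -EBUSY;
              goto error_bdev;
      }
      s = sget(fs_type, test_bdev_super, set_bdev_super, bdev);
      mutex_unlock(&bdev->bd_fsfreeze_mutex);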
* exofs: remove BKL from super operations (Boaz Harrosh, 2009-09-24, 1 file, -6/+0)
  The two places inside exofs that were taking the BKL were exofs_put_super() (.put_super) and exofs_sync_fs() (.sync_fs, which is also called from .write_super). exofs_sync_fs() is now protected from itself by also taking the sb_lock, and exofs_put_super() directly calls exofs_sync_fs(), so there is no danger between these two either. In any case, nothing dangerous is done inside exofs_sync_fs(), unless there is some subtle race with the actual lifetime of the super_block with regard to .put_super and some other part of the VFS, which is highly unlikely. Signed-off-by: Boaz Harrosh <bharrosh@panasas.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/romfs: correct error-handling code (Julia Lawall, 2009-09-24, 1 file, -1/+1)
  romfs_fill_super() assumes that romfs_iget() returns NULL when it fails. romfs_iget() actually returns ERR_PTR(-ve) in that case... Signed-off-by: Julia Lawall <julia@diku.dk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
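  The fix amounts to testing the ERR_PTR-encoded result with IS_ERR() rather than against NULL; a sketch of the corrected check (the label name is assumed from the surrounding function):

      root = romfs_iget(sb, pos);
      if (IS_ERR(root))        /* was: if (!root), which never triggered */
              goto error;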
* vfs: seq_file: add helpers for data filling (Miklos Szeredi, 2009-09-24, 2 files, -36/+77)
  Add two helpers that allow access to the seq_file's own buffer, but hide the internal details of seq_files. This allows easier implementation of special purpose filling functions. It also cleans up some existing functions which duplicated the seq_file logic. Make these inline functions in seq_file.h, as suggested by Al. Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
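  If memory serves, the two helpers are seq_get_buf() and seq_commit(); a usage sketch (the filler function here is hypothetical):

      size_t size;
      char *buf = seq_get_buf(m, &size);   /* direct pointer into the seq_file buffer */
      int len = my_fill(buf, size);        /* hypothetical special-purpose filler */
      seq_commit(m, len);                  /* len >= 0: bytes used; negative: mark overflow */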
* vfs: remove redundant position check in do_sendfile (Jeff Layton, 2009-09-24, 1 file, -3/+0)
  As Johannes Weiner pointed out, one of the range checks in do_sendfile is redundant and is already checked in rw_verify_area. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Robert Love <rlove@google.com> Cc: Mandeep Singh Baines <msb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: change sb->s_maxbytes to a loff_t (Jeff Layton, 2009-09-24, 2 files, -1/+11)
  sb->s_maxbytes is supposed to indicate the maximum size of a file that can exist on the filesystem. It's declared as an unsigned long long. Even if a filesystem has no inherent limit that prevents it from using every bit in that unsigned long long, it's still problematic to set it to anything larger than MAX_LFS_FILESIZE. There are places in the kernel that cast s_maxbytes to a signed value. If it's set too large then this cast makes it a negative number and generally breaks the comparison. Change s_maxbytes to be loff_t instead. That should help eliminate the temptation to set it too large by making it a signed value. Also, add a warning for a couple of releases to help catch filesystems that set s_maxbytes too large. Eventually we can either convert this to a BUG() or just remove it, in the hope that no one will get it wrong now that it's a signed value. Signed-off-by: Jeff Layton <jlayton@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Robert Love <rlove@google.com> Cc: Mandeep Singh Baines <msb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
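  An illustration (not kernel code) of why an over-large s_maxbytes breaks signed comparisons once the value is treated as loff_t:

      unsigned long long max = ~0ULL;      /* an fs sets "no limit" */
      loff_t pos = 4096;
      if (pos > (loff_t)max)               /* (loff_t)max == -1, so even a valid */
              return -EFBIG;               /* 4 KiB offset looks "too big" here  */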
* vfs: explicitly cast s_maxbytes in fiemap_check_ranges (Jeff Layton, 2009-09-24, 1 file, -4/+5)
  If fiemap_check_ranges is passed a large enough value, then it's possible that the value would be cast to a signed value for comparison against s_maxbytes when we change it to loff_t. Make sure that doesn't happen by explicitly casting s_maxbytes to an unsigned value for the purposes of comparison. Signed-off-by: Jeff Layton <jlayton@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Robert Love <rlove@google.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mandeep Singh Baines <msb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
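  A sketch of the explicit cast, keeping the comparison in unsigned 64-bit space (variable names assumed from the surrounding function):

      u64 maxbytes = (u64) sb->s_maxbytes;   /* compare as unsigned, not loff_t */

      if (start > maxbytes)
              return -EFBIG;
      /* shorten the request rather than letting it run past s_maxbytes */
      if (len > maxbytes || (maxbytes - len) < start)
              *new_len = maxbytes - start;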
* libfs: return error code on failed attr set (Wu Fengguang, 2009-09-24, 1 file, -2/+3)
  Currently all simple_attr.set handlers return 0 on success and negative codes on error. Fix simple_attr_write() to return these error codes. Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Cc: Theodore Ts'o <tytso@mit.edu> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Cc: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
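  A sketch of the fix inside simple_attr_write(): propagate the ->set() return code instead of unconditionally reporting success.

      ret = attr->set(attr->data, val);
      if (ret == 0)
              ret = len;          /* success: report the bytes consumed */
      return ret;                 /* otherwise hand the negative code back to userspace */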
* seq_file: return a negative error code when seq_path_root() fails. (Tetsuo Handa, 2009-09-24, 1 file, -0/+1)
  seq_path_root() returns the return value of a successful __d_path() instead of returning a negative value when mangle_path() fails. This is not a bug so far because nobody uses the return value of seq_path_root(). Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: optimize touch_time() too (Andi Kleen, 2009-09-24, 1 file, -20/+23)
  Do a similar optimization as earlier for touch_atime. Getting the lock in mnt_want_write() is relatively costly, so try all avenues to avoid it first. This patch is careful to still only update inode fields inside the lock region. This didn't show up in benchmarks, but it's easy enough to do. [akpm@linux-foundation.org: fix typo in comment] [hugh.dickins@tiscali.co.uk: fix inverted test of mnt_want_write_file()] Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Valerie Aurora <vaurora@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dave Hansen <haveblue@us.ibm.com> Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: optimization for touch_atime() (Andi Kleen, 2009-09-24, 1 file, -10/+10)
  Some benchmark testing shows touch_atime to be high up in profile logs for IO intensive workloads. Most likely that's due to the lock in mnt_want_write(). Unfortunately touch_atime first takes the lock, and then does all the other tests that could avoid atime updates (like noatime or relatime). Do it the other way round -- first try to avoid the update and only then if that didn't succeed take the lock. That works because none of the atime avoidance tests rely on locking. This also eliminates a goto. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Christoph Hellwig <hch@infradead.org> Reviewed-by: Valerie Aurora <vaurora@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dave Hansen <haveblue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
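  An illustrative, heavily simplified shape of the reordered function: all the cheap "can we skip the update?" tests run before the costly mnt_want_write().

      void touch_atime(struct vfsmount *mnt, struct dentry *dentry)
      {
              struct inode *inode = dentry->d_inode;

              if (inode->i_flags & S_NOATIME)
                      return;
              if (mnt->mnt_flags & MNT_NOATIME)
                      return;
              /* ... relatime and timestamp-granularity tests elided ... */

              if (mnt_want_write(mnt))        /* the expensive step, taken last */
                      return;
              inode->i_atime = current_fs_time(inode->i_sb);
              mark_inode_dirty_sync(inode);
              mnt_drop_write(mnt);
      }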
* vfs: split generic_forget_inode() so that hugetlbfs does not have to copy it (Jan Kara, 2009-09-24, 3 files, -31/+24)
  Hugetlbfs needs to do special things instead of truncate_inode_pages(). Currently, it copies generic_forget_inode() except for the truncate_inode_pages() call, which is asking for trouble (the code there isn't trivial). So create a separate function, generic_detach_inode(), which does all the list magic done in generic_forget_inode(), and call it from hugetlbfs_forget_inode(). Signed-off-by: Jan Kara <jack@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/inode.c: add dev-id and inode number for debugging in init_special_inode() (Manish Katiyar, 2009-09-24, 1 file, -2/+3)
  Add device-id and inode number for better debugging. This was suggested by Andreas in one of the threads (http://article.gmane.org/gmane.comp.file-systems.ext4/12062): "If anyone has a chance, fixing this error message to be not-useless would be good... Including the device name and the inode number would help track down the source of the problem." Signed-off-by: Manish Katiyar <mkatiyar@gmail.com> Cc: Andreas Dilger <adilger@sun.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
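  Presumably the message ends up looking something like this (the exact format string is an assumption):

      printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o) for inode %s:%lu\n",
             mode, inode->i_sb->s_id, inode->i_ino);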
* libfs: make simple_read_from_buffer conventional (Steven Rostedt, 2009-09-24, 1 file, -2/+6)
  Impact: have simple_read_from_buffer conform to standards. It was brought to my attention by Andrew Morton, Theodore Tso, and H. Peter Anvin that a read from userspace should only return -EFAULT if nothing was actually read. Looking at simple_read_from_buffer I noticed that this function does not conform to that rule. This patch fixes that function. [akpm@linux-foundation.org: simplification suggested by hpa] [hpa@zytor.com: fix count==0 handling] Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Theodore Ts'o <tytso@mit.edu> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
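  The core of the convention, as a sketch: copy_to_user() returns the number of bytes it could NOT copy, so -EFAULT is reserved for the nothing-copied case and partial copies are reported as such.

      ret = copy_to_user(to, from + pos, count);
      if (ret == count)
              return -EFAULT;          /* nothing at all was copied */
      count -= ret;                    /* report the partial copy instead */
      *ppos = pos + count;
      return count;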
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus (Linus Torvalds, 2009-09-23, 77 files, -724/+403)
  * git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus: (39 commits)
    cpumask: Move deprecated functions to end of header.
    cpumask: remove unused deprecated functions, avoid accusations of insanity
    cpumask: use new-style cpumask ops in mm/quicklist.
    cpumask: use mm_cpumask() wrapper: x86
    cpumask: use mm_cpumask() wrapper: um
    cpumask: use mm_cpumask() wrapper: mips
    cpumask: use mm_cpumask() wrapper: mn10300
    cpumask: use mm_cpumask() wrapper: m32r
    cpumask: use mm_cpumask() wrapper: arm
    cpumask: Use accessors for cpu_*_mask: um
    cpumask: Use accessors for cpu_*_mask: powerpc
    cpumask: Use accessors for cpu_*_mask: mips
    cpumask: Use accessors for cpu_*_mask: m32r
    cpumask: remove arch_send_call_function_ipi
    cpumask: arch_send_call_function_ipi_mask: s390
    cpumask: arch_send_call_function_ipi_mask: powerpc
    cpumask: arch_send_call_function_ipi_mask: mips
    cpumask: arch_send_call_function_ipi_mask: m32r
    cpumask: arch_send_call_function_ipi_mask: alpha
    cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64
    ...
| * cpumask: Move deprecated functions to end of header. (Rusty Russell, 2009-09-24, 1 file, -341/+252)
  The new ones have pretty kerneldoc. Move the old ones to the end to avoid confusing people. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: benh@kernel.crashing.org
| * cpumask: remove unused deprecated functions, avoid accusations of insanity (Rusty Russell, 2009-09-24, 1 file, -111/+1)
  We're not forcing removal of the old cpu_ functions, but we might as well delete the now-unused ones. Especially CPUMASK_ALLOC and friends. I actually got a phone call (!) from a hacker who thought I had introduced them as the new cpumask API. He seemed bewildered that I had lost all taste. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: benh@kernel.crashing.org
| * cpumask: use new-style cpumask ops in mm/quicklist. (Rusty Russell, 2009-09-24, 1 file, -2/+1)
  This slipped past the previous sweeps. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Christoph Lameter <cl@linux-foundation.org>
| * cpumask: use mm_cpumask() wrapper: x86 (Rusty Russell, 2009-09-24, 4 files, -14/+15)
  Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer). It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
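  The conversions in this series are largely mechanical; a representative before/after sketch:

      cpu_set(cpu, mm->cpu_vm_mask);             /* old: deprecated op, direct field access */
      cpumask_set_cpu(cpu, mm_cpumask(mm));      /* new: pointer-taking op via the accessor */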
| * cpumask: use mm_cpumask() wrapper: um (Rusty Russell, 2009-09-24, 1 file, -2/+2)
  Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: use mm_cpumask() wrapper: mips (Rusty Russell, 2009-09-24, 2 files, -6/+6)
  Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: use mm_cpumask() wrapper: mn10300 (Rusty Russell, 2009-09-24, 1 file, -6/+6)
  Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer). It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Also change the actual arg name here to "mm" (which it is), not "task". Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: use mm_cpumask() wrapper: m32r (Rusty Russell, 2009-09-24, 2 files, -6/+6)
  Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Hirokazu Takata <takata@linux-m32r.org> (fixes)
| * cpumask: use mm_cpumask() wrapper: arm (Rusty Russell, 2009-09-24, 6 files, -20/+21)
  Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: Use accessors for cpu_*_mask: um (Rusty Russell, 2009-09-24, 1 file, -1/+1)
  Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
| * cpumask: Use accessors for cpu_*_mask: powerpc (Rusty Russell, 2009-09-24, 4 files, -13/+13)
  Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
| * cpumask: Use accessors for cpu_*_mask: mips (Rusty Russell, 2009-09-24, 4 files, -8/+8)
  Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
| * cpumask: Use accessors for cpu_*_mask: m32r (Rusty Russell, 2009-09-24, 1 file, -1/+1)
  Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
| * cpumask: remove arch_send_call_function_ipi (Rusty Russell, 2009-09-24, 12 files, -18/+0)
  Now everyone is converted to arch_send_call_function_ipi_mask, remove the shim and the #defines. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: arch_send_call_function_ipi_mask: s390 (Rusty Russell, 2009-09-24, 2 files, -3/+4)
  We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: arch_send_call_function_ipi_mask: powerpc (Rusty Russell, 2009-09-24, 2 files, -3/+4)
  We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: arch_send_call_function_ipi_mask: mips (Rusty Russell, 2009-09-24, 12 files, -20/+25)
  We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. We also take the chance to wean the implementations off the obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer seemed the most natural way to ensure all implementations used for_each_cpu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
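  The shape of the change, sketched with a hypothetical per-CPU sender:

      static void foo_send_ipi_mask(const struct cpumask *mask, unsigned int action)
      {
              unsigned int cpu;

              for_each_cpu(cpu, mask)                     /* replaces for_each_cpu_mask() */
                      foo_send_ipi_single(cpu, action);   /* hypothetical helper */
      }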
| * cpumask: arch_send_call_function_ipi_mask: m32r (Rusty Russell, 2009-09-24, 2 files, -12/+13)
  We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. We also take the chance to wean the implementations off the obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer seemed the most natural way to ensure all implementations used for_each_cpu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: arch_send_call_function_ipi_mask: alpha (Rusty Russell, 2009-09-24, 2 files, -8/+9)
  We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(). We also take the chance to wean the send_ipi_message off the obsolescent for_each_cpu_mask(): making it take a pointer seemed the most natural way to do this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64 (Rusty Russell, 2009-09-24, 1 file, -2/+0)
  These were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: powerpc (Rusty Russell, 2009-09-24, 1 file, -2/+0)
  These were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: s390 (Rusty Russell, 2009-09-24, 1 file, -1/+0)
  These were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: sparc (Rusty Russell, 2009-09-24, 1 file, -2/+0)
  These were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: core (Rusty Russell, 2009-09-24, 1 file, -6/+0)
  These were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove the deprecated smp_call_function_mask() (Rusty Russell, 2009-09-24, 1 file, -11/+0)
  Everyone is now using smp_call_function_many(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * ia64: convert last user of smp_call_function_mask (Rusty Russell, 2009-09-24, 1 file, -1/+1)
  smp_call_function_many is the new version: it takes a pointer. Also, use the mm accessor macro while we're changing this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: don't define set_cpus_allowed() if CONFIG_CPUMASK_OFFSTACK=y (Rusty Russell, 2009-09-24, 1 file, -0/+3)
  You're not supposed to pass cpumasks on the stack in that case. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * ACPI: remove cpumask_t usage (Bjorn Helgaas, 2009-09-24, 1 file, -1/+1)
  set_cpus_allowed() is on the way out; replace it with set_cpus_allowed_ptr(). Reference: http://lkml.org/lkml/2008/11/6/448 Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
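  A generic sketch of the substitution (the actual ACPI call site may differ): the new helper takes a const cpumask pointer instead of a cpumask_t by value.

      set_cpus_allowed(current, cpumask_of_cpu(cpu));   /* old: cpumask_t passed by value */
      set_cpus_allowed_ptr(current, cpumask_of(cpu));   /* new: const struct cpumask *    */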
| * cpumask: Remove mask field from comments (Nobuhiro Iwamatsu, 2009-09-24, 1 file, -1/+0)
  Commit 7be23e278f deleted the mask field from struct irqaction; however, it was not deleted from the comment. Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> CC: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove unused mask field from struct irqaction. (Rusty Russell, 2009-09-24, 1 file, -1/+0)
  Up until 1.1.83, the primitive human tribes used struct sigaction for interrupts. The sa_mask field was overloaded to hold a pointer to the name. When someone created the new "struct irqaction" they carried across the "mask" field as a kind of ancestor worship: the fact that it was unused makes clear its spiritual significance. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove last assignment to mask field of struct irqaction. (Rusty Russell, 2009-09-24, 1 file, -1/+0)
  This snuck in after the patch which removed all the others. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Ingo Molnar <mingo@elte.hu>
| * cpumask: remove unused cpu_mask_all (Rusty Russell, 2009-09-24, 2 files, -8/+0)
  It's only defined for NR_CPUS > BITS_PER_LONG; cpu_all_mask is always defined (and const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
| * cpumask: remove dangerous CPU_MASK_ALL_PTR, &CPU_MASK_ALL.: mips (Rusty Russell, 2009-09-24, 1 file, -1/+1)
  (Thanks to Al Viro for reminding me of this, via Ingo.) CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined like so: #define CPU_MASK_ALL (cpumask_t) { { ... } } Taking the address of such a temporary is questionable at best; unfortunately 321a8e9d (cpumask: add CPU_MASK_ALL_PTR macro) added CPU_MASK_ALL_PTR: #define CPU_MASK_ALL_PTR (&CPU_MASK_ALL) which formalizes this practice. One day gcc could bite us over this usage (though we seem to have gotten away with it so far). So replace everywhere that used &CPU_MASK_ALL or CPU_MASK_ALL_PTR with the modern "cpu_all_mask" (a real struct cpumask *), and remove CPU_MASK_ALL_PTR altogether. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Ingo Molnar <mingo@elte.hu> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Cc: Mike Travis <travis@sgi.com>
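  The replacement pattern, sketched (the surrounding call is illustrative, not the specific mips site):

      set_cpus_allowed_ptr(p, CPU_MASK_ALL_PTR);   /* old: address of a compound-literal temporary */
      set_cpus_allowed_ptr(p, cpu_all_mask);       /* new: a real const struct cpumask * object    */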