path: root/fs
Commit message | Author | Age | Files | Lines
* Merge branch 'for-linus' of git://git.kernel.dk/linux-block  (Linus Torvalds, 2012-05-19, 3 files, -5/+12)

Pull block layer fixes from Jens Axboe:
 "A few small, but important fixes.  Most of them are marked for stable
  as well:

  - Fix failure to release a semaphore on error path in mtip32xx.
  - Fix crashable condition in bio_get_nr_vecs().
  - Don't mark end-of-disk buffers as mapped, limit it to i_size.
  - Fix for build problem with CONFIG_BLOCK=n on arm at least.
  - Fix for a buffer overflow on UUID partition printing.
  - Trivial removal of unused variables in dac960."

* 'for-linus' of git://git.kernel.dk/linux-block:
  block: fix buffer overflow when printing partition UUIDs
  Fix blkdev.h build errors when BLOCK=n
  bio allocation failure due to bio_get_nr_vecs()
  block: don't mark buffers beyond end of disk as mapped
  mtip32xx: release the semaphore on an error path
  dac960: Remove unused variables from DAC960_CreateProcEntries()

| * bio allocation failure due to bio_get_nr_vecs()  (Bernd Schubert, 2012-05-11, 1 file, -1/+6)

The number returned by bio_get_nr_vecs() is passed down via bio_alloc()
to bvec_alloc_bs(), which fails the bio allocation if
nr_iovecs > BIO_MAX_PAGES.  For the underlying caller this causes an
unexpected bio allocation failure.  Limiting to queue_max_segments() is
not sufficient, as max_segments also might be very large.

  bvec_alloc_bs(gfp_mask, nr_iovecs, ) => NULL when nr_iovecs > BIO_MAX_PAGES
  bio_alloc_bioset(gfp_mask, nr_iovecs, ...)
  bio_alloc(GFP_NOIO, nvecs)
  xfs_alloc_ioend_bio()

Signed-off-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>

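A minimal user-space sketch of the clamping idea behind this fix, not
the kernel patch itself; BIO_MAX_PAGES and the helper name below are
stand-ins for the kernel's own definitions:

    #include <stdio.h>

    /* Stand-in for the kernel's BIO_MAX_PAGES ceiling. */
    #define BIO_MAX_PAGES 256

    /* Hedged sketch: a vec-count hint that can never make the bio
     * allocation fail, by clamping the queue's segment limit. */
    static unsigned int nr_vecs_hint(unsigned int queue_max_segments)
    {
        return queue_max_segments < BIO_MAX_PAGES ?
               queue_max_segments : BIO_MAX_PAGES;
    }

    int main(void)
    {
        /* A queue advertising a huge segment limit still yields a safe hint. */
        printf("hint for max_segments=1024: %u\n", nr_vecs_hint(1024));
        printf("hint for max_segments=128:  %u\n", nr_vecs_hint(128));
        return 0;
    }
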
| * block: don't mark buffers beyond end of disk as mapped  (Jeff Moyer, 2012-05-11, 2 files, -4/+6)

Hi,

We have a bug report open where a squashfs image mounted on ppc64 would
exhibit errors due to trying to read beyond the end of the disk.  It can
easily be reproduced by doing the following:

  [root@ibm-p750e-02-lp3 ~]# ls -l install.img
  -rw-r--r-- 1 root root 142032896 Apr 30 16:46 install.img
  [root@ibm-p750e-02-lp3 ~]# mount -o loop ./install.img /mnt/test
  [root@ibm-p750e-02-lp3 ~]# dd if=/dev/loop0 of=/dev/null
  dd: reading `/dev/loop0': Input/output error
  277376+0 records in
  277376+0 records out
  142016512 bytes (142 MB) copied, 0.9465 s, 150 MB/s

In dmesg, you'll find the following:

  squashfs: version 4.0 (2009/01/31) Phillip Lougher
  [ 43.106012] attempt to access beyond end of device
  [ 43.106029] loop0: rw=0, want=277410, limit=277408
  [ 43.106039] Buffer I/O error on device loop0, logical block 138704
  [ 43.106053] attempt to access beyond end of device
  [ 43.106057] loop0: rw=0, want=277412, limit=277408
  [ 43.106061] Buffer I/O error on device loop0, logical block 138705
  [ 43.106066] attempt to access beyond end of device
  [ 43.106070] loop0: rw=0, want=277414, limit=277408
  [ 43.106073] Buffer I/O error on device loop0, logical block 138706
  [ 43.106078] attempt to access beyond end of device
  [ 43.106081] loop0: rw=0, want=277416, limit=277408
  [ 43.106085] Buffer I/O error on device loop0, logical block 138707
  [ 43.106089] attempt to access beyond end of device
  [ 43.106093] loop0: rw=0, want=277418, limit=277408
  [ 43.106096] Buffer I/O error on device loop0, logical block 138708
  [ 43.106101] attempt to access beyond end of device
  [ 43.106104] loop0: rw=0, want=277420, limit=277408
  [ 43.106108] Buffer I/O error on device loop0, logical block 138709
  [ 43.106112] attempt to access beyond end of device
  [ 43.106116] loop0: rw=0, want=277422, limit=277408
  [ 43.106120] Buffer I/O error on device loop0, logical block 138710
  [ 43.106124] attempt to access beyond end of device
  [ 43.106128] loop0: rw=0, want=277424, limit=277408
  [ 43.106131] Buffer I/O error on device loop0, logical block 138711
  [ 43.106135] attempt to access beyond end of device
  [ 43.106139] loop0: rw=0, want=277426, limit=277408
  [ 43.106143] Buffer I/O error on device loop0, logical block 138712
  [ 43.106147] attempt to access beyond end of device
  [ 43.106151] loop0: rw=0, want=277428, limit=277408
  [ 43.106154] Buffer I/O error on device loop0, logical block 138713
  [ 43.106158] attempt to access beyond end of device
  [ 43.106162] loop0: rw=0, want=277430, limit=277408
  [ 43.106166] attempt to access beyond end of device
  [ 43.106169] loop0: rw=0, want=277432, limit=277408
  ...
  [ 43.106307] attempt to access beyond end of device
  [ 43.106311] loop0: rw=0, want=277470, limit=277408

Squashfs manages to read in the end block(s) of the disk during the
mount operation.  Then, when dd reads the block device, it leads to
block_read_full_page being called with buffers that are beyond end of
disk, but are marked as mapped.  Thus, it would end up submitting read
I/O against them, resulting in the errors mentioned above.  I fixed the
problem by modifying init_page_buffers to only set the buffer mapped if
it fell inside of i_size.

Cheers,
Jeff

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Nick Piggin <npiggin@kernel.dk>

--
Changes from v1->v2: re-used max_block, as suggested by Nick Piggin.

Signed-off-by: Jens Axboe <axboe@kernel.dk>

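The following stand-alone sketch illustrates the shape of the fix rather
than the actual init_page_buffers() change: buffers whose starting block
falls at or beyond i_size are simply not marked mapped, so later reads
will not submit I/O past the end of the disk.  The block size, i_size,
and "mapped" flag here are illustrative stand-ins.

    #include <stdio.h>

    #define NBUF 4

    struct buf { unsigned long long block; int mapped; };

    /* Hedged illustration: mark a page's buffers mapped only while the
     * buffer still starts inside i_size, mirroring the idea of the fix. */
    static void map_page_buffers(struct buf *b, int n,
                                 unsigned long long blocksize,
                                 unsigned long long i_size)
    {
        for (int i = 0; i < n; i++)
            b[i].mapped = (b[i].block * blocksize < i_size);
    }

    int main(void)
    {
        /* 512-byte blocks, a 1536-byte "device": the 4th buffer is past EOF. */
        struct buf b[NBUF] = { {0}, {1}, {2}, {3} };

        map_page_buffers(b, NBUF, 512, 1536);
        for (int i = 0; i < NBUF; i++)
            printf("block %llu mapped=%d\n", b[i].block, b[i].mapped);
        return 0;
    }
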
* | Merge branch 'akpm' (Andrew's patch-bomb)  (Linus Torvalds, 2012-05-18, 1 file, -12/+8)

Merge misc fixes from Andrew Morton.

* emailed from Andrew Morton <akpm@linux-foundation.org>: (4 patches)
  frv: delete incorrect task prototypes causing compile fail
  slub: missing test for partial pages flush work in flush_all()
  fs, proc: fix ABBA deadlock in case of execution attempt of map_files/ entries
  drivers/rtc/rtc-pl031.c: configure correct wday for 2000-01-01

| * | fs, proc: fix ABBA deadlock in case of execution attempt of map_files/ entries  (Cyrill Gorcunov, 2012-05-17, 1 file, -12/+8)

map_files/ entries are never supposed to be executed, but curious minds
might still try to run them, which leads to the following deadlock:

  ======================================================
  [ INFO: possible circular locking dependency detected ]
  3.4.0-rc4-24406-g841e6a6 #121 Not tainted
  -------------------------------------------------------
  bash/1556 is trying to acquire lock:
   (&sb->s_type->i_mutex_key#8){+.+.+.}, at: do_lookup+0x267/0x2b1

  but task is already holding lock:
   (&sig->cred_guard_mutex){+.+.+.}, at: prepare_bprm_creds+0x2d/0x69

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #1 (&sig->cred_guard_mutex){+.+.+.}:
         validate_chain+0x444/0x4f4
         __lock_acquire+0x387/0x3f8
         lock_acquire+0x12b/0x158
         __mutex_lock_common+0x56/0x3a9
         mutex_lock_killable_nested+0x40/0x45
         lock_trace+0x24/0x59
         proc_map_files_lookup+0x5a/0x165
         __lookup_hash+0x52/0x73
         do_lookup+0x276/0x2b1
         walk_component+0x3d/0x114
         do_last+0xfc/0x540
         path_openat+0xd3/0x306
         do_filp_open+0x3d/0x89
         do_sys_open+0x74/0x106
         sys_open+0x21/0x23
         tracesys+0xdd/0xe2

  -> #0 (&sb->s_type->i_mutex_key#8){+.+.+.}:
         check_prev_add+0x6a/0x1ef
         validate_chain+0x444/0x4f4
         __lock_acquire+0x387/0x3f8
         lock_acquire+0x12b/0x158
         __mutex_lock_common+0x56/0x3a9
         mutex_lock_nested+0x40/0x45
         do_lookup+0x267/0x2b1
         walk_component+0x3d/0x114
         link_path_walk+0x1f9/0x48f
         path_openat+0xb6/0x306
         do_filp_open+0x3d/0x89
         open_exec+0x25/0xa0
         do_execve_common+0xea/0x2f9
         do_execve+0x43/0x45
         sys_execve+0x43/0x5a
         stub_execve+0x6c/0xc0

This is because prepare_bprm_creds grabs task->signal->cred_guard_mutex,
and when do_lookup happens we try to grab task->signal->cred_guard_mutex
again in lock_trace.

Fix it by using the plain ptrace_may_access() helper in
proc_map_files_lookup() and proc_map_files_readdir() instead of
lock_trace(); the caller must be granted CAP_SYS_ADMIN anyway.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Reported-by: Sasha Levin <levinsasha928@gmail.com>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Vasiliy Kulikov <segoon@openwall.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | proc: move fd symlink i_mode calculations into tid_fd_revalidate()  (Linus Torvalds, 2012-05-18, 1 file, -29/+14)

Instead of doing the i_mode calculations at proc_fd_instantiate() time,
move them into tid_fd_revalidate(), which is where the other inode state
(notably uid/gid information) is updated too.

Otherwise we'll end up with stale i_mode information if an fd is re-used
while the dentry still hangs around.  Not that anything really *cares*
(symlink permissions don't really matter), but Tetsuo Handa noticed that
the owner read/write bits don't always match the state of the
readability of the file descriptor, and we _used_ to get this right a
long time ago in a galaxy far, far away.

Besides, aside from fixing an ugly detail (that has apparently been this
way since commit 61a28784028e: "proc: Remove the hard coded inode
numbers" in 2006), this removes more lines of code than it adds.  And it
just makes sense to update i_mode in the same place we update i_uid/gid.

Al Viro correctly points out that we could just do the inode fill in the
inode iops ->getattr() function instead.  However, that does require
somewhat more invasive changes, and adds yet *another* lookup of the
file descriptor.  We need to do the revalidate() for other reasons
anyway, and have the file descriptor handy, so we might as well fill in
the information at this point.

Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | Merge git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds, 2012-05-16, 1 file, -1/+2)

Pull CIFS fix from Jeff Layton:

* git://git.samba.org/sfrench/cifs-2.6:
  cifs: fix misspelling of "forcedirectio"

| * | cifs: fix misspelling of "forcedirectio"  (Jeff Layton, 2012-05-16, 1 file, -1/+2)

...and add a "directio" synonym since that's what the manpage has always
advertised.

Acked-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

* | Merge tag 'for-linus-3.4-20120513' of git://git.infradead.org/linux-mtd  (Linus Torvalds, 2012-05-13, 1 file, -1/+1)

Pull three MTD fixes from David Woodhouse:

 - Fix a lock ordering deadlock in JFFS2
 - Fix an oops in the dataflash driver, triggered by a dummy call to
   test whether it has OTP functionality.
 - Fix request_mem_region() failure on amsdelta NAND driver.

* tag 'for-linus-3.4-20120513' of git://git.infradead.org/linux-mtd:
  mtd: ams-delta: fix request_mem_region() failure
  jffs2: Fix lock acquisition order bug in gc path
  mtd: fix oops in dataflash driver

| * jffs2: Fix lock acquisition order bug in gc path  (Josh Cartwright, 2012-05-07, 1 file, -1/+1)

The locking policy is such that the erase_complete_block spinlock is
nested within the alloc_sem mutex.  This fixes a case in which the
acquisition order was erroneously reversed.  This issue was caught by
the following lockdep splat:

  =======================================================
  [ INFO: possible circular locking dependency detected ]
  3.0.5 #1
  -------------------------------------------------------
  jffs2_gcd_mtd6/299 is trying to acquire lock:
   (&c->alloc_sem){+.+.+.}, at: [<c01f7714>] jffs2_garbage_collect_pass+0x314/0x890

  but task is already holding lock:
   (&(&c->erase_completion_lock)->rlock){+.+...}, at: [<c01f7708>] jffs2_garbage_collect_pass+0x308/0x890

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #1 (&(&c->erase_completion_lock)->rlock){+.+...}:
         [<c008bec4>] validate_chain+0xe6c/0x10bc
         [<c008c660>] __lock_acquire+0x54c/0xba4
         [<c008d240>] lock_acquire+0xa4/0x114
         [<c046780c>] _raw_spin_lock+0x3c/0x4c
         [<c01f744c>] jffs2_garbage_collect_pass+0x4c/0x890
         [<c01f937c>] jffs2_garbage_collect_thread+0x1b4/0x1cc
         [<c0071a68>] kthread+0x98/0xa0
         [<c000f264>] kernel_thread_exit+0x0/0x8

  -> #0 (&c->alloc_sem){+.+.+.}:
         [<c008ad2c>] print_circular_bug+0x70/0x2c4
         [<c008c08c>] validate_chain+0x1034/0x10bc
         [<c008c660>] __lock_acquire+0x54c/0xba4
         [<c008d240>] lock_acquire+0xa4/0x114
         [<c0466628>] mutex_lock_nested+0x74/0x33c
         [<c01f7714>] jffs2_garbage_collect_pass+0x314/0x890
         [<c01f937c>] jffs2_garbage_collect_thread+0x1b4/0x1cc
         [<c0071a68>] kthread+0x98/0xa0
         [<c000f264>] kernel_thread_exit+0x0/0x8

  other info that might help us debug this:

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&(&c->erase_completion_lock)->rlock);
                                 lock(&c->alloc_sem);
                                 lock(&(&c->erase_completion_lock)->rlock);
    lock(&c->alloc_sem);

   *** DEADLOCK ***

  1 lock held by jffs2_gcd_mtd6/299:
   #0:  (&(&c->erase_completion_lock)->rlock){+.+...}, at: [<c01f7708>] jffs2_garbage_collect_pass+0x308/0x890

  stack backtrace:
  [<c00155dc>] (unwind_backtrace+0x0/0x100) from [<c0463dc0>] (dump_stack+0x20/0x24)
  [<c0463dc0>] (dump_stack+0x20/0x24) from [<c008ae84>] (print_circular_bug+0x1c8/0x2c4)
  [<c008ae84>] (print_circular_bug+0x1c8/0x2c4) from [<c008c08c>] (validate_chain+0x1034/0x10bc)
  [<c008c08c>] (validate_chain+0x1034/0x10bc) from [<c008c660>] (__lock_acquire+0x54c/0xba4)
  [<c008c660>] (__lock_acquire+0x54c/0xba4) from [<c008d240>] (lock_acquire+0xa4/0x114)
  [<c008d240>] (lock_acquire+0xa4/0x114) from [<c0466628>] (mutex_lock_nested+0x74/0x33c)
  [<c0466628>] (mutex_lock_nested+0x74/0x33c) from [<c01f7714>] (jffs2_garbage_collect_pass+0x314/0x890)
  [<c01f7714>] (jffs2_garbage_collect_pass+0x314/0x890) from [<c01f937c>] (jffs2_garbage_collect_thread+0x1b4/0x1cc)
  [<c01f937c>] (jffs2_garbage_collect_thread+0x1b4/0x1cc) from [<c0071a68>] (kthread+0x98/0xa0)
  [<c0071a68>] (kthread+0x98/0xa0) from [<c000f264>] (kernel_thread_exit+0x0/0x8)

This was introduced in '81cfc9f jffs2: Fix serious write stall due to
erase'.

Cc: stable@kernel.org [2.6.37+]
Signed-off-by: Josh Cartwright <joshc@linux.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

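A small pthread illustration of the general rule this patch restores
(the outer lock is taken before the inner lock on every path); it is not
jffs2 code, and the mutexes below merely stand in for alloc_sem and
erase_completion_lock.

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-ins: 'alloc_sem' is the outer lock, 'erase_completion_lock'
     * the inner one.  Every path must take them in this same order, or
     * two threads can deadlock ABBA-style as in the splat above. */
    static pthread_mutex_t alloc_sem = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t erase_completion_lock = PTHREAD_MUTEX_INITIALIZER;

    static void gc_pass(void)
    {
        pthread_mutex_lock(&alloc_sem);              /* outer first */
        pthread_mutex_lock(&erase_completion_lock);  /* then inner  */
        puts("gc pass: both locks held in the documented order");
        pthread_mutex_unlock(&erase_completion_lock);
        pthread_mutex_unlock(&alloc_sem);
    }

    int main(void)
    {
        gc_pass();
        return 0;
    }
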
* | Merge branch 'akpm' (Andrew's patch-bomb)  (Linus Torvalds, 2012-05-10, 1 file, -2/+10)

Merge misc fixes from Andrew Morton.

* emailed from Andrew Morton <akpm@linux-foundation.org>: (8 patches)
  MAINTAINERS: add maintainer for LED subsystem
  mm: nobootmem: fix sign extend problem in __free_pages_memory()
  drivers/leds: correct __devexit annotations
  memcg: free spare array to avoid memory leak
  namespaces, pid_ns: fix leakage on fork() failure
  hugetlb: prevent BUG_ON in hugetlb_fault() -> hugetlb_cow()
  mm: fix division by 0 in percpu_pagelist_fraction()
  proc/pid/pagemap: correctly report non-present ptes and holes between vmas

| * | proc/pid/pagemap: correctly report non-present ptes and holes between vmas  (Konstantin Khlebnikov, 2012-05-10, 1 file, -2/+10)

Reset the current pagemap entry if the current pte isn't present, or if
the current vma is over.  Otherwise pagemap reports the last entry again
and again.

Non-present pte reporting was broken in commit 092b50bacd1c ("pagemap:
introduce data structure for pagemap entry").

Reporting for holes was broken in commit 5aaabe831eb5 ("pagemap: avoid
splitting thp when reading /proc/pid/pagemap").

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Reported-by: Pavel Emelyanov <xemul@parallels.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | cifs: fix revalidation test in cifs_llseek()  (Dan Carpenter, 2012-05-09, 1 file, -1/+1)

This test is always true, so it means we revalidate the length every
time, which generates more network traffic.  When it is SEEK_SET or
SEEK_CUR, then we don't need to revalidate.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

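As a rough sketch of the logic described above (not the cifs source),
only seeks that depend on the current end of file need a fresh length
from the server; SEEK_SET and SEEK_CUR can be resolved locally.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>   /* SEEK_SET, SEEK_CUR, SEEK_END */

    /* Hedged sketch: decide whether an llseek needs to revalidate the
     * file length over the network.  Only whence values that reference
     * the end of file need the round trip. */
    static bool needs_revalidate(int whence)
    {
        switch (whence) {
        case SEEK_SET:
        case SEEK_CUR:
            return false;   /* resolvable from local state */
        default:
            return true;    /* SEEK_END and friends need the real size */
        }
    }

    int main(void)
    {
        printf("SEEK_SET: %d\n", needs_revalidate(SEEK_SET));
        printf("SEEK_CUR: %d\n", needs_revalidate(SEEK_CUR));
        printf("SEEK_END: %d\n", needs_revalidate(SEEK_END));
        return 0;
    }
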
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs  (Linus Torvalds, 2012-05-06, 8 files, -21/+47)

Pull btrfs fixes from Chris Mason:
 "The big ones here are a memory leak we introduced in rc1, and a
  scheduling while atomic if the transid on disk doesn't match the
  transid we expected.  This happens for corrupt blocks, or out of date
  disks.

  It also fixes up the ioctl definition for our ioctl to resolve logical
  inode numbers.  The __u32 was a merging error and doesn't match what
  we ship in the progs."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  Btrfs: avoid sleeping in verify_parent_transid while atomic
  Btrfs: fix crash in scrub repair code when device is missing
  btrfs: Fix mismatching struct members in ioctl.h
  Btrfs: fix page leak when allocing extent buffers
  Btrfs: Add properly locking around add_root_to_dirty_list

| * | Btrfs: avoid sleeping in verify_parent_transid while atomic  (Chris Mason, 2012-05-06, 5 files, -17/+34)

verify_parent_transid needs to lock the extent range to make sure no IO
is underway, and so it can safely clear the uptodate bits if our checks
fail.

But, a few callers are using it with spinlocks held.  Most of the time,
the generation numbers are going to match, and we don't want to switch
to a blocking lock just for the error case.

This adds an atomic flag to verify_parent_transid, and changes it to
return EAGAIN if it needs to block to properly verify things.

Signed-off-by: Chris Mason <chris.mason@oracle.com>

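The general pattern, sketched here with a pthread mutex rather than
btrfs's extent locks: a caller that cannot sleep tries the lock and gets
EAGAIN back instead of blocking, so it can retry from a context where
sleeping is allowed.  The function and flag names are illustrative only.

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hedged sketch of the calling convention, not btrfs code: 'atomic'
     * callers get -EAGAIN if the lock would block; others just block. */
    static int verify_something(bool atomic)
    {
        if (atomic) {
            if (pthread_mutex_trylock(&range_lock) != 0)
                return -EAGAIN;   /* caller retries once it may sleep */
        } else {
            pthread_mutex_lock(&range_lock);
        }

        /* ... do the verification under the lock ... */
        pthread_mutex_unlock(&range_lock);
        return 0;
    }

    int main(void)
    {
        printf("atomic call returned %d\n", verify_something(true));
        return 0;
    }
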
| * | Btrfs: fix crash in scrub repair code when device is missing  (Stefan Behrens, 2012-05-04, 1 file, -0/+7)

When scrub tries to repair an I/O or checksum error and one of the
devices containing the mirror is missing, it crashes in bio_add_page
because the bdev is a NULL pointer for missing devices.  Fix this.

Reported-by: Marco L. Crociani <marco.crociani@gmail.com>
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | btrfs: Fix mismatching struct members in ioctl.h  (Alexander Block, 2012-05-04, 1 file, -2/+2)

Fix the size members of btrfs_ioctl_ino_path_args and
btrfs_ioctl_logical_ino_args.  The user space btrfs-progs utilities used
__u64 and the kernel headers used __u32 before.

Signed-off-by: Alexander Block <ablock84@googlemail.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | Btrfs: fix page leak when allocing extent buffers  (Josef Bacik, 2012-05-04, 1 file, -2/+2)

If we happen to alloc an extent buffer and then alloc a page and notice
that page is already attached to an extent buffer, we will only unlock
it and free our existing eb.  Any pages currently attached to that eb
will be properly freed, but we don't do the page_cache_release() on the
page where we noticed the other extent buffer, which can cause us to
leak pages and, I hope, explains the weird issues we've been seeing in
this area.  Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | Btrfs: Add properly locking around add_root_to_dirty_list  (Chris Mason, 2012-05-04, 1 file, -0/+2)

add_root_to_dirty_list happens once at the very beginning of the
transaction, but it is still racy.

Signed-off-by: Chris Mason <chris.mason@oracle.com>

* | | hfsplus: Fix potential buffer overflows  (Greg Kroah-Hartman, 2012-05-04, 2 files, -0/+15)

Commit ec81aecb2966 ("hfs: fix a potential buffer overflow") fixed a few
potential buffer overflows in the hfs filesystem.  But as Timo Warns
pointed out, these changes also need to be made on the hfsplus
filesystem.

Reported-by: Timo Warns <warns@pre-sense.de>
Acked-by: WANG Cong <amwang@redhat.com>
Cc: Alexey Khoroshilov <khoroshilov@ispras.ru>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Sage Weil <sage@newdream.net>
Cc: Eugene Teo <eteo@redhat.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Dave Anderson <anderson@redhat.com>
Cc: stable <stable@vger.kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | Merge git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds, 2012-05-04, 5 files, -25/+23)

Pull CIFS fixes from Steve French.

* git://git.samba.org/sfrench/cifs-2.6:
  fs/cifs: fix parsing of dfs referrals
  cifs: make sure we ignore the credentials= and cred= options
  [CIFS] Update cifs version to 1.78
  cifs - check S_AUTOMOUNT in revalidate
  cifs: add missing initialization of server->req_lock
  cifs: don't cap ra_pages at the same level as default_backing_dev_info
  CIFS: Fix indentation in cifs_show_options

| * | | fs/cifs: fix parsing of dfs referrals  (Stefan Metzmacher, 2012-05-03, 1 file, -1/+5)

The problem was that the first referral was parsed more than once and so
the caller tried the same referrals multiple times.

The problem was introduced partly by commit
066ce6899484d9026acd6ba3a8dbbedb33d7ae1b, where
'ref += le16_to_cpu(ref->Size);' got lost, but that was also wrong...

Cc: <stable@vger.kernel.org>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Tested-by: Björn Jacke <bj@sernet.de>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

| * | | cifs: make sure we ignore the credentials= and cred= options  (Jeff Layton, 2012-05-03, 1 file, -0/+2)

Older mount.cifs programs passed this on to the kernel after parsing the
file.  Make sure the kernel ignores that option.

Should fix:

    https://bugzilla.kernel.org/show_bug.cgi?id=43195

Cc: Sachin Prabhu <sprabhu@redhat.com>
Reported-by: Ronald <ronald645@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

| * | | [CIFS] Update cifs version to 1.78  (Steve French, 2012-05-03, 1 file, -1/+1)

Signed-off-by: Steve French <sfrench@us.ibm.com>

| * | | cifs - check S_AUTOMOUNT in revalidate  (Ian Kent, 2012-05-03, 1 file, -5/+12)

When revalidating a dentry, if the inode wasn't known to be a dfs entry
when the dentry was instantiated, such as when created via ->readdir(),
the DCACHE_NEED_AUTOMOUNT flag needs to be set on the dentry in
->d_revalidate().

The false return from cifs_d_revalidate(), due to the inode now being
marked with the S_AUTOMOUNT flag, might not invalidate the dentry if
there is a concurrent unlazy path walk.  This is because the dentry
reference count will be at least 2 in this case, causing d_invalidate()
to return EBUSY.  So the assumption that the dentry will be discarded
and then correctly instantiated via ->lookup() might not hold.

Signed-off-by: Ian Kent <raven@themaw.net>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Cc: Steve French <smfrench@gmail.com>
Cc: linux-cifs@vger.kernel.org
Signed-off-by: Steve French <sfrench@us.ibm.com>

| * | | cifs: add missing initialization of server->req_lock  (Jeff Layton, 2012-05-01, 1 file, -0/+1)

Cc: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

| * | | cifs: don't cap ra_pages at the same level as default_backing_dev_info  (Jeff Layton, 2012-05-01, 1 file, -17/+1)

While testing, I've found that even when we are able to negotiate a much
larger rsize with the server, on-the-wire reads often end up being
capped at 128k because of ra_pages being capped at that level.

Lifting this restriction gave almost a twofold increase in sequential
read performance on my craptactular KVM test rig with a 1M rsize.

I think this is safe since the actual ra_pages that the VM requests is
run through max_sane_readahead() prior to submitting the I/O.  Under
memory pressure we should end up with large readahead requests being
suppressed anyway.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

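Back-of-the-envelope arithmetic for the numbers quoted above, under the
common assumption of 4 KiB pages: a 128 KiB readahead cap is only 32
pages, while a 1 MiB rsize could usefully drive 256-page readahead.

    #include <stdio.h>

    int main(void)
    {
        const unsigned long page_size  = 4096;        /* assumed 4 KiB pages  */
        const unsigned long old_cap    = 128 * 1024;  /* old ra_pages ceiling */
        const unsigned long rsize      = 1024 * 1024; /* negotiated 1M rsize  */

        printf("capped readahead : %lu pages\n", old_cap / page_size);
        printf("rsize-sized RA   : %lu pages\n", rsize / page_size);
        return 0;
    }
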
| * | | CIFS: Fix indentation in cifs_show_options  (Sachin Prabhu, 2012-05-01, 1 file, -1/+1)

Trivial patch which fixes a misplaced tab in cifs_show_options().

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>

* | | | vfs: make word-at-a-time accesses handle a non-existing page  (Linus Torvalds, 2012-05-03, 2 files, -6/+24)

It turns out that there are more cases than CONFIG_DEBUG_PAGEALLOC that
can have holes in the kernel address space: it seems to happen easily
with Xen, and it looks like the AMD gart64 code will also punch holes
dynamically.

Actually hitting that case is still very unlikely, so just do the
access, and take an exception and fix it up for the very unlikely case
of it being a page-crosser with no next page.

And hey, this abstraction might even help other architectures that have
other issues with unaligned word accesses than the possible missing next
page.  IOW, this could do the byte order magic too.

Peter Anvin fixed a thinko in the shifting for the exception case.

Reported-and-tested-by: Jana Saout <jana@saout.de>
Cc: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | | Merge tag 'nfs-for-3.4-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs  (Linus Torvalds, 2012-05-02, 13 files, -139/+267)

Pull NFS client bugfixes from Trond Myklebust:

 - Fixes for the NFSv4 security negotiation
 - Use the correct hostname when mounting from a private namespace
 - NFS net namespace bugfixes for the pipefs filesystem
 - NFSv4 GETACL bugfixes
 - IPv6 bugfix for NFSv4 referrals

* tag 'nfs-for-3.4-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  NFSv4.1: Use the correct hostname in the client identifier string
  SUNRPC: RPC client must use the current utsname hostname string
  NFS: get module in idmap PipeFS notifier callback
  NFS: Remove unused function nfs_lookup_with_sec()
  NFS: Honor the authflavor set in the clone mount data
  NFS: Fix following referral mount points with different security
  NFS: Do secinfo as part of lookup
  NFS: Handle exceptions coming out of nfs4_proc_fs_locations()
  NFS: Fix SECINFO_NO_NAME
  SUNRPC: traverse clients tree on PipeFS event
  SUNRPC: set per-net PipeFS superblock before notification
  SUNRPC: skip clients with program without PipeFS entries
  SUNRPC: skip dead but not buried clients on PipeFS events
  Avoid beyond bounds copy while caching ACL
  Avoid reading past buffer when calling GETACL
  fix page number calculation bug for block layout decode buffer
  NFSv4.1 fix page number calculation bug for filelayout decode buffers
  pnfs-obj: Remove unused variable from objlayout_get_deviceinfo()
  nfs4: fix referrals on mounts that use IPv6 addrs

| * | | NFSv4.1: Use the correct hostname in the client identifier string  (Trond Myklebust, 2012-04-30, 1 file, -3/+2)

We need to use the hostname of the process that created the nfs_client.
That hostname is now stored in the rpc_client->cl_nodename.

Also remove the utsname()->domainname component.  There is no reason to
include the NIS/YP domainname in a client identifier string.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: get module in idmap PipeFS notifier callback  (Stanislav Kinsbursky, 2012-04-28, 1 file, -0/+4)

This is a bug fix.  The notifier callback is called from the SUNRPC
module, so before dereferencing the NFS module we have to make sure that
it's alive.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Remove unused function nfs_lookup_with_sec()  (Bryan Schumaker, 2012-04-27, 1 file, -62/+0)

This fixes a compiler warning.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Honor the authflavor set in the clone mount data  (Bryan Schumaker, 2012-04-27, 4 files, -7/+8)

The authflavor is set in an nfs_clone_mount structure and passed to the
xdev_mount() functions where it was promptly ignored.  Instead, use it
to initialize an rpc_clnt for the cloned server.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Fix following referral mount points with different security  (Bryan Schumaker, 2012-04-27, 5 files, -26/+72)

I create a new proc_lookup_mountpoint() to use when submounting an
NFS v4 share.  This function returns an rpc_clnt to use for performing
an fs_locations() call on a referral's mountpoint.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Do secinfo as part of lookup  (Bryan Schumaker, 2012-04-27, 5 files, -20/+103)

Whenever lookup sees wrongsec, do a secinfo and retry the lookup to find
attributes of the file or directory, such as "is this a referral
mountpoint?".  This also allows me to remove handling of
-NFS4ERR_WRONGSEC as part of getattr xdr decoding.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Handle exceptions coming out of nfs4_proc_fs_locations()  (Bryan Schumaker, 2012-04-27, 1 file, -1/+14)

We don't want to return -NFS4ERR_WRONGSEC to the VFS because it could
cause the kernel to oops.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Fix SECINFO_NO_NAME  (Bryan Schumaker, 2012-04-27, 1 file, -5/+19)

I was using the same decoder function for SECINFO and SECINFO_NO_NAME,
so it was returning an error when it tried to decode an
OP_SECINFO_NO_NAME header as OP_SECINFO.

Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | Avoid beyond bounds copy while caching ACL  (Sachin Prabhu, 2012-04-27, 2 files, -8/+6)

When attempting to cache ACLs returned from the server, if the bitmap
size + the ACL size is greater than a PAGE_SIZE but the ACL size itself
is smaller than a PAGE_SIZE, we can read past the buffer page boundary.

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reported-by: Jian Li <jiali@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | Avoid reading past buffer when calling GETACL  (Sachin Prabhu, 2012-04-27, 2 files, -13/+21)

Bug noticed in commit bf118a342f10dafe44b14451a1392c3254629a1f.

When calling GETACL, if the size of the bitmap array, the length
attribute and the acl returned by the server is greater than the
allocated buffer (args.acl_len), we can Oops with a General Protection
fault at _copy_from_pages() when we attempt to read past the pages
allocated.

This patch allocates an extra PAGE for the bitmap and checks to see that
the bitmap + attribute_length + ACLs don't exceed the buffer space
allocated to it.

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reported-by: Jian Li <jiali@redhat.com>
[Trond: Fixed a size_t vs unsigned int printk() warning]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

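A stand-alone sketch of the kind of bounds check described above, with
invented field names rather than the NFS xdr ones: refuse the copy when
the bitmap, the length attribute and the ACL body together would run
past the space actually allocated for the reply.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hedged illustration: does the whole GETACL reply fit in buf_len? */
    static bool getacl_reply_fits(size_t bitmap_len, size_t attr_len,
                                  size_t acl_len, size_t buf_len)
    {
        /* Guard the additions against overflow before comparing. */
        if (bitmap_len > buf_len || attr_len > buf_len - bitmap_len)
            return false;
        return acl_len <= buf_len - bitmap_len - attr_len;
    }

    int main(void)
    {
        printf("fits: %d\n", getacl_reply_fits(8, 4, 4000, 4096));
        printf("fits: %d\n", getacl_reply_fits(8, 4, 5000, 4096));
        return 0;
    }
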
| * | | fix page number calculation bug for block layout decode buffer  (Jim Rees, 2012-04-26, 1 file, -1/+3)

Signed-off-by: Jim Rees <rees@umich.edu>
Suggested-by: Andy Adamson <andros@netapp.com>
Suggested-by: Fred Isaman <iisaman@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFSv4.1 fix page number calculation bug for filelayout decode buffers  (Andy Adamson, 2012-04-26, 2 files, -2/+2)

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | pnfs-obj: Remove unused variable from objlayout_get_deviceinfo()  (Sachin Bhamare, 2012-04-26, 1 file, -2/+0)

Local variable 'sb' was not being used in objlayout_get_deviceinfo().

Signed-off-by: Sachin Bhamare <sbhamare@panasas.com>
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | nfs4: fix referrals on mounts that use IPv6 addrs  (Weston Andros Adamson, 2012-04-26, 1 file, -3/+27)

All referrals (IPv4 addr, IPv6 addr, and DNS) are broken on mounts of
IPv6 addresses, because the validation code uses a path that is parsed
from the dev_name ("<server>:<path>") by splitting on the first colon,
and colons are used in IPv6 addrs.

This patch ignores colons within IPv6 addresses that are escaped by '['
and ']'.

Signed-off-by: Weston Andros Adamson <dros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

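A rough user-space parser showing the rule the patch applies: split
"<server>:<path>" on the first colon that is not inside an [IPv6]
bracket pair.  The function is illustrative only, not the kernel's
parser.

    #include <stdio.h>
    #include <string.h>

    /* Hedged sketch: find the server/path separator in "<server>:<path>",
     * skipping any colons that sit inside [ ] (an escaped IPv6 address). */
    static const char *find_path_sep(const char *dev_name)
    {
        int in_brackets = 0;

        for (const char *p = dev_name; *p; p++) {
            if (*p == '[')
                in_brackets = 1;
            else if (*p == ']')
                in_brackets = 0;
            else if (*p == ':' && !in_brackets)
                return p;
        }
        return NULL;
    }

    int main(void)
    {
        const char *dev = "[fe80::1]:/export/home";
        const char *sep = find_path_sep(dev);

        if (sep)
            printf("server = %.*s, path = %s\n",
                   (int)(sep - dev), dev, sep + 1);
        return 0;
    }
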
* | | | nfsd: fix nfs4recover.c printk format warning  (Randy Dunlap, 2012-04-30, 1 file, -1/+1)

Fix printk format warnings -- both items are size_t, so use %zu to print
them.

  fs/nfsd/nfs4recover.c:580:3: warning: format '%lu' expects type 'long unsigned int', but argument 3 has type 'size_t'
  fs/nfsd/nfs4recover.c:580:3: warning: format '%lu' expects type 'long unsigned int', but argument 4 has type 'unsigned int'

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: linux-nfs@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

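For reference, %zu is the conversion that matches size_t in both
printk() and printf(); a tiny user-space example with made-up values:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[64] = "recovery-dir-entry";
        size_t name_len = strlen(name);

        /* %zu matches size_t on every architecture; %lu only happens to
         * work where size_t is an unsigned long, hence the warning. */
        printf("copying %zu bytes into a %zu-byte buffer\n",
               name_len, sizeof(name));
        return 0;
    }
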
* | | | autofs: make the autofsv5 packet file descriptor use a packetized pipe  (Linus Torvalds, 2012-04-29, 3 files, -2/+13)

The autofs packet size has had a very unfortunate size problem on x86:
because the alignment of 'u64' differs in 32-bit and 64-bit modes, and
because the packet data was not 8-byte aligned, the size of the autofsv5
packet structure differed between 32-bit and 64-bit modes despite
looking otherwise identical (300 vs 304 bytes respectively).

We first fixed that up by making the 64-bit compat mode know about this
problem in commit a32744d4abae ("autofs: work around unhappy compat
problem on x86-64"), and that made a 32-bit 'systemd' work happily on a
64-bit kernel because everything then worked the same way as on a 32-bit
kernel.

But it turned out that 'automount' had actually known and worked around
this problem in user space, so fixing the kernel to do the proper 32-bit
compatibility handling actually *broke* 32-bit automount on a 64-bit
kernel, because it knew that the packet sizes were wrong and expected
those incorrect sizes.

As a result, we ended up reverting that compatibility mode fix, and thus
breaking systemd again, in commit fcbf94b9dedd.

With both automount and systemd doing a single read() system call, and
verifying that they get *exactly* the size they expect but using
different sizes, fixing one of them inevitably seemed to break the
other.  At one point, a patch I seriously considered applying from
Michael Tokarev did a "strcmp()" to see if it was automount that was
doing the operation.  Ugly, ugly.

However, a prettier solution exists now thanks to the packetized pipe
mode.  By marking the communication pipe as being packetized (by simply
setting the O_DIRECT flag), we can always just write the bigger packet
size, and if user-space does a smaller read, it will just get that
partial end result and the extra alignment padding will simply be thrown
away.

This makes both automount and systemd happy, since they now get the size
they asked for, and the kernel side of autofs simply no longer needs to
care - it could pad out the packet arbitrarily.

Of course, if there is some *other* user of autofs (please, please,
please tell me it ain't so - and we haven't heard of any) that tries to
read the packets in multiple pieces, that other user will now be broken
- the whole point of the packetized mode is that one system call gets
exactly one packet, and you cannot read a packet in pieces.

Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: David Miller <davem@davemloft.net>
Cc: Ian Kent <raven@themaw.net>
Cc: Thomas Meyer <thomas@m3y3r.de>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | | pipes: add a "packetized pipe" mode for writing  (Linus Torvalds, 2012-04-29, 1 file, -2/+29)

The actual internal pipe implementation is already really about
individual packets (called "pipe buffers"), and this simply exposes that
as a special packetized mode.

When we are in the packetized mode (marked by O_DIRECT as suggested by
Alan Cox), a write() on a pipe will not merge the new data with previous
writes, so each write will get a pipe buffer of its own.  The pipe
buffer is then marked with the PIPE_BUF_FLAG_PACKET flag, which in turn
will tell the reader side to break the read at that boundary (and throw
away any partial packet contents that do not fit in the read buffer).

End result: as long as you do writes less than PIPE_BUF in size (so that
the pipe doesn't have to split them up), you can now treat the pipe as a
packet interface, where each read() system call will read one packet at
a time.  You can just use a sufficiently big read buffer (PIPE_BUF is
sufficient, since bigger than that doesn't guarantee atomicity anyway),
and the return value of the read() will naturally give you the size of
the packet.

NOTE! We do not support zero-sized packets, and zero-sized reads and
writes to a pipe continue to be no-ops.

Also note that big packets will currently be split at write time, but
that the size at which that happens is not really specified (except that
it's bigger than PIPE_BUF).  Currently that limit is the system page
size, but we might want to explicitly support bigger packets some day.

The main user for this is going to be the autofs packet interface,
allowing us to stop having to care so deeply about exact packet sizes
(which have had bugs with 32/64-bit compatibility modes).  But user
space can create packetized pipes with "pipe2(fd, O_DIRECT)", which will
fail with an EINVAL on kernels that do not support this interface.

Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: David Miller <davem@davemloft.net>
Cc: Ian Kent <raven@themaw.net>
Cc: Thomas Meyer <thomas@m3y3r.de>
Cc: stable@kernel.org   # needed for systemd/autofs interaction fix
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

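A small demonstration of the interface described above, assuming a
kernel new enough to accept O_DIRECT in pipe2(); on older kernels the
call fails with EINVAL, which the program reports.  A read that is
smaller than the packet returns just the bytes that fit and discards the
rest of that packet, which is exactly what the autofs case relies on.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char packet[304];   /* e.g. the 64-bit autofs v5 packet size */
        char buf[300];      /* e.g. what a 32-bit daemon asks for    */
        ssize_t n;

        if (pipe2(fds, O_DIRECT) < 0) {
            perror("pipe2(O_DIRECT)");   /* EINVAL: no packetized pipes */
            return 1;
        }

        memset(packet, 'x', sizeof(packet));
        if (write(fds[1], packet, sizeof(packet)) < 0) {
            perror("write");
            return 1;
        }

        /* One read() returns one packet; the 4 trailing bytes that do not
         * fit in buf are thrown away rather than left for the next read. */
        n = read(fds[0], buf, sizeof(buf));
        printf("wrote %zu bytes, read %zd bytes\n", sizeof(packet), n);
        return 0;
    }
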
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs  (Linus Torvalds, 2012-04-28, 15 files, -139/+148)

Pull btrfs fixes from Chris Mason:
 "This has our collection of bug fixes.  I missed the last rc because I
  thought our patches were making NFS crash during my xfs test runs.
  Turns out it was an NFS client bug fixed by someone else while I tried
  to bisect it.

  All of these fixes are small, but some are fairly high impact.  The
  biggest are fixes for our mount -o remount handling, a deadlock due to
  GFP_KERNEL allocations in readdir, and a RAID10 error handling bug.

  This was tested against both 3.3 and Linus' master as of this
  morning."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (26 commits)
  Btrfs: reduce lock contention during extent insertion
  Btrfs: avoid deadlocks from GFP_KERNEL allocations during btrfs_real_readdir
  Btrfs: Fix space checking during fs resize
  Btrfs: fix block_rsv and space_info lock ordering
  Btrfs: Prevent root_list corruption
  Btrfs: fix repair code for RAID10
  Btrfs: do not start delalloc inodes during sync
  Btrfs: fix that check_int_data mount option was ignored
  Btrfs: don't count CRC or header errors twice while scrubbing
  Btrfs: fix btrfs_ioctl_dev_info() crash on missing device
  btrfs: don't return EINTR
  Btrfs: double unlock bug in error handling
  Btrfs: always store the mirror we read the eb from
  fs/btrfs/volumes.c: add missing free_fs_devices
  btrfs: fix early abort in 'remount'
  Btrfs: fix max chunk size check in chunk allocator
  Btrfs: add missing read locks in backref.c
  Btrfs: don't call free_extent_buffer twice in iterate_irefs
  Btrfs: Make free_ipath() deal gracefully with NULL pointers
  Btrfs: avoid possible use-after-free in clear_extent_bit()
  ...

| * | | Btrfs: reduce lock contention during extent insertion  (Chris Mason, 2012-04-27, 1 file, -2/+7)

We're spending huge amounts of time on lock contention during end_io
processing because we unconditionally assume we are overwriting an
existing extent in the file for each IO.

This checks to see if we are outside i_size, and if so, it uses a less
expensive readonly search of the btree to look for existing extents.

Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | Btrfs: avoid deadlocks from GFP_KERNEL allocations during btrfs_real_readdir  (Chris Mason, 2012-04-27, 1 file, -29/+1)

Btrfs has an optimization where it will preallocate dentries during
readdir to fill in enough information to open the inode without an extra
lookup.  But, we're calling d_alloc, which is doing GFP_KERNEL
allocations, and that leads to deadlocks because our readdir code has
tree locks held.

For now, disable this optimization.  We'll fix the gfp mask in the next
merge window.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
