path: root/fs
Commit message  Author  Age  Files  Lines
...
| * | ocfs2: Don't repeat ocfs2_xattr_block_find()  Joel Becker  2008-11-10  1  -30/+9
    ocfs2_xattr_block_get() looks up the xattr in a startlingly familiar way;
    it's identical to the function ocfs2_xattr_block_find(). Let's just use the
    latter in the former.
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: Specify appropriate journal access for new xattr buckets.  Joel Becker  2008-11-10  1  -1/+5
    There are a couple of places that get an xattr bucket that may either be
    reading an existing one or allocating a new one. They should specify the
    correct journal access mode for whichever case applies.
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: Check errors from ocfs2_xattr_update_xattr_search()  Joel Becker  2008-11-10  1  -1/+5
    The ocfs2_xattr_update_xattr_search() function can return an error when
    trying to read blocks off of disk. The caller needs to check this error
    before using those (possibly invalid) blocks.
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: Don't return -EFAULT from a corrupt xattr entry.  Joel Becker  2008-11-10  1  -1/+1
    If the xattr disk structures are corrupt, return -EIO, not -EFAULT.
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: Check xattr block signatures properly.  Joel Becker  2008-11-10  2  -22/+19
    The xattr.c code is currently memcmp()ing naked buffer pointers. Create the
    OCFS2_IS_VALID_XATTR_BLOCK() macro to match its peers and use that.

    In addition, failed signature checks were returning -EFAULT, which is
    completely wrong. Return -EIO.

    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: add handler_map array bounds checking  Tiger Yang  2008-11-10  1  -1/+1
    Make the handler_map array as large as the possible value range to avoid a
    fencepost error.
    [ Utilize alternate method -- Joel ]
    Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
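    A minimal, self-contained illustration of the fencepost pattern this commit
    guards against: when valid indices run up to some maximum value, the lookup
    table must be sized to that maximum plus one, or the last index reads past
    the end. The index names and table contents below are made up for the
    sketch, not ocfs2's.

```c
#include <stdio.h>

/* Hypothetical index range; the largest valid index is IDX_MAX. */
enum { IDX_USER = 1, IDX_TRUSTED = 2, IDX_SECURITY = 3, IDX_MAX = 3 };

/* Sized to IDX_MAX + 1 so handlers[IDX_MAX] is a valid slot; sizing it to
 * the number of handlers (IDX_MAX) would be the fencepost error. */
static const char *const handlers[IDX_MAX + 1] = {
        [IDX_USER]     = "user",
        [IDX_TRUSTED]  = "trusted",
        [IDX_SECURITY] = "security",
};

int main(void)
{
        for (int i = IDX_USER; i <= IDX_MAX; i++)
                printf("%d -> %s\n", i, handlers[i]);
        return 0;
}
```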
| * | ocfs2: remove duplicate definition in xattr  Tiger Yang  2008-11-10  1  -9/+2
    include/linux/xattr.h already defines the xattr prefixes, so remove the
    duplicate definitions in xattr.c.
    Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: fix function declaration and definition in xattr  Tiger Yang  2008-11-10  2  -27/+27
    Because we merged the xattr sources into one file, some functions no longer
    belong in the header file.
    Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | ocfs2: fix license in xattr  Tiger Yang  2008-11-10  2  -19/+6
    This patch fixes the license in xattr.c and xattr.h.
    Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
    Signed-off-by: Joel Becker <joel.becker@oracle.com>
    Signed-off-by: Mark Fasheh <mfasheh@suse.com>
| * | Merge branch 'for-2.6.28' of git://linux-nfs.org/~bfields/linux  Linus Torvalds  2008-11-09  1  -4/+1
    * 'for-2.6.28' of git://linux-nfs.org/~bfields/linux:
      Fix nfsd truncation of readdir results
| | * | Fix nfsd truncation of readdir results  Doug Nazar  2008-11-09  1  -4/+1
    Commit 8d7c4203 "nfsd: fix failure to set eof in readdir in some
    situations" introduced a bug: on a directory in an exported ext3 filesystem
    with dir_index unset, a READDIR will only return about 250 entries, even if
    the directory was larger.

    Bisected it back to this commit; reverting it fixes the problem.

    It turns out that in this case ext3 reads a block at a time, then returns
    from readdir, which means we can end up with buf.full==0 but with more
    entries in the directory still to be read. Before 8d7c4203 (but after
    c002a6c797 "Optimise NFS readdir hack slightly"), this would cause us to
    return the READDIR result immediately, but with the eof bit unset. That
    could cause a performance regression (because the client would need more
    roundtrips to the server to read the whole directory), but no loss in
    correctness, since the cleared eof bit caused the client to send another
    readdir. After 8d7c4203, the setting of the eof bit made this a correctness
    problem.

    So, move nfserr_eof into the loop and remove the buf.full check so that we
    loop until buf.used==0. The following seems to do the right thing and
    reduces the network traffic since we don't return a READDIR result until
    the buffer is full.

    Tested on an empty directory & large directory; eof is properly sent and
    there are no more short buffers.

    Signed-off-by: Doug Nazar <nazard@dragoninc.ca>
    Cc: David Woodhouse <David.Woodhouse@intel.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
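    The control-flow change described above is small but easy to get wrong, so
    here is a minimal userspace sketch of it: keep calling the per-block fill
    routine until a call produces no entries, instead of stopping the first
    time the buffer is left partially filled. The stub "filesystem" below hands
    out at most two entries per call, mimicking ext3 without dir_index
    returning after each block; none of the names are nfsd's actual
    identifiers.

```c
#include <stdio.h>

static const char *const entries[] = { "a", "b", "c", "d", "e" };
static unsigned int next_entry;

/* Stub for the filesystem's readdir: produces up to two entries per call
 * and returns how many it produced (0 means the directory is exhausted). */
static unsigned int fill_entries(void)
{
        unsigned int used = 0;

        while (used < 2 && next_entry < sizeof(entries) / sizeof(entries[0])) {
                printf("entry: %s\n", entries[next_entry++]);
                used++;
        }
        return used;
}

int main(void)
{
        /* Loop until a call produces nothing; only then report eof.
         * Stopping as soon as a call fails to fill the buffer (the old
         * buf.full check) would end the listing after the first block. */
        while (fill_entries() != 0)
                ;
        printf("eof\n");
        return 0;
}
```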
| * | | Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4  Linus Torvalds  2008-11-07  7  -30/+69
    * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
      ext4: add checksum calculation when clearing UNINIT flag in ext4_new_inode
      ext4: Mark the buffer_heads as dirty and uptodate after prepare_write
      ext4: calculate journal credits correctly
      ext4: wait on all pending commits in ext4_sync_fs()
      ext4: Convert to host order before using the values.
      ext4: fix missing ext4_unlock_group in error path
      jbd2: deregister proc on failure in jbd2_journal_init_inode
      jbd2: don't give up looking for space so easily in __jbd2_log_wait_for_space
      jbd: don't give up looking for space so easily in __log_wait_for_space
| | * | | ext4: add checksum calculation when clearing UNINIT flag in ext4_new_inode  Frederic Bohe  2008-11-07  1  -0/+2
    When initializing an uninitialized block group in ext4_new_inode(), its
    block group checksum must be re-calculated. This fixes a race when several
    threads try to allocate a new inode in an UNINIT'd group.

    There is some question whether we need to be initializing the block bitmap
    in ext4_new_inode() at all, but for now, if we are going to init the block
    group, let's eliminate the race.

    Signed-off-by: Frederic Bohe <frederic.bohe@bull.net>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | ext4: Mark the buffer_heads as dirty and uptodate after prepare_write  Aneesh Kumar K.V  2008-11-07  1  -0/+2
    We need to make sure we mark the buffer_heads as dirty and uptodate so that
    block_write_full_page writes them correctly. This fixes mmap corruptions
    that can occur in low memory situations.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | ext4: calculate journal credits correctly  Theodore Ts'o  2008-11-06  1  -2/+3
    This fixes a 2.6.27 regression which was introduced in commit a02908f1. We
    weren't passing the chunk parameter down to the two subfunctions,
    ext4_indirect_trans_blocks() and ext4_ext_index_trans_blocks(), with the
    result that we massively overestimated the number of credits needed by
    ext4_da_writepages, especially in the non-extents case. This causes
    failures especially on /boot partitions, which tend to be small and
    non-extent using since GRUB doesn't handle extents.

    This patch fixes the bug reported by Joseph Fannin at:
    http://bugzilla.kernel.org/show_bug.cgi?id=11964

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | ext4: wait on all pending commits in ext4_sync_fs()  Theodore Ts'o  2008-11-03  1  -11/+8
    In ext4_sync_fs, we only wait for a commit to finish if we started it, but
    there may be one already in progress which will not be synced.

    In the case of a data=ordered umount with pending long symlinks which are
    delayed due to a long list of other I/O on the backing block device, this
    causes the buffer associated with the long symlinks to not be moved to the
    inode dirty list in the second phase of fsync_super. Then, before they can
    be dirtied again, kjournald exits, seeing the UMOUNT flag and the dirty
    pages are never written to the backing block device, causing long symlink
    corruption and exposing new or previously freed block data to userspace.

    To ensure all commits are synced, we flush all journal commits now when
    sync_fs'ing ext4.

    Signed-off-by: Arthur Jones <ajones@riverbed.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    Cc: Eric Sandeen <sandeen@redhat.com>
    Cc: <linux-ext4@vger.kernel.org>
| | * | | ext4: Convert to host order before using the values.  Aneesh Kumar K.V  2008-11-04  1  -3/+2
    Use le16_to_cpu to read the s_reserved_gdt_blocks value from the
    superblock.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | ext4: fix missing ext4_unlock_group in error path  Aneesh Kumar K.V  2008-11-04  1  -0/+1
    If we try to free a block which is already freed, the code returns without
    first unlocking the group.
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | jbd2: deregister proc on failure in jbd2_journal_init_inode  Sami Liedes  2008-11-02  1  -0/+2
    jbd2_journal_init_inode() does not call jbd2_stats_proc_exit() on all
    failure paths after calling jbd2_stats_proc_init(). This leaves dangling
    references to the fs in proc.

    This patch fixes a bug reported by Sami Liedes at:
    http://bugzilla.kernel.org/show_bug.cgi?id=11493

    Signed-off-by: Sami Liedes <sliedes@cc.hut.fi>
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
| | * | | jbd2: don't give up looking for space so easily in __jbd2_log_wait_for_space  Theodore Ts'o  2008-11-06  1  -7/+25
    Commit 23f8b79e introduced a regression because it assumed that if there
    were no transactions ready to be checkpointed, no progress could be made on
    making space available in the journal, and so the journal should be
    aborted.

    This assumption is false; it could be the case that simply calling
    jbd2_cleanup_journal_tail() will recover the necessary space, or, for small
    journals, the currently committing transaction could be responsible for
    chewing up the required space in the log, so we need to wait for the
    currently committing transaction to finish before trying to force a
    checkpoint operation.

    This patch fixes a bug reported by Mihai Harpau at:
    https://bugzilla.redhat.com/show_bug.cgi?id=469582

    This patch fixes a bug reported by François Valenduc at:
    http://bugzilla.kernel.org/show_bug.cgi?id=11840

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    Cc: Duane Griffin <duaneg@dghda.com>
    Cc: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
| | * | | jbd: don't give up looking for space so easily in __log_wait_for_space  Theodore Ts'o  2008-11-06  1  -7/+24
    Commit be07c4ed introduced a regression because it assumed that if there
    were no transactions ready to be checkpointed, no progress could be made on
    making space available in the journal, and so the journal should be
    aborted.

    This assumption is false; it could be the case that simply calling
    cleanup_journal_tail() will recover the necessary space, or, for small
    journals, the currently committing transaction could be responsible for
    chewing up the required space in the log, so we need to wait for the
    currently committing transaction to finish before trying to force a
    checkpoint operation.

    This patch fixes the bug reported by Meelis Roos at:
    http://bugzilla.kernel.org/show_bug.cgi?id=11937

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    Cc: Duane Griffin <duaneg@dghda.com>
    Cc: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
* | | | | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  David S. Miller  2008-11-06  40  -395/+925
    Conflicts:
      drivers/net/wireless/ath5k/base.c
      net/8021q/vlan_core.c
| * | | | Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block  Linus Torvalds  2008-11-06  1  -12/+11
    * 'for-linus' of git://git.kernel.dk/linux-2.6-block:
      Block: use round_jiffies_up()
      Add round_jiffies_up and related routines
      block: fix __blkdev_get() for removable devices
      generic-ipi: fix the smp_mb() placement
      blk: move blk_delete_timer call in end_that_request_last
      block: add timer on blkdev_dequeue_request() not elv_next_request()
      bio: define __BIOVEC_PHYS_MERGEABLE
      block: remove unused ll_new_mergeable()
| | * | | | block: fix __blkdev_get() for removable devices  Tejun Heo  2008-11-06  1  -12/+11
    Commit 0762b8bde9729f10f8e6249809660ff2ec3ad735 moved disk_get_part() in
    front of recursive get on the whole disk, which caused removable devices to
    try disk_get_part() before rescanning after a new media is inserted, which
    might fail legit open attempts or give the old partition.

    This patch fixes the problem by moving disk_get_part() after __blkdev_get()
    on the whole disk.

    This problem was spotted by Borislav Petkov.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Tested-by: Borislav Petkov <petkovbb@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * | | | Merge git://git.infradead.org/mtd-2.6  Linus Torvalds  2008-11-06  3  -11/+16
    * git://git.infradead.org/mtd-2.6:
      [JFFS2] fix race condition in jffs2_lzo_compress()
      [MTD] [NOR] Fix cfi_send_gen_cmd handling of x16 devices in x8 mode (v4)
      [JFFS2] Fix lack of locking in thread_should_wake()
      [JFFS2] Fix build failure with !CONFIG_JFFS2_FS_WRITEBUFFER
      [MTD] [NAND] OMAP2: remove duplicated #include
| | * | | | [JFFS2] fix race condition in jffs2_lzo_compress()  Geert Uytterhoeven  2008-11-05  1  -6/+9
    deflate_mutex protects the globals lzo_mem and lzo_compress_buf. However,
    jffs2_lzo_compress() unlocks deflate_mutex _before_ it has copied out the
    compressed data from lzo_compress_buf. Correct this by moving the mutex
    unlock after the copy.

    In addition, document what deflate_mutex actually protects.

    Cc: stable@kernel.org
    Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
    Acked-by: Richard Purdie <rpurdie@openedhand.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
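    The locking rule being fixed here is general enough to illustrate outside
    the kernel: a mutex that protects a shared output buffer has to stay held
    until the caller has copied the result out of that buffer. A minimal
    userspace analogue follows; the names (shared_buf, fake_compress) and the
    pthread API are stand-ins for the JFFS2 code, not taken from it.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t deflate_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned char shared_buf[4096];   /* stands in for lzo_compress_buf */

/* Fake "compression": just copies the input into the shared buffer. */
static size_t fake_compress(const unsigned char *in, size_t in_len)
{
        if (in_len > sizeof(shared_buf))
                return 0;
        memcpy(shared_buf, in, in_len);
        return in_len;
}

static int compress_copy_out(const unsigned char *in, size_t in_len,
                             unsigned char *out, size_t out_max)
{
        int ret = -1;

        pthread_mutex_lock(&deflate_mutex);
        size_t n = fake_compress(in, in_len);
        if (n != 0 && n <= out_max) {
                /* Copy out *before* unlocking: dropping the mutex first is
                 * exactly the race the commit fixes, since another thread
                 * could overwrite shared_buf while we are still reading it. */
                memcpy(out, shared_buf, n);
                ret = (int)n;
        }
        pthread_mutex_unlock(&deflate_mutex);
        return ret;
}

int main(void)
{
        unsigned char out[16];
        int n = compress_copy_out((const unsigned char *)"hello", 5, out, sizeof(out));
        printf("copied %d bytes\n", n);
        return 0;
}
```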
| | * | | | [JFFS2] Fix lack of locking in thread_should_wake()  David Woodhouse  2008-10-31  1  -5/+5
    The thread_should_wake() function trawls through the list of 'very dirty'
    eraseblocks, determining whether the background GC thread should wake.
    Doing this without holding the appropriate locks is a bad idea.

    OLPC Trac #8615

    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
    Cc: stable@kernel.org
| | * | | | [JFFS2] Fix build failure with !CONFIG_JFFS2_FS_WRITEBUFFER  Steve Glendinning  2008-10-21  1  -0/+2
    Build failure introduced by 5bf1723723487ddb0b9c9641b6559da96b27cc93
    [JFFS2] Write buffer offset adjustment for NOR-ECC (Sibley) flash
    Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * | | | | fat: i_blocks warning fix  OGAWA Hirofumi  2008-11-06  4  -6/+9
    The blkcnt_t type depends on CONFIG_LSF, so always use unsigned long long
    for printk(). Since that is tedious to type, add an "llu" format macro and
    use it.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: ->i_pos race fix  OGAWA Hirofumi  2008-11-06  1  -2/+19
    i_pos is a 64-bit value, so updating it is not atomic. The only place where
    this matters is fat_write_inode(); the other unlocked accesses are just for
    printk(). This adds a lock for "BITS_PER_LONG == 32" kernels.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
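    Why a plain 64-bit store needs a lock on a 32-bit kernel: the store is
    issued as two 32-bit writes, so a concurrent reader can observe one old
    half and one new half. A hedged userspace analogue of the accessor pattern
    is sketched below; the struct and field names are stand-ins, not the FAT
    inode's, and a pthread mutex plays the role of the lock the patch adds.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct fake_inode {
        pthread_mutex_t pos_lock;
        uint64_t        i_pos;    /* 64-bit on-disk directory entry position */
};

static void set_pos(struct fake_inode *inode, uint64_t pos)
{
        /* Both 32-bit halves of the store happen under the lock, so a
         * reader taking the same lock can never see a torn value. */
        pthread_mutex_lock(&inode->pos_lock);
        inode->i_pos = pos;
        pthread_mutex_unlock(&inode->pos_lock);
}

static uint64_t get_pos(struct fake_inode *inode)
{
        pthread_mutex_lock(&inode->pos_lock);
        uint64_t pos = inode->i_pos;
        pthread_mutex_unlock(&inode->pos_lock);
        return pos;
}

int main(void)
{
        struct fake_inode inode = { PTHREAD_MUTEX_INITIALIZER, 0 };

        set_pos(&inode, 0x100000001ULL);
        printf("i_pos = %llu\n", (unsigned long long)get_pos(&inode));
        return 0;
}
```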
| * | | | | fat: mmu_private race fix  OGAWA Hirofumi  2008-11-06  4  -10/+25
    mmu_private is a 64-bit value, so updating it is not atomic. The access
    rule for mmu_private is therefore that we must hold ->i_mutex. However, the
    fat_get_block() path doesn't follow the rule on the non-allocation path.
    Fix this by using i_size instead on the non-allocation path.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Add printf attribute to fat_fs_panic()  OGAWA Hirofumi  2008-11-06  1  -1/+2
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix _fat_bmap() race  OGAWA Hirofumi  2008-11-06  1  -1/+8
    fat_get_cluster() assumes the requested blocknr isn't truncated during the
    read, but _fat_bmap() doesn't follow this rule. Protect it with ->i_mutex.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix ATTR_RO for directory  OGAWA Hirofumi  2008-11-06  3  -12/+35
    FAT has the ATTR_RO (read-only) attribute, but Windows simply ignores
    ATTR_RO on directories; it is only used as a flag by applications, e.g.
    Explorer sets it for customized folders.
    http://msdn2.microsoft.com/en-us/library/aa969337.aspx

    This adds a "rodir" option. If the user specifies it, ATTR_RO is honoured
    as a read-only flag even for directories. Otherwise, inode->i_mode is not
    used to hold ATTR_RO (i.e. fat_mode_can_save_ro() returns 0).

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix ATTR_RO in the case of (~umask & S_WUGO) == 0  OGAWA Hirofumi  2008-11-06  2  -5/+35
    If inode->i_mode doesn't have S_WUGO, the current code assumes it means
    ATTR_RO. However, if (~[ufd]mask & S_WUGO) == 0, inode->i_mode can't hold
    S_WUGO at all, so the updated directory entry would always get ATTR_RO.

    This adds fat_mode_can_hold_ro() to check for that case. If inode->i_mode
    can't hold the information, ->i_attrs is used to hold ATTR_RO instead. With
    this, we don't set ATTR_RO unless users change it via ioctl() when
    (~[ufd]mask & S_WUGO) == 0.

    And on the FAT_IOCTL_GET_ATTRIBUTES path, this takes ->i_mutex so that
    attributes partially updated by FAT_IOCTL_SET_ATTRIBUTES are not returned
    to userland.

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
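    The "(~umask & S_WUGO) == 0" condition is just a bitmask test: it asks
    whether any write permission bit can survive the mount's mask at all. A
    short self-contained illustration of that test is below; the helper name
    echoes the commit's fat_mode_can_hold_ro(), but the code is illustrative,
    not the kernel's.

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* True when at least one write bit (user/group/other) survives the mask,
 * i.e. i_mode is still able to represent "read-only" by clearing it. */
static bool mode_can_hold_ro(mode_t mask)
{
        return (~mask & (S_IWUSR | S_IWGRP | S_IWOTH)) != 0;
}

int main(void)
{
        /* 022: the user write bit survives, so i_mode can encode ATTR_RO. */
        printf("mask 022 -> %d\n", mode_can_hold_ro(022));
        /* 0222: every write bit is masked off, so ATTR_RO must be tracked
         * elsewhere (the commit falls back to ->i_attrs). */
        printf("mask 0222 -> %d\n", mode_can_hold_ro(0222));
        return 0;
}
```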
| * | | | | fat: Cleanup FAT attribute stuff  OGAWA Hirofumi  2008-11-06  3  -31/+40
    This adds three helpers:
      fat_make_attrs() - makes FAT attributes from an inode.
      fat_make_mode()  - makes a mode_t from FAT attributes.
      fat_save_attrs() - saves FAT attributes to an inode.

    Then this replaces MSDOS_MKMODE() with fat_make_mode(), fat_attr() with
    fat_make_attrs(), and "->i_attrs = attr & ATTR_UNUSED" with
    fat_save_attrs(). And for the root inode, these are used with ATTR_DIR
    instead of the bogus ATTR_NONE.

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Cleanup msdos_lookup()  OGAWA Hirofumi  2008-11-06  1  -17/+21
    Use the same style as vfat_lookup().
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Kill d_invalidate() in vfat_lookup()  OGAWA Hirofumi  2008-11-06  1  -7/+6
    d_invalidate() for a positive dentry doesn't work in some cases (vfsmount,
    nfsd, and maybe others), and the shrink_dcache_parent() done by
    d_invalidate() is pointless for vfat's usage. So kill it and use d_move()
    instead.

    To preserve the old behaviour, simply return the alias for directories (so
    pwd etc. don't change); directory lookup shouldn't matter for performance.

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix/Cleanup dcache handling for vfat  OGAWA Hirofumi  2008-11-06  1  -44/+80
    - Add comments for the dcache handling of vfat.
    - Separate the case-sensitive and case-insensitive cases into
      vfat_revalidate() and vfat_ci_revalidate(). vfat_revalidate() doesn't
      need to drop case-insensitive negative dentries on the creation path.
    - The current code fails to set ->d_revalidate on negative dentries created
      by unlink/etc. Set ->d_revalidate always, and return 1 for positive
      dentries. Now we don't need to change ->d_op dynamically anymore, so just
      use sb->s_root->d_op to set ->d_op.
    - d_find_alias() may return a DCACHE_DISCONNECTED dentry, which is not the
      interesting dentry here. Check for it.
    - Add a missing LOOKUP_PARENT check. We don't need to drop a valid negative
      dentry for a (LOOKUP_CREATE | LOOKUP_PARENT) lookup.
    - For a consistent filename on the creation path, drop the negative dentry
      if we can't see the intent.

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | vfat: Fix vfat_find() error path in vfat_lookup()  OGAWA Hirofumi  2008-11-06  1  -5/+13
    Currently vfat_lookup() blindly creates a negative dentry whenever
    vfat_find() returns an error. That's wrong: if the error isn't -ENOENT,
    just return the error.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: use fat_detach() in fat_clear_inode()  OGAWA Hirofumi  2008-11-06  1  -6/+1
    Use fat_detach() instead of open-coding it.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix fat_ent_update_ptr() for FAT12  OGAWA Hirofumi  2008-11-06  1  -4/+14
    This fixes the missing update of bhs/nr_bhs in the case where the caller's
    access crosses a block boundary (a FAT12 entry can straddle two blocks),
    i.e. it runs from the end of one block into the first block of the next.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: improve fat_hash()  OGAWA Hirofumi  2008-11-06  2  -12/+7
    fat_hash() uses an algorithm that is known to be bad; use hash_32()
    instead. The following is a summary of a test:

      old hash: hash func (1000 times): 33489 cycles
                total inodes in hash table: 70926
                largest bucket contains: 696
                smallest bucket contains: 54

      new hash: hash func (1000 times): 33129 cycles
                total inodes in hash table: 70926
                largest bucket contains: 315
                smallest bucket contains: 236

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
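    For readers unfamiliar with hash_32(), the idea is a Knuth-style
    multiplicative hash: multiply the key by a large odd constant and keep the
    top bits of the product, which are the best-mixed ones. The sketch below is
    a self-contained userspace illustration of that scheme; the constant and
    the bucket count are illustrative and are not the kernel's exact values.

```c
#include <stdint.h>
#include <stdio.h>

#define HASH_BITS 8   /* 256 buckets, for example */

/* Multiplicative hash: the high bits of the 32-bit product are well mixed,
 * so shift them down to form the bucket index. */
static inline uint32_t hash32_sketch(uint32_t val, unsigned int bits)
{
        return (uint32_t)(val * 2654435761u) >> (32 - bits);
}

int main(void)
{
        /* Keys that are close together (like consecutive i_pos values)
         * still spread across different buckets. */
        for (uint32_t key = 1000; key < 1010; key++)
                printf("key %u -> bucket %u\n", key, hash32_sketch(key, HASH_BITS));
        return 0;
}
```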
| * | | | | fat: cleanup fat_parse_long() error handling  Darren Jenkins  2008-11-06  1  -3/+4
    Coverity CID 2332 & 2333 RESOURCE_LEAK

    In fat_search_long(), if fat_parse_long() returns a negative value we
    return without first freeing unicode. This patch frees it on that error
    path. This is a false positive on the current tree, but the change makes
    the code cleaner, so apply it as a cleanup.

    [hirofumi@mail.parknet.co.jp: fix coding style]
    Signed-off-by: Darren Jenkins <darrenrjenkins@gmail.com>
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: use generic_file_llseek() for directory  OGAWA Hirofumi  2008-11-06  1  -0/+1
    fat_dir_ioctl() has already been fixed (it is now called under ->i_mutex)
    and __fat_readdir() no longer takes the BKL, so taking the BKL in
    ->llseek() is pointless; use generic_file_llseek() instead.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: Fix and cleanup timestamp conversion  OGAWA Hirofumi  2008-11-06  6  -72/+130
    This cleans up date_dos2unix()/fat_date_unix2dos(); the new code should be
    much more readable. It also fixes those functions: they don't handle the
    year 2100 correctly (2100 is not a leap year, but the old code treats it as
    one). In addition, the centisecond field is now handled and fixed.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
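    The 2100 corner case comes straight from the Gregorian leap-year rule: a
    year divisible by 4 is a leap year, except century years, which are leap
    years only when divisible by 400. A tiny self-contained check is shown
    below; this is just the rule itself, not the kernel's table-driven
    conversion code.

```c
#include <stdbool.h>
#include <stdio.h>

static bool is_leap(int year)
{
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
        printf("2000 -> %d\n", is_leap(2000)); /* 1: divisible by 400 */
        printf("2096 -> %d\n", is_leap(2096)); /* 1: ordinary leap year */
        printf("2100 -> %d\n", is_leap(2100)); /* 0: century not divisible by 400 */
        return 0;
}
```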
| * | | | | fat: split include/msdos_fs.h  OGAWA Hirofumi  2008-11-06  9  -8/+282
    This splits the __KERNEL__ stuff in include/msdos_fs.h into fs/fat/fat.h.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | fat: move fs/vfat/* and fs/msdos/* to fs/fat  OGAWA Hirofumi  2008-11-06  6  -17/+5
    This just moves those files, but changes the link order from MSDOS, VFAT to
    VFAT, MSDOS.
    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | ext3: wait on all pending commits in ext3_sync_fs  Arthur Jones  2008-11-06  1  -6/+5
    In ext3_sync_fs, we only wait for a commit to finish if we started it, but
    there may be one already in progress which will not be synced.

    In the case of a data=ordered umount with pending long symlinks which are
    delayed due to a long list of other I/O on the backing block device, this
    causes the buffer associated with the long symlinks to not be moved to the
    inode dirty list in the second phase of fsync_super. Then, before they can
    be dirtied again, kjournald exits, seeing the UMOUNT flag and the dirty
    pages are never written to the backing block device, causing long symlink
    corruption and exposing new or previously freed block data to userspace.

    This can be reproduced with a script created by Eric Sandeen
    <sandeen@redhat.com>:

      #!/bin/bash
      umount /mnt/test2
      mount /dev/sdb4 /mnt/test2
      rm -f /mnt/test2/*
      dd if=/dev/zero of=/mnt/test2/bigfile bs=1M count=512
      touch /mnt/test2/thisisveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryverylongfilename
      ln -s /mnt/test2/thisisveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryverylongfilename /mnt/test2/link
      umount /mnt/test2
      mount /dev/sdb4 /mnt/test2
      ls /mnt/test2/
      umount /mnt/test2

    To ensure all commits are synced, we flush all journal commits now when
    sync_fs'ing ext3.

    Signed-off-by: Arthur Jones <ajones@riverbed.com>
    Cc: Eric Sandeen <sandeen@redhat.com>
    Cc: Theodore Ts'o <tytso@mit.edu>
    Cc: <linux-ext4@vger.kernel.org>
    Cc: <stable@kernel.org> [2.6.everything]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * | | | | autofs4: collect version check return  Ian Kent  2008-11-06  1  -2/+3
    The function check_dev_ioctl_version() returns an error code upon failure,
    but it isn't captured and returned in validate_dev_ioctl() as it should be.
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Ian Kent <raven@themaw.net>
    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>