path: root/fs
Commit message (Author, Date, Files changed, Lines -/+)
* lockd: reorganize nlm_host_rebooted (J. Bruce Fields, 2010-12-16, 1 file, -24/+34)
  Minor reorganization; no change in behavior. This will save some duplicated code after we split the client and server host caches.
  Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
  [ cel: Forward-ported to 2.6.37 ]
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* lockd: define host_for_each{_safe} macros (J. Bruce Fields, 2010-12-16, 1 file, -52/+55)
  We've got a lot of loops like this, and I find them a little easier to read with the macros. More such loops are coming.
  Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
  [ cel: Forward-ported to 2.6.37 ]
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
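  For readers unfamiliar with the idiom, the following is a minimal userspace sketch of what such list-walking macros look like; the names and the singly linked layout are illustrative only, not the actual lockd definitions (which walk the nlm_host hash chains).

    #include <stdio.h>
    #include <stdlib.h>

    struct host {
            int id;
            struct host *next;
    };

    /* Visit every host on a chain. */
    #define host_for_each(pos, head) \
            for ((pos) = (head); (pos) != NULL; (pos) = (pos)->next)

    /* Safe variant: 'n' caches the next pointer so 'pos' may be freed. */
    #define host_for_each_safe(pos, n, head) \
            for ((pos) = (head); (pos) && ((n) = (pos)->next, 1); (pos) = (n))

    int main(void)
    {
            struct host *head = NULL, *pos, *n;
            int i;

            for (i = 0; i < 3; i++) {
                    struct host *h = malloc(sizeof(*h));
                    h->id = i;
                    h->next = head;
                    head = h;
            }

            host_for_each(pos, head)
                    printf("host %d\n", pos->id);

            host_for_each_safe(pos, n, head)   /* freeing while walking is safe here */
                    free(pos);
            return 0;
    }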
* SUNRPC: New xdr_streams XDR decoder API (Chuck Lever, 2010-12-16, 8 files, -530/+468)
  Now that all client-side XDR decoder routines use xdr_streams, there should be no need to support the legacy calling sequence [rpc_rqst *, __be32 *, RPC res *] anywhere. We can construct an xdr_stream in the generic RPC code, instead of in each decoder function.
  This is a refactoring change. It should not cause different behavior.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* SUNRPC: New xdr_streams XDR encoder API (Chuck Lever, 2010-12-16, 8 files, -759/+542)
  Now that all client-side XDR encoder routines use xdr_streams, there should be no need to support the legacy calling sequence [rpc_rqst *, __be32 *, RPC arg *] anywhere. We can construct an xdr_stream in the generic RPC code, instead of in each encoder function.
  Also, all the client-side encoder functions return 0 now, making a return value superfluous. Take this opportunity to convert them to return void instead.
  This is a refactoring change. It should not cause different behavior.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
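  The shape of the refactoring in these two patches can be illustrated with a small userspace analogue: rather than every per-procedure routine building its own cursor over the raw buffer, the generic dispatch code builds the stream once and passes it down. The struct and function names below are invented for the example and only mimic the kernel's xdr_stream idea.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for xdr_stream: a bounded cursor over a buffer. */
    struct stream {
            unsigned char *p;
            unsigned char *end;
    };

    /* New-style encoder signature: gets a ready-made stream, returns void. */
    static void encode_foo_args(struct stream *s, uint32_t arg)
    {
            uint32_t be = htonl(arg);          /* analogous to cpu_to_be32() */

            memcpy(s->p, &be, 4);
            s->p += 4;
    }

    /* Generic dispatcher: constructs the stream once for every procedure. */
    static size_t rpc_encode(unsigned char *buf, size_t len,
                             void (*encode)(struct stream *, uint32_t),
                             uint32_t arg)
    {
            struct stream s = { buf, buf + len };

            encode(&s, arg);
            return s.p - buf;                  /* bytes produced */
    }

    int main(void)
    {
            unsigned char buf[16];

            printf("encoded %zu bytes\n",
                   rpc_encode(buf, sizeof(buf), encode_foo_args, 7));
            return 0;
    }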
* NFS: Remove unused UMNT response data structure (Chuck Lever, 2010-12-16, 1 file, -2/+0)
  Clean up. The UMNT request has a NULL response. There's no need to set up a mountres structure for it.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Avoid return code checking in mount XDR encoder functions (Chuck Lever, 2010-12-16, 1 file, -20/+15)
  Clean up. The trend in the other XDR encoder functions is to BUG() when encoding problems occur, since a problem here is always due to a local coding error. Then, instead of a status, zero is unconditionally returned. Update the mount client XDR encoders to behave this way.
  To finish the update, use the new-style be32_to_cpup() and cpu_to_be32() macros, and compute the buffer sizes using raw integers instead of sizeof(). This matches the conventions used in other XDR functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NSM: Avoid return code checking in NSM XDR encoder functions (Chuck Lever, 2010-12-16, 1 file, -43/+25)
  Clean up. The trend in the other XDR encoder functions is to BUG() when encoding problems occur, since a problem here is always due to a local coding error. Then, instead of a status, zero is unconditionally returned. Update the NSM XDR encoders to behave this way.
  To finish the update, use the new-style be32_to_cpup() and cpu_to_be32() macros, and compute the buffer sizes using raw integers instead of sizeof(). This matches the conventions used in other XDR functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
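  As a point of reference for the helpers named above, cpu_to_be32() and be32_to_cpup() convert between host byte order and the big-endian on-the-wire XDR representation. A rough userspace analogue, substituting htonl()/ntohl() for the kernel macros:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            unsigned char wire[4];
            uint32_t host = 1024, be, back;

            be = htonl(host);          /* like cpu_to_be32(host) */
            memcpy(wire, &be, 4);      /* written to the send buffer */

            memcpy(&be, wire, 4);
            back = ntohl(be);          /* like be32_to_cpup((__be32 *)wire) */

            printf("sent %u, received %u\n", host, back);
            return 0;
    }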
* NFS: Squelch compiler warning in decode_getdeviceinfo() (Chuck Lever, 2010-12-16, 1 file, -1/+1)
  Clean up.
    .../linux/nfs-2.6/fs/nfs/nfs4xdr.c: In function ‘decode_getdeviceinfo’:
    .../linux/nfs-2.6/fs/nfs/nfs4xdr.c:5008: warning: comparison between signed and unsigned integer expressions
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Simplify ->decode_dirent() calling sequence (Chuck Lever, 2010-12-16, 6 files, -41/+59)
  Clean up. The pointer returned by ->decode_dirent() is no longer used as a pointer. The only call site (xdr_decode() in fs/nfs/dir.c) simply extracts the errno value encoded in the pointer. Replace the returned pointer with a standard integer errno return value.
  Also, pass the "server" argument as part of the nfs_entry instead of as a separate parameter. It's faster to derive "server" in nfs_readdir_xdr_to_array() since we already have the directory's inode handy. "server" ought to be invariant for a set of entries in the same directory, right? The legacy versions of decode_dirent() don't use "server" anyway, so it's wasted work for them to derive and pass "server" for each entry.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
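  The old convention being removed here is worth spelling out: the legacy decoder returned a pointer whose bit pattern actually carried a negative errno, which the caller had to unpack. A simplified, hypothetical before/after (not the actual NFS code):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Old style: an errno smuggled inside the returned pointer. */
    static void *decode_dirent_old(int fail)
    {
            if (fail)
                    return (void *)(intptr_t)(-EAGAIN);   /* error hidden in a pointer */
            return NULL;                                  /* "success" */
    }

    /* New style: a plain integer errno return, no pointer games. */
    static int decode_dirent_new(int fail)
    {
            return fail ? -EAGAIN : 0;
    }

    int main(void)
    {
            void *p = decode_dirent_old(1);
            int err_old = (int)(intptr_t)p;   /* the caller had to unpack it */
            int err_new = decode_dirent_new(1);

            printf("old: %d, new: %d\n", err_old, err_new);
            return 0;
    }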
* NFS: Fix hdrlen calculation in NFSv4's decode_read() (Chuck Lever, 2010-12-16, 1 file, -1/+1)
  When computing the length of the header, be sure to include the four octets consumed by "count".
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* lockd: Move nlmdbg_cookie2a() to svclock.c (Chuck Lever, 2010-12-16, 2 files, -29/+30)
  Clean up. nlmdbg_cookie2a() is used only in svclock.c.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Repair whitespace damage in NFS PROC macro (Chuck Lever, 2010-12-16, 3 files, -86/+86)
  Clean up. When I was making other changes in this area, checkpatch.pl complained about the use of leading blanks in the PROC macros in the xdr files.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFSD: Update XDR decoders in NFSv4 callback client (Chuck Lever, 2010-12-16, 1 file, -176/+239)
  Clean up. Remove old-style NFSv4 XDR macros in favor of the style now used in fs/nfs/nfs4xdr.c. These were forgotten during the recent nfs4xdr.c rewrite.
  Additional whitespace cleanup adds to the size of this patch.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFSD: Update XDR encoders in NFSv4 callback client (Chuck Lever, 2010-12-16, 1 file, -77/+178)
  Clean up. Remove old-style NFSv4 XDR macros in favor of the style now used in fs/nfs/nfs4xdr.c. These were forgotten during the recent nfs4xdr.c rewrite.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* lockd: Introduce new-style XDR functions for NLMv4 (Chuck Lever, 2010-12-16, 3 files, -256/+622)
  We'd like to prevent local buffer overflows caused by malicious or broken servers. New xdr_stream style decoders can do that.
  For efficiency, we also want to be able to pass xdr_streams from call_encode() to all XDR encoding functions, rather than building an xdr_stream in every XDR encoding function in the kernel.
  Same idea as the NLM v3 XDR overhaul.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Move and update xdr_decode_foo() functions that we're keeping (Chuck Lever, 2010-12-16, 1 file, -69/+67)
  Clean up. Move the timestamp decoder to match the placement and naming conventions of the other helpers.
  Fold xdr_decode_fattr() into decode_fattr3(), which is now its only user.
  Fold xdr_decode_wcc_attr() into decode_wcc_attr(), which is now its only user.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Remove unused old NFSv3 decoder functions (Chuck Lever, 2010-12-16, 1 file, -508/+4)
  Clean up. Remove unused legacy result decoder functions, and any now unused decoder helper functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Switch in new NFSv3 decoder functions (Chuck Lever, 2010-12-16, 1 file, -34/+30)
  The naming scheme of the new decoder functions, which follows the NFSv4 XDR decoder functions, is slightly different than the scheme used for the old functions. Rename the functions as a separate step to keep the patches clean.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Introduce new-style XDR decoding functions for NFSv3 (Chuck Lever, 2010-12-16, 1 file, -79/+1445)
  We'd like to prevent local buffer overflows caused by malicious or broken servers. New xdr_stream style decoders can do that.
  For efficiency, we also eventually want to be able to pass xdr_streams from call_decode() to all XDR decoding functions, rather than building an xdr_stream in every XDR decoding function in the kernel.
  Static helper functions are left without the "inline" directive. This allows the compiler to choose automatically how to optimize these for size or speed.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Update xdr_encode_foo() functions that we're keeping (Chuck Lever, 2010-12-16, 1 file, -56/+55)
  Clean up. Move the timestamp and the sattr encoder to match the placement convention of the other helpers, update their coding style, and refresh their documenting comments.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Remove unused old NFSv3 encoder functions (Chuck Lever, 2010-12-16, 1 file, -324/+4)
  Clean up. Remove unused legacy argument encoder functions, and any now unused encoder helper functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Replace old NFSv3 encoder functions with xdr_stream-based ones (Chuck Lever, 2010-12-16, 1 file, -28/+30)
  The naming scheme of the new encoder functions, which follows the NFSv4 XDR encoder functions, is slightly different than the scheme used for the old functions. Rename the functions as a separate step to keep the patches clean.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Introduce new-style XDR encoding functions for NFSv3 (Chuck Lever, 2010-12-16, 1 file, -3/+830)
  We're interested in taking advantage of the safety benefits of xdr_streams. These data structures allow more careful checking for buffer overflow while encoding. More careful type checking is also introduced in the new functions.
  For efficiency, we also eventually want to be able to pass xdr_streams from call_encode() to all XDR encoding functions, rather than building an xdr_stream in every XDR encoding function in the kernel. To do this means all encoders must be ready to handle a passed-in xdr_stream.
  The new encoders follow the modern paradigm for XDR encoders: BUG on error, and always return a zero status code.
  Static helper functions are left without the "inline" directive. This allows the compiler to choose automatically how to optimize these for size or speed.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
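  The "BUG on error, return nothing useful" encoder convention described above can be sketched in ordinary C, with assert() standing in for the kernel's BUG_ON(); the buffer layout and the helper names below are illustrative, not the kernel's.

    #include <arpa/inet.h>
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct stream {
            unsigned char *p, *end;
    };

    /* Reserve room in the send buffer; running out is a local coding bug. */
    static unsigned char *reserve_space(struct stream *s, size_t nbytes)
    {
            unsigned char *q = s->p;

            assert(s->p + nbytes <= s->end);   /* kernel code would BUG_ON() here */
            s->p += nbytes;
            return q;
    }

    /* New-style encoder: void return, any failure is a programming error. */
    static void encode_sattr(struct stream *s, uint32_t mode, uint32_t uid)
    {
            unsigned char *p = reserve_space(s, 8);
            uint32_t be;

            be = htonl(mode); memcpy(p, &be, 4);
            be = htonl(uid);  memcpy(p + 4, &be, 4);
    }

    int main(void)
    {
            unsigned char buf[64];
            struct stream s = { buf, buf + sizeof(buf) };

            encode_sattr(&s, 0644, 1000);
            printf("encoded %d bytes\n", (int)(s.p - buf));
            return 0;
    }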
* lockd: Introduce new-style XDR functions for NLMv3 (Chuck Lever, 2010-12-16, 3 files, -260/+645)
  We'd like to prevent local buffer overflows caused by malicious or broken servers. New xdr_stream style decoders can do that.
  For efficiency, we also eventually want to be able to pass xdr_streams from call_encode() and call_decode() to all XDR encoding functions, rather than building an xdr_stream in every XDR encoding and decoding function in the kernel.
  To do all of this, rewrite the XDR encoding and decoding functions in fs/lockd/xdr.c to use xdr_streams. This makes them more or less incompatible with server-side XDR helper functions, so break them out into a separate source file.
  Static helper functions are left without the "inline" directive. This allows the compiler to choose automatically how to optimize these for size or speed.
  SHARE-related functionality doesn't seem to be used, as those functions are hiding behind a #define that isn't set anywhere that I can find. And, they've been in there forever (at least as far back as the kernel's git history goes), yet remain unused. Let's take the opportunity to bin them. It should be easy enough for someone to introduce proper XDR functions if at some point SHARE-related NLM functionality is desired.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Move and update xdr_decode_foo() functions that we're keeping (Chuck Lever, 2010-12-16, 1 file, -41/+56)
  Clean up. Move the timestamp decoder to match the placement and naming conventions of the other helpers.
  Fold xdr_decode_fattr() into decode_fattr(), which is now its only user.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Replace old NFSv2 decoder functions with xdr_stream-based ones (Chuck Lever, 2010-12-16, 1 file, -249/+4)
  Clean up. Remove unused legacy result decoder functions, and any now unused decoder helper functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Introduce new-style XDR decoding functions for NFSv2 (Chuck Lever, 2010-12-16, 3 files, -10/+558)
  We'd like to prevent local buffer overflows caused by malicious or broken servers. New xdr_stream style decoders can do that.
  For efficiency, we also eventually want to be able to pass xdr_streams from call_decode() to all XDR decoding functions, rather than building an xdr_stream in every XDR decoding function in the kernel.
  nfs_decode_dirent() is renamed to follow the naming convention of the other two dirent decoders.
  Static helper functions are left without the "inline" directive. This allows the compiler to choose automatically how to optimize these for size or speed.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
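  The buffer-overflow protection being claimed here boils down to one rule: never hand out more bytes than actually remain in the receive buffer, no matter what length the reply claims. A standalone sketch of that check (the helper name mimics, but is not, the kernel's xdr_inline_decode()):

    #include <stddef.h>
    #include <stdio.h>

    struct stream {
            const unsigned char *p, *end;
    };

    /* Return a pointer to nbytes of reply data, or NULL if too little remains. */
    static const unsigned char *inline_decode(struct stream *s, size_t nbytes)
    {
            const unsigned char *q = s->p;

            if ((size_t)(s->end - s->p) < nbytes)
                    return NULL;       /* short or hostile reply: refuse to overread */
            s->p += nbytes;
            return q;
    }

    int main(void)
    {
            /* The reply claims an 8-byte payload, but only 4 bytes arrived. */
            unsigned char reply[4] = { 0, 0, 0, 42 };
            struct stream s = { reply, reply + sizeof(reply) };

            if (!inline_decode(&s, 8))
                    printf("decode error: reply truncated or malformed\n");
            return 0;
    }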
* NFS: Use the "nfs_stat" enum for nfs_stat_to_errno()'s argument (Chuck Lever, 2010-12-16, 2 files, -9/+11)
  Clean up. To distinguish more clearly between the on-the-wire NFSERR_ value and our local errno values, use the proper type for the argument of nfs_stat_to_errno().
  Add a documenting comment appropriate for a global function shared outside this source file.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
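  The distinction the patch is drawing can be shown with a reduced example: wire status codes live in an enum, and the translation routine takes that enum and returns a negative local errno. The table below is a small illustrative subset, not the kernel's full nfs_stat_to_errno().

    #include <errno.h>
    #include <stdio.h>

    /* A few on-the-wire NFS status codes (values from RFC 1094). */
    enum nfs_stat {
            NFS_OK       = 0,
            NFSERR_PERM  = 1,
            NFSERR_NOENT = 2,
            NFSERR_IO    = 5,
    };

    /* Map a wire status to a negative local errno; unknown codes become -EIO. */
    static int nfs_stat_to_errno(enum nfs_stat status)
    {
            switch (status) {
            case NFS_OK:       return 0;
            case NFSERR_PERM:  return -EPERM;
            case NFSERR_NOENT: return -ENOENT;
            case NFSERR_IO:    return -EIO;
            default:           return -EIO;
            }
    }

    int main(void)
    {
            printf("%d\n", nfs_stat_to_errno(NFSERR_NOENT));   /* prints -2 */
            return 0;
    }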
* NFS: Update xdr_encode_foo() functions that we're keeping (Chuck Lever, 2010-12-16, 1 file, -26/+33)
  Clean up. The new helper functions are kept in order by section of RFC 1094. Move the two timestamp encoders we're keeping, update their coding style, and refresh their documenting comments.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Remove old NFSv2 encoder functions (Chuck Lever, 2010-12-16, 1 file, -245/+4)
  Clean up: Remove unused legacy argument encoder functions, and any now unused encoder helper functions.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: Introduce new-style XDR encoding functions for NFSv2 (Chuck Lever, 2010-12-16, 1 file, -3/+403)
  We're interested in taking advantage of the safety benefits of xdr_streams. These data structures allow more careful checking for buffer overflow while encoding. More careful type checking is also introduced in the new functions.
  For efficiency, we also eventually want to be able to pass xdr_streams from call_encode() to all XDR encoding functions, rather than building an xdr_stream in every XDR encoding function in the kernel. To do this means all encoders must be ready to handle a passed-in xdr_stream.
  The new encoders follow the modern paradigm for XDR encoders: BUG on any error, and always return a zero status code.
  Static helper functions are left without the "inline" directive. This allows the compiler to choose automatically how to optimize these for size or speed.
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Tested-by: J. Bruce Fields <bfields@redhat.com>
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* nilfs2: fix regression of garbage collection ioctl (Ryusuke Konishi, 2010-12-16, 2 files, -9/+12)
  On 2.6.37-rc1, the garbage collection ioctl of nilfs was broken by commit 263d90cefc7d82a0 ("nilfs2: remove own inode hash used for GC"), leading to filesystem corruption.
  The offending patch doesn't queue gc-inodes for the log writer if they are reused through the vfs inode cache. Here, a gc-inode is the inode which buffers blocks to be relocated on GC. That patch queues gc-inodes in the nilfs_init_gcinode() function, but this function is not called when they don't have the I_NEW flag. Thus, some live blocks are wrongly overwritten without being moved to new logs.
  This resolves the problem by moving the gc-inode queueing to an outer function to ensure it's done right.
  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
* Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 (Linus Torvalds, 2010-12-15, 4 files, -4/+18)
  * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
    ext4: fix typo which broke '..' detection in ext4_find_entry()
    ext4: Turn off multiple page-io submission by default
* ext4: fix typo which broke '..' detection in ext4_find_entry() (Aaro Koskinen, 2010-12-14, 1 file, -1/+1)
  There should be a check for the NUL character instead of '0'. Fortunately the only thing that cares about this is NFS serving, which is why we didn't notice this in the merge window testing.
  Reported-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
  Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
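  The one-character nature of the bug is easy to miss: the character literal '0' is the digit zero (byte 0x30), while '\0' is the NUL terminator (byte 0x00). A contrived check in the spirit of the fix, not the actual ext4_find_entry() code:

    #include <stdio.h>

    /* Is this lookup for "." or ".."?  (Assumes name is NUL-terminated.) */
    static int is_dot_or_dotdot(const char *name, int namelen)
    {
            /* Broken: testing name[1] == '0' matched the digit zero.  */
            /* Fixed:  testing name[1] == '\0' matches the terminator. */
            return (namelen == 1 || namelen == 2) && name[0] == '.' &&
                   (name[1] == '.' || name[1] == '\0');
    }

    int main(void)
    {
            printf("%d\n", is_dot_or_dotdot("..", 2));   /* 1 */
            printf("%d\n", is_dot_or_dotdot(".x", 2));   /* 0 */
            return 0;
    }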
* ext4: Turn off multiple page-io submission by default (Theodore Ts'o, 2010-12-14, 3 files, -3/+17)
  Jon Nelson has found a test case which causes postgresql to fail with the error:
    psql:t.sql:4: ERROR: invalid page header in block 38269 of relation base/16384/16581
  Under memory pressure, it looks like part of a file can end up getting replaced by zeros. Until we can figure out the cause, we'll roll back the change and use block_write_full_page() instead of ext4_bio_write_page(). The new, more efficient writing function can be used via the mount option mblk_io_submit, so we can test and fix the new page I/O code.
  To reproduce the problem, install postgres 8.4 or 9.0, pin enough memory such that the system is just at the point of triggering writeback, and then run the following sql script:
    begin;
    create temporary table foo as select x as a, ARRAY[x] as b FROM generate_series(1, 10000000 ) AS x;
    create index foo_a_idx on foo (a);
    create index foo_b_idx on foo USING GIN (b);
    rollback;
  If the temporary table is created on a hard drive partition which is encrypted using dm_crypt, then under memory pressure, approximately 30-40% of the time, pgsql will issue the above failure.
  This patch should fix this problem, and the problem will come back if the file system is mounted with the mblk_io_submit mount option.
  Reported-by: Jon Nelson <jnelson@jamponi.net>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
* install_special_mapping skips security_file_mmap check. (Tavis Ormandy, 2010-12-15, 1 file, -0/+5)
  The install_special_mapping routine (used, for example, to setup the vdso) skips the security check before insert_vm_struct, allowing a local attacker to bypass the mmap_min_addr security restriction by limiting the available pages for special mappings.
  bprm_mm_init() also skips the check, and although I don't think this can be used to bypass any restrictions, I don't see any reason not to have the security check.
    $ uname -m
    x86_64
    $ cat /proc/sys/vm/mmap_min_addr
    65536
    $ cat install_special_mapping.s
    section .bss
        resb BSS_SIZE
    section .text
        global _start
        _start:
            mov eax, __NR_pause
            int 0x80
    $ nasm -D__NR_pause=29 -DBSS_SIZE=0xfffed000 -f elf -o install_special_mapping.o install_special_mapping.s
    $ ld -m elf_i386 -Ttext=0x10000 -Tbss=0x11000 -o install_special_mapping install_special_mapping.o
    $ ./install_special_mapping &
    [1] 14303
    $ cat /proc/14303/maps
    0000f000-00010000 r-xp 00000000 00:00 0          [vdso]
    00010000-00011000 r-xp 00001000 00:19 2453665    /home/taviso/install_special_mapping
    00011000-ffffe000 rwxp 00000000 00:00 0          [stack]
  It's worth noting that Red Hat are shipping with mmap_min_addr set to 4096.
  Signed-off-by: Tavis Ormandy <taviso@google.com>
  Acked-by: Kees Cook <kees@ubuntu.com>
  Acked-by: Robert Swiecki <swiecki@google.com>
  [ Changed to not drop the error code - akpm ]
  Reviewed-by: James Morris <jmorris@namei.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-2.6.37' of git://linux-nfs.org/~bfields/linux (Linus Torvalds, 2010-12-14, 2 files, -13/+14)
  * 'for-2.6.37' of git://linux-nfs.org/~bfields/linux:
    nfsd: Fix possible BUG_ON firing in set_change_info
    sunrpc: prevent use-after-free on clearing XPT_BUSY
* nfsd: Fix possible BUG_ON firing in set_change_info (Neil Brown, 2010-12-08, 2 files, -13/+14)
  If vfs_getattr in fill_post_wcc returns an error, we don't set fh_post_change. For NFSv4, this can result in set_change_info triggering a BUG_ON. i.e. fh_post_saved being zero isn't really a bug.
  So:
    - instead of BUGging when fh_post_saved is zero, just clear ->atomic.
    - if vfs_getattr fails in fill_post_wcc, take a copy of i_ctime anyway. This will be used in set_change_info, but not overly trusted.
    - While we are there, remove the pointless 'if' statements in set_change_info. There is no harm setting all the values.
  Signed-off-by: NeilBrown <neilb@suse.de>
  Cc: stable@kernel.org
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Linus Torvalds, 2010-12-14, 11 files, -94/+207)
  * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
    Btrfs: prevent RAID level downgrades when space is low
    Btrfs: account for missing devices in RAID allocation profiles
    Btrfs: EIO when we fail to read tree roots
    Btrfs: fix compiler warnings
    Btrfs: Make async snapshot ioctl more generic
    Btrfs: pwrite blocked when writing from the mmaped buffer of the same page
    Btrfs: Fix a crash when mounting a subvolume
    Btrfs: fix sync subvol/snapshot creation
    Btrfs: Fix page leak in compressed writeback path
    Btrfs: do not BUG if we fail to remove the orphan item for dead snapshots
    Btrfs: fixup return code for btrfs_del_orphan_item
    Btrfs: do not do fast caching if we are allocating blocks for tree_root
    Btrfs: deal with space cache errors better
    Btrfs: fix use after free in O_DIRECT
* Btrfs: prevent RAID level downgrades when space is low (Chris Mason, 2010-12-13, 1 file, -1/+19)
  The extent allocator has code that allows us to fill allocations from any available block group, even if it doesn't match the raid level we've requested.
  This was put in because adding a new drive to a filesystem made with the default mkfs options actually upgrades the metadata from single spindle dup to full RAID1.
  But, the code also allows us to allocate from a raid0 chunk when we really want a raid1 or raid10 chunk. This can cause big trouble because mkfs creates a small (4MB) raid0 chunk for data and metadata which then goes unused for raid1/raid10 installs.
  The allocator will happily wander in and allocate from that chunk when things get tight, which is not correct.
  The fix here is to make sure that we provide duplication when the caller has asked for it. It does allow the dups to be any raid level, which preserves the dup->raid1 upgrade abilities.
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: account for missing devices in RAID allocation profiles (Chris Mason, 2010-12-13, 3 files, -3/+36)
  When we mount in RAID degraded mode without adding a new device to replace the failed one, we can end up using the wrong RAID flags for allocations.
  This results in strange combinations of block groups (raid1 in a raid10 filesystem) and corruptions when we try to allocate blocks from single spindle chunks on drives that are actually missing.
  The first device has two small 4MB chunks in it that mkfs creates and these are usually unused in a raid1 or raid10 setup. But, in -o degraded, the allocator will fall back to these because the mask of desired raid groups isn't correct.
  The fix here is to count the missing devices as we build up the list of devices in the system. This count is used when picking the raid level to make sure we continue using the same levels that were in place before we lost a drive.
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: EIO when we fail to read tree roots (Chris Mason, 2010-12-13, 1 file, -1/+4)
  If we just get a plain IO error when we read tree roots, the code wasn't properly sending that error up the chain. This allowed mounts to continue when they should have failed, and allowed operations on partially setup root structs. The end result was usually oopsen on spinlocks that hadn't been spun up correctly.
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: fix compiler warnings (Jan Beulich, 2010-12-10, 2 files, -7/+5)
  ... regarding an unused function when !MIGRATION, and regarding a printk() format string vs argument mismatch.
  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: Make async snapshot ioctl more generic (Li Zefan, 2010-12-10, 2 files, -22/+36)
  If we had reserved some bytes in struct btrfs_ioctl_vol_args, we wouldn't have to create a new structure for async snapshot creation.
  Here we convert async snapshot ioctl to use a more generic ABI, as we'll add more ioctls for snapshots/subvolumes in the future, readonly snapshots for example.
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: pwrite blocked when writing from the mmaped buffer of the same page (Xin Zhong, 2010-12-10, 1 file, -32/+60)
  This problem is found in meego testing: http://bugs.meego.com/show_bug.cgi?id=6672
  A file in btrfs is mmaped and the mmaped buffer is passed to pwrite to write to the same page of the same file. In btrfs_file_aio_write(), the pages are locked by prepare_pages(). So when btrfs_copy_from_user() is called, a page fault happens and the same page needs to be locked again in filemap_fault(). The fix is to move iov_iter_fault_in_readable() before prepare_pages() to make the page fault happen before the pages are locked. And also disable page faults in the critical region in btrfs_copy_from_user().
  Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
  Signed-off-by: Zhong, Xin <xin.zhong@intel.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: Fix a crash when mounting a subvolume (Li Zefan, 2010-12-10, 1 file, -1/+1)
  We should drop dentry before deactivating the superblock, otherwise we can hit this bug:
    BUG: Dentry f349a690{i=100,n=/} still in use (1) [unmount of btrfs loop1]
    ...
  Steps to reproduce the bug:
    # mount /dev/loop1 /mnt
    # mkdir save
    # btrfs subvolume snapshot /mnt save/snap1
    # umount /mnt
    # mount -o subvol=save/snap1 /dev/loop1 /mnt
    (crash)
  Reported-by: Michael Niederle <mniederle@gmx.at>
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: fix sync subvol/snapshot creation (Sage Weil, 2010-12-10, 1 file, -9/+11)
  We were incorrectly taking the async path even for the sync ioctls by passing in &transid unconditionally.
  There's ample room for further cleanup here, but this keeps the fix simple.
  Signed-off-by: Sage Weil <sage@newdream.net>
  Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: Fix page leak in compressed writeback path (Yan, Zheng, 2010-12-10, 1 file, -1/+1)
  "start + num_bytes >= actual_end" can happen when compressed page writeback races with file truncation. In that case we need to unlock and release the pages past the end of the file.
  Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* Btrfs: do not BUG if we fail to remove the orphan item for dead snapshots (Josef Bacik, 2010-12-10, 1 file, -3/+7)
  Not being able to delete an orphan item isn't a horrible thing. The worst that happens is the next time around we try and do the orphan cleanup and we can't find the referenced object and just delete the item and move on.
  Signed-off-by: Josef Bacik <josef@redhat.com>
* Btrfs: fixup return code for btrfs_del_orphan_item (Josef Bacik, 2010-12-09, 1 file, -1/+5)
  If the orphan item doesn't exist, we return 1, which doesn't make any sense to the callers. Instead return -ENOENT if we didn't find the item. Thanks,
  Signed-off-by: Josef Bacik <josef@redhat.com>