path: root/fs/gfs2/ops_address.c
Commit log: each entry shows the commit subject followed by (author, date, files changed, lines removed/added).
* [GFS2] Remove function gfs2_get_block (Bob Peterson, 2008-01-25, 1 file, -23/+7)
  This patch is just a cleanup. Function gfs2_get_block() simply calls gfs2_block_map() with the last two parameters reversed. By reversing those parameters in gfs2_block_map() itself, it can be called directly and gfs2_get_block() can be eliminated altogether. Since this is done for every block operation, it streamlines the code and makes it a little more efficient.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
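  A minimal sketch of the wrapper pattern this commit removes, assuming gfs2_block_map() now takes its arguments in get_block_t order; the exact GFS2 prototypes may differ:

      /* Illustration only: get_block_t expects (inode, lblock, bh_result, create).
       * Before this change gfs2_block_map() took (inode, lblock, create, bh),
       * so a trivial wrapper existed just to swap the last two arguments.
       */
      static int gfs2_get_block(struct inode *inode, sector_t lblock,
                                struct buffer_head *bh_result, int create)
      {
              return gfs2_block_map(inode, lblock, create, bh_result);
      }
      /* With gfs2_block_map() reordered to (inode, lblock, bh_result, create),
       * it can be passed directly wherever a get_block_t is required and the
       * wrapper goes away.
       */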
* [GFS2] Use correct include file in ops_address.c (Steven Whitehouse, 2008-01-25, 1 file, -1/+1)
  Something changed in the upstream kernel, and it needs this one-liner to allow ops_address.c to build.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Don't hold page lock when starting transaction (Steven Whitehouse, 2008-01-25, 1 file, -26/+25)
  This is an addendum to the new AOPs work which moves the point at which we take the page lock so that we don't get it until the last possible moment. This resolves a conflict between starting transactions and the page lock.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Add writepages for GFS2 jdata (Steven Whitehouse, 2008-01-25, 1 file, -8/+205)
  This patch resolves a lock ordering issue where we had been getting a transaction lock in the wrong order with respect to the page lock. By using writepages rather than just writepage, it is then possible to start a transaction before locking the page, thus matching the locking order elsewhere in the code.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Split gfs2_writepage into three cases (Steven Whitehouse, 2008-01-25, 1 file, -21/+91)
  This patch splits gfs2_writepage into separate functions for each of the three cases: writeback, ordered and journalled. As a result it becomes a lot easier to see what each one is doing. The common code is moved into gfs2_writepage_common. This fixes a performance bug where we were doing more work than strictly required in the ordered write case.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Introduce gfs2_set_aops() (Steven Whitehouse, 2008-01-25, 1 file, -28/+50)
  Just like ext3 we now have three sets of address space operations to cover the cases of writeback, ordered and journalled data writes. This means that the individual operations can now become less complicated as we are able to remove some of the tests for file data mode from the code.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
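  A hedged sketch of the arrangement described above: three address_space_operations tables and a helper that installs the right one according to the inode's data mode. Table contents and mode tests here are illustrative, not the actual GFS2 definitions:

      /* Illustrative only: one aops table per data mode, as in ext3. */
      static const struct address_space_operations gfs2_writeback_aops = { /* ... */ };
      static const struct address_space_operations gfs2_ordered_aops = { /* ... */ };
      static const struct address_space_operations gfs2_jdata_aops = { /* ... */ };

      void gfs2_set_aops(struct inode *inode)
      {
              struct gfs2_inode *ip = GFS2_I(inode);

              if (gfs2_is_jdata(ip))                  /* journalled data */
                      inode->i_mapping->a_ops = &gfs2_jdata_aops;
              else if (gfs2_is_writeback(ip))         /* data=writeback mount */
                      inode->i_mapping->a_ops = &gfs2_writeback_aops;
              else                                    /* data=ordered (default) */
                      inode->i_mapping->a_ops = &gfs2_ordered_aops;
      }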
* [GFS2] Add gfs2_is_writeback() (Steven Whitehouse, 2008-01-25, 1 file, -6/+4)
  This adds a function "gfs2_is_writeback()" along the lines of the existing "gfs2_is_jdata()" in order to clean up the code and make the various tests for the inode mode more obvious. It also fixes the PageChecked() logic where we were resetting the flag too early in the case of an error path.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
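  A rough sketch of what such a predicate might look like; the field names and the exact test are assumptions, not the verbatim GFS2 code:

      /* Hypothetical sketch: true when the filesystem is mounted with
       * data=writeback and the inode itself is not flagged for journalled
       * data.  Structure and field names are illustrative. */
      static inline int gfs2_is_writeback(const struct gfs2_inode *ip)
      {
              const struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);

              return sdp->sd_args.ar_data == GFS2_DATA_WRITEBACK &&
                     !gfs2_is_jdata(ip);
      }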
* [GFS2] Remove useless i_cache from inodes (Steven Whitehouse, 2008-01-25, 1 file, -1/+0)
  The i_cache was designed to keep references to the indirect blocks used during block mapping so that they didn't have to be looked up continually. The idea failed because there are too many places where the i_cache needs to be freed, and this has in the past been the cause of many bugs. In addition there was no performance benefit being gained since the disk blocks in question were cached anyway. So this patch removes it in order to simplify the code to prepare for other changes which would otherwise have had to add further support for this feature.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Use ->page_mkwrite() for mmap() (Steven Whitehouse, 2008-01-25, 1 file, -37/+8)
  This cleans up the mmap() code path for GFS2 by implementing the page_mkwrite function for GFS2. We are thus able to use the generic filemap_fault function for our ->fault() implementation. This now means that shared writable mappings will be much more efficiently shared across the cluster if there is a reasonable proportion of read activity (the greater the proportion, the better). As a side effect, it also reduces the size of the code, removes special cases from readpage and readpages, and makes the code path easier to follow.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
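  A sketch of the mmap() hookup the commit describes, using the kernel interfaces of that era; the real gfs2_page_mkwrite also takes the glock, starts a transaction and allocates backing blocks, all omitted here:

      /* Illustrative only: let the generic fault path populate pages, and
       * intercept the first write to a shared writable mapping so the
       * filesystem can prepare before the page goes dirty. */
      static int gfs2_page_mkwrite(struct vm_area_struct *vma, struct page *page)
      {
              /* reserve/allocate backing store here so that a later
               * writepage cannot fail; return 0 to let the write proceed */
              return 0;
      }

      static const struct vm_operations_struct gfs2_vm_ops = {
              .fault        = filemap_fault,      /* generic read-fault path */
              .page_mkwrite = gfs2_page_mkwrite,  /* prepare page for writing */
      };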
* [GFS2] Clean up internal read function (Steven Whitehouse, 2008-01-25, 1 file, -50/+101)
  As requested by Christoph, this patch cleans up GFS2's internal read function so that it no longer uses the do_generic_mapping_read function. This function is obsolete and GFS2 is the last user of it. As a side effect the internal read code gets smaller and easier to read and gfs2_readpage is split into two. One function has the locking and the other function has the rest of the logic.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Christoph Hellwig <hch@infradead.org>
* gfs2: convert to new aops (Steven Whitehouse, 2007-10-16, 1 file, -84/+127)
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Cc: Steven Whitehouse <swhiteho@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [GFS2] Don't try to remove buffers that don't exist (Steven Whitehouse, 2007-10-10, 1 file, -2/+1)
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Data corruption fix (Wendy Cheng, 2007-10-10, 1 file, -1/+1)
  GFS2 has been using the i_cache array to store its indirect meta blocks. Its flush routine doesn't correctly clean up all the entries. The problem shows up when multiple nodes do simultaneous writes to the same file. Upon an exclusive glock transfer, if the file is a large sparse file whose indirect meta blocks span multiple array entries with "zero" entries in between, the flush routine prematurely stops the flushing and leaves old (stale) entries around. This leads to several nasty issues, including data corruption.
  Also fix the gfs2_get_block_noalloc checking to correctly return EIO upon an unmapped buffer.
  Signed-off-by: Wendy Cheng <wcheng@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Clean up journaled data writing (Steven Whitehouse, 2007-10-10, 1 file, -5/+6)
  This patch cleans up the code for writing journaled data into the log. It also removes the need to allocate a small "tag" structure for each block written into the log. Instead we just keep count of the outstanding I/O so that we can be sure that it's all been written at the correct time. Another result of this patch is that a number of ll_rw_block() calls have become submit_bh() calls, closing some races at the same time.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Clean up ordered write code (Steven Whitehouse, 2007-10-10, 1 file, -4/+46)
  The following patch removes the ordered write processing from databuf_lo_before_commit() and moves it to log.c. This has the effect of greatly simplifying databuf_lo_before_commit() as well as potentially making the ordered write code more efficient. As a side effect of this, it's now possible to remove ordered buffers from the ordered buffer list at any time, so we now make use of this in invalidatepage and releasepage to ensure timely release of these buffers.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Clean up invalidatepage/releasepage (Steven Whitehouse, 2007-10-10, 1 file, -117/+15)
  This patch fixes some bugs relating to journaled data files by cleaning up the gfs2_invalidatepage() and gfs2_releasepage() functions. We now never block during gfs2_releasepage(), instead we always either release or refuse to release depending on the status of the buffers. This fixes Red Hat bugzillas #248969 and #252392.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Bob Peterson <rpeterso@redhat.com>
* [GFS2] Fix incorrect error path in prepare_write() (Steven Whitehouse, 2007-08-14, 1 file, -1/+2)
  The error path in prepare_write() was incorrect in the (very rare) event that the transaction fails to start. The following prevents a NULL pointer dereference.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* mm: merge populate and nopage into fault (fixes nonlinear) (Nick Piggin, 2007-07-19, 1 file, -1/+1)
  Nonlinear mappings are (AFAIKS) simply a virtual memory concept that encodes the virtual address -> file offset differently from linear mappings. ->populate is a layering violation because the filesystem/pagecache code should not need to know anything about the virtual memory mapping. The hitch here is that the ->nopage handler didn't pass down enough information (ie. pgoff). But it is more logical to pass pgoff rather than have the ->nopage function calculate it itself anyway (because that's a similar layering violation). Having the populate handler install the pte itself is likewise a nasty thing to be doing.
  This patch introduces a new fault handler that replaces ->nopage and ->populate and (later) ->nopfn. Most of the old mechanism is still in place so there is a lot of duplication and nice cleanups that can be removed if everyone switches over.
  The rationale for doing this in the first place is that nonlinear mappings are subject to the pagefault vs invalidate/truncate race too, and it seemed stupid to duplicate the synchronisation logic rather than just consolidate the two.
  After this patch, MAP_NONBLOCK no longer sets up ptes for pages present in pagecache. Seems like a fringe functionality anyway. NOPAGE_REFAULT is removed. This should be implemented with ->fault, and no users have hit mainline yet.
  [akpm@linux-foundation.org: cleanup]
  [randy.dunlap@oracle.com: doc. fixes for readahead]
  [akpm@linux-foundation.org: build fix]
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
  Cc: Mark Fasheh <mark.fasheh@oracle.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
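  A sketch of the shape of the new interface, using the 2.6.23-era signatures (they have changed again since): the MM layer hands the handler a struct vm_fault that already carries pgoff, where ->nopage had to compute the file offset itself. The names here are illustrative, not from any real filesystem:

      /* Illustrative fault handler: look the page up by vmf->pgoff and hand
       * it back; a real handler would read missing pages in, lock them, etc. */
      static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
      {
              struct page *page;

              page = find_get_page(vma->vm_file->f_mapping, vmf->pgoff);
              if (!page)
                      return VM_FAULT_SIGBUS;
              vmf->page = page;       /* reference is handed to the MM layer */
              return 0;
      }

      static const struct vm_operations_struct example_vm_ops = {
              .fault = example_fault, /* replaces both ->nopage and ->populate */
      };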
* [GFS2] Use zero_user_page() in stuffed_readpage() (Steven Whitehouse, 2007-07-09, 1 file, -5/+1)
  As suggested by Robert P. J. Day <rpjday@mindspring.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Robert P. J. Day <rpjday@mindspring.com>
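  For illustration, the kind of open-coded sequence that a single zero_user_page() call replaces. This is the era-specific API (zero_user_page() took a km_type argument and was later superseded by zero_user()), and the helper names and offsets are hypothetical, not the actual stuffed_readpage() code:

      /* Before: map the page, zero the tail by hand, unmap, flush. */
      static void zero_tail_open_coded(struct page *page, unsigned int from)
      {
              void *kaddr = kmap_atomic(page, KM_USER0);

              memset(kaddr + from, 0, PAGE_CACHE_SIZE - from);
              kunmap_atomic(kaddr, KM_USER0);
              flush_dcache_page(page);
      }

      /* After: the helper does the map/zero/unmap/flush in one call. */
      static void zero_tail_with_helper(struct page *page, unsigned int from)
      {
              zero_user_page(page, from, PAGE_CACHE_SIZE - from, KM_USER0);
      }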
* [GFS2] Journaled file write/unstuff bug (Robert Peterson, 2007-07-09, 1 file, -1/+25)
  This patch is for bugzilla bug 283162, which uncovered a number of bugs pertaining to writing to files that have the journaled bit on. These bugs happen most often when writing to the meta_fs because the files are always journaled. So operations like gfs2_grow were particularly vulnerable, although many of the problems could be recreated with normal files after setting the journaled bit on. The problems fixed are:
  - GFS2 wasn't ever writing unstuffed journaled data blocks to their in-place location on disk. Now it does.
  - If you unmounted too quickly after doing IO to a journaled file, GFS2 was crashing because you would discard a buffer whose bufdata was still on the active items list. GFS2 now deals with this gracefully.
  - GFS2 was losing track of the bufdata for journaled data blocks, and it wasn't getting freed, causing an error when you tried to unmount the module. GFS2 now frees all the bufdata structures.
  - There was a memory corruption occurring because GFS2 wrote twice as many log entries for journaled buffers.
  - It was occasionally trying to write journal headers in buffers that weren't currently mapped.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] fix jdata issues (Benjamin Marzinski, 2007-07-09, 1 file, -0/+2)
  This is a patch for the first three issues of RHBZ #238162.
  The first issue is that when you allocate a new page for a file, it will not start off uptodate. This makes sense, since you haven't written anything to that part of the file yet. Unfortunately, gfs2_pin() checks to make sure that the buffers are uptodate. The solution to this is to mark the buffers uptodate in gfs2_commit_write(), after they have been zeroed out and have the data written into them. I'm pretty confident with this fix, although it's not completely obvious that there is no problem with marking the buffers uptodate here.
  The second issue is simply that you can try to pin a data buffer that is already on the incore log, and thus already pinned. This patch checks to see if this buffer is already on the log, and exits databuf_lo_add() if it is, just like buf_lo_add() does.
  The third issue is that gfs2_log_flush() doesn't do its block accounting correctly. Both metadata and journaled data are logged, but gfs2_log_flush() only compares the number of metadata blocks with the number of blocks to commit to the ondisk journal. This patch also counts the journaled data blocks.
  Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Clean up inode number handling (Steven Whitehouse, 2007-07-09, 1 file, -2/+2)
  This patch cleans up the inode number handling code. The main difference is that instead of looking up the inodes using a struct gfs2_inum_host we now use just the no_addr member of this structure. The tests relating to no_formal_ino can then be done by the calling code. This has advantages in that we want to do different things in different code paths if the no_formal_ino doesn't match. In the NFS patch we want to return -ESTALE, but in the ->lookup() path, it's a bug in the fs if the no_formal_ino doesn't match and thus we can withdraw in this case.
  In order to later fix bz #201012, we need to be able to look up an inode without knowing no_formal_ino, as the only information that is known to us is the on-disk location of the inode in question. This patch will also help us to fix bz #236099 at a later date by cleaning up a lot of the code in that area.
  There are no user visible changes as a result of this patch and there are no changes to the on-disk format either.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Addendum patch 2 for gfs2_grow (Robert Peterson, 2007-07-09, 1 file, -0/+1)
  This addendum patch 2 corrects three things:
  1. It fixes a stupid mistake in the previous addendum that broke gfs2. Ref: https://www.redhat.com/archives/cluster-devel/2007-May/msg00162.html
  2. It fixes a problem that Dave Teigland pointed out regarding the external declarations in ops_address.h being in the wrong place.
  3. It recasts a couple more %llu printks to (unsigned long long) as requested by Steve Whitehouse.
  I would have loved to put this all in one revised patch, but there was a rush to get some patches for RHEL5. Therefore, the previous patches were applied to the git tree "as is" and I'm posting another addendum. Sorry.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Kernel changes to support new gfs2_grow command (part 2) (Robert Peterson, 2007-07-09, 1 file, -1/+2)
  To avoid code redundancy, I separated out the operational "guts" into a new function called read_rindex_entry. Then I made two functions: the closer-to-original gfs2_ri_update (without the special condition checks) and gfs2_ri_update_special that's designed with that condition in mind. (I don't like the name, but if you have a suggestion, I'm all ears.) Oh, and there's an added benefit: we don't need all the ugly gotos anymore. ;)
  This patch has been tested with gfs2_fsck_hellfire (which runs for three and a half hours, btw).
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] kernel changes to support new gfs2_grow command (Robert Peterson, 2007-07-09, 1 file, -1/+28)
  This is another revision of my gfs2 kernel patch that allows gfs2_grow to function properly. Steve Whitehouse expressed some concerns about the previous patch and I restructured it based on his comments. The previous patch was doing the statfs_change at file close time, under its own transaction. The current patch does the statfs_change inside the gfs2_commit_write function, which keeps it under the umbrella of the inode transaction.
  I can't call ri_update to re-read the rindex file during the transaction because the transaction may have outstanding unwritten buffers attached to the rgrps that would otherwise be blown away. So instead, I created a new function, gfs2_ri_total, that will re-read the rindex file just to total the file system space for the sake of the statfs_change. The ri_update will happen later, when gfs2 realizes the version number has changed, as it happened before my patch.
  Since the statfs_change happens at commit_write time and there may be multiple writes to the rindex file for one grow operation, one consequence of this restructuring is that instead of getting one kernel message to indicate the change, you may see several. For example, before when you did a gfs2_grow, you'd get a single message like:
  GFS2: File system extended by 247876 blocks (968MB)
  Now you get something like:
  GFS2: File system extended by 207896 blocks (812MB)
  GFS2: File system extended by 39980 blocks (156MB)
  This version has also been successfully run against the hours-long "gfs2_fsck_hellfire" test that does several gfs2_grow and gfs2_fsck runs while interjecting file system damage. It does this repeatedly under a variety of Resource Group conditions.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
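  As a sanity check on the example messages (assuming the common 4 KiB block size, which the messages themselves do not state): 247876 blocks * 4096 bytes = 1,015,300,096 bytes, roughly 968 MiB, and the two messages in the new output account for the same total space, since 207896 + 39980 = 247876 blocks (about 812 MiB + 156 MiB).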
* [GFS2] Patch to fix mmap of stuffed files (Steven Whitehouse, 2007-05-01, 1 file, -3/+14)
  If a stuffed file is mmaped and a page fault is generated at some offset above the initial page, we need to create a zero page to hang the buffer heads off before we can unstuff the file. This is a fix for bz #236087
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Fix bz 231380, unlock page before dequeuing glocks in gfs2_commit_write (Josef Whiter, 2007-05-01, 1 file, -0/+4)
  If we are writing a file, and in the middle of writing the file another node attempts to get a shared lock on that file (by doing a du for example), the process doing the writing will hang waiting on lock_page. The reason for this is that when we have waiters on an exclusive glock, we will go through and flush out all dirty pages associated with that inode and release the lock. The problem is that when we flush the dirty pages, we could hit a page that we have locked during the generic_file_buffered_write part of this operation. This patch unlocks the page before we go to dequeue the lock and locks it immediately afterwards, since generic_file_buffered_write needs the page locked when the commit_write is completed. This patch resolves the problem, however if somebody sees a better way to do this please don't hesitate to yell.
  Signed-off-by: Josef Whiter <jwhiter@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] fix hangup when multiple processes are trying to write to the same file (Josef Whiter, 2007-03-07, 1 file, -2/+5)
  This fixes a problem I encountered while running bonnie++. When you have one thread that opens a file and starts to write to it, and then another thread that tries to open and write to the same file, the second thread will loop forever trying to grab the inode lock for that inode. Basically we come in through generic_buffered_file_write, which calls gfs2_prepare_write, which then attempts to grab the glock. Because we don't own the lock, gfs2_prepare_write gets GLR_TRYFAILED, which returns AOP_TRUNCATED_PAGE to generic_buffered_file_write. At this point generic_buffered_file_write loops around again and immediately retries the prepare_write. This means that the second process never gets off the processor to allow the process that holds the lock to finish its work and let go of the lock. This patch makes gfs2_glock_nq schedule() if it gets back GLR_TRYFAILED, which resolves this problem.
  Signed-off-by: Josef Whiter <jwhiter@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] make gfs2_writepages() static (Adrian Bunk, 2007-02-07, 1 file, -1/+2)
  On Mon, Jan 29, 2007 at 08:45:28PM -0800, Andrew Morton wrote:
  >...
  > Changes since 2.6.20-rc6-mm2:
  >...
  > git-gfs2-nmw.patch
  >...
  > git trees
  >...
  This patch makes the needlessly global gfs2_writepages() static.
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Unlock page on prepare_write try lock failure (Steven Whitehouse, 2007-02-07, 1 file, -1/+3)
  When the try lock of the glock failed in prepare_write we were incorrectly exiting this function with the page still locked. This was resulting in further I/O to this page hanging.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Add writepages for "data=writeback" mounts (Steven Whitehouse, 2007-02-05, 1 file, -0/+27)
  It occurred to me that although a gfs2-specific writepages for ordered writes and journaled data would be tricky, by hooking writepages only for "data=writeback" mounts we could take advantage of not needing buffer heads (we don't use them on the read side, nor have we for some time) and create much larger I/Os for the block layer. Using blktrace both before and after, it's possible to see that for large I/Os, most of the requests generated through writepages are now 1024 sectors after this patch is applied, as opposed to 8 sectors before.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
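  To put those request sizes in bytes (assuming blktrace's usual 512-byte sectors): 1024 sectors is 512 KiB per request after the patch, versus 8 sectors, i.e. a single 4 KiB page, before.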
* [GFS2] Fail over to readpage for stuffed files (Steven Whitehouse, 2007-02-05, 1 file, -25/+3)
  This is partially derived from a patch written by Russell Cattelan. It fixes a bug where there is a race between readpages and truncate by ignoring readpages for stuffed files. This is ok because a stuffed file will never be more than one block (minus sizeof(struct gfs2_dinode)) in size and block size is always less than page size, so we do not lose anything efficiency-wise by not doing readahead for stuffed files. They will have already been "read ahead" by the action of reading the inode in, in the first place.
  This is the remaining part of the fix for Red Hat bugzilla #218966 which had not yet made it upstream.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Russell Cattelan <cattelan@redhat.com>
* [GFS2] Fix DIO deadlock (Steven Whitehouse, 2007-02-05, 1 file, -29/+45)
  This patch fixes Red Hat bugzilla #212627 in which a deadlock occurs due to trying to take the i_mutex while holding a glock. The correct locking order is defined as i_mutex -> glock in all cases.
  I've left dealing with allocating writes. I know that we need to do that, but for now this should do the trick. We don't need to take the i_mutex on write, because the VFS has already taken it for us. On read we don't need it since the glock is enough protection. The reason that I've made some of the checks into a separate function is that we'll need to do the checks again in the allocating write case eventually, so this is partly in preparation for this. Likewise the return value test of != 1 might look a bit odd and that's because we'll need a third return value in case of requiring an allocation.
  I've made the change to deferred mode on the glock to ensure flushing read caches on other nodes. I notice that (using blktrace to look at what's going on) we appear to do a better job of large I/Os than ext3 after this patch (in terms of not splitting up the I/Os).
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Wendy Cheng <wcheng@redhat.com>
* [GFS2] mark_inode_dirty after write to stuffed file (Steven Whitehouse, 2006-11-30, 1 file, -1/+3)
  Writes to stuffed files were not being marked dirty correctly.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Remove unused function from inode.c (Steven Whitehouse, 2006-11-30, 1 file, -4/+4)
  The gfs2_glock_nq_m_atime function is unused in so far as it is only ever called with num_gh = 1, and this falls through to the gfs2_glock_nq_atime function, so we might as well call that directly.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Remove unused zero_readpage from stuffed_readpage (Russell Cattelan, 2006-11-30, 1 file, -16/+4)
  Stuffed files only consist of a maximum of (gfs2 block size - sizeof(struct gfs2_dinode)) bytes. Since the gfs2 block size is always less than page size, we will never see a call to stuffed_readpage for anything other than the first page in the file.
  Signed-off-by: Russell Cattelan <cattelan@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Fix page lock/glock deadlock (Steven Whitehouse, 2006-11-30, 1 file, -4/+9)
  This fixes a race between the glock and the page lock encountered during truncate in gfs2_readpage and gfs2_prepare_write. The gfs2_readpages function doesn't need the same fix since it only uses a try lock anyway, so it will fail back to gfs2_readpage in the case of a potential deadlock. This bug was spotted by Russell Cattelan.
  Cc: Russell Cattelan <cattelan@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Shrink gfs2_inode (6) - di_atime/di_mtime/di_ctime (Steven Whitehouse, 2006-11-30, 1 file, -4/+0)
  Remove the di_[amc]time fields and use the inode->i_[amc]time fields instead. This saves 24 bytes from the gfs2_inode.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Shrink gfs2_inode (4) - di_uid/di_gid (Steven Whitehouse, 2006-11-30, 1 file, -1/+1)
  Remove the duplicate di_uid/di_gid fields in favour of using inode->i_uid/inode->i_gid instead. This saves 8 bytes.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Shrink gfs2_inode (3) - di_mode (Steven Whitehouse, 2006-11-30, 1 file, -1/+0)
  This removes the duplicate di_mode field in favour of using the inode->i_mode field. This saves 4 bytes.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [PATCH] gfs2: ->readpages() fixes (OGAWA Hirofumi, 2006-11-03, 1 file, -7/+0)
  This just ignores the remaining pages and removes the unneeded unlock_pages().
  Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
  Cc: Steven French <sfrench@us.ibm.com>
  Cc: Miklos Szeredi <miklos@szeredi.hu>
  Acked-by: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [GFS2] Fix bmap to map extents properly (Steven Whitehouse, 2006-10-20, 1 file, -3/+3)
  This fix means that bmap will map extents of the length requested by the VFS rather than guessing at it, or just mapping one block at a time. The other callers of gfs2_block_map are audited to ensure they send the correct max extent lengths (i.e. set bh->b_size correctly).
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
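  A hedged sketch of the bh->b_size convention referred to above, with an assumed helper name; it is not the actual GFS2 code, just the general pattern used by get_block-style callers:

      /* Illustrative only: the caller states the maximum extent it wants in
       * bh.b_size; the filesystem's block-mapping routine shrinks b_size to
       * the extent it actually mapped. */
      static sector_t map_extent_example(struct inode *inode, sector_t lblock,
                                         unsigned int max_blocks,
                                         get_block_t *get_block)
      {
              struct buffer_head bh = { };
              int err;

              bh.b_size = max_blocks << inode->i_blkbits;  /* requested length */
              err = get_block(inode, lblock, &bh, 0);      /* 0 = no allocation */
              if (err || !buffer_mapped(&bh))
                      return 0;
              return bh.b_size >> inode->i_blkbits;        /* blocks mapped */
      }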
* [GFS2] Pass the correct value to kunmap_atomic (Russell Cattelan, 2006-10-12, 1 file, -3/+3)
  Pass kaddr rather than the (incorrect) struct page to kunmap_atomic.
  Signed-off-by: Russell Cattelan <cattelan@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
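  The bug class being fixed, shown generically with the kmap API of that era (the km_type argument has since been removed); the helper name is made up for illustration:

      static void copy_into_page_example(struct page *page, unsigned int offset,
                                         const void *src, size_t len)
      {
              void *kaddr = kmap_atomic(page, KM_USER0);

              memcpy(kaddr + offset, src, len);
              /* kunmap_atomic() wants the address returned by kmap_atomic(),
               * not the struct page; passing the page happened to compile
               * (void pointer parameter) but unmapped the wrong thing. */
              kunmap_atomic(kaddr, KM_USER0);
      }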
* [GFS2] Fix uninitialised variable (Steven Whitehouse, 2006-10-12, 1 file, -0/+1)
  This fixes a bug where, in certain cases, an uninitialised variable could cause a dereference of a NULL pointer in gfs2_commit_write(). Also a typo in a comment is fixed at the same time.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Fix a size calculation error (Russell Cattelan, 2006-10-12, 1 file, -2/+4)
  Fix a size calculation error. The size was incorrectly computed as a negative length and then passed to an unsigned parameter. This in turn would cause the allocator to think it needed enough meta data to store a gigabyte file for every file created.
  Signed-off-by: Russell Cattelan <cattelan@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
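  A standalone illustration of the bug class described (not the GFS2 code itself): a negative length computed in signed arithmetic is silently converted when it reaches an unsigned parameter, turning a small mistake into a request for an enormous allocation.

      #include <stdio.h>

      static void reserve_blocks(unsigned int nblocks)    /* unsigned parameter */
      {
              printf("reserving %u blocks\n", nblocks);
      }

      int main(void)
      {
              int start = 100, end = 60;
              int len = end - start;          /* miscalculation: -40 */

              reserve_blocks(len);            /* implicit conversion: -40 becomes
                                                 4294967256 as a 32-bit unsigned */
              return 0;
      }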
* [GFS2] Remove unneeded endian conversion (Steven Whitehouse, 2006-10-02, 1 file, -3/+11)
  In many places GFS2 was calling the endian conversion routines for an inode even when only a single field, or a few fields, might have changed. As a result we were copying lots of data needlessly. This patch replaces those calls with conversion of just the required fields in each case. This should be faster and easier to understand.
  There are still other places which suffer from this problem, but this is a start in the right direction.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2/DLM] Fix trailing whitespace (Steven Whitehouse, 2006-09-25, 1 file, -2/+2)
  As per Andrew Morton's request, removed trailing whitespace.
  Cc: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Tidy up meta_io code (Steven Whitehouse, 2006-09-21, 1 file, -1/+1)
  Fix a bug in the directory reading code, where we might have dereferenced a NULL pointer in case of OOM. Updated the directory code to use the new and improved version of gfs2_meta_ra(), which now returns the first block that was being read. Previously it was releasing it, requiring the following code to grab the block again at each point it was called. Also turned off readahead on directory lookups since we are reading a hash table, and therefore reading the entries in order is very unlikely. Readahead is still used for all other calls to the directory reading function (e.g. when growing the hash table).
  Removed the DIO_START constant. Everywhere this was used, it was used to unconditionally start I/O aside from a couple of places, so I've removed it and made the couple of exceptions to this rule into separate functions. Also hunted through the other DIO flags and removed them as arguments from functions which were always called with the same combination of arguments.
  Updated gfs2_meta_indirect_buffer to be a bit more efficient and hopefully also a bit easier to read.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Remove unused constants (Steven Whitehouse, 2006-09-20, 1 file, -2/+2)
  Three of the DIO constants were not being used, so remove them.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* [GFS2] Export lm_interface to kernel headers (Fabio Massimo Di Nitto, 2006-09-19, 1 file, -1/+1)
  lm_interface.h has a few out-of-tree clients such as GFS1 and userland tools. Right now, these clients keep a copy of the file in their build tree that can go out of sync. Move lm_interface.h to include/linux, export it to userland and clean up fs/gfs2 to use the new location.
  Signed-off-by: Fabio M. Di Nitto <fabbione@ubuntu.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>