author		Jeff Layton <jlayton@redhat.com>	2013-06-21 08:58:19 -0400
committer	Al Viro <viro@zeniv.linux.org.uk>	2013-06-29 12:57:45 +0400
commit		3999e49364193f7dbbba66e2be655fe91ba1fced (patch)
tree		5971637ac3b15d5d72797d4050ee35216b9fede1 /Documentation
parent		48f74186546cd5929397856eab209ebcb5692d11 (diff)
locks: add a new "lm_owner_key" lock operation
Currently, the hashing that the locking code uses to add these values to the blocked_hash is simply calculated using the fl_owner field. That's valid in most cases except for server-side lockd, which validates the owner of a lock based on fl_owner and fl_pid.

In the case where you have a small number of NFS clients doing a lot of locking between different processes, you could end up with all the blocked requests sitting in a very small number of hash buckets.

Add a new lm_owner_key operation to the lock_manager_operations that will generate an unsigned long to use as the key in the hashtable. That function is only implemented for server-side lockd, and simply XORs the fl_owner and fl_pid.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
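Based only on the description above, a server-side lockd implementation of the new callback might look roughly like the following sketch. The function and structure names are illustrative, not taken from the patch; only the XOR of fl_owner and fl_pid comes from the commit message.

#include <linux/fs.h>

/*
 * Sketch of an lm_owner_key callback for server-side lockd: fold fl_pid
 * into the key so that blocked requests from different processes on the
 * same NFS client spread across more blocked_hash buckets.
 */
static unsigned long example_owner_key(struct file_lock *fl)
{
	return (unsigned long)fl->fl_owner ^ (unsigned long)fl->fl_pid;
}

static const struct lock_manager_operations example_lock_ops = {
	/* other callbacks (lm_compare_owner, lm_notify, ...) omitted */
	.lm_owner_key	= example_owner_key,
};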
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/filesystems/Locking	16
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index c2963a7..2db7c9e 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -349,6 +349,7 @@ fl_release_private: maybe no
----------------------- lock_manager_operations ---------------------------
prototypes:
	int (*lm_compare_owner)(struct file_lock *, struct file_lock *);
+	unsigned long (*lm_owner_key)(struct file_lock *);
	void (*lm_notify)(struct file_lock *);	/* unblock callback */
	int (*lm_grant)(struct file_lock *, struct file_lock *, int);
	void (*lm_break)(struct file_lock *);	/* break_lease callback */
@@ -358,16 +359,21 @@ locking rules:
			inode->i_lock	file_lock_lock	may block
lm_compare_owner:	yes[1]		maybe		no
+lm_owner_key		yes[1]		yes		no
lm_notify:		yes		yes		no
lm_grant:		no		no		no
lm_break:		yes		no		no
lm_change		yes		no		no
-[1]: ->lm_compare_owner is generally called with *an* inode->i_lock held. It
-may not be the i_lock of the inode for either file_lock being compared! This is
-the case with deadlock detection, since the code has to chase down the owners
-of locks that may be entirely unrelated to the one on which the lock is being
-acquired. When doing a search for deadlocks, the file_lock_lock is also held.
+[1]: ->lm_compare_owner and ->lm_owner_key are generally called with
+*an* inode->i_lock held. It may not be the i_lock of the inode
+associated with either file_lock argument! This is the case with deadlock
+detection, since the code has to chase down the owners of locks that may
+be entirely unrelated to the one on which the lock is being acquired.
+For deadlock detection however, the file_lock_lock is also held. The
+fact that these locks are held ensures that the file_locks do not
+disappear out from under you while doing the comparison or generating an
+owner key.
--------------------------- buffer_head -----------------------------------
prototypes:
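For illustration only, the key produced by ->lm_owner_key might be consumed along these lines when hashing a blocked request. The helper names, the fallback to fl_owner, and the table size are assumptions, not part of the patch shown here.

#include <linux/fs.h>
#include <linux/hash.h>

#define EXAMPLE_HASH_BITS	7	/* assumed bucket count, for illustration */

/*
 * Hypothetical helper: prefer the lock manager's lm_owner_key when one
 * is provided, otherwise fall back to the fl_owner pointer itself.
 */
static unsigned long example_blocked_key(struct file_lock *fl)
{
	if (fl->fl_lmops && fl->fl_lmops->lm_owner_key)
		return fl->fl_lmops->lm_owner_key(fl);
	return (unsigned long)fl->fl_owner;
}

/* Pick a bucket index for a blocked_hash-style table from that key. */
static unsigned int example_blocked_bucket(struct file_lock *fl)
{
	return hash_long(example_blocked_key(fl), EXAMPLE_HASH_BITS);
}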