Commit messages
This driver supports the Maxim MAX6650 and MAX6651 fan speed
monitoring and control chips.
Signed-off-by: Hans J. Koch <hjk@linutronix.de>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
This lets us get rid of macro-generated functions and shrinks the
driver size by about 7%.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Also use pr_info instead of printk.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
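As a side note, the change is mechanical; a tiny illustration (the message text below is made up, not taken from the driver):

  /* Before: printk() with an explicit log level. */
  printk(KERN_INFO "found fan controller at 0x%x\n", address);

  /* After: pr_info() expands to printk(KERN_INFO ...) and is shorter. */
  pr_info("found fan controller at 0x%x\n", address);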
Convert the smsc47m1 driver from the nonsensical i2c-isa hack to a
regular platform driver.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
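For context, a minimal sketch of the shape such a converted driver takes; the names are illustrative and the probe/remove bodies are left empty, so this is not the actual smsc47m1 code:

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/platform_device.h>

  #define DRVNAME "smsc47m1"

  static int example_probe(struct platform_device *pdev)
  {
      /* The real driver maps the register region and creates the
       * sysfs files here. */
      return 0;
  }

  static int example_remove(struct platform_device *pdev)
  {
      /* ...and tears them down here. */
      return 0;
  }

  static struct platform_driver example_driver = {
      .driver = {
          .owner = THIS_MODULE,
          .name  = DRVNAME,
      },
      .probe   = example_probe,
      .remove  = example_remove,
  };

  static int __init example_init(void)
  {
      /* The real init code also detects the Super-I/O chip and
       * registers a matching platform device before this call. */
      return platform_driver_register(&example_driver);
  }

  static void __exit example_exit(void)
  {
      platform_driver_unregister(&example_driver);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");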
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Convert the w83627hf driver from the nonsensical i2c-isa hack to a
regular platform driver.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Some preliminary cleanups to the w83627hf hardware monitoring driver,
to make its conversion to a platform driver easier:
* Add missing include ioport.h
* Drop unused enum value any_chip
* Group module parameters
* Define and use DRVNAME
* Drop unused struct member lm75
* Move the handling of force_addr and device activation to
w83627hf_find
* Consistently use local type in w83627hf_init_client
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Update the VID type for certain VIA processors and remove
the Itanium entries.
Signed-off-by: Rudolf Marek <r.marek@assembler.cz>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Some hardware monitoring drivers create the VID/VRM interface files
conditionally depending on the chip model or configuration. We should
only call vid_which_vrm() when we are actually going to create the
files. Not only is it more logical and efficient that way, but it also
prevents printing unnecessary warnings such as the one reported here:
http://lists.lm-sensors.org/pipermail/lm-sensors/2007-February/018954.html
Signed-off-by: Jean Delvare <khali@linux-fr.org>
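A minimal sketch of the intended pattern (the has_vid flag and the attribute names below are illustrative, not taken from a particular driver):

  /* Query the VRM version, and create the VID/VRM files, only when
   * the chip actually provides a VID input. */
  if (data->has_vid) {
      data->vrm = vid_which_vrm();
      err = device_create_file(dev, &dev_attr_cpu0_vid);
      if (!err)
          err = device_create_file(dev, &dev_attr_vrm);
  }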
The smsc47m1 driver uses a mutex to protect the accesses to the
hardware registers. It really doesn't need any protection, as the
register space is flat. Get rid of that mutex for a smaller and
faster driver.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
The new SMSC LPC47M292 Super-I/O chip is a bit different from the
previous ones: it supports a third fan, but unfortunately the pin
configuration registers are different.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
My understanding of the resource management in the Linux 2.6 device
driver model is that the devices should declare their resources, and
then when a driver attaches to a device, it should request the
resources it will be using, so as to mark them busy. This is how the
PCI and PNP subsystems work; you can clearly see the two levels of
resources (declaration and request) in /proc/ioports for these
devices.
So I believe that our platform hardware monitoring drivers should
follow the same logic. At the moment, we only declare the resources
but we do not request them. This patch adds the I/O region request
and release calls.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Juerg Haefliger <juergh@gmail.com>
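To illustrate the two levels, here is a hedged sketch with placeholder names and sizes; the device-side code declares the region, and the driver requests it in probe() so that it shows up as busy in /proc/ioports:

  #include <linux/ioport.h>
  #include <linux/platform_device.h>

  #define DRVNAME        "example_hwmon"  /* placeholder */
  #define REGION_LENGTH  8                /* placeholder */

  /* Device side: declare the I/O resource when creating the device. */
  static int example_add_resource(struct platform_device *pdev,
                                  unsigned short address)
  {
      struct resource res = {
          .start = address,
          .end   = address + REGION_LENGTH - 1,
          .name  = DRVNAME,
          .flags = IORESOURCE_IO,
      };

      return platform_device_add_resources(pdev, &res, 1);
  }

  /* Driver side: request the region in probe(), release it in remove(). */
  static int example_probe(struct platform_device *pdev)
  {
      struct resource *res = platform_get_resource(pdev, IORESOURCE_IO, 0);

      if (!request_region(res->start, REGION_LENGTH, DRVNAME))
          return -EBUSY;
      return 0;
  }

  static int example_remove(struct platform_device *pdev)
  {
      struct resource *res = platform_get_resource(pdev, IORESOURCE_IO, 0);

      release_region(res->start, REGION_LENGTH);
      return 0;
  }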
The new SMSC LPC47M292 Super-I/O chip includes a hardware monitoring
block which is compatible with those of the LPC47M192.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Hartmut Rick <linux@rick.claranet.de>
* master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SCSI] esp_scsi: Fix section mismatch warnings.
[VIDEO] sunxvr2500: Fix PCI device ID table.
Signed-off-by: Martin Habets <errandir_news@mph.eclipse.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Noticed by Meelis Roos.
Signed-off-by: David S. Miller <davem@davemloft.net>
More fallout from the removal of "struct subsystem" from the core device
model.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
[IA64] update memory attribute aliasing documentation & test cases
[IA64] fail mmaps that span areas with incompatible attributes
[IA64] allow WB /sys/.../legacy_mem mmaps
[IA64] make ioremap avoid unsupported attributes
[IA64] rename ioremap variables to match i386
[IA64] relax per-cpu TLB requirement to DTC
[IA64] remove per-cpu ia64_phys_stacked_size_p8
[IA64] Fix example error injection program
[IA64] Itanium MC Error Injection Tool: pal_mc_error_inject() interface
[IA64] Itanium MC Error Injection Tool: Makefile changes
[IA64] Itanium MC Error Injection Tool: Driver sysfs interface
[IA64] Itanium MC Error Injection Tool: Doc and sample application
[IA64] Itanium MC Error Injection Tool: Kernel configuration
Updates documentation and adds some test cases.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Example memory map (from HP sx1000 with VGA enabled):
0x00000 - 0x9FFFF supports only WB (cacheable) access
0xA0000 - 0xBFFFF supports only UC (uncacheable) access
0xC0000 - 0xFFFFF supports only WB (cacheable) access
Some versions of X map the entire 0x00000-0xFFFFF area at once. With the
example above, this mmap must fail because there's no memory attribute that's
safe for the entire area.
Prior to this patch, we performed the mmap with a UC mapping. When X
accessed the WB memory at 0xC0000, it caused an MCA. The crash can happen
when mapping 0xC0000 from either /dev/mem or a /sys/.../legacy_mem file.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
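A hypothetical userspace illustration of the failure mode being addressed (not part of the patch): mapping the whole legacy range in one call now fails cleanly instead of receiving an unsafe UC mapping:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      int fd = open("/dev/mem", O_RDONLY);
      void *p;

      if (fd < 0)
          return 1;
      /* Map 0x00000-0xFFFFF in one go, as some X servers do. */
      p = mmap(NULL, 0x100000, PROT_READ, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED)
          perror("mmap");  /* expected: no single attribute is safe
                            * for the whole range in the map above */
      return 0;
  }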
Allow cacheable mmaps of legacy_mem if WB access is supported for the region.
The "legacy_mem" file often contains a shadow option ROM, and some versions of
X depend on this.
Tim Yamin <plasm@roo.me.uk> reported that this change fixes X on a Dell
PowerEdge 3250.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Example memory map (from HP sx1000 with VGA enabled):
0x00000 - 0x9FFFF supports only WB (cacheable) access
0xA0000 - 0xBFFFF supports only UC (uncacheable) access
0xC0000 - 0xFFFFF supports only WB (cacheable) access
pci_read_rom() indirectly uses ioremap(0xC0000) to read the shadow VGA option
ROM. ioremap() used to default to a 16MB or 64MB UC kernel identity mapping,
which would cause an MCA when reading 0xC0000 since only WB is supported there.
X reads the option ROM to initialize devices. A smaller test case is:
# echo 1 > /sys/bus/pci/devices/0000:aa:03.0/rom
# cp /sys/bus/pci/devices/0000:aa:03.0/rom x
To avoid this, we can use the same ioremap_page_range() strategy that most
architectures use for all ioremaps. These page table mappings come out of the
vmalloc area. On ia64, these are in region 5 (0xA... addresses) and typically
use 16KB or 64KB mappings instead of 16MB or 64MB mappings. The smaller
mappings give more flexibility to use the correct attributes.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
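For reference, a generic sketch of the ioremap_page_range() strategy mentioned above, simplified and not the actual ia64 implementation; in particular, the real code chooses WB or UC from the EFI memory attributes rather than the PAGE_KERNEL placeholder used here:

  #include <linux/io.h>
  #include <linux/mm.h>
  #include <linux/vmalloc.h>

  static void __iomem *example_ioremap(unsigned long phys_addr,
                                       unsigned long size)
  {
      unsigned long offset = phys_addr & ~PAGE_MASK;
      pgprot_t prot = PAGE_KERNEL;   /* placeholder attribute */
      struct vm_struct *area;
      unsigned long vaddr;

      phys_addr &= PAGE_MASK;
      size = PAGE_ALIGN(size + offset);

      /* Grab a chunk of the vmalloc area (region 5 on ia64)... */
      area = get_vm_area(size, VM_IOREMAP);
      if (!area)
          return NULL;
      vaddr = (unsigned long)area->addr;

      /* ...and map it with page-granular mappings, so the attribute
       * can be chosen per page instead of per 16MB/64MB chunk. */
      if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
          vunmap(area->addr);
          return NULL;
      }
      return (void __iomem *)(vaddr + offset);
  }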
No functional change, just use the same names as i386.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Instead of pinning the per-cpu TLB entry into a DTR, use a DTC. This
frees up one TLB entry for the application, or even for the kernel if
the access pattern to the per-cpu data area has high temporal locality.
Since the per-cpu area is mapped at the top of region 7, we just need to
add a special case to alt_dtlb_miss. The physical address of the per-cpu
data is already conveniently stored in IA64_KR(PER_CPU_DATA). The latency
of alt_dtlb_miss is not affected, as all of the extra work can be hidden;
the handler was measured at 23 cycles both before and after the patch.
The performance effect is significant for applications that put a lot of
TLB pressure on the CPU. Workloads such as database online transaction
processing, or applications that use terabytes of memory, benefit the
most. Measurement with an industry-standard database benchmark showed a
gain of up to 1.6%, while smaller workloads such as CPU and Java
benchmarks also showed small improvements.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
It's not efficient to use a per-cpu variable just to store how many
physical stack registers a CPU has. From the first incarnation of ia64
up to the upcoming Montecito processor, that value has stayed glued to
96. Having the variable in memory means that the kernel burns an extra
cacheline access on every syscall and kernel exit path. Such a "static"
value is better served by the instruction patching utility that exists
today.
Convert ia64_phys_stacked_size_p8 to dynamic instruction patching. This
also has the pleasant side effect of eliminating an access to the
per-cpu area while psr.ic=0 in the kernel exit path. (That would also be
fixable by the per-cpu DTC work, but why bother?)
One might be concerned about the default value encoded in the
instructions in the kernel image, but it is not a problem, for two
reasons:
(1) cpu_init() is called at CPU initialization. There, we find out the
physical stack register size from PAL and patch two instructions in the
kernel exit code. The code in question cannot be executed before the
patching is done.
(2) The current implementation stores zero in ia64_phys_stacked_size_p8,
and that is the value the current kernel exit path loads. With the new
code, it is as if we stored a register size of 96 in
ia64_phys_stacked_size_p8, which is a better safety net. Given that (1)
can never fail, (2) is just a bonus.
All in all, this patch allows one less memory reference in the kernel
exit path, thus reducing syscall and interrupt return latency, and
avoids polluting potentially useful data in the CPU cache.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The program accessed its files using
/sys/devices/system/node/node0/cpu%d/err_inject/, a path that only exists
on CONFIG_NUMA=y systems. Better to use
/sys/devices/system/cpu/cpu%d/err_inject/, which is available on all
systems.
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch implements the pal_mc_error_inject() interface in the kernel. Both physical
mode and virtual mode are supported.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch has Makefile changes.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This kernel driver patch provides a sysfs interface for user applications to
call the pal_mc_error_inject() procedure.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch contains documentation and a sample application. Since the sample
application has ~1000 lines of code, it might not be suitable for the kernel
documentation in the kernel tree. If you think this is not a good place to
hold the sample application, please let me know and I'm open to other choices,
e.g. sourceforge etc.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This patch has kernel configuration changes for the MC Error Injection
Tool.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
* 'server-cluster-locking-api' of git://linux-nfs.org/~bfields/linux:
gfs2: nfs lock support for gfs2
lockd: add code to handle deferred lock requests
lockd: always preallocate block in nlmsvc_lock()
lockd: handle test_lock deferrals
lockd: pass cookie in nlmsvc_testlock
lockd: handle fl_grant callbacks
lockd: save lock state on deferral
locks: add fl_grant callback for asynchronous lock return
nfsd4: Convert NFSv4 to new lock interface
locks: add lock cancel command
locks: allow {vfs,posix}_lock_file to return conflicting lock
locks: factor out generic/filesystem switch from setlock code
locks: factor out generic/filesystem switch from test_lock
locks: give posix_test_lock same interface as ->lock
locks: make ->lock release private data before returning in GETLK case
locks: create posix-to-flock helper functions
locks: trivial removal of unnecessary parentheses
Add NFS lock support to GFS2.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Rewrite nlmsvc_lock() to use the asynchronous interface.
As with testlock, we answer nlm requests in nlmsvc_lock by first looking up
the block and then using the results we find in the block if B_QUEUED is
set, and calling vfs_lock_file() otherwise.
If this is a new lock request and we get an -EINPROGRESS return on a
non-blocking request, then we defer the request.
Also modify nlmsvc_unlock() to call the filesystem method if appropriate.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
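Roughly, the control flow described above looks like the following hedged sketch; the helper names and the exact command passed to vfs_lock_file() are simplified approximations, not the actual lockd code:

  block = nlmsvc_lookup_block(file, lock);      /* find or create the block */
  if (block->b_flags & B_QUEUED)
      return nlmsvc_deferred_result(block);     /* hypothetical helper: answer
                                                 * from the earlier deferral */

  error = vfs_lock_file(file->f_file, F_SETLK, &lock->fl, NULL);
  switch (error) {
  case 0:
      return nlm_granted;
  case -EAGAIN:
      return wait ? nlm_lck_blocked : nlm_lck_denied;
  case -EINPROGRESS:
      /* New non-blocking request: the filesystem will answer later
       * through fl_grant, so defer the NLM request. */
      return nlmsvc_defer_lock_rqst(rqstp, block);
  default:
      return nlm_lck_denied_nolocks;
  }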
Normally we could skip ever having to allocate a block in the case where
the client asks for a non-blocking lock, or asks for a blocking lock that
succeeds immediately.
However we're going to want to always look up a block first in order to
check whether we're revisiting a deferred lock call, and to be prepared to
handle the case where the filesystem returns -EINPROGRESS--in that case we
want to make sure the lock we've given the filesystem is the one embedded
in the block that we'll use to track the deferred request.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Rewrite nlmsvc_testlock() to use the new asynchronous interface: instead of
immediately doing a posix_test_lock(), we first look for a matching block.
If the subsequent test_lock returns anything other than -EINPROGRESS, we
then remove the block we've found and return the results.
If it returns -EINPROGRESS, then we defer the lock request.
In the case where the block we find in the first step has B_QUEUED set,
we bypass the vfs_test_lock entirely, instead using the block to decide how
to respond:
* with nlm_lck_denied if B_TIMED_OUT is set;
* with nlm_granted if B_GOT_CALLBACK is set;
* by dropping the request if neither B_TIMED_OUT nor B_GOT_CALLBACK is set.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
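The decision table above, condensed into a hedged sketch (helper and status names are approximations):

  /* Revisiting a deferred request: B_QUEUED is set, so answer from the
   * block's saved state instead of calling into the filesystem. */
  if (block->b_flags & B_QUEUED) {
      if (block->b_flags & B_TIMED_OUT)
          return nlm_lck_denied;
      if (block->b_flags & B_GOT_CALLBACK)
          return nlm_granted;
      return nlm_drop_reply;                    /* neither: drop the request */
  }

  /* First pass: ask the filesystem (or the generic code) to test it. */
  error = vfs_test_lock(file->f_file, &lock->fl);
  if (error == -EINPROGRESS)
      return nlmsvc_defer_lock_rqst(rqstp, block);  /* defer until fl_grant */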
Change NLM internal interface to pass more information for test lock; we
need this to make sure the cookie information is pushed down to the place
where we do request deferral, which is handled for testlock by the
following patch.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Add code to handle file system callback when the lock is finally granted.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
We need to keep some state for a pending asynchronous lock request, so this
patch adds that state to struct nlm_block.
This also adds a function which defers the request, by calling
rqstp->rq_chandle.defer and storing the resulting deferred request in a
nlm_block structure which we insert into lockd's global block list. That
new function isn't called yet, so it's dead code until a later patch.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
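A hedged sketch of what such a deferral helper looks like; the field and constant names below are approximations of the state described above, not necessarily the exact identifiers used by the patch:

  #include <linux/lockd/lockd.h>
  #include <linux/sunrpc/svc.h>

  /* Defer an NLM request: remember the svc-layer deferred request in the
   * block so the answer can be sent once the filesystem calls fl_grant. */
  static __be32 example_defer_lock_rqst(struct svc_rqst *rqstp,
                                        struct nlm_block *block)
  {
      block->b_flags |= B_QUEUED;
      nlmsvc_insert_block(block, NLM_TIMEOUT);  /* keep it on lockd's list */

      if (rqstp->rq_chandle.defer)
          block->b_deferred_req = rqstp->rq_chandle.defer(&rqstp->rq_chandle);
      if (block->b_deferred_req == NULL)
          return nlm_lck_denied_nolocks;        /* could not defer */
      return nlm_drop_reply;                    /* reply will come later */
  }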
Acquiring a lock on a cluster filesystem may require communication with
remote hosts, and to avoid blocking lockd or nfsd threads during such
communication, we allow the results to be returned asynchronously.
When a ->lock() call needs to block, the file system will return
-EINPROGRESS, and then later return the results with a call to the
routine in the fl_grant field of the lock_manager_operations struct.
This differs from the case when ->lock returns -EAGAIN to a blocking
lock request; in that case, the filesystem calls fl_notify when the lock
is granted, and the caller retries the original lock. So while
fl_notify is merely a hint to the caller that it should retry, fl_grant
actually communicates the final result of the lock operation (with the
lock already acquired in the successful case).
Therefore fl_grant takes a lock, a status and, for the test lock case, a
conflicting lock. We also allow fl_grant to return an error to the
filesystem, to handle the case where the fl_grant request arrives after
the lock manager has already given up waiting for it.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
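A conceptual sketch of the flow from a filesystem's point of view; the function and helper names are hypothetical, and only the -EINPROGRESS return and the fl_grant callback come from the interface described above:

  /* ->lock(): the cluster lock manager cannot answer immediately. */
  static int examplefs_lock(struct file *filp, int cmd, struct file_lock *fl)
  {
      if (examplefs_needs_cluster_roundtrip(fl))    /* hypothetical */
          return -EINPROGRESS;    /* result delivered later via fl_grant */
      return examplefs_lock_local(filp, cmd, fl);   /* hypothetical */
  }

  /* Called when the cluster finally answers: hand the final result (and,
   * for a test lock, any conflicting lock) back to the lock manager. */
  static void examplefs_lock_complete(struct file_lock *fl,
                                      struct file_lock *conf, int result)
  {
      if (fl->fl_lmops && fl->fl_lmops->fl_grant)
          fl->fl_lmops->fl_grant(fl, conf, result);
  }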
Convert NFSv4 to the new lock interface. We don't define any callback for now,
so we're not taking advantage of the asynchronous feature--that's less critical
for the multi-threaded nfsd than it is for the single-threaded lockd. But this
does allow cluster filesystems to export cluster-coherent locking to NFS.
Note that it's cluster filesystems that are the issue--of the filesystems that
define lock methods (nfs, cifs, etc.), most are not exportable by nfsd.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Lock managers need to be able to cancel pending lock requests. In the case
where the exported filesystem manages its own locks, it's not sufficient just
to call posix_unblock_lock(); we need to let the filesystem know what's
happening too.
We do this by adding a new fcntl lock command: FL_CANCELLK. Some day this
might also be made available to userspace applications that could benefit from
an asynchronous locking api.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
The nfsv4 protocol's lock operation, in the case of a conflict, returns
information about the conflicting lock.
It's unclear how clients can use this, so for now we're not going so far as to
add a filesystem method that can return a conflicting lock, but we may as well
return something in the local case when it's easy to.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Factor out the code that switches between generic and filesystem-specific lock
methods; eventually we want to call this from lock managers (lockd and nfsd)
too; currently they only call the generic methods.
This patch does that for all the setlk code.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
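The factored-out switch is essentially a small dispatcher; a simplified sketch:

  /* Dispatch a lock request: use the filesystem's own ->lock() method when
   * it defines one, otherwise fall back to the generic POSIX lock code. */
  int vfs_lock_file(struct file *filp, unsigned int cmd,
                    struct file_lock *fl, struct file_lock *conf)
  {
      if (filp->f_op && filp->f_op->lock)
          return filp->f_op->lock(filp, cmd, fl);
      else
          return posix_lock_file(filp, fl, conf);
  }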
Factor out the code that switches between generic and filesystem-specific lock
methods; eventually we want to call this from lock managers (lockd and nfsd)
too; currently they only call the generic methods.
This patch does that for test_lock.
Note that this hasn't been necessary until recently, because the few
filesystems that define ->lock() (nfs, cifs...) aren't exportable via NFS.
However GFS (and, in the future, other cluster filesystems) need to implement
their own locking to get cluster-coherent locking, and also want to be able to
export locking to NFS (lockd and NFSv4).
So we accomplish this by factoring out code such as this and exporting it for
the use of lockd and nfsd.
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
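And the corresponding dispatcher on the test side, again simplified:

  /* Same idea for testing a lock: let the filesystem answer if it has a
   * ->lock() method, otherwise use the generic posix_test_lock(). */
  int vfs_test_lock(struct file *filp, struct file_lock *fl)
  {
      if (filp->f_op && filp->f_op->lock)
          return filp->f_op->lock(filp, F_GETLK, fl);
      posix_test_lock(filp, fl);
      return 0;
  }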
posix_test_lock() and ->lock() do the same job but have gratuitously
different interfaces. Modify posix_test_lock() so the two agree,
simplifying some code in the process.
Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
The file_lock argument to ->lock is used to return the conflicting lock
when found. There's no reason for the filesystem to return any private
information with this conflicting lock, but the nfsv4 client does.
Fix the nfsv4 client, and modify locks.c to stop calling fl_release_private
for it in this case.
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Cc: "Trond Myklebust" <Trond.Myklebust@netapp.com>"