path: root/drivers/block
* floppy/scsi: fix setting of BIO flags  [Muthu Kumar, 2012-03-05, 1 file, -1/+1]

    Fix setting bio flags in drivers (sd_dif/floppy).

    Signed-off-by: Muthukumar R <muthur@gmail.com>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* asm-generic: architecture independent readq/writeq for 32bit environment  [Hitoshi Mitake, 2012-02-21, 1 file, -0/+2]

    This provides unified readq()/writeq() helper functions for 32-bit
    drivers. In some cases readq/writeq without atomicity is harmful, and
    the order of the io accesses has to be specified explicitly, so this
    patch adds two new header files containing non-atomic readq/writeq:

    - <asm-generic/io-64-nonatomic-lo-hi.h> provides non-atomic readq/writeq
      with the order lower address -> higher address
    - <asm-generic/io-64-nonatomic-hi-lo.h> provides non-atomic readq/writeq
      with the reversed order

    This allows us to remove some readq()s that were added to drivers when
    the default non-atomic ones were removed in commit dbee8a0affd5 ("x86:
    remove 32-bit versions of readq()/writeq()").

    Drivers which need readq/writeq but can live with the non-atomic ones
    must add the line:

        #include <asm-generic/io-64-nonatomic-lo-hi.h> /* or hi-lo.h */

    This is a no-op in 64-bit environments, and no other #ifdefs are
    required. So I believe this patch solves the problems of 1. driver-specific
    readq/writeq and 2. atomicity and order of io access.

    This patch was tested by building allyesconfig and allmodconfig as
    ARCH=x86 and ARCH=i386 on top of tip/master.

    Cc: Kashyap Desai <Kashyap.Desai@lsi.com>
    Cc: Len Brown <lenb@kernel.org>
    Cc: Ravi Anand <ravi.anand@qlogic.com>
    Cc: Vikas Chaudhary <vikas.chaudhary@qlogic.com>
    Cc: Matthew Garrett <mjg@redhat.com>
    Cc: Jason Uhlenkott <juhlenko@akamai.com>
    Cc: James Bottomley <James.Bottomley@parallels.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Roland Dreier <roland@purestorage.com>
    Cc: James Bottomley <jbottomley@parallels.com>
    Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
    Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Hitoshi Mitake <h.mitake@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
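    A minimal usage sketch of the new helpers (the register offset and function name below are made up for illustration):

        #include <linux/io.h>
        #include <asm-generic/io-64-nonatomic-lo-hi.h>  /* or io-64-nonatomic-hi-lo.h */

        /* On 64-bit builds readq() stays the native atomic access; on 32-bit
         * builds the lo-hi header composes it from two readl() calls, lower
         * address first, as described above. */
        static u64 read_stats_sketch(void __iomem *base)
        {
                return readq(base + 0x10);      /* 0x10: made-up register offset */
        }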
* Merge branch 'for-linus' of git://git.kernel.dk/linux-block  [Linus Torvalds, 2012-02-11, 4 files, -23/+34]

    Says Jens: "Time to push off some of the pending items. I really wanted
    to wait until we had the regression nailed, but alas it's not quite
    there yet. But I'm very confident that it's "just" a missing expire on
    exit, so fix from Tejun should be fairly trivial. I'm headed out for a
    week on the slopes.

    - Killing the barrier part of mtip32xx. It doesn't really support
      barriers, and it doesn't need them (writes are fully ordered).
    - A few fixes from Dan Carpenter, preventing overflows of integer
      multiplication.
    - A fixup for loop, fixing a previous commit that didn't quite solve
      the partial read problem from Dave Young.
    - A bio integer overflow fix from Kent Overstreet.
    - Improvement/fix of the door "keep locked" part of the cdrom shared
      code from Paolo Benzini.
    - A few cfq fixes from Shaohua Li.
    - A fix for bsg sysfs warning when removing a file it did not create
      from Stanislaw Gruszka.
    - Two fixes for floppy from Vivek, preventing a crash.
    - A few block core fixes from Tejun. One killing the over-optimized
      ioc exit path, cleaning that up nicely. Two others fixing an oops on
      elevator switch, due to calling into the scheduler merge check code
      without holding the queue lock."

    * 'for-linus' of git://git.kernel.dk/linux-block:
      block: fix lockdep warning on io_context release put_io_context()
      relay: prevent integer overflow in relay_open()
      loop: zero fill bio instead of return -EIO for partial read
      bio: don't overflow in bio_get_nr_vecs()
      floppy: Fix a crash during rmmod
      floppy: Cleanup disk->queue before caling put_disk() if add_disk() was never called
      cdrom: move shared static to cdrom_device_info
      bsg: fix sysfs link remove warning
      block: don't call elevator callbacks for plug merges
      block: separate out blk_rq_merge_ok() and blk_try_merge() from elevator functions
      mtip32xx: removed the irrelevant argument of mtip_hw_submit_io() and the unused member of struct driver_data
      block: strip out locking optimization in put_io_context()
      cdrom: use copy_to_user() without the underscores
      block: fix ioc locking warning
      block: fix NULL icq_cache reference
      block,cfq: change code order
| * loop: zero fill bio instead of return -EIO for partial read  [Dave Young, 2012-02-08, 1 file, -11/+13]

    commit 8268f5a741 ("deny partial write for loop dev fd") tried to fix
    the loop device partial read information leak problem, but it changed
    the semantics of read behaviour. When we read beyond the end of the
    device we should get 0 bytes, which is normal behaviour; we should not
    just return -EIO.

    Instead of returning -EIO, zero out the bio to avoid an information
    leak in the case of a partial read.

    Signed-off-by: Dave Young <dyoung@redhat.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Tested-by: Jeff Moyer <jmoyer@redhat.com>
    Cc: Dmitry Monakhov <dmonakhov@sw.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
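    A sketch of the zero-fill behaviour described above (the helper name is hypothetical and independent of loop's actual read path; a caller would only invoke it when 0 <= bytes_read < bvec->bv_len):

        #include <linux/bio.h>
        #include <linux/highmem.h>
        #include <linux/string.h>

        /* Zero the part of a segment that a short read from the backing file
         * did not fill, instead of failing the whole bio with -EIO. */
        static void zero_partial_segment(struct bio_vec *bvec, unsigned int bytes_read)
        {
                void *kaddr = kmap_atomic(bvec->bv_page);

                memset(kaddr + bvec->bv_offset + bytes_read, 0,
                       bvec->bv_len - bytes_read);
                kunmap_atomic(kaddr);
                flush_dcache_page(bvec->bv_page);
        }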
| * floppy: Fix a crash during rmmod  [Vivek Goyal, 2012-02-08, 1 file, -0/+9]

    The floppy driver does not call add_disk() on all the drives, hence we
    don't take a gendisk reference on the request queue for those drives.
    Don't call put_disk() with disk->queue set, otherwise we try to put a
    reference we never took.

    Reported-and-tested-by: Dirk Gouders <gouders@et.bocholt.fh-gelsenkirchen.de>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * floppy: Cleanup disk->queue before caling put_disk() if add_disk() was never called  [Vivek Goyal, 2012-02-08, 1 file, -1/+7]

    add_disk() takes a gendisk reference on the request queue. If the
    driver failed during initialization and never called add_disk(), that
    extra reference is not taken. That reference is put in put_disk().

    The floppy driver allocates the disk, allocates the queue, sets
    disk->queue and then realizes that the floppy controller is not
    present. It tries to tear down everything and puts down a reference in
    put_disk() which was never taken.

    In such error cases, clean up disk->queue before calling put_disk() so
    that we never try to put down a reference that was never taken in the
    first place.

    Reported-and-tested-by: Suresh Jayaraman <sjayaraman@suse.com>
    Tested-by: Dirk Gouders <gouders@et.bocholt.fh-gelsenkirchen.de>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
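    The error-path rule both floppy fixes follow, as a sketch (the helper name is made up):

        #include <linux/blkdev.h>
        #include <linux/genhd.h>

        /* If add_disk() was never called, drop the queue ourselves and clear
         * disk->queue so that put_disk() does not put a reference it never took. */
        static void cleanup_unregistered_disk(struct gendisk *disk)
        {
                if (disk->queue) {
                        blk_cleanup_queue(disk->queue);
                        disk->queue = NULL;
                }
                put_disk(disk);
        }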
| * mtip32xx: removed the irrelevant argument of mtip_hw_submit_io() and the unused member of struct driver_data  [Asai Thambi S P, 2012-02-07, 2 files, -11/+5]

    Removed the following:
      * irrelevant argument 'barrier' of mtip_hw_submit_io()
      * unused member 'eh_active' of struct driver_data

    Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
    Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client  [Linus Torvalds, 2012-02-02, 1 file, -2/+5]

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
      rbd: fix safety of rbd_put_client()
      rbd: fix a memory leak in rbd_get_client()
      ceph: create a new session lock to avoid lock inversion
      ceph: fix length validation in parse_reply_info()
      ceph: initialize client debugfs outside of monc->mutex
      ceph: change "ceph.layout" xattr to be "ceph.file.layout"
| * rbd: fix safety of rbd_put_client()  [Alex Elder, 2012-02-02, 1 file, -2/+4]

    The rbd_client structure uses a kref to arrange for cleaning up and
    freeing an instance when its last reference is dropped. The cleanup
    routine is rbd_client_release(), and one of the things it does is
    delete the rbd_client from rbd_client_list. It acquires node_lock to
    do so, but the way it is done is still not safe.

    The problem is that when attempting to reuse an existing rbd_client,
    the structure found might already be in the process of getting
    destroyed and cleaned up.

    Here's the scenario, with "CLIENT" representing an existing rbd_client
    that's involved in the race:

      Thread on CPU A                  | Thread on CPU B
      ---------------                  | ---------------
      rbd_put_client(CLIENT)           | rbd_get_client()
        kref_put()                     |   (acquires node_lock)
          kref->refcount becomes 0     |   __rbd_client_find() returns CLIENT
          calls rbd_client_release()   |   kref_get(&CLIENT->kref);
                                       |   (releases node_lock)
            (acquires node_lock)       |
            deletes CLIENT from list   |   ...and starts using CLIENT...
            (releases node_lock)       |
            and frees CLIENT           |   <-- but CLIENT gets freed here

    Fix this by having rbd_put_client() acquire node_lock. The result
    could still be improved, but at least it avoids this problem.

    Signed-off-by: Alex Elder <elder@dreamhost.com>
    Signed-off-by: Sage Weil <sage@newdream.net>
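    A sketch of the locking change the message describes; node_lock and the rbd names come from the message itself, and this assumes rbd_client_release() no longer takes node_lock on its own (the caller now holds it):

        #include <linux/kref.h>
        #include <linux/spinlock.h>

        static void rbd_put_client_sketch(struct rbd_client *rbdc)
        {
                spin_lock(&node_lock);
                /* If this is the last reference, rbd_client_release() runs here
                 * and deletes the client from rbd_client_list while the lock is
                 * still held, so __rbd_client_find() can no longer return it. */
                kref_put(&rbdc->kref, rbd_client_release);
                spin_unlock(&node_lock);
        }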
| * rbd: fix a memory leak in rbd_get_client()  [Alex Elder, 2012-02-02, 1 file, -0/+1]

    If an existing rbd client is found to be suitable for use in
    rbd_get_client(), the rbd_options structure is not being freed as it
    should. Fix that.

    Signed-off-by: Alex Elder <elder@dreamhost.com>
    Signed-off-by: Sage Weil <sage@newdream.net>
* | nvme: fix merge error due to change of 'make_request_fn' fn type  [Linus Torvalds, 2012-01-18, 1 file, -7/+1]

    The type of 'make_request_fn' changed in 5a7bbad27a4 ("block: remove
    support for bio remapping from ->make_request"), but the merge of the
    nvme driver didn't take that into account, and as a result the driver
    would compile with a warning:

      drivers/block/nvme.c: In function 'nvme_alloc_ns':
      drivers/block/nvme.c:1336:2: warning: passing argument 2 of 'blk_queue_make_request' from incompatible pointer type [enabled by default]
      include/linux/blkdev.h:830:13: note: expected 'void (*)(struct request_queue *, struct bio *)' but argument is of type 'int (*)(struct request_queue *, struct bio *)'

    It's benign, but the warning is annoying.

    Reported-by: Stephen Rothwell <sfr@canb.auug.org>
    Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
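    The shape of the fix, given the expected type quoted in the warning (handler name illustrative, body elided):

        #include <linux/blkdev.h>
        #include <linux/bio.h>

        /* After 5a7bbad27a4, ->make_request_fn returns void. */
        static void nvme_make_request_sketch(struct request_queue *q, struct bio *bio)
        {
                /* queue or submit the bio; nothing is returned to the block layer */
        }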
* | Merge git://git.infradead.org/users/willy/linux-nvme  [Linus Torvalds, 2012-01-18, 3 files, -0/+1757]

    * git://git.infradead.org/users/willy/linux-nvme: (105 commits)
      NVMe: Set number of queues correctly
      NVMe: Version 0.8
      NVMe: Set queue flags correctly
      NVMe: Simplify nvme_unmap_user_pages
      NVMe: Mark the end of the sg list
      NVMe: Fix DMA mapping for admin commands
      NVMe: Rename IO_TIMEOUT to NVME_IO_TIMEOUT
      NVMe: Merge the nvme_bio and nvme_prp data structures
      NVMe: Change nvme_completion_fn to take a dev
      NVMe: Change get_nvmeq to take a dev instead of a namespace
      NVMe: Simplify completion handling
      NVMe: Update Identify Controller data structure
      NVMe: Implement doorbell stride capability
      NVMe: Version 0.7
      NVMe: Don't probe namespace 0
      Fix calculation of number of pages in a PRP List
      NVMe: Create nvme_identify and nvme_get_features functions
      NVMe: Fix memory leak in nvme_dev_add()
      NVMe: Fix calls to dma_unmap_sg
      NVMe: Correct sg list setup in nvme_map_user_pages
      ...
| * | NVMe: Set number of queues correctly  [Matthew Wilcox, 2012-01-11, 1 file, -3/+17]

    The number of submission & completion queues should be set by calling
    Set Features, not Get Features.

    Reported-by: Kwok Kong <Kwok.Kong@idt.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Version 0.8  [Matthew Wilcox, 2012-01-10, 1 file, -1/+1]

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Set queue flags correctly  [Matthew Wilcox, 2012-01-10, 1 file, -2/+4]

    QUEUE_FLAG_* are flags (other than QUEUE_FLAG_DEFAULT), so they cannot
    be ORed together. Set the queue flags using queue_flag_set_unlocked().

    Reported-by: Donald Wood <donald.e.wood@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
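    A sketch of the corrected pattern (which flags get set here is illustrative):

        #include <linux/blkdev.h>

        static void set_ns_queue_flags_sketch(struct request_queue *q)
        {
                /* QUEUE_FLAG_* are bit numbers, not masks, so set them one by one. */
                queue_flag_set_unlocked(QUEUE_FLAG_NOMERGES, q);
                queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
        }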
| * | NVMe: Simplify nvme_unmap_user_pages  [Matthew Wilcox, 2012-01-10, 1 file, -10/+9]

    By using the iod->nents field (the same way other I/O paths do), we
    can avoid recalculating the number of sg entries at unmap time, and
    make nvme_unmap_user_pages() easier to call.

    Also, use the 'write' parameter instead of assuming DMA_FROM_DEVICE.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Mark the end of the sg list  [Matthew Wilcox, 2012-01-10, 1 file, -0/+1]

    For user I/O and admin commands, we were forgetting to mark the end of
    the SG list.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
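    A sketch of the pattern (page handling simplified, names made up): build the table entry by entry, then mark where it actually ends so the DMA layer stops iterating there.

        #include <linux/kernel.h>
        #include <linux/mm.h>
        #include <linux/scatterlist.h>

        static void build_sg_sketch(struct scatterlist *sg, struct page **pages,
                                    int count, unsigned int length)
        {
                int i;

                sg_init_table(sg, count);
                for (i = 0; i < count; i++) {
                        unsigned int len = min_t(unsigned int, length, PAGE_SIZE);

                        sg_set_page(&sg[i], pages[i], len, 0);
                        length -= len;
                }
                sg_mark_end(&sg[count - 1]);
        }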
| * | NVMe: Fix DMA mapping for admin commands  [Matthew Wilcox, 2012-01-10, 1 file, -2/+4]

    We were always mapping as DMA_FROM_DEVICE then unmapping with
    DMA_TO_DEVICE, which was clearly not correct. Follow the same pattern
    as nvme_submit_io() and key off the bottom bit of the opcode to
    determine whether this is a read or a write.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
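    The opcode rule referred to above, as a small helper (name hypothetical); per the NVMe opcode convention, bit 0 set means a host-to-controller transfer:

        #include <linux/dma-mapping.h>
        #include <linux/types.h>

        static enum dma_data_direction dma_dir_from_opcode(u8 opcode)
        {
                return (opcode & 1) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
        }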
| * | NVMe: Rename IO_TIMEOUT to NVME_IO_TIMEOUT  [Matthew Wilcox, 2012-01-10, 1 file, -4/+4]

    IO_TIMEOUT is a little too generic and might be used by other parts of
    the kernel in the future.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Merge the nvme_bio and nvme_prp data structures  [Matthew Wilcox, 2012-01-10, 1 file, -115/+124]

    The new merged data structure is called nvme_iod. This improves
    performance for mid-sized I/Os (in the 16k range) since we save a
    memory allocation. It is also a slightly simpler interface to use.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Change nvme_completion_fn to take a dev  [Matthew Wilcox, 2012-01-10, 1 file, -18/+25]

    The queue is only needed for some rare occasions, and it's more
    consistent to pass the device around.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Change get_nvmeq to take a dev instead of a namespace  [Matthew Wilcox, 2012-01-10, 1 file, -4/+4]

    Upcoming patches require calling get_nvmeq when we don't have a
    namespace. Some callers already have the device in a local variable
    anyway.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Simplify completion handling  [Matthew Wilcox, 2012-01-10, 1 file, -86/+81]

    Instead of encoding the handler type in the bottom two bits of the
    per-completion context pointer, store the handler function as well as
    the context pointer. This gives us more flexibility and the code is
    clearer. It comes at the cost of an extra 8k of memory per queue, but
    this feels like a reasonable price to pay.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Implement doorbell stride capability  [Matthew Wilcox, 2011-11-04, 1 file, -3/+14]

    The doorbell stride allows devices to spread out their doorbells
    instead of packing them tightly. This feature was added as part of
    ECN 003.

    This patch also enables support for more than 512 queues :-)

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Version 0.7  [Matthew Wilcox, 2011-11-04, 1 file, -1/+1]

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Don't probe namespace 0  [Matthew Wilcox, 2011-11-04, 1 file, -1/+1]

    ECN 001 documented that namespace 0 is not valid. Sending an Identify
    with CNS of 0 and Namespace of 0 is an undefined command.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | Fix calculation of number of pages in a PRP List  [Nisheeth Bhat, 2011-11-04, 1 file, -7/+7]

    The existing calculation underestimated the number of pages required
    as it did not take into account the pointer at the end of each page.
    The replacement calculation may overestimate the number of pages
    required if the last page in the PRP List is entirely full. By using
    ->npages as a counter as we fill in the pages, we ensure that we don't
    try to free a page that was never allocated.

    Signed-off-by: Nisheeth Bhat <nisheeth.bhat@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
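    The arithmetic described above, sketched (helper name made up): each PRP entry is 8 bytes and the last entry of a full page points to the next PRP list page, so only PAGE_SIZE/8 - 1 entries per page describe data.

        #include <linux/kernel.h>
        #include <linux/mm.h>

        static int prp_list_pages_sketch(unsigned int nprps)
        {
                return DIV_ROUND_UP(nprps, (PAGE_SIZE / 8) - 1);
        }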
| * | NVMe: Create nvme_identify and nvme_get_features functions  [Matthew Wilcox, 2011-11-04, 1 file, -33/+43]

    Instead of open-coding calls to nvme_submit_admin_cmd, these small
    wrappers are simpler to use (the patch removes 14 lines from
    nvme_dev_add() for example).

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Fix memory leak in nvme_dev_add()  [Matthew Wilcox, 2011-11-04, 1 file, -2/+2]

    The driver was allocating 8k of memory, then freeing 4k of it.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Fix calls to dma_unmap_sg  [Nisheeth Bhat, 2011-11-04, 1 file, -6/+4]

    dma_unmap_sg() must be called with the same 'nents' passed to
    dma_map_sg(), not the number returned from dma_map_sg().

    Signed-off-by: Nisheeth Bhat <nisheeth.bhat@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
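    The DMA-API rule the fix follows, sketched (device and scatterlist setup assumed to be done elsewhere): unmap with the nents originally passed to dma_map_sg(), not the mapped count it returned.

        #include <linux/dma-mapping.h>
        #include <linux/errno.h>
        #include <linux/scatterlist.h>

        static int transfer_sketch(struct device *dev, struct scatterlist *sg, int nents)
        {
                int count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);

                if (!count)
                        return -ENOMEM;
                /* ... hand the 'count' mapped segments to the hardware ... */
                dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);    /* nents, not count */
                return 0;
        }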
| * | NVMe: Correct sg list setup in nvme_map_user_pages  [Matthew Wilcox, 2011-11-04, 1 file, -5/+5]

    Our SG list was constructed to always fill the entire first page, even
    if that was more than the length of the I/O. This is probably
    harmless, but some IOMMUs might do something bad.

    Correcting the first call to sg_set_page() made it look a lot closer
    to the sg_set_page() in the loop, so fold the first call to
    sg_set_page() into the loop.

    Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com>
    Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
| * | Fix bug in NVME_IOCTL_SUBMIT_IO  [Matthew Wilcox, 2011-11-04, 1 file, -0/+1]

    Missing 'break' in the switch statement meant that we'd fall through
    to the 'return -EINVAL' case.
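    The shape of the fix (the case value and the submit status are illustrative stand-ins):

        #include <linux/errno.h>

        static int ioctl_sketch(unsigned int cmd, int submit_status)
        {
                int status;

                switch (cmd) {
                case 0x42:                      /* stands in for NVME_IOCTL_SUBMIT_IO */
                        status = submit_status;
                        break;                  /* the missing statement */
                default:
                        status = -EINVAL;
                }
                return status;
        }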
| * | NVMe: Rework ioctls  [Matthew Wilcox, 2011-11-04, 1 file, -88/+40]

    Remove the special-purpose IDENTIFY, GET_RANGE_TYPE, DOWNLOAD_FIRMWARE
    and ACTIVATE_FIRMWARE commands. Replace them with a generic ADMIN_CMD
    ioctl that can submit any admin command.

    Add a new ID ioctl that returns the namespace ID of the queried
    device. It corresponds to the SCSI Idlun ioctl.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Add the nvme thread to the wait queue before waking it up  [Matthew Wilcox, 2011-11-04, 1 file, -0/+2]

    If the I/O was not completed by a single NVMe command, we add the bio
    to the congestion list and wake up the kthread to resubmit it. But the
    kthread calls remove_wait_queue() unconditionally, which will oops if
    it's not on the wait queue. So add the kthread to the wait queue
    before waking it up.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Return real error from nvme_create_queue  [Matthew Wilcox, 2011-11-04, 1 file, -4/+4]

    nvme_setup_io_queues() was assuming that a NULL return from
    nvme_create_queue() was an out-of-memory error. That's not necessarily
    true; the adapter might return -EIO, for example. Change the calling
    convention to return an ERR_PTR on failure instead of NULL.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
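    The calling convention being adopted, sketched with the standard ERR_PTR helpers (the queue type and failure source are simplified stand-ins):

        #include <linux/err.h>
        #include <linux/slab.h>

        struct queue_sketch { int qid; };

        /* Return ERR_PTR(err) on failure so the caller sees the real error
         * instead of assuming every NULL means -ENOMEM. */
        static struct queue_sketch *create_queue_sketch(int qid, int hw_status)
        {
                struct queue_sketch *q = kzalloc(sizeof(*q), GFP_KERNEL);

                if (!q)
                        return ERR_PTR(-ENOMEM);
                if (hw_status < 0) {            /* e.g. the adapter returned -EIO */
                        kfree(q);
                        return ERR_PTR(hw_status);
                }
                q->qid = qid;
                return q;
        }

    The caller then tests IS_ERR() and propagates PTR_ERR() instead of mapping every failure to -ENOMEM.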
| * | NVMe: Version 0.6  [Matthew Wilcox, 2011-11-04, 1 file, -1/+1]

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Add a few calling convention notes  [Matthew Wilcox, 2011-11-04, 1 file, -1/+10]

    For the benefit of reviewers, add comments to a few functions
    describing their calling context.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Handle failures from memory allocations in nvme_setup_prps  [Matthew Wilcox, 2011-11-04, 1 file, -15/+41]

    If any of the memory allocations in nvme_setup_prps fail, handle it by
    modifying the passed-in data length to reflect the number of bytes we
    are actually able to send. Also allow the caller to specify the GFP
    flags they need; for user-initiated commands, we can use GFP_KERNEL
    allocations.

    The various callers are updated to handle this possibility; the main
    I/O path is already prepared for this possibility (as it may happen
    due to nvme_map_bio being unable to map all the segments of the I/O).
    The other callers return -ENOMEM instead of doing partial I/Os.

    Reported-by: Andi Kleen <andi@firstfloor.org>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Use an IDA to allocate minor numbers  [Matthew Wilcox, 2011-11-04, 1 file, -4/+34]

    The current approach of using the namespace ID as the minor number
    doesn't work when there are multiple adapters in the machine. Rather
    than statically partitioning the number of namespaces between
    adapters, dynamically allocate minor numbers to namespaces as they are
    detected.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
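    A sketch of the dynamic approach using the kernel IDA API (identifier names made up; not necessarily the driver's exact code):

        #include <linux/gfp.h>
        #include <linux/idr.h>
        #include <linux/kdev_t.h>

        static DEFINE_IDA(minor_ida_sketch);

        static int alloc_minor_sketch(void)
        {
                /* returns a unique minor number, or a negative errno */
                return ida_simple_get(&minor_ida_sketch, 0, 1 << MINORBITS, GFP_KERNEL);
        }

        static void free_minor_sketch(int minor)
        {
                ida_simple_remove(&minor_ida_sketch, minor);
        }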
| * | NVMe: Add include of delay.h for msleep  [Matthew Wilcox, 2011-11-04, 1 file, -0/+1]

    Previously it was being implicitly included through some other header
    file.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Add support for timing out I/Os  [Matthew Wilcox, 2011-11-04, 1 file, -6/+31]

    In the kthread, walk the list of outstanding I/Os and check they've
    not hit the timeout.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Rename cancel_cmdid_data to cancel_cmdid  [Matthew Wilcox, 2011-11-04, 1 file, -2/+5]

    The trailing '_data' on the end was annoying and inconsistent. Also,
    make it actually return the data since this is needed for timing out
    commands.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Fix bug in error handling  [Matthew Wilcox, 2011-11-04, 1 file, -2/+2]

    When an I/O completed with an error, we would call bio_endio twice
    (once with -EIO and once with 0). Found by inspection.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Time out initialisation after a few seconds  [Matthew Wilcox, 2011-11-04, 1 file, -0/+10]

    The device reports (in its capability register) how long it will take
    to initialise. If that time elapses before the ready bit becomes set,
    conclude the device is broken and refuse to initialise it. Log a nice
    error message so the user knows why we did nothing.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Fix warning in free_irq  [Matthew Wilcox, 2011-11-04, 1 file, -1/+3]

    We need to clear the affinity mask before calling free_irq().

    Reported-by: Shane Michael Matthews <shane.matthews@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
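    One way to do what the message describes, assuming the affinity was set via an affinity hint (the API choice is an assumption, not confirmed by the message):

        #include <linux/interrupt.h>

        static void free_queue_irq_sketch(unsigned int vector, void *dev_id)
        {
                irq_set_affinity_hint(vector, NULL);    /* clear before freeing */
                free_irq(vector, dev_id);
        }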
| * | NVMe: Correct the Controller Configuration settings  [Matthew Wilcox, 2011-11-04, 1 file, -0/+1]

    The arbitration field was extended by one bit, shifting the shutdown
    notification bits by one. Also, the SQ/CQ entry size was made
    configurable for future extensions.

    Reported-by: Paul Luse <paul.e.luse@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Version 0.5  [Matthew Wilcox, 2011-11-04, 1 file, -1/+1]

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Change the definition of nvme_user_io  [Matthew Wilcox, 2011-11-04, 1 file, -10/+17]

    The read and write commands don't define a 'result', so there's no
    need to copy it back to userspace.

    Remove the ability of the ioctl to submit commands to a different
    namespace; it's just asking for trouble, and the use case I have in
    mind will be addressed through a different ioctl in the future. That
    removes the need for both the block_shift and nsid arguments.

    Check that the opcode is one of 'read' or 'write'. Other opcodes may
    be added in the future, but we will need a different structure
    definition for them.

    The nblocks field is redefined to be 0-based. This allows the user to
    request the full 65536 blocks.

    Don't byteswap the reftag, apptag and appmask. Martin Petersen tells
    me these are calculated in big-endian and are transmitted to the
    device in big-endian.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
| * | NVMe: Add compat_ioctl  [Matthew Wilcox, 2011-11-04, 1 file, -0/+1]

    Make ioctls work for 32-bit applications on 64-bit kernels. The
    structures are defined to be the same for both 32- and 64-bit
    applications, so we can use the same handler for both.

    Reported-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
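    Because the structures have the same layout for 32- and 64-bit userspace, the native handler can simply be reused; a sketch of the fops wiring (the handler here is only declared, as an illustration):

        #include <linux/blkdev.h>
        #include <linux/module.h>

        static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
                              unsigned int cmd, unsigned long arg);

        static const struct block_device_operations nvme_fops_sketch = {
                .owner          = THIS_MODULE,
                .ioctl          = nvme_ioctl,
                .compat_ioctl   = nvme_ioctl,   /* same handler serves 32-bit callers */
        };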
| * | NVMe: Simplify queue lookup  [Matthew Wilcox, 2011-11-04, 1 file, -6/+6]

    Fill in all the num_possible_cpus() entries with duplicate pointers.
    This reduces the complexity of the frequently-called get_nvmeq(), as
    well as avoiding a bug in it when there are fewer queues than CPUs.

    Reported-by: Shane Michael Matthews <shane.matthews@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>