path: root/drivers/block/nvme-core.c
* NVMe: Use user defined admin ioctl timeout (Keith Busch, 2013-05-09, 1 file, -1/+5)

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Only clear the enable bit when disabling controller (Matthew Wilcox, 2013-05-08, 1 file, -1/+4)

    Many of the bits in the Controller Configuration register may only be
    modified when the Enable bit is clear. Clearing them at the same time
    as the Enable bit might be OK, but let's play it safe and only touch
    the Enable bit.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Reviewed-by: Keith Busch <keith.busch@intel.com>

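    A minimal sketch of the resulting disable path, assuming the driver's
    cached dev->ctrl_config copy of the CC register and the NVME_CC_ENABLE
    definition from <linux/nvme.h>; this is not the literal hunk:

        /* Clear just CC.EN, preserving the other Controller Configuration fields. */
        static void nvme_disable_ctrl_sketch(struct nvme_dev *dev)
        {
                dev->ctrl_config &= ~NVME_CC_ENABLE;
                writel(dev->ctrl_config, &dev->bar->cc);
        }
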
* NVMe: Wait for device to acknowledge shutdown (Matthew Wilcox, 2013-05-08, 1 file, -19/+46)

    A recent update to the specification makes it clear that the host is
    expected to wait for the device to acknowledge the Enable bit
    transitioning to 0 as well as waiting for the device to acknowledge a
    transition to 1.

    Reported-by: Khosrow Panah <Khosrow.Panah@idt.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Reviewed-by: Keith Busch <keith.busch@intel.com>

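    The acknowledgement is visible through CSTS.RDY, so the wait amounts to
    polling that bit until it matches the value just written to CC.EN. A
    hedged sketch; the function name and the fixed two-second timeout are
    illustrative (the driver derives its timeout from the controller's
    capabilities register):

        static int nvme_wait_ready_sketch(struct nvme_dev *dev, bool enabled)
        {
                u32 want = enabled ? NVME_CSTS_RDY : 0;
                unsigned long timeout = jiffies + 2 * HZ;       /* illustrative */

                while ((readl(&dev->bar->csts) & NVME_CSTS_RDY) != want) {
                        if (time_after(jiffies, timeout))
                                return -ENODEV;
                        msleep(100);
                }
                return 0;
        }
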
* NVMe: Schedule timeout for sync commands (Keith Busch, 2013-05-02, 1 file, -1/+1)

    Schedule a timeout on sync commands in case the command times out and
    the device is not being polled for timeouts. This prevents device
    removal from hanging forever if the device has stopped responding.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

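    With a single changed line, the fix amounts to bounding the sleep in
    the synchronous command path. A rough sketch, assuming the usual
    set-state/submit/sleep pattern for sync commands; nvme_submit_cmd and
    the timeout value stand in for the surrounding code:

        set_current_state(TASK_KILLABLE);
        nvme_submit_cmd(nvmeq, cmd);            /* assumed queue-submit helper */
        schedule_timeout(timeout);              /* previously an unbounded schedule() */
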
* NVMe: Meta-data support in NVME_IOCTL_SUBMIT_IO (Keith Busch, 2013-05-02, 1 file, -4/+67)

    This adds support for namespaces with separate meta-data formats in the
    submit io ioctl. The meta-data buffer has to be contiguous, so such a
    buffer is allocated and the mapped user pages are copied to/from this
    buffer for write/read commands.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

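    The heart of the change is a DMA-contiguous bounce buffer for the
    meta-data. A simplified sketch of that idea only: the patch itself maps
    the caller's pages and copies them rather than calling copy_from_user(),
    and the variable names here are illustrative:

        void *meta;
        dma_addr_t meta_dma;

        meta = dma_alloc_coherent(&dev->pci_dev->dev, meta_len,
                                  &meta_dma, GFP_KERNEL);
        if (!meta)
                return -ENOMEM;

        if (io.opcode == nvme_cmd_write &&
            copy_from_user(meta, (void __user *)(unsigned long)io.metadata,
                           meta_len)) {
                dma_free_coherent(&dev->pci_dev->dev, meta_len, meta, meta_dma);
                return -EFAULT;
        }

        c.rw.metadata = cpu_to_le64(meta_dma); /* point the command at the bounce buffer */
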
* NVMe: Device specific stripe size handling (Keith Busch, 2013-05-02, 1 file, -4/+15)

    We have an nvme device that has a concept of a stripe size. IO requests
    that do not transfer data across a stripe boundary have greater
    performance than IO that does cross it. This patch sets the stripe size
    for the device if the device and vendor ids match one with this
    feature, and splits IO requests that cross the stripe boundary.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

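    The boundary test the split decision relies on is simple power-of-two
    arithmetic; a self-contained sketch with illustrative names:

        #include <stdbool.h>
        #include <stdint.h>

        /* True if an IO of nr_sectors starting at sector crosses a boundary
         * of stripe_sectors, where stripe_sectors is a power of two. */
        static bool nvme_crosses_stripe(uint64_t sector, uint32_t nr_sectors,
                                        uint32_t stripe_sectors)
        {
                uint64_t offset = sector & (stripe_sectors - 1);

                return offset + nr_sectors > stripe_sectors;
        }
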
* NVMe: Split non-mergeable bio requests (Keith Busch, 2013-05-02, 1 file, -30/+122)

    It is possible that a bio request cannot be submitted as a single NVMe
    IO command if the bio_vec is not mergeable under the NVMe PRP alignment
    constraints. This condition used to be handled by submitting an IO for
    the mergeable portion, then submitting a follow-on IO for the remaining
    data after the previous IO completed. The remainder to be sent was
    tracked by manipulating bio->bi_idx and bio->bi_sector.

    This patch splits the request as many times as necessary and submits
    the bios together. Since submitting the bio may cause it to be requeued
    on split, nvme_resubmit_bios had to be modified to remove the wait
    queue when the bio list is empty prior to submitting the bio, since a
    split would have added the wait queue a second time, corrupting the
    wait queue head task list.

    There are a few other benefits from doing this: it fixes a potential
    issue with the previous handling of a non-mergeable bio, as the
    requeuing method would use an unlocked nvme_queue if the callback isn't
    invoked on the queue's associated cpu; it will be possible to retry a
    failed bio at some later time, since we no longer manipulate the
    original bio; and the bio integrity extensions require the bio to be in
    its original condition for the checks to work correctly if we implement
    end-to-end data protection in the future.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Remove dead code in nvme_dev_add (Keith Busch, 2013-05-02, 1 file, -9/+2)

    There is no situation in which we can error out of this function and
    need to clean up allocated namespaces, so the clean-up code is dead.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Check for NULL memory in nvme_dev_add (Keith Busch, 2013-05-02, 1 file, -0/+2)

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Fix error clean-up on nvme_alloc_queue (Keith Busch, 2013-05-02, 1 file, -1/+1)

    The nvme_queue's depth is not yet set if we fail to allocate the
    submission queue entries, yet it was being used to determine how much
    coherent memory to free on the error path. Use the depth variable
    instead.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Free admin queue on request_irq error (Keith Busch, 2013-05-02, 1 file, -4/+9)

    Fixes a potential memory leak if requesting the admin queue irq fails.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Set TASK_INTERRUPTIBLE before processing queues (Arjan van de Ven, 2013-05-01, 1 file, -2/+1)

    The kthread has two tasks: handling timeouts (for which it runs once
    per second) and submitting queued BIOs. If a BIO happens to be queued
    after the thread has processed the queue but before it calls
    schedule_timeout(), the thread will sleep for a second before
    submitting it, which can cause performance problems in some rare cases
    (cases that will become more common with a subsequent patch).

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Tested-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

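    The fix is purely one of ordering: mark the task as sleepy before
    scanning the queues, so a wake_up() arriving mid-scan puts the task
    back to TASK_RUNNING and the subsequent schedule_timeout() returns
    immediately instead of stalling for a second. A sketch of the loop (the
    queue-processing call is a placeholder):

        while (!kthread_should_stop()) {
                set_current_state(TASK_INTERRUPTIBLE); /* before the scan, not after */
                nvme_process_queues_sketch(dev);       /* placeholder for the per-queue work */
                schedule_timeout(HZ);                  /* returns at once if woken during the scan */
        }
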
* NVMe: Add a character device for each nvme device (Keith Busch, 2013-04-16, 1 file, -10/+64)

    Registers a miscellaneous device for each nvme controller probed. This
    creates character device files as /dev/nvmeN, where N is the device
    instance, and supports nvme admin ioctl commands so devices without
    namespaces can be managed.

    Signed-off-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

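    A hedged sketch of the registration, assuming a struct miscdevice
    embedded in struct nvme_dev and a hypothetical nvme_dev_ioctl() handler
    for the admin commands:

        /* Hypothetical fops; nvme_dev_ioctl() stands in for the admin-ioctl handler. */
        static const struct file_operations nvme_dev_fops_sketch = {
                .owner          = THIS_MODULE,
                .open           = nonseekable_open,
                .unlocked_ioctl = nvme_dev_ioctl,
        };

        /* ... later, during probe ... */
        dev->miscdev.minor = MISC_DYNAMIC_MINOR;
        dev->miscdev.name  = name;              /* "nvmeN", built by the caller */
        dev->miscdev.fops  = &nvme_dev_fops_sketch;
        result = misc_register(&dev->miscdev);
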
* NVMe: Fix endian-related problems in user I/O submission path (Matthew Wilcox, 2013-04-16, 1 file, -4/+4)

    When constructing the command, dsmgmt needs to be treated as a 32-bit
    value, not a 16-bit value. reftag, apptag and appmask all need to be
    converted from native-endian to little-endian. Again, sparse's bitwise
    warnings caught this problem. Thanks to Keith for pointing out the
    correct way to fix the reftag.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Acked-by: Keith Busch <keith.busch@intel.com>

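    The shape of the fix follows from the description: 32-bit little-endian
    stores for dsmgmt and reftag, 16-bit ones for apptag and appmask, when
    filling in the read/write command. A sketch, where c is the nvme_command
    being built and io the nvme_user_io copied in from userspace:

        c.rw.dsmgmt  = cpu_to_le32(io.dsmgmt);  /* 32-bit field, not 16-bit */
        c.rw.reftag  = cpu_to_le32(io.reftag);
        c.rw.apptag  = cpu_to_le16(io.apptag);
        c.rw.appmask = cpu_to_le16(io.appmask);
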
* NVMe: Fix I/O cancellation status on big-endian machines (Matthew Wilcox, 2013-04-16, 1 file, -1/+1)

    The sparse bitwise checks pointed out that I needed to shift the status
    before changing its endianness, not after.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

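    In other words, the left shift has to be applied to the native-endian
    value before the cpu_to_le16() conversion; the corrected ordering looks
    roughly like this (cqe and status are illustrative names):

        cqe.status = cpu_to_le16(status << 1);  /* shift first, then convert */
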
* NVMe: Don't fail initialisation unnecessarily (Matthew Wilcox, 2013-04-16, 1 file, -1/+7)

    The nvme_dev_add() function currently returns the last error code that
    it saw, which (if everything else succeeds) happens to be the result of
    an optional command, so it can legitimately fail. Looking at the error
    path more closely reveals that we should return success from this
    function, even if no device namespaces are added. So once the queues
    are created and the device has responded to Identify, make sure that
    this function succeeds.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Acked-by: Keith Busch <keith.busch@intel.com>

* NVMe: Abstract out sector to block number conversion (Matthew Wilcox, 2013-04-16, 1 file, -2/+2)

    Introduce nvme_block_nr() to help convert sectors to block numbers.
    This fixes an integer overflow in the SCSI conversion layer, and it's
    slightly less typing than open-coding it.

    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
    Acked-by: Keith Busch <keith.busch@intel.com>

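    The helper boils down to a shift from 512-byte sectors to the
    namespace's LBA size, performed in 64 bits so large sector numbers
    don't overflow; roughly:

        static inline u64 nvme_block_nr(struct nvme_ns *ns, sector_t sector)
        {
                return sector >> (ns->lba_shift - 9);   /* 512-byte sectors -> device blocks */
        }
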
* NVMe: Use round_jiffies_relative() for the periodic, once-per-second timer (Arjan van de Ven, 2013-04-16, 1 file, -1/+1)

    The nvme driver has a "once per second" event where the management
    kthread wakes up the system and then reschedules itself for 1 second
    later. For power efficiency reasons, I'd like this timer to happen
    together with other wakeups in the system. This patch makes the
    schedule_timeout() call in the kthread use round_jiffies_relative(),
    causing the wakeup to at least align with other "once per X seconds"
    events in the kernel.

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Tested-by: Keith Busch <keith.busch@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

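    With a one-line diffstat, the change is confined to the sleep at the
    bottom of the kthread loop; roughly:

        schedule_timeout(round_jiffies_relative(HZ));   /* was schedule_timeout(HZ) */
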
* NVMe: Add nvme-scsi.c (Vishal Verma, 2013-03-28, 1 file, -17/+20)

    Translates SCSI commands in the SG_IO ioctl to NVMe commands. Uses the
    scsi-nvme translation spec from nvmexpress.org as a reference.

    Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

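    On the nvme-core.c side this is essentially a new ioctl case handing
    SG_IO off to the translation layer. A trimmed sketch of the block
    device ioctl handler, with only the SG_IO case shown, the signature
    simplified, and the translation entry point assumed to live in
    nvme-scsi.c:

        static int nvme_ioctl_sketch(struct block_device *bdev, fmode_t mode,
                                     unsigned int cmd, unsigned long arg)
        {
                struct nvme_ns *ns = bdev->bd_disk->private_data;

                switch (cmd) {
                case SG_IO:                     /* SG_IO from <scsi/sg.h> */
                        return nvme_sg_io(ns, (void __user *)arg);
                default:
                        return -ENOTTY;
                }
        }
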
* NVMe: Add definitions for format command (Vishal Verma, 2013-03-27, 1 file, -0/+1)

    The SCSI emulation has the ability to send format commands, so we need
    to add the definition of the command. Also add a missing error code.

    Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Move structures & definitions to header file (Vishal Verma, 2013-03-27, 1 file, -55/+0)

    nvme-scsi.c uses several data structures and definitions that were
    previously private to nvme-core.c. Move the definitions to nvme.h,
    protected by __KERNEL__.

    Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>

* NVMe: Rename nvme.c to nvme-core.c (Vishal Verma, 2013-03-26, 1 file, -0/+1865)

    In preparation for adding nvme-scsi.c. It is preferable to retain the
    module name 'nvme'.

    Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
    Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>