author:    ken <ken@FreeBSD.org>  2015-12-16 19:01:14 +0000
committer: ken <ken@FreeBSD.org>  2015-12-16 19:01:14 +0000
commit:    5baa144ddf2321a8a1f340325c1d996e23754fb8
tree:      ed1ec8030fcea97c7f129bd31807f3aba2eab769  /sys/cam
parent:    c643471e264c174b193170b92d29b88ef75b875d
download:  FreeBSD-src-5baa144ddf2321a8a1f340325c1d996e23754fb8.zip
           FreeBSD-src-5baa144ddf2321a8a1f340325c1d996e23754fb8.tar.gz
MFC r291716, r291724, r291741, r291742
In addition to those revisions, add this change to a file that is not in
head:
sys/ia64/include/bus.h:
Guard kernel-only parts of the ia64 machine/bus.h header with
#ifdef _KERNEL.
This allows userland programs to include <machine/bus.h> to get the
definition of bus_addr_t and bus_size_t.
------------------------------------------------------------------------
r291716 | ken | 2015-12-03 15:54:55 -0500 (Thu, 03 Dec 2015) | 257 lines
Add asynchronous command support to the pass(4) driver, and the new
camdd(8) utility.
CCBs may be queued to the driver via the new CAMIOQUEUE ioctl, and
completed CCBs may be retrieved via the CAMIOGET ioctl. User
processes can use poll(2) or kevent(2) to get notification when
I/O has completed.
While the existing CAMIOCOMMAND blocking ioctl interface only
supports user virtual data pointers in a CCB (generally only
one per CCB), the new CAMIOQUEUE ioctl supports user virtual and
physical address pointers, as well as user virtual and physical
scatter/gather lists. This allows user applications to have more
flexibility in their data handling operations.
Kernel memory for data transferred via the queued interface is
allocated from the zone allocator in MAXPHYS sized chunks, and user
data is copied in and out. This is likely faster than the
vmapbuf()/vunmapbuf() method used by the CAMIOCOMMAND ioctl in
configurations with many processors (there are more TLB shootdowns
caused by the mapping/unmapping operation) but may not be as fast
as running with unmapped I/O.
The new memory handling model for user requests also allows
applications to send CCBs with request sizes that are larger than
MAXPHYS. The pass(4) driver now limits queued requests to the I/O
size listed by the SIM driver in the maxio field in the Path
Inquiry (XPT_PATH_INQ) CCB.
There are some things that would be good to add:
1. Come up with a way to do unmapped I/O on multiple buffers.
Currently the unmapped I/O interface operates on a struct bio,
which includes only one address and length. It would be nice
to be able to send an unmapped scatter/gather list down to
busdma. This would allow eliminating the copy we currently do
for data.
2. Add an ioctl to list currently outstanding CCBs in the various
queues.
3. Add an ioctl to cancel a request, or use the XPT_ABORT CCB to do
that.
4. Test physical address support. Virtual pointers and virtual
scatter/gather lists have been tested, but I have not yet tested
physical addresses or physical scatter/gather lists.
5. Investigate multiple queue support. At the moment there is one
queue of commands per pass(4) device. If multiple processes
open the device, they will submit I/O into the same queue and
get events for the same completions. This is probably the right
model for most applications, but it is something that could be
changed later on.
Also, add a new utility, camdd(8) that uses the asynchronous pass(4)
driver interface.
This utility is intended to be a basic data transfer/copy utility,
a simple benchmark utility, and an example of how to use the
asynchronous pass(4) interface.
It can copy data to and from pass(4) devices using any target queue
depth, starting offset and blocksize for the input and output devices.
It currently only supports SCSI devices, but could be easily extended
to support ATA devices.
It can also copy data to and from regular files, block devices, tape
devices, pipes, stdin, and stdout. It does not support queueing
multiple commands to any of those targets, since it uses the standard
read(2)/write(2)/writev(2)/readv(2) system calls.
The I/O is done by two threads, one for the reader and one for the
writer. The reader thread sends completed read requests to the
writer thread in strictly sequential order, even if they complete
out of order. That could be modified later on for random I/O patterns
or slightly out of order I/O.
camdd(8) uses kqueue(2)/kevent(2) to get I/O completion events from
the pass(4) driver and also to send request notifications internally.
For pass(4) devices, camdd(8) uses a single buffer (CAM_DATA_VADDR)
per CAM CCB on the reading side, and a scatter/gather list
(CAM_DATA_SG) on the writing side. In addition to testing both
interfaces, this makes any potential reblocking of I/O easier. No
data is copied between the reader and the writer, but rather the
reader's buffers are split into multiple I/O requests or combined
into a single I/O request depending on the input and output blocksize.
For the file I/O path, camdd(8) also uses a single buffer (read(2),
write(2), pread(2) or pwrite(2)) on reads, and a scatter/gather list
(readv(2), writev(2), preadv(2), pwritev(2)) on writes.
Things that would be nice to do for camdd(8) eventually:
1. Add support for I/O pattern generation. Patterns like all
zeros, all ones, LBA-based patterns, random patterns, etc. Right
now you can always use /dev/zero, /dev/random, etc.
2. Add support for a "sink" mode, so we do only reads with no
writes. Right now, you can use /dev/null.
3. Add support for automatic queue depth probing, so that we can
figure out the right queue depth on the input and output side
for maximum throughput. At the moment it defaults to 6.
4. Add support for SATA device passthrough I/O.
5. Add support for random LBAs and/or lengths on the input and
output sides.
6. Track average per-I/O latency and busy time. The busy time
and latency could also feed in to the automatic queue depth
determination.
sys/cam/scsi/scsi_pass.h:
Define two new ioctls, CAMIOQUEUE and CAMIOGET, that queue
and fetch asynchronous CAM CCBs respectively.
Although these ioctls do not have a declared argument, they
both take a union ccb pointer. If we declare a size here,
the ioctl code in sys/kern/sys_generic.c will malloc and free
a buffer for either the CCB or the CCB pointer (depending on
how it is declared). Since we have to keep a copy of the
CCB (which is fairly large) anyway, having the ioctl malloc
and free a CCB for each call is wasteful.
sys/cam/scsi/scsi_pass.c:
Add asynchronous CCB support.
Add two new ioctls, CAMIOQUEUE and CAMIOGET.
CAMIOQUEUE adds a CCB to the incoming queue. The CCB is
executed immediately (and moved to the active queue) if it
is an immediate CCB, but otherwise it will be executed
in passstart() when a CCB is available from the transport layer.
When CCBs are completed (immediately for immediate CCBs, or via
passdone() for queued CCBs), they are put on the done queue.
If we get the final close on the device before all pending
I/O is complete, all active I/O is moved to the abandoned
queue and we increment the peripheral reference count so
that the peripheral driver instance doesn't go away before
all pending I/O is done.
The new passcreatezone() function is called on the first
call to the CAMIOQUEUE ioctl on a given device to allocate
the UMA zones for I/O requests and S/G list buffers. This
may be good to move off to a taskqueue at some point.
The new passmemsetup() function allocates memory and
scatter/gather lists to hold the user's data, and copies
in any data that needs to be written. For virtual pointers
(CAM_DATA_VADDR), the kernel buffer is malloced from the
new pass(4) driver malloc bucket. For virtual
scatter/gather lists (CAM_DATA_SG), buffers are allocated
from a new per-pass(4) UMA zone in MAXPHYS-sized chunks.
Physical pointers are passed in unchanged. We have support
for up to 16 scatter/gather segments (for the user and
kernel S/G lists) in the default struct pass_io_req, so
requests with longer S/G lists require an extra kernel malloc.
The new passcopysglist() function copies a user scatter/gather
list to a kernel scatter/gather list. The number of elements
in each list may be different, but (obviously) the amount of data
stored has to be identical.
The new passmemdone() function copies data out for the
CAM_DATA_VADDR and CAM_DATA_SG cases.
The new passiocleanup() function restores data pointers in
user CCBs and frees memory.
Add new functions to support kqueue(2)/kevent(2):
passreadfilt() tells kevent whether or not the done
queue is empty.
passkqfilter() adds a knote to our list.
passreadfiltdetach() removes a knote from our list.
Add a new function, passpoll(), for poll(2)/select(2)
to use.
Add devstat(9) support for the queued CCB path.
sys/cam/ata/ata_da.c:
Add support for the BIO_VLIST bio type.
sys/cam/cam_ccb.h:
Add a new enumeration for the xflags field in the CCB header.
(This doesn't change the CCB header, just adds an enumeration to
use.)
sys/cam/cam_xpt.c:
Add a new function, xpt_setup_ccb_flags(), that allows specifying
CCB flags.
sys/cam/cam_xpt.h:
Add a prototype for xpt_setup_ccb_flags().
sys/cam/scsi/scsi_da.c:
Add support for BIO_VLIST.
sys/dev/md/md.c:
Add BIO_VLIST support to md(4).
sys/geom/geom_disk.c:
Add BIO_VLIST support to the GEOM disk class. Re-factor the I/O size
limiting code in g_disk_start() a bit.
sys/kern/subr_bus_dma.c:
Change _bus_dmamap_load_vlist() to take a starting offset and
length.
Add a new function, _bus_dmamap_load_pages(), that will load a list
of physical pages starting at an offset.
Update _bus_dmamap_load_bio() to allow loading BIO_VLIST bios.
Allow unmapped I/O to start at an offset.
sys/kern/subr_uio.c:
Add two new functions, physcopyin_vlist() and physcopyout_vlist().
sys/pc98/include/bus.h:
Guard kernel-only parts of the pc98 machine/bus.h header with
#ifdef _KERNEL.
This allows userland programs to include <machine/bus.h> to get the
definition of bus_addr_t and bus_size_t.
sys/sys/bio.h:
Add a new bio flag, BIO_VLIST.
sys/sys/uio.h:
Add prototypes for physcopyin_vlist() and physcopyout_vlist().
share/man/man4/pass.4:
Document the CAMIOQUEUE and CAMIOGET ioctls.
usr.sbin/Makefile:
Add camdd.
usr.sbin/camdd/Makefile:
Add a makefile for camdd(8).
usr.sbin/camdd/camdd.8:
Man page for camdd(8).
usr.sbin/camdd/camdd.c:
The new camdd(8) utility.
Sponsored by: Spectra Logic
------------------------------------------------------------------------
r291724 | ken | 2015-12-03 17:07:01 -0500 (Thu, 03 Dec 2015) | 6 lines
Fix typos in the camdd(8) usage() function output caused by an error in
my diff filter script.
Sponsored by: Spectra Logic
------------------------------------------------------------------------
r291741 | ken | 2015-12-03 22:38:35 -0500 (Thu, 03 Dec 2015) | 10 lines
Fix g_disk_vlist_limit() to work properly with deletes.
Add a new bp argument to g_disk_maxsegs(), and add a new function,
g_disk_maxsize() that will properly determine the maximum I/O size for a
delete or non-delete bio.
Submitted by: will
Sponsored by: Spectra Logic
------------------------------------------------------------------------
r291742 | ken | 2015-12-03 22:44:12 -0500 (Thu, 03 Dec 2015) | 5 lines
Fix a style issue in g_disk_limit().
Noticed by: bdrewery
------------------------------------------------------------------------
Sponsored by: Spectra Logic
Diffstat (limited to 'sys/cam')
-rw-r--r--  sys/cam/ata/ata_da.c        25
-rw-r--r--  sys/cam/cam_ccb.h            3
-rw-r--r--  sys/cam/cam_xpt.c           11
-rw-r--r--  sys/cam/cam_xpt.h            4
-rw-r--r--  sys/cam/scsi/scsi_da.c      29
-rw-r--r--  sys/cam/scsi/scsi_pass.c  1604
-rw-r--r--  sys/cam/scsi/scsi_pass.h     8
7 files changed, 1620 insertions, 64 deletions
diff --git a/sys/cam/ata/ata_da.c b/sys/cam/ata/ata_da.c
index f88899e..005c684 100644
--- a/sys/cam/ata/ata_da.c
+++ b/sys/cam/ata/ata_da.c
@@ -1573,12 +1573,26 @@ adastart(struct cam_periph *periph, union ccb *start_ccb)
 	}
 	switch (bp->bio_cmd) {
 	case BIO_WRITE:
-		softc->flags |= ADA_FLAG_DIRTY;
-		/* FALLTHROUGH */
 	case BIO_READ:
 	{
 		uint64_t lba = bp->bio_pblkno;
 		uint16_t count = bp->bio_bcount / softc->params.secsize;
+		void *data_ptr;
+		int rw_op;
+
+		if (bp->bio_cmd == BIO_WRITE) {
+			softc->flags |= ADA_FLAG_DIRTY;
+			rw_op = CAM_DIR_OUT;
+		} else {
+			rw_op = CAM_DIR_IN;
+		}
+
+		data_ptr = bp->bio_data;
+		if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
+			rw_op |= CAM_DATA_BIO;
+			data_ptr = bp;
+		}
+
 #ifdef ADA_TEST_FAILURE
 		int fail = 0;
 
@@ -1623,12 +1637,9 @@ adastart(struct cam_periph *periph, union ccb *start_ccb)
 		cam_fill_ataio(ataio,
 		    ada_retry_count,
 		    adadone,
-		    (bp->bio_cmd == BIO_READ ? CAM_DIR_IN :
-			CAM_DIR_OUT) | ((bp->bio_flags & BIO_UNMAPPED)
-			!= 0 ? CAM_DATA_BIO : 0),
+		    rw_op,
 		    tag_code,
-		    ((bp->bio_flags & BIO_UNMAPPED) != 0) ? (void *)bp :
-			bp->bio_data,
+		    data_ptr,
 		    bp->bio_bcount,
 		    ada_default_timeout*1000);
diff --git a/sys/cam/cam_ccb.h b/sys/cam/cam_ccb.h
index 98bb9ea..12d3803 100644
--- a/sys/cam/cam_ccb.h
+++ b/sys/cam/cam_ccb.h
@@ -111,6 +111,9 @@ typedef enum {
 typedef enum {
 	CAM_EXTLUN_VALID	= 0x00000001,/* 64bit lun field is valid */
+	CAM_USER_DATA_ADDR	= 0x00000002,/* Userspace data pointers */
+	CAM_SG_FORMAT_IOVEC	= 0x00000004,/* iovec instead of busdma S/G*/
+	CAM_UNMAPPED_BUF	= 0x00000008 /* use unmapped I/O */
 } ccb_xflags;
 
 /* XPT Opcodes for xpt_action */
diff --git a/sys/cam/cam_xpt.c b/sys/cam/cam_xpt.c
index ba0863a..6773829 100644
--- a/sys/cam/cam_xpt.c
+++ b/sys/cam/cam_xpt.c
@@ -3337,7 +3337,8 @@ xpt_merge_ccb(union ccb *master_ccb, union ccb *slave_ccb)
 }
 
 void
-xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
+xpt_setup_ccb_flags(struct ccb_hdr *ccb_h, struct cam_path *path,
+		    u_int32_t priority, u_int32_t flags)
 {
 	CAM_DEBUG(path, CAM_DEBUG_TRACE, ("xpt_setup_ccb\n"));
 
@@ -3355,10 +3356,16 @@ xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
 		ccb_h->target_lun = CAM_TARGET_WILDCARD;
 	}
 	ccb_h->pinfo.index = CAM_UNQUEUED_INDEX;
-	ccb_h->flags = 0;
+	ccb_h->flags = flags;
 	ccb_h->xflags = 0;
 }
 
+void
+xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
+{
+	xpt_setup_ccb_flags(ccb_h, path, priority, /*flags*/ 0);
+}
+
 /* Path manipulation functions */
 cam_status
 xpt_create_path(struct cam_path **new_path_ptr, struct cam_periph *perph,
diff --git a/sys/cam/cam_xpt.h b/sys/cam/cam_xpt.h
index 1d983c9..ca7dccc 100644
--- a/sys/cam/cam_xpt.h
+++ b/sys/cam/cam_xpt.h
@@ -70,6 +70,10 @@
 void		xpt_action_default(union ccb *new_ccb);
 union ccb	*xpt_alloc_ccb(void);
 union ccb	*xpt_alloc_ccb_nowait(void);
 void		xpt_free_ccb(union ccb *free_ccb);
+void		xpt_setup_ccb_flags(struct ccb_hdr *ccb_h,
+				    struct cam_path *path,
+				    u_int32_t priority,
+				    u_int32_t flags);
 void		xpt_setup_ccb(struct ccb_hdr *ccb_h,
 			      struct cam_path *path,
 			      u_int32_t priority);
diff --git a/sys/cam/scsi/scsi_da.c b/sys/cam/scsi/scsi_da.c
index 4e3fe76..1cd687a 100644
--- a/sys/cam/scsi/scsi_da.c
+++ b/sys/cam/scsi/scsi_da.c
@@ -2332,29 +2332,40 @@ skipstate:
 
 		switch (bp->bio_cmd) {
 		case BIO_WRITE:
-			softc->flags |= DA_FLAG_DIRTY;
-			/* FALLTHROUGH */
 		case BIO_READ:
+		{
+			void *data_ptr;
+			int rw_op;
+
+			if (bp->bio_cmd == BIO_WRITE) {
+				softc->flags |= DA_FLAG_DIRTY;
+				rw_op = SCSI_RW_WRITE;
+			} else {
+				rw_op = SCSI_RW_READ;
+			}
+
+			data_ptr = bp->bio_data;
+			if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
+				rw_op |= SCSI_RW_BIO;
+				data_ptr = bp;
+			}
+
 			scsi_read_write(&start_ccb->csio,
					/*retries*/da_retry_count,
					/*cbfcnp*/dadone,
					/*tag_action*/tag_code,
-					/*read_op*/(bp->bio_cmd == BIO_READ ?
-					SCSI_RW_READ : SCSI_RW_WRITE) |
-					((bp->bio_flags & BIO_UNMAPPED) != 0 ?
-					SCSI_RW_BIO : 0),
+					rw_op,
					/*byte2*/0,
					softc->minimum_cmd_size,
					/*lba*/bp->bio_pblkno,
					/*block_count*/bp->bio_bcount /
					softc->params.secsize,
-					/*data_ptr*/ (bp->bio_flags &
-					BIO_UNMAPPED) != 0 ? (void *)bp :
-					bp->bio_data,
+					data_ptr,
					/*dxfer_len*/ bp->bio_bcount,
					/*sense_len*/SSD_FULL_SIZE,
					da_default_timeout * 1000);
			break;
+		}
		case BIO_FLUSH:
			/*
			 * BIO_FLUSH doesn't currently communicate
diff --git a/sys/cam/scsi/scsi_pass.c b/sys/cam/scsi/scsi_pass.c
index 174151e..09cda5b 100644
--- a/sys/cam/scsi/scsi_pass.c
+++ b/sys/cam/scsi/scsi_pass.c
@@ -28,27 +28,39 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD$");
 
+#include "opt_kdtrace.h"
+
 #include <sys/param.h>
 #include <sys/systm.h>
 #include <sys/kernel.h>
+#include <sys/conf.h>
 #include <sys/types.h>
 #include <sys/bio.h>
-#include <sys/malloc.h>
-#include <sys/fcntl.h>
-#include <sys/conf.h>
-#include <sys/errno.h>
+#include <sys/bus.h>
 #include <sys/devicestat.h>
+#include <sys/errno.h>
+#include <sys/fcntl.h>
+#include <sys/malloc.h>
 #include <sys/proc.h>
+#include <sys/poll.h>
+#include <sys/selinfo.h>
+#include <sys/sdt.h>
 #include <sys/taskqueue.h>
+#include <vm/uma.h>
+#include <vm/vm.h>
+#include <vm/vm_extern.h>
+
+#include <machine/bus.h>
 
 #include <cam/cam.h>
 #include <cam/cam_ccb.h>
 #include <cam/cam_periph.h>
 #include <cam/cam_queue.h>
+#include <cam/cam_xpt.h>
 #include <cam/cam_xpt_periph.h>
 #include <cam/cam_debug.h>
-#include <cam/cam_sim.h>
 #include <cam/cam_compat.h>
+#include <cam/cam_xpt_periph.h>
 
 #include <cam/scsi/scsi_all.h>
 #include <cam/scsi/scsi_pass.h>
@@ -57,7 +69,11 @@ typedef enum {
 	PASS_FLAG_OPEN			= 0x01,
 	PASS_FLAG_LOCKED		= 0x02,
 	PASS_FLAG_INVALID		= 0x04,
-	PASS_FLAG_INITIAL_PHYSPATH	= 0x08
+	PASS_FLAG_INITIAL_PHYSPATH	= 0x08,
+	PASS_FLAG_ZONE_INPROG		= 0x10,
+	PASS_FLAG_ZONE_VALID		= 0x20,
+	PASS_FLAG_UNMAPPED_CAPABLE	= 0x40,
+	PASS_FLAG_ABANDONED_REF_SET	= 0x80
 } pass_flags;
 
 typedef enum {
@@ -65,38 +81,104 @@ typedef enum {
 } pass_state;
 
 typedef enum {
-	PASS_CCB_BUFFER_IO
+	PASS_CCB_BUFFER_IO,
+	PASS_CCB_QUEUED_IO
 } pass_ccb_types;
 
 #define ccb_type	ppriv_field0
-#define ccb_bp		ppriv_ptr1
+#define ccb_ioreq	ppriv_ptr1
 
-struct pass_softc {
-	pass_state	 state;
-	pass_flags	 flags;
-	u_int8_t	 pd_type;
-	union ccb	 saved_ccb;
-	int		 open_count;
-	u_int		 maxio;
-	struct devstat	*device_stats;
-	struct cdev	*dev;
-	struct cdev	*alias_dev;
-	struct task	 add_physpath_task;
+/*
+ * The maximum number of memory segments we preallocate.
+ */
+#define	PASS_MAX_SEGS	16
+
+typedef enum {
+	PASS_IO_NONE		= 0x00,
+	PASS_IO_USER_SEG_MALLOC	= 0x01,
+	PASS_IO_KERN_SEG_MALLOC	= 0x02,
+	PASS_IO_ABANDONED	= 0x04
+} pass_io_flags;
+
+struct pass_io_req {
+	union ccb			 ccb;
+	union ccb			*alloced_ccb;
+	union ccb			*user_ccb_ptr;
+	camq_entry			 user_periph_links;
+	ccb_ppriv_area			 user_periph_priv;
+	struct cam_periph_map_info	 mapinfo;
+	pass_io_flags			 flags;
+	ccb_flags			 data_flags;
+	int				 num_user_segs;
+	bus_dma_segment_t		 user_segs[PASS_MAX_SEGS];
+	int				 num_kern_segs;
+	bus_dma_segment_t		 kern_segs[PASS_MAX_SEGS];
+	bus_dma_segment_t		*user_segptr;
+	bus_dma_segment_t		*kern_segptr;
+	int				 num_bufs;
+	uint32_t			 dirs[CAM_PERIPH_MAXMAPS];
+	uint32_t			 lengths[CAM_PERIPH_MAXMAPS];
+	uint8_t				*user_bufs[CAM_PERIPH_MAXMAPS];
+	uint8_t				*kern_bufs[CAM_PERIPH_MAXMAPS];
+	struct bintime			 start_time;
+	TAILQ_ENTRY(pass_io_req)	 links;
+};
+
+struct pass_softc {
+	pass_state		  state;
+	pass_flags		  flags;
+	u_int8_t		  pd_type;
+	union ccb		  saved_ccb;
+	int			  open_count;
+	u_int			  maxio;
+	struct devstat		 *device_stats;
+	struct cdev		 *dev;
+	struct cdev		 *alias_dev;
+	struct task		  add_physpath_task;
+	struct task		  shutdown_kqueue_task;
+	struct selinfo		  read_select;
+	TAILQ_HEAD(, pass_io_req) incoming_queue;
+	TAILQ_HEAD(, pass_io_req) active_queue;
+	TAILQ_HEAD(, pass_io_req) abandoned_queue;
+	TAILQ_HEAD(, pass_io_req) done_queue;
+	struct cam_periph	 *periph;
+	char			  zone_name[12];
+	char			  io_zone_name[12];
+	uma_zone_t		  pass_zone;
+	uma_zone_t		  pass_io_zone;
+	size_t			  io_zone_size;
+};
 
 static	d_open_t	passopen;
 static	d_close_t	passclose;
 static	d_ioctl_t	passioctl;
 static	d_ioctl_t	passdoioctl;
+static	d_poll_t	passpoll;
+static	d_kqfilter_t	passkqfilter;
+static	void		passreadfiltdetach(struct knote *kn);
+static	int		passreadfilt(struct knote *kn, long hint);
 
 static	periph_init_t	passinit;
 static	periph_ctor_t	passregister;
 static	periph_oninv_t	passoninvalidate;
 static	periph_dtor_t	passcleanup;
-static	void		pass_add_physpath(void *context, int pending);
+static	periph_start_t	passstart;
+static	void		pass_shutdown_kqueue(void *context, int pending);
+static	void		pass_add_physpath(void *context, int pending);
 static	void		passasync(void *callback_arg, u_int32_t code,
				  struct cam_path *path, void *arg);
+static	void		passdone(struct cam_periph *periph,
+				 union ccb *done_ccb);
+static	int		passcreatezone(struct cam_periph *periph);
+static	void		passiocleanup(struct pass_softc *softc,
+				      struct pass_io_req *io_req);
+static	int		passcopysglist(struct cam_periph *periph,
+				       struct pass_io_req *io_req,
+				       ccb_flags direction);
+static	int		passmemsetup(struct cam_periph *periph,
+				     struct pass_io_req *io_req);
+static	int		passmemdone(struct cam_periph *periph,
+				    struct pass_io_req *io_req);
 static	int		passerror(union ccb *ccb, u_int32_t cam_flags,
				  u_int32_t sense_flags);
 static	int		passsendccb(struct cam_periph *periph, union ccb *ccb,
@@ -116,9 +198,19 @@ static struct cdevsw pass_cdevsw = {
	.d_open =	passopen,
	.d_close =	passclose,
	.d_ioctl =	passioctl,
+	.d_poll =	passpoll,
+	.d_kqfilter =	passkqfilter,
	.d_name =	"pass",
 };
 
+static struct filterops passread_filtops = {
+	.f_isfd	=	1,
+	.f_detach =	passreadfiltdetach,
+	.f_event =	passreadfilt
+};
+
+static MALLOC_DEFINE(M_SCSIPASS, "scsi_pass", "scsi passthrough buffers");
+
 static void
 passinit(void)
 {
@@ -138,6 +230,60 @@ passinit(void)
 }
 
 static void
+passrejectios(struct cam_periph *periph)
+{
+	struct pass_io_req *io_req, *io_req2;
+	struct pass_softc *softc;
+
+	softc = (struct pass_softc *)periph->softc;
+
+	/*
+	 * The user can no longer get status for I/O on the done queue, so
+	 * clean up all outstanding I/O on the done queue.
+	 */
+	TAILQ_FOREACH_SAFE(io_req, &softc->done_queue, links, io_req2) {
+		TAILQ_REMOVE(&softc->done_queue, io_req, links);
+		passiocleanup(softc, io_req);
+		uma_zfree(softc->pass_zone, io_req);
+	}
+
+	/*
+	 * The underlying device is gone, so we can't issue these I/Os.
+	 * The devfs node has been shut down, so we can't return status to
+	 * the user.  Free any I/O left on the incoming queue.
+	 */
+	TAILQ_FOREACH_SAFE(io_req, &softc->incoming_queue, links, io_req2) {
+		TAILQ_REMOVE(&softc->incoming_queue, io_req, links);
+		passiocleanup(softc, io_req);
+		uma_zfree(softc->pass_zone, io_req);
+	}
+
+	/*
+	 * Normally we would put I/Os on the abandoned queue and acquire a
+	 * reference when we saw the final close.  But, the device went
+	 * away and devfs may have moved everything off to deadfs by the
+	 * time the I/O done callback is called; as a result, we won't see
+	 * any more closes.  So, if we have any active I/Os, we need to put
+	 * them on the abandoned queue.  When the abandoned queue is empty,
+	 * we'll release the remaining reference (see below) to the peripheral.
+	 */
+	TAILQ_FOREACH_SAFE(io_req, &softc->active_queue, links, io_req2) {
+		TAILQ_REMOVE(&softc->active_queue, io_req, links);
+		io_req->flags |= PASS_IO_ABANDONED;
+		TAILQ_INSERT_TAIL(&softc->abandoned_queue, io_req, links);
+	}
+
+	/*
+	 * If we put any I/O on the abandoned queue, acquire a reference.
+	 */
+	if ((!TAILQ_EMPTY(&softc->abandoned_queue))
+	 && ((softc->flags & PASS_FLAG_ABANDONED_REF_SET) == 0)) {
+		cam_periph_doacquire(periph);
+		softc->flags |= PASS_FLAG_ABANDONED_REF_SET;
+	}
+}
+
+static void
 passdevgonecb(void *arg)
 {
	struct cam_periph *periph;
@@ -165,17 +311,26 @@ passdevgonecb(void *arg)
 
	/*
	 * Release the reference held for the device node, it is gone now.
+	 * Accordingly, inform all queued I/Os of their fate.
	 */
	cam_periph_release_locked(periph);
+	passrejectios(periph);
 
	/*
-	 * We reference the lock directly here, instead of using
+	 * We reference the SIM lock directly here, instead of using
	 * cam_periph_unlock().  The reason is that the final call to
	 * cam_periph_release_locked() above could result in the periph
	 * getting freed.  If that is the case, dereferencing the periph
	 * with a cam_periph_unlock() call would cause a page fault.
	 */
	mtx_unlock(mtx);
+
+	/*
+	 * We have to remove our kqueue context from a thread because it
+	 * may sleep.  It would be nice if we could get a callback from
+	 * kqueue when it is done cleaning up resources.
+	 */
+	taskqueue_enqueue(taskqueue_thread, &softc->shutdown_kqueue_task);
 }
 
 static void
@@ -197,12 +352,6 @@ passoninvalidate(struct cam_periph *periph)
	 * when it has cleaned up its state.
	 */
	destroy_dev_sched_cb(softc->dev, passdevgonecb, periph);
-
-	/*
-	 * XXX Return all queued I/O with ENXIO.
-	 * XXX Handle any transactions queued to the card
-	 *     with XPT_ABORT_CCB.
-	 */
 }
 
 static void
@@ -212,9 +361,40 @@ passcleanup(struct cam_periph *periph)
 
	softc = (struct pass_softc *)periph->softc;
 
+	cam_periph_assert(periph, MA_OWNED);
+	KASSERT(TAILQ_EMPTY(&softc->active_queue),
+	    ("%s called when there are commands on the active queue!\n",
+	    __func__));
+	KASSERT(TAILQ_EMPTY(&softc->abandoned_queue),
+	    ("%s called when there are commands on the abandoned queue!\n",
+	    __func__));
+	KASSERT(TAILQ_EMPTY(&softc->incoming_queue),
+	    ("%s called when there are commands on the incoming queue!\n",
+	    __func__));
+	KASSERT(TAILQ_EMPTY(&softc->done_queue),
+	    ("%s called when there are commands on the done queue!\n",
+	    __func__));
+
	devstat_remove_entry(softc->device_stats);
 
	cam_periph_unlock(periph);
+
+	/*
+	 * We call taskqueue_drain() for the physpath task to make sure it
+	 * is complete.  We drop the lock because this can potentially
+	 * sleep.  XXX KDM that is bad.  Need a way to get a callback when
+	 * a taskqueue is drained.
+	 *
+	 * Note that we don't drain the kqueue shutdown task queue.  This
+	 * is because we hold a reference on the periph for kqueue, and
+	 * release that reference from the kqueue shutdown task queue.  So
+	 * we cannot come into this routine unless we've released that
+	 * reference.  Also, because that could be the last reference, we
+	 * could be called from the cam_periph_release() call in
+	 * pass_shutdown_kqueue().  In that case, the taskqueue_drain()
+	 * would deadlock.  It would be preferable if we had a way to
+	 * get a callback when a taskqueue is done.
+	 */
	taskqueue_drain(taskqueue_thread, &softc->add_physpath_task);
 
	cam_periph_lock(periph);
@@ -223,10 +403,29 @@ passcleanup(struct cam_periph *periph)
 }
 
 static void
+pass_shutdown_kqueue(void *context, int pending)
+{
+	struct cam_periph *periph;
+	struct pass_softc *softc;
+
+	periph = context;
+	softc = periph->softc;
+
+	knlist_clear(&softc->read_select.si_note, /*is_locked*/ 0);
+	knlist_destroy(&softc->read_select.si_note);
+
+	/*
+	 * Release the reference we held for kqueue.
+	 */
+	cam_periph_release(periph);
+}
+
+static void
 pass_add_physpath(void *context, int pending)
 {
	struct cam_periph *periph;
	struct pass_softc *softc;
+	struct mtx *mtx;
	char *physpath;
 
	/*
@@ -236,34 +435,38 @@ pass_add_physpath(void *context, int pending)
	periph = context;
	softc = periph->softc;
	physpath = malloc(MAXPATHLEN, M_DEVBUF, M_WAITOK);
-	cam_periph_lock(periph);
-	if (periph->flags & CAM_PERIPH_INVALID) {
-		cam_periph_unlock(periph);
+	mtx = cam_periph_mtx(periph);
+	mtx_lock(mtx);
+
+	if (periph->flags & CAM_PERIPH_INVALID)
		goto out;
-	}
+
	if (xpt_getattr(physpath, MAXPATHLEN,
			"GEOM::physpath", periph->path) == 0
	 && strlen(physpath) != 0) {
-		cam_periph_unlock(periph);
+		mtx_unlock(mtx);
		make_dev_physpath_alias(MAKEDEV_WAITOK, &softc->alias_dev,
					softc->dev, softc->alias_dev, physpath);
-		cam_periph_lock(periph);
+		mtx_lock(mtx);
	}
 
+out:
	/*
	 * Now that we've made our alias, we no longer have to have a
	 * reference to the device.
	 */
-	if ((softc->flags & PASS_FLAG_INITIAL_PHYSPATH) == 0) {
+	if ((softc->flags & PASS_FLAG_INITIAL_PHYSPATH) == 0)
		softc->flags |= PASS_FLAG_INITIAL_PHYSPATH;
-		cam_periph_unlock(periph);
-		dev_rel(softc->dev);
-	}
-	else
-		cam_periph_unlock(periph);
-out:
+
+	/*
+	 * We always acquire a reference to the periph before queueing this
+	 * task queue function, so it won't go away before we run.
+	 */
+	while (pending-- > 0)
+		cam_periph_release_locked(periph);
+	mtx_unlock(mtx);
+
	free(physpath, M_DEVBUF);
 }
 
@@ -291,7 +494,7 @@ passasync(void *callback_arg, u_int32_t code,
		 * process.
		 */
		status = cam_periph_alloc(passregister, passoninvalidate,
-					  passcleanup, NULL, "pass",
+					  passcleanup, passstart, "pass",
					  CAM_PERIPH_BIO, path,
					  passasync, AC_FOUND_DEVICE, cgd);
 
@@ -315,8 +518,19 @@ passasync(void *callback_arg, u_int32_t code,
		buftype = (uintptr_t)arg;
		if (buftype == CDAI_TYPE_PHYS_PATH) {
			struct pass_softc *softc;
+			cam_status status;
 
			softc = (struct pass_softc *)periph->softc;
+			/*
+			 * Acquire a reference to the periph before we
+			 * start the taskqueue, so that we don't run into
+			 * a situation where the periph goes away before
+			 * the task queue has a chance to run.
+			 */
+			status = cam_periph_acquire(periph);
+			if (status != CAM_REQ_CMP)
+				break;
+
			taskqueue_enqueue(taskqueue_thread,
					  &softc->add_physpath_task);
		}
@@ -361,6 +575,17 @@ passregister(struct cam_periph *periph, void *arg)
		softc->pd_type = T_DIRECT;
 
	periph->softc = softc;
+	softc->periph = periph;
+	TAILQ_INIT(&softc->incoming_queue);
+	TAILQ_INIT(&softc->active_queue);
+	TAILQ_INIT(&softc->abandoned_queue);
+	TAILQ_INIT(&softc->done_queue);
+	snprintf(softc->zone_name, sizeof(softc->zone_name), "%s%d",
+		 periph->periph_name, periph->unit_number);
+	snprintf(softc->io_zone_name, sizeof(softc->io_zone_name), "%s%dIO",
+		 periph->periph_name, periph->unit_number);
+	softc->io_zone_size = MAXPHYS;
+	knlist_init_mtx(&softc->read_select.si_note, cam_periph_mtx(periph));
 
	bzero(&cpi, sizeof(cpi));
	xpt_setup_ccb(&cpi.ccb_h, periph->path, CAM_PRIORITY_NORMAL);
@@ -374,6 +599,9 @@ passregister(struct cam_periph *periph, void *arg)
	else
		softc->maxio = cpi.maxio;	/* real value */
 
+	if (cpi.hba_misc & PIM_UNMAPPED)
+		softc->flags |= PASS_FLAG_UNMAPPED_CAPABLE;
+
	/*
	 * We pass in 0 for a blocksize, since we don't
	 * know what the blocksize of this device is, if
@@ -391,6 +619,23 @@ passregister(struct cam_periph *periph, void *arg)
			  DEVSTAT_PRIORITY_PASS);
 
	/*
+	 * Initialize the taskqueue handler for shutting down kqueue.
+	 */
+	TASK_INIT(&softc->shutdown_kqueue_task, /*priority*/ 0,
+		  pass_shutdown_kqueue, periph);
+
+	/*
+	 * Acquire a reference to the periph that we can release once we've
+	 * cleaned up the kqueue.
+	 */
+	if (cam_periph_acquire(periph) != CAM_REQ_CMP) {
+		xpt_print(periph->path, "%s: lost periph during "
+			  "registration!\n", __func__);
+		cam_periph_lock(periph);
+		return (CAM_REQ_CMP_ERR);
+	}
+
+	/*
	 * Acquire a reference to the periph before we create the devfs
	 * instance for it.  We'll release this reference once the devfs
	 * instance has been freed.
@@ -408,12 +653,15 @@ passregister(struct cam_periph *periph, void *arg)
		  periph->periph_name, periph->unit_number);
 
	/*
-	 * Now that we have made the devfs instance, hold a reference to it
-	 * until the task queue has run to setup the physical path alias.
-	 * That way devfs won't get rid of the device before we add our
-	 * alias.
+	 * Hold a reference to the periph before we create the physical
+	 * path alias so it can't go away.
	 */
-	dev_ref(softc->dev);
+	if (cam_periph_acquire(periph) != CAM_REQ_CMP) {
+		xpt_print(periph->path, "%s: lost periph during "
+			  "registration!\n", __func__);
+		cam_periph_lock(periph);
+		return (CAM_REQ_CMP_ERR);
+	}
 
	cam_periph_lock(periph);
	softc->dev->si_drv1 = periph;
@@ -514,6 +762,55 @@ passclose(struct cdev *dev, int flag, int fmt, struct thread *td)
	softc = periph->softc;
	softc->open_count--;
 
+	if (softc->open_count == 0) {
+		struct pass_io_req *io_req, *io_req2;
+		int need_unlock;
+
+		need_unlock = 0;
+
+		TAILQ_FOREACH_SAFE(io_req, &softc->done_queue, links, io_req2) {
+			TAILQ_REMOVE(&softc->done_queue, io_req, links);
+			passiocleanup(softc, io_req);
+			uma_zfree(softc->pass_zone, io_req);
+		}
+
+		TAILQ_FOREACH_SAFE(io_req, &softc->incoming_queue, links,
+				   io_req2) {
+			TAILQ_REMOVE(&softc->incoming_queue, io_req, links);
+			passiocleanup(softc, io_req);
+			uma_zfree(softc->pass_zone, io_req);
+		}
+
+		/*
+		 * If there are any active I/Os, we need to forcibly acquire a
+		 * reference to the peripheral so that we don't go away
+		 * before they complete.  We'll release the reference when
+		 * the abandoned queue is empty.
+		 */
+		io_req = TAILQ_FIRST(&softc->active_queue);
+		if ((io_req != NULL)
+		 && (softc->flags & PASS_FLAG_ABANDONED_REF_SET) == 0) {
+			cam_periph_doacquire(periph);
+			softc->flags |= PASS_FLAG_ABANDONED_REF_SET;
+		}
+
+		/*
+		 * Since the I/O in the active queue is not under our
+		 * control, just set a flag so that we can clean it up when
+		 * it completes and put it on the abandoned queue.  This
+		 * will prevent our sending spurious completions in the
+		 * event that the device is opened again before these I/Os
+		 * complete.
+		 */
+		TAILQ_FOREACH_SAFE(io_req, &softc->active_queue, links,
+				   io_req2) {
+			TAILQ_REMOVE(&softc->active_queue, io_req, links);
+			io_req->flags |= PASS_IO_ABANDONED;
+			TAILQ_INSERT_TAIL(&softc->abandoned_queue, io_req,
+					  links);
+		}
+	}
+
	cam_periph_release_locked(periph);
 
	/*
@@ -533,6 +830,915 @@ passclose(struct cdev *dev, int flag, int fmt, struct thread *td)
 
	return (0);
 }
 
+static void
+passstart(struct cam_periph *periph, union ccb *start_ccb)
+{
+	struct pass_softc *softc;
+
+	softc = (struct pass_softc *)periph->softc;
+
+	switch (softc->state) {
+	case PASS_STATE_NORMAL: {
+		struct pass_io_req *io_req;
+
+		/*
+		 * Check for any queued I/O requests that require an
+		 * allocated slot.
+		 */
+		io_req = TAILQ_FIRST(&softc->incoming_queue);
+		if (io_req == NULL) {
+			xpt_release_ccb(start_ccb);
+			break;
+		}
+		TAILQ_REMOVE(&softc->incoming_queue, io_req, links);
+		TAILQ_INSERT_TAIL(&softc->active_queue, io_req, links);
+		/*
+		 * Merge the user's CCB into the allocated CCB.
+		 */
+		xpt_merge_ccb(start_ccb, &io_req->ccb);
+		start_ccb->ccb_h.ccb_type = PASS_CCB_QUEUED_IO;
+		start_ccb->ccb_h.ccb_ioreq = io_req;
+		start_ccb->ccb_h.cbfcnp = passdone;
+		io_req->alloced_ccb = start_ccb;
+		binuptime(&io_req->start_time);
+		devstat_start_transaction(softc->device_stats,
+					  &io_req->start_time);
+
+		xpt_action(start_ccb);
+
+		/*
+		 * If we have any more I/O waiting, schedule ourselves again.
+		 */
+		if (!TAILQ_EMPTY(&softc->incoming_queue))
+			xpt_schedule(periph, CAM_PRIORITY_NORMAL);
+		break;
+	}
+	default:
+		break;
+	}
+}
+
+static void
+passdone(struct cam_periph *periph, union ccb *done_ccb)
+{
+	struct pass_softc *softc;
+	struct ccb_scsiio *csio;
+
+	softc = (struct pass_softc *)periph->softc;
+
+	cam_periph_assert(periph, MA_OWNED);
+
+	csio = &done_ccb->csio;
+	switch (csio->ccb_h.ccb_type) {
+	case PASS_CCB_QUEUED_IO: {
+		struct pass_io_req *io_req;
+
+		io_req = done_ccb->ccb_h.ccb_ioreq;
+#if 0
+		xpt_print(periph->path, "%s: called for user CCB %p\n",
+			  __func__, io_req->user_ccb_ptr);
+#endif
+		if (((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+		 && (done_ccb->ccb_h.flags & CAM_PASS_ERR_RECOVER)
+		 && ((io_req->flags & PASS_IO_ABANDONED) == 0)) {
+			int error;
+
+			error = passerror(done_ccb, CAM_RETRY_SELTO,
+					  SF_RETRY_UA | SF_NO_PRINT);
+
+			if (error == ERESTART) {
+				/*
+				 * A retry was scheduled, so
+				 * just return.
+				 */
+				return;
+			}
+		}
+
+		/*
+		 * Copy the allocated CCB contents back to the malloced CCB
+		 * so we can give status back to the user when he requests it.
+		 */
+		bcopy(done_ccb, &io_req->ccb, sizeof(*done_ccb));
+
+		/*
+		 * Log data/transaction completion with devstat(9).
+		 */
+		switch (done_ccb->ccb_h.func_code) {
+		case XPT_SCSI_IO:
+			devstat_end_transaction(softc->device_stats,
+			    done_ccb->csio.dxfer_len - done_ccb->csio.resid,
+			    done_ccb->csio.tag_action & 0x3,
+			    ((done_ccb->ccb_h.flags & CAM_DIR_MASK) ==
+			    CAM_DIR_NONE) ? DEVSTAT_NO_DATA :
+			    (done_ccb->ccb_h.flags & CAM_DIR_OUT) ?
+ DEVSTAT_WRITE : DEVSTAT_READ, NULL, + &io_req->start_time); + break; + case XPT_ATA_IO: + devstat_end_transaction(softc->device_stats, + done_ccb->ataio.dxfer_len - done_ccb->ataio.resid, + done_ccb->ataio.tag_action & 0x3, + ((done_ccb->ccb_h.flags & CAM_DIR_MASK) == + CAM_DIR_NONE) ? DEVSTAT_NO_DATA : + (done_ccb->ccb_h.flags & CAM_DIR_OUT) ? + DEVSTAT_WRITE : DEVSTAT_READ, NULL, + &io_req->start_time); + break; + case XPT_SMP_IO: + /* + * XXX KDM this isn't quite right, but there isn't + * currently an easy way to represent a bidirectional + * transfer in devstat. The only way to do it + * and have the byte counts come out right would + * mean that we would have to record two + * transactions, one for the request and one for the + * response. For now, so that we report something, + * just treat the entire thing as a read. + */ + devstat_end_transaction(softc->device_stats, + done_ccb->smpio.smp_request_len + + done_ccb->smpio.smp_response_len, + DEVSTAT_TAG_SIMPLE, DEVSTAT_READ, NULL, + &io_req->start_time); + break; + default: + devstat_end_transaction(softc->device_stats, 0, + DEVSTAT_TAG_NONE, DEVSTAT_NO_DATA, NULL, + &io_req->start_time); + break; + } + + /* + * In the normal case, take the completed I/O off of the + * active queue and put it on the done queue. Notify the + * user that we have a completed I/O. + */ + if ((io_req->flags & PASS_IO_ABANDONED) == 0) { + TAILQ_REMOVE(&softc->active_queue, io_req, links); + TAILQ_INSERT_TAIL(&softc->done_queue, io_req, links); + selwakeuppri(&softc->read_select, PRIBIO); + KNOTE_LOCKED(&softc->read_select.si_note, 0); + } else { + /* + * In the case of an abandoned I/O (final close + * without fetching the I/O), take it off of the + * abandoned queue and free it. 
+ */ + TAILQ_REMOVE(&softc->abandoned_queue, io_req, links); + passiocleanup(softc, io_req); + uma_zfree(softc->pass_zone, io_req); + + /* + * Release the done_ccb here, since we may wind up + * freeing the peripheral when we decrement the + * reference count below. + */ + xpt_release_ccb(done_ccb); + + /* + * If the abandoned queue is empty, we can release + * our reference to the periph since we won't have + * any more completions coming. + */ + if ((TAILQ_EMPTY(&softc->abandoned_queue)) + && (softc->flags & PASS_FLAG_ABANDONED_REF_SET)) { + softc->flags &= ~PASS_FLAG_ABANDONED_REF_SET; + cam_periph_release_locked(periph); + } + + /* + * We have already released the CCB, so we can + * return. + */ + return; + } + break; + } + } + xpt_release_ccb(done_ccb); +} + +static int +passcreatezone(struct cam_periph *periph) +{ + struct pass_softc *softc; + int error; + + error = 0; + softc = (struct pass_softc *)periph->softc; + + cam_periph_assert(periph, MA_OWNED); + KASSERT(((softc->flags & PASS_FLAG_ZONE_VALID) == 0), + ("%s called when the pass(4) zone is valid!\n", __func__)); + KASSERT((softc->pass_zone == NULL), + ("%s called when the pass(4) zone is allocated!\n", __func__)); + + if ((softc->flags & PASS_FLAG_ZONE_INPROG) == 0) { + + /* + * We're the first context through, so we need to create + * the pass(4) UMA zone for I/O requests. + */ + softc->flags |= PASS_FLAG_ZONE_INPROG; + + /* + * uma_zcreate() does a blocking (M_WAITOK) allocation, + * so we cannot hold a mutex while we call it. 
+ */ + cam_periph_unlock(periph); + + softc->pass_zone = uma_zcreate(softc->zone_name, + sizeof(struct pass_io_req), NULL, NULL, NULL, NULL, + /*align*/ 0, /*flags*/ 0); + + softc->pass_io_zone = uma_zcreate(softc->io_zone_name, + softc->io_zone_size, NULL, NULL, NULL, NULL, + /*align*/ 0, /*flags*/ 0); + + cam_periph_lock(periph); + + if ((softc->pass_zone == NULL) + || (softc->pass_io_zone == NULL)) { + if (softc->pass_zone == NULL) + xpt_print(periph->path, "unable to allocate " + "IO Req UMA zone\n"); + else + xpt_print(periph->path, "unable to allocate " + "IO UMA zone\n"); + softc->flags &= ~PASS_FLAG_ZONE_INPROG; + goto bailout; + } + + /* + * Set the flags appropriately and notify any other waiters. + */ + softc->flags &= ~PASS_FLAG_ZONE_INPROG; + softc->flags |= PASS_FLAG_ZONE_VALID; + wakeup(&softc->pass_zone); + } else { + /* + * In this case, the UMA zone has not yet been created, but + * another context is in the process of creating it. We + * need to sleep until the creation is either done or has + * failed. + */ + while ((softc->flags & PASS_FLAG_ZONE_INPROG) + && ((softc->flags & PASS_FLAG_ZONE_VALID) == 0)) { + error = msleep(&softc->pass_zone, + cam_periph_mtx(periph), PRIBIO, + "paszon", 0); + if (error != 0) + goto bailout; + } + /* + * If the zone creation failed, no luck for the user. 
+ */ + if ((softc->flags & PASS_FLAG_ZONE_VALID) == 0){ + error = ENOMEM; + goto bailout; + } + } +bailout: + return (error); +} + +static void +passiocleanup(struct pass_softc *softc, struct pass_io_req *io_req) +{ + union ccb *ccb; + u_int8_t **data_ptrs[CAM_PERIPH_MAXMAPS]; + int i, numbufs; + + ccb = &io_req->ccb; + + switch (ccb->ccb_h.func_code) { + case XPT_DEV_MATCH: + numbufs = min(io_req->num_bufs, 2); + + if (numbufs == 1) { + data_ptrs[0] = (u_int8_t **)&ccb->cdm.matches; + } else { + data_ptrs[0] = (u_int8_t **)&ccb->cdm.patterns; + data_ptrs[1] = (u_int8_t **)&ccb->cdm.matches; + } + break; + case XPT_SCSI_IO: + case XPT_CONT_TARGET_IO: + data_ptrs[0] = &ccb->csio.data_ptr; + numbufs = min(io_req->num_bufs, 1); + break; + case XPT_ATA_IO: + data_ptrs[0] = &ccb->ataio.data_ptr; + numbufs = min(io_req->num_bufs, 1); + break; + case XPT_SMP_IO: + numbufs = min(io_req->num_bufs, 2); + data_ptrs[0] = &ccb->smpio.smp_request; + data_ptrs[1] = &ccb->smpio.smp_response; + break; + case XPT_DEV_ADVINFO: + numbufs = min(io_req->num_bufs, 1); + data_ptrs[0] = (uint8_t **)&ccb->cdai.buf; + break; + default: + /* allow ourselves to be swapped once again */ + return; + break; /* NOTREACHED */ + } + + if (io_req->flags & PASS_IO_USER_SEG_MALLOC) { + free(io_req->user_segptr, M_SCSIPASS); + io_req->user_segptr = NULL; + } + + /* + * We only want to free memory we malloced. 
+ */ + if (io_req->data_flags == CAM_DATA_VADDR) { + for (i = 0; i < io_req->num_bufs; i++) { + if (io_req->kern_bufs[i] == NULL) + continue; + + free(io_req->kern_bufs[i], M_SCSIPASS); + io_req->kern_bufs[i] = NULL; + } + } else if (io_req->data_flags == CAM_DATA_SG) { + for (i = 0; i < io_req->num_kern_segs; i++) { + if ((uint8_t *)(uintptr_t) + io_req->kern_segptr[i].ds_addr == NULL) + continue; + + uma_zfree(softc->pass_io_zone, (uint8_t *)(uintptr_t) + io_req->kern_segptr[i].ds_addr); + io_req->kern_segptr[i].ds_addr = 0; + } + } + + if (io_req->flags & PASS_IO_KERN_SEG_MALLOC) { + free(io_req->kern_segptr, M_SCSIPASS); + io_req->kern_segptr = NULL; + } + + if (io_req->data_flags != CAM_DATA_PADDR) { + for (i = 0; i < numbufs; i++) { + /* + * Restore the user's buffer pointers to their + * previous values. + */ + if (io_req->user_bufs[i] != NULL) + *data_ptrs[i] = io_req->user_bufs[i]; + } + } + +} + +static int +passcopysglist(struct cam_periph *periph, struct pass_io_req *io_req, + ccb_flags direction) +{ + bus_size_t kern_watermark, user_watermark, len_copied, len_to_copy; + bus_dma_segment_t *user_sglist, *kern_sglist; + int i, j, error; + + error = 0; + kern_watermark = 0; + user_watermark = 0; + len_to_copy = 0; + len_copied = 0; + user_sglist = io_req->user_segptr; + kern_sglist = io_req->kern_segptr; + + for (i = 0, j = 0; i < io_req->num_user_segs && + j < io_req->num_kern_segs;) { + uint8_t *user_ptr, *kern_ptr; + + len_to_copy = min(user_sglist[i].ds_len - user_watermark, + kern_sglist[j].ds_len - kern_watermark); + + user_ptr = (uint8_t *)(uintptr_t)user_sglist[i].ds_addr; + user_ptr = user_ptr + user_watermark; + kern_ptr = (uint8_t *)(uintptr_t)kern_sglist[j].ds_addr; + kern_ptr = kern_ptr + kern_watermark; + + user_watermark += len_to_copy; + kern_watermark += len_to_copy; + + if (!useracc(user_ptr, len_to_copy, + (direction == CAM_DIR_IN) ? 
VM_PROT_WRITE : VM_PROT_READ)) { + xpt_print(periph->path, "%s: unable to access user " + "S/G list element %p len %zu\n", __func__, + user_ptr, len_to_copy); + error = EFAULT; + goto bailout; + } + + if (direction == CAM_DIR_IN) { + error = copyout(kern_ptr, user_ptr, len_to_copy); + if (error != 0) { + xpt_print(periph->path, "%s: copyout of %u " + "bytes from %p to %p failed with " + "error %d\n", __func__, len_to_copy, + kern_ptr, user_ptr, error); + goto bailout; + } + } else { + error = copyin(user_ptr, kern_ptr, len_to_copy); + if (error != 0) { + xpt_print(periph->path, "%s: copyin of %u " + "bytes from %p to %p failed with " + "error %d\n", __func__, len_to_copy, + user_ptr, kern_ptr, error); + goto bailout; + } + } + + len_copied += len_to_copy; + + if (user_sglist[i].ds_len == user_watermark) { + i++; + user_watermark = 0; + } + + if (kern_sglist[j].ds_len == kern_watermark) { + j++; + kern_watermark = 0; + } + } + +bailout: + + return (error); +} + +static int +passmemsetup(struct cam_periph *periph, struct pass_io_req *io_req) +{ + union ccb *ccb; + struct pass_softc *softc; + int numbufs, i; + uint8_t **data_ptrs[CAM_PERIPH_MAXMAPS]; + uint32_t lengths[CAM_PERIPH_MAXMAPS]; + uint32_t dirs[CAM_PERIPH_MAXMAPS]; + uint32_t num_segs; + uint16_t *seg_cnt_ptr; + size_t maxmap; + int error; + + cam_periph_assert(periph, MA_NOTOWNED); + + softc = periph->softc; + + error = 0; + ccb = &io_req->ccb; + maxmap = 0; + num_segs = 0; + seg_cnt_ptr = NULL; + + switch(ccb->ccb_h.func_code) { + case XPT_DEV_MATCH: + if (ccb->cdm.match_buf_len == 0) { + printf("%s: invalid match buffer length 0\n", __func__); + return(EINVAL); + } + if (ccb->cdm.pattern_buf_len > 0) { + data_ptrs[0] = (u_int8_t **)&ccb->cdm.patterns; + lengths[0] = ccb->cdm.pattern_buf_len; + dirs[0] = CAM_DIR_OUT; + data_ptrs[1] = (u_int8_t **)&ccb->cdm.matches; + lengths[1] = ccb->cdm.match_buf_len; + dirs[1] = CAM_DIR_IN; + numbufs = 2; + } else { + data_ptrs[0] = (u_int8_t **)&ccb->cdm.matches; + 
lengths[0] = ccb->cdm.match_buf_len; + dirs[0] = CAM_DIR_IN; + numbufs = 1; + } + io_req->data_flags = CAM_DATA_VADDR; + break; + case XPT_SCSI_IO: + case XPT_CONT_TARGET_IO: + if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE) + return(0); + + /* + * The user shouldn't be able to supply a bio. + */ + if ((ccb->ccb_h.flags & CAM_DATA_MASK) == CAM_DATA_BIO) + return (EINVAL); + + io_req->data_flags = ccb->ccb_h.flags & CAM_DATA_MASK; + + data_ptrs[0] = &ccb->csio.data_ptr; + lengths[0] = ccb->csio.dxfer_len; + dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK; + num_segs = ccb->csio.sglist_cnt; + seg_cnt_ptr = &ccb->csio.sglist_cnt; + numbufs = 1; + maxmap = softc->maxio; + break; + case XPT_ATA_IO: + if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE) + return(0); + + /* + * We only support a single virtual address for ATA I/O. + */ + if ((ccb->ccb_h.flags & CAM_DATA_MASK) != CAM_DATA_VADDR) + return (EINVAL); + + io_req->data_flags = CAM_DATA_VADDR; + + data_ptrs[0] = &ccb->ataio.data_ptr; + lengths[0] = ccb->ataio.dxfer_len; + dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK; + numbufs = 1; + maxmap = softc->maxio; + break; + case XPT_SMP_IO: + io_req->data_flags = CAM_DATA_VADDR; + + data_ptrs[0] = &ccb->smpio.smp_request; + lengths[0] = ccb->smpio.smp_request_len; + dirs[0] = CAM_DIR_OUT; + data_ptrs[1] = &ccb->smpio.smp_response; + lengths[1] = ccb->smpio.smp_response_len; + dirs[1] = CAM_DIR_IN; + numbufs = 2; + maxmap = softc->maxio; + break; + case XPT_DEV_ADVINFO: + if (ccb->cdai.bufsiz == 0) + return (0); + + io_req->data_flags = CAM_DATA_VADDR; + + data_ptrs[0] = (uint8_t **)&ccb->cdai.buf; + lengths[0] = ccb->cdai.bufsiz; + dirs[0] = CAM_DIR_IN; + numbufs = 1; + break; + default: + return(EINVAL); + break; /* NOTREACHED */ + } + + io_req->num_bufs = numbufs; + + /* + * If there is a maximum, check to make sure that the user's + * request fits within the limit. In general, we should only have + * a maximum length for requests that go to hardware. 
Otherwise it + * is whatever we're able to malloc. + */ + for (i = 0; i < numbufs; i++) { + io_req->user_bufs[i] = *data_ptrs[i]; + io_req->dirs[i] = dirs[i]; + io_req->lengths[i] = lengths[i]; + + if (maxmap == 0) + continue; + + if (lengths[i] <= maxmap) + continue; + + xpt_print(periph->path, "%s: data length %u > max allowed %u " + "bytes\n", __func__, lengths[i], maxmap); + error = EINVAL; + goto bailout; + } + + switch (io_req->data_flags) { + case CAM_DATA_VADDR: + /* Map or copy the buffer into kernel address space */ + for (i = 0; i < numbufs; i++) { + uint8_t *tmp_buf; + + /* + * If for some reason no length is specified, we + * don't need to allocate anything. + */ + if (io_req->lengths[i] == 0) + continue; + + /* + * Make sure that the user's buffer is accessible + * to that process. + */ + if (!useracc(io_req->user_bufs[i], io_req->lengths[i], + (io_req->dirs[i] == CAM_DIR_IN) ? VM_PROT_WRITE : + VM_PROT_READ)) { + xpt_print(periph->path, "%s: user address %p " + "length %u is not accessible\n", __func__, + io_req->user_bufs[i], io_req->lengths[i]); + error = EFAULT; + goto bailout; + } + + tmp_buf = malloc(lengths[i], M_SCSIPASS, + M_WAITOK | M_ZERO); + io_req->kern_bufs[i] = tmp_buf; + *data_ptrs[i] = tmp_buf; + +#if 0 + xpt_print(periph->path, "%s: malloced %p len %u, user " + "buffer %p, operation: %s\n", __func__, + tmp_buf, lengths[i], io_req->user_bufs[i], + (dirs[i] == CAM_DIR_IN) ? "read" : "write"); +#endif + /* + * We only need to copy in if the user is writing. 
+ */ + if (dirs[i] != CAM_DIR_OUT) + continue; + + error = copyin(io_req->user_bufs[i], + io_req->kern_bufs[i], lengths[i]); + if (error != 0) { + xpt_print(periph->path, "%s: copy of user " + "buffer from %p to %p failed with " + "error %d\n", __func__, + io_req->user_bufs[i], + io_req->kern_bufs[i], error); + goto bailout; + } + } + break; + case CAM_DATA_PADDR: + /* Pass down the pointer as-is */ + break; + case CAM_DATA_SG: { + size_t sg_length, size_to_go, alloc_size; + uint32_t num_segs_needed; + + /* + * Copy the user S/G list in, and then copy in the + * individual segments. + */ + /* + * We shouldn't see this, but check just in case. + */ + if (numbufs != 1) { + xpt_print(periph->path, "%s: cannot currently handle " + "more than one S/G list per CCB\n", __func__); + error = EINVAL; + goto bailout; + } + + /* + * We have to have at least one segment. + */ + if (num_segs == 0) { + xpt_print(periph->path, "%s: CAM_DATA_SG flag set, " + "but sglist_cnt=0!\n", __func__); + error = EINVAL; + goto bailout; + } + + /* + * Make sure the user specified the total length and didn't + * just leave it to us to decode the S/G list. + */ + if (lengths[0] == 0) { + xpt_print(periph->path, "%s: no dxfer_len specified, " + "but CAM_DATA_SG flag is set!\n", __func__); + error = EINVAL; + goto bailout; + } + + /* + * We allocate buffers in io_zone_size increments for an + * S/G list. This will generally be MAXPHYS. 
+ */ + if (lengths[0] <= softc->io_zone_size) + num_segs_needed = 1; + else { + num_segs_needed = lengths[0] / softc->io_zone_size; + if ((lengths[0] % softc->io_zone_size) != 0) + num_segs_needed++; + } + + /* Figure out the size of the S/G list */ + sg_length = num_segs * sizeof(bus_dma_segment_t); + io_req->num_user_segs = num_segs; + io_req->num_kern_segs = num_segs_needed; + + /* Save the user's S/G list pointer for later restoration */ + io_req->user_bufs[0] = *data_ptrs[0]; + + /* + * If the user's S/G list has more segments than the default + * array can handle, allocate a larger array; otherwise use + * the built-in storage. + */ + if (num_segs > PASS_MAX_SEGS) { + io_req->user_segptr = malloc(sizeof(bus_dma_segment_t) * + num_segs, M_SCSIPASS, M_WAITOK | M_ZERO); + io_req->flags |= PASS_IO_USER_SEG_MALLOC; + } else + io_req->user_segptr = io_req->user_segs; + + if (!useracc(*data_ptrs[0], sg_length, VM_PROT_READ)) { + xpt_print(periph->path, "%s: unable to access user " + "S/G list at %p\n", __func__, *data_ptrs[0]); + error = EFAULT; + goto bailout; + } + + error = copyin(*data_ptrs[0], io_req->user_segptr, sg_length); + if (error != 0) { + xpt_print(periph->path, "%s: copy of user S/G list " + "from %p to %p failed with error %d\n", + __func__, *data_ptrs[0], io_req->user_segptr, + error); + goto bailout; + } + + if (num_segs_needed > PASS_MAX_SEGS) { + io_req->kern_segptr = malloc(sizeof(bus_dma_segment_t) * + num_segs_needed, M_SCSIPASS, M_WAITOK | M_ZERO); + io_req->flags |= PASS_IO_KERN_SEG_MALLOC; + } else { + io_req->kern_segptr = io_req->kern_segs; + } + + /* + * Allocate the kernel S/G list. 
+ */ + for (size_to_go = lengths[0], i = 0; + size_to_go > 0 && i < num_segs_needed; + i++, size_to_go -= alloc_size) { + uint8_t *kern_ptr; + + alloc_size = min(size_to_go, softc->io_zone_size); + kern_ptr = uma_zalloc(softc->pass_io_zone, M_WAITOK); + io_req->kern_segptr[i].ds_addr = + (bus_addr_t)(uintptr_t)kern_ptr; + io_req->kern_segptr[i].ds_len = alloc_size; + } + if (size_to_go > 0) { + printf("%s: size_to_go = %zu, software error!\n", + __func__, size_to_go); + error = EINVAL; + goto bailout; + } + + *data_ptrs[0] = (uint8_t *)io_req->kern_segptr; + *seg_cnt_ptr = io_req->num_kern_segs; + + /* + * We only need to copy data here if the user is writing. + */ + if (dirs[0] == CAM_DIR_OUT) + error = passcopysglist(periph, io_req, dirs[0]); + break; + } + case CAM_DATA_SG_PADDR: { + size_t sg_length; + + /* + * We shouldn't see this, but check just in case. + */ + if (numbufs != 1) { + printf("%s: cannot currently handle more than one " + "S/G list per CCB\n", __func__); + error = EINVAL; + goto bailout; + } + + /* + * We have to have at least one segment. + */ + if (num_segs == 0) { + xpt_print(periph->path, "%s: CAM_DATA_SG_PADDR flag " + "set, but sglist_cnt=0!\n", __func__); + error = EINVAL; + goto bailout; + } + + /* + * Make sure the user specified the total length and didn't + * just leave it to us to decode the S/G list. 
+ */ + if (lengths[0] == 0) { + xpt_print(periph->path, "%s: no dxfer_len specified, " + "but CAM_DATA_SG_PADDR flag is set!\n", __func__); + error = EINVAL; + goto bailout; + } + + /* Figure out the size of the S/G list */ + sg_length = num_segs * sizeof(bus_dma_segment_t); + io_req->num_user_segs = num_segs; + io_req->num_kern_segs = io_req->num_user_segs; + + /* Save the user's S/G list pointer for later restoration */ + io_req->user_bufs[0] = *data_ptrs[0]; + + if (num_segs > PASS_MAX_SEGS) { + io_req->user_segptr = malloc(sizeof(bus_dma_segment_t) * + num_segs, M_SCSIPASS, M_WAITOK | M_ZERO); + io_req->flags |= PASS_IO_USER_SEG_MALLOC; + } else + io_req->user_segptr = io_req->user_segs; + + io_req->kern_segptr = io_req->user_segptr; + + error = copyin(*data_ptrs[0], io_req->user_segptr, sg_length); + if (error != 0) { + xpt_print(periph->path, "%s: copy of user S/G list " + "from %p to %p failed with error %d\n", + __func__, *data_ptrs[0], io_req->user_segptr, + error); + goto bailout; + } + break; + } + default: + case CAM_DATA_BIO: + /* + * A user shouldn't be attaching a bio to the CCB. It + * isn't a user-accessible structure. + */ + error = EINVAL; + break; + } + +bailout: + if (error != 0) + passiocleanup(softc, io_req); + + return (error); +} + +static int +passmemdone(struct cam_periph *periph, struct pass_io_req *io_req) +{ + struct pass_softc *softc; + union ccb *ccb; + int error; + int i; + + error = 0; + softc = (struct pass_softc *)periph->softc; + ccb = &io_req->ccb; + + switch (io_req->data_flags) { + case CAM_DATA_VADDR: + /* + * Copy back to the user buffer if this was a read. 
+ */ + for (i = 0; i < io_req->num_bufs; i++) { + if (io_req->dirs[i] != CAM_DIR_IN) + continue; + + error = copyout(io_req->kern_bufs[i], + io_req->user_bufs[i], io_req->lengths[i]); + if (error != 0) { + xpt_print(periph->path, "Unable to copy %u " + "bytes from %p to user address %p\n", + io_req->lengths[i], + io_req->kern_bufs[i], + io_req->user_bufs[i]); + goto bailout; + } + + } + break; + case CAM_DATA_PADDR: + /* Do nothing. The pointer is a physical address already */ + break; + case CAM_DATA_SG: + /* + * Copy back to the user buffer if this was a read. + * Restore the user's S/G list buffer pointer. + */ + if (io_req->dirs[0] == CAM_DIR_IN) + error = passcopysglist(periph, io_req, io_req->dirs[0]); + break; + case CAM_DATA_SG_PADDR: + /* + * Restore the user's S/G list buffer pointer. No need to + * copy. + */ + break; + default: + case CAM_DATA_BIO: + error = EINVAL; + break; + } + +bailout: + /* + * Reset the user's pointers to their original values and free + * allocated memory. + */ + passiocleanup(softc, io_req); + + return (error); +} + static int passioctl(struct cdev *dev, u_long cmd, caddr_t addr, int flag, struct thread *td) { @@ -622,15 +1828,317 @@ passdoioctl(struct cdev *dev, u_long cmd, caddr_t addr, int flag, struct thread break; } + case CAMIOQUEUE: + { + struct pass_io_req *io_req; + union ccb **user_ccb, *ccb; + xpt_opcode fc; + + if ((softc->flags & PASS_FLAG_ZONE_VALID) == 0) { + error = passcreatezone(periph); + if (error != 0) + goto bailout; + } + + /* + * We're going to do a blocking allocation for this I/O + * request, so we have to drop the lock. 
+ */ + cam_periph_unlock(periph); + + io_req = uma_zalloc(softc->pass_zone, M_WAITOK | M_ZERO); + ccb = &io_req->ccb; + user_ccb = (union ccb **)addr; + + /* + * Unlike the CAMIOCOMMAND ioctl above, we only have a + * pointer to the user's CCB, so we have to copy the whole + * thing in to a buffer we have allocated (above) instead + * of allowing the ioctl code to malloc a buffer and copy + * it in. + * + * This is an advantage for this asynchronous interface, + * since we don't want the memory to get freed while the + * CCB is outstanding. + */ +#if 0 + xpt_print(periph->path, "Copying user CCB %p to " + "kernel address %p\n", *user_ccb, ccb); +#endif + error = copyin(*user_ccb, ccb, sizeof(*ccb)); + if (error != 0) { + xpt_print(periph->path, "Copy of user CCB %p to " + "kernel address %p failed with error %d\n", + *user_ccb, ccb, error); + uma_zfree(softc->pass_zone, io_req); + cam_periph_lock(periph); + break; + } + + /* + * Some CCB types, like scan bus and scan lun can only go + * through the transport layer device. + */ + if (ccb->ccb_h.func_code & XPT_FC_XPT_ONLY) { + xpt_print(periph->path, "CCB function code %#x is " + "restricted to the XPT device\n", + ccb->ccb_h.func_code); + uma_zfree(softc->pass_zone, io_req); + cam_periph_lock(periph); + error = ENODEV; + break; + } + + /* + * Save the user's CCB pointer as well as his linked list + * pointers and peripheral private area so that we can + * restore these later. + */ + io_req->user_ccb_ptr = *user_ccb; + io_req->user_periph_links = ccb->ccb_h.periph_links; + io_req->user_periph_priv = ccb->ccb_h.periph_priv; + + /* + * Now that we've saved the user's values, we can set our + * own peripheral private entry. + */ + ccb->ccb_h.ccb_ioreq = io_req; + + /* Compatibility for RL/priority-unaware code. */ + priority = ccb->ccb_h.pinfo.priority; + if (priority <= CAM_PRIORITY_OOB) + priority += CAM_PRIORITY_OOB + 1; + + /* + * Setup fields in the CCB like the path and the priority. 
+ * The path in particular cannot be done in userland, since + * it is a pointer to a kernel data structure. + */ + xpt_setup_ccb_flags(&ccb->ccb_h, periph->path, priority, + ccb->ccb_h.flags); + + /* + * Setup our done routine. There is no way for the user to + * have a valid pointer here. + */ + ccb->ccb_h.cbfcnp = passdone; + + fc = ccb->ccb_h.func_code; + /* + * If this function code has memory that can be mapped in + * or out, we need to call passmemsetup(). + */ + if ((fc == XPT_SCSI_IO) || (fc == XPT_ATA_IO) + || (fc == XPT_SMP_IO) || (fc == XPT_DEV_MATCH) + || (fc == XPT_DEV_ADVINFO)) { + error = passmemsetup(periph, io_req); + if (error != 0) { + uma_zfree(softc->pass_zone, io_req); + cam_periph_lock(periph); + break; + } + } else + io_req->mapinfo.num_bufs_used = 0; + + cam_periph_lock(periph); + + /* + * Everything goes on the incoming queue initially. + */ + TAILQ_INSERT_TAIL(&softc->incoming_queue, io_req, links); + + /* + * If the CCB is queued, and is not a user CCB, then + * we need to allocate a slot for it. Call xpt_schedule() + * so that our start routine will get called when a CCB is + * available. + */ + if ((fc & XPT_FC_QUEUED) + && ((fc & XPT_FC_USER_CCB) == 0)) { + xpt_schedule(periph, priority); + break; + } + + /* + * At this point, the CCB in question is either an + * immediate CCB (like XPT_DEV_ADVINFO) or it is a user CCB + * and therefore should be malloced, not allocated via a slot. + * Remove the CCB from the incoming queue and add it to the + * active queue. + */ + TAILQ_REMOVE(&softc->incoming_queue, io_req, links); + TAILQ_INSERT_TAIL(&softc->active_queue, io_req, links); + + xpt_action(ccb); + + /* + * If this is not a queued CCB (i.e. it is an immediate CCB), + * then it is already done. We need to put it on the done + * queue for the user to fetch. 
+ */ + if ((fc & XPT_FC_QUEUED) == 0) { + TAILQ_REMOVE(&softc->active_queue, io_req, links); + TAILQ_INSERT_TAIL(&softc->done_queue, io_req, links); + } + break; + } + case CAMIOGET: + { + union ccb **user_ccb; + struct pass_io_req *io_req; + int old_error; + + user_ccb = (union ccb **)addr; + old_error = 0; + + io_req = TAILQ_FIRST(&softc->done_queue); + if (io_req == NULL) { + error = ENOENT; + break; + } + + /* + * Remove the I/O from the done queue. + */ + TAILQ_REMOVE(&softc->done_queue, io_req, links); + + /* + * We have to drop the lock during the copyout because the + * copyout can result in VM faults that require sleeping. + */ + cam_periph_unlock(periph); + + /* + * Do any needed copies (e.g. for reads) and revert the + * pointers in the CCB back to the user's pointers. + */ + error = passmemdone(periph, io_req); + + old_error = error; + + io_req->ccb.ccb_h.periph_links = io_req->user_periph_links; + io_req->ccb.ccb_h.periph_priv = io_req->user_periph_priv; + +#if 0 + xpt_print(periph->path, "Copying to user CCB %p from " + "kernel address %p\n", *user_ccb, &io_req->ccb); +#endif + + error = copyout(&io_req->ccb, *user_ccb, sizeof(union ccb)); + if (error != 0) { + xpt_print(periph->path, "Copy to user CCB %p from " + "kernel address %p failed with error %d\n", + *user_ccb, &io_req->ccb, error); + } + + /* + * Prefer the first error we got back, and make sure we + * don't overwrite bad status with good. + */ + if (old_error != 0) + error = old_error; + + cam_periph_lock(periph); + + /* + * At this point, if there was an error, we could potentially + * re-queue the I/O and try again. But why? The error + * would almost certainly happen again. We might as well + * not leak memory. 
+ */ + uma_zfree(softc->pass_zone, io_req); + break; + } default: error = cam_periph_ioctl(periph, cmd, addr, passerror); break; } +bailout: cam_periph_unlock(periph); + return(error); } +static int +passpoll(struct cdev *dev, int poll_events, struct thread *td) +{ + struct cam_periph *periph; + struct pass_softc *softc; + int revents; + + periph = (struct cam_periph *)dev->si_drv1; + if (periph == NULL) + return (ENXIO); + + softc = (struct pass_softc *)periph->softc; + + revents = poll_events & (POLLOUT | POLLWRNORM); + if ((poll_events & (POLLIN | POLLRDNORM)) != 0) { + cam_periph_lock(periph); + + if (!TAILQ_EMPTY(&softc->done_queue)) { + revents |= poll_events & (POLLIN | POLLRDNORM); + } + cam_periph_unlock(periph); + if (revents == 0) + selrecord(td, &softc->read_select); + } + + return (revents); +} + +static int +passkqfilter(struct cdev *dev, struct knote *kn) +{ + struct cam_periph *periph; + struct pass_softc *softc; + + periph = (struct cam_periph *)dev->si_drv1; + if (periph == NULL) + return (ENXIO); + + softc = (struct pass_softc *)periph->softc; + + kn->kn_hook = (caddr_t)periph; + kn->kn_fop = &passread_filtops; + knlist_add(&softc->read_select.si_note, kn, 0); + + return (0); +} + +static void +passreadfiltdetach(struct knote *kn) +{ + struct cam_periph *periph; + struct pass_softc *softc; + + periph = (struct cam_periph *)kn->kn_hook; + softc = (struct pass_softc *)periph->softc; + + knlist_remove(&softc->read_select.si_note, kn, 0); +} + +static int +passreadfilt(struct knote *kn, long hint) +{ + struct cam_periph *periph; + struct pass_softc *softc; + int retval; + + periph = (struct cam_periph *)kn->kn_hook; + softc = (struct pass_softc *)periph->softc; + + cam_periph_assert(periph, MA_OWNED); + + if (TAILQ_EMPTY(&softc->done_queue)) + retval = 0; + else + retval = 1; + + return (retval); +} + /* * Generally, "ccb" should be the CCB supplied by the kernel. "inccb" * should be the CCB that is copied in from the user. 
@@ -652,6 +2160,12 @@ passsendccb(struct cam_periph *periph, union ccb *ccb, union ccb *inccb) xpt_merge_ccb(ccb, inccb); /* + * Setup our done routine. There is no way for the user to + * have a valid pointer here. + */ + ccb->ccb_h.cbfcnp = passdone; + + /* * Let cam_periph_mapmem do a sanity check on the data pointer format. * Even if no data transfer is needed, it's a cheap check and it * simplifies the code. diff --git a/sys/cam/scsi/scsi_pass.h b/sys/cam/scsi/scsi_pass.h index ae0e058..797ef08 100644 --- a/sys/cam/scsi/scsi_pass.h +++ b/sys/cam/scsi/scsi_pass.h @@ -39,4 +39,12 @@ #define CAMIOCOMMAND _IOWR(CAM_VERSION, 2, union ccb) #define CAMGETPASSTHRU _IOWR(CAM_VERSION, 3, union ccb) +/* + * These two ioctls take a union ccb *, but that is not explicitly declared + * to avoid having the ioctl handling code malloc and free its own copy + * of the CCB or the CCB pointer. + */ +#define CAMIOQUEUE _IO(CAM_VERSION, 4) +#define CAMIOGET _IO(CAM_VERSION, 5) + #endif