author     scottl <scottl@FreeBSD.org>  2005-07-10 15:05:39 +0000
committer  scottl <scottl@FreeBSD.org>  2005-07-10 15:05:39 +0000
commit     9126bcda9d6b16e4f6f3a24f5f7db39bf74deaa3 (patch)
tree       c3881cc2325f0e42681a5dd64beebf71a45a23c0
parent     256b79e5c0817cdedce47566f473ba2ef443694f (diff)
Massive overhaul of MPT Fusion driver:
o Add timeout error recovery (from a thread context to avoid the
  deferral of other critical interrupts).
o Properly recover commands across controller reset events.
o Update the driver to handle events and status codes that have been
  added to the MPI spec since the driver was originally written.
o Make the driver more modular to improve maintainability and support
  dynamic "personality" registration (e.g. SCSI Initiator, RAID, SAS,
  FC, etc).
o Shorten and simplify the common I/O path to improve driver
  performance.
o Add RAID volume and RAID member state/settings reporting.
o Add periodic volume resynchronization status reporting.
o Add support for sysctl tunable resync rate, member write cache
  enable, and volume transaction queue depth.

Sponsored by
------------
Avid Technologies Inc:
	SCSI error recovery, driver re-organization, update of MPI
	library headers, portions of dynamic personality registration,
	and misc bug fixes.

Wheel Open Technologies:
	RAID event notification, RAID member pass-thru support,
	firmware upload/download support, enhanced RAID resync speed,
	portions of dynamic personality registration, and misc bug
	fixes.

Detailed Changes
================

mpt.c mpt_cam.c mpt_raid.c mpt_pci.c:
o Add support for personality modules. Each module exports load and
  unload module-scope methods, as well as probe, attach, event, reset,
  shutdown, and detach per-device-instance methods.

mpt.c mpt.h mpt_pci.c:
o The driver now associates a callback function (via an index) with
  every transaction submitted to the controller. This allows the main
  interrupt handler to absolve itself of any knowledge of individual
  transaction/response types by simply calling the callback function
  "registered" for the transaction. We use a callback index instead of
  a callback function pointer in each request so we can properly
  handle responses (e.g. event notifications) that are not associated
  with a transaction.
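The index-based dispatch described above can be sketched as below. This is a minimal illustration, not the driver's actual code: the names (reply_handler_t, register_handler, dispatch_reply) and the table size are hypothetical.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of index-based reply dispatch. */
typedef struct request request_t;      /* opaque in this sketch */
typedef int (*reply_handler_t)(request_t *req, uint32_t reply_desc);

#define MAX_HANDLERS 32
reply_handler_t handler_table[MAX_HANDLERS];

/* A personality module registers a handler and gets back the index
 * that is later encoded in every request it submits. */
int
register_handler(reply_handler_t handler)
{
	for (int i = 0; i < MAX_HANDLERS; i++) {
		if (handler_table[i] == NULL) {
			handler_table[i] = handler;
			return (i);
		}
	}
	return (-1);		/* table full */
}

/* The interrupt handler needs no knowledge of transaction types: it
 * recovers the index from the reply and dispatches blindly. A reply
 * with no originating request (e.g. an event) passes req == NULL. */
int
dispatch_reply(int cb_index, request_t *req, uint32_t reply_desc)
{
	if (cb_index < 0 || cb_index >= MAX_HANDLERS ||
	    handler_table[cb_index] == NULL)
		return (-1);	/* unregistered index */
	return (handler_table[cb_index](req, reply_desc));
}

int
scsi_io_reply(request_t *req, uint32_t reply_desc)
{
	(void)req; (void)reply_desc;
	return (0);		/* completion handled */
}
```

An index rather than a function pointer also survives a controller reset cleanly, since the index is small enough to round-trip through the hardware's reply descriptor.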
  Personality modules dynamically register their callbacks with the
  driver core to receive the callback index to use for their handlers.
o Move the interrupt handler into mpt.c. The ISR algorithm is bus
  transport and OS independent and thus had no reason to be in
  mpt_pci.c.
o Simplify configuration message reply handling by copying reply frame
  data for the requester and storing completion status in the original
  request structure.
o Add the mpt_complete_request_chain() helper method and use it to
  implement reset handlers that must abort transactions.
o Keep track of all pending requests on the new requests_pending_list
  in the softc.
o Add default handlers to mpt.c to handle generic event notifications
  and controller reset activities. The event handler code is largely
  the same as in the original driver. The reset handler is new and
  terminates any pending transactions with a status code indicating
  that the controller needs to be re-initialized.
o Add some endian support to the driver. A complete audit is still
  required for this driver to have any hope of operating in a
  big-endian environment.
o Use inttypes.h and __inline. Come closer to being style(9)
  compliant.
o Remove extraneous use of typedefs.
o Convert request state from a strict enumeration to a series of
  flags. This allows us to, for example, tag transactions that have
  timed out while retaining the state that the transaction is still
  in flight on the controller.
o Add mpt_wait_req(), which allows a caller to poll or sleep for the
  completion of a request. Use this to simplify and factor code out
  of many initialization routines. We also use it to sleep for task
  management request completions in our CAM timeout handler.

mpt.c:
o Correct a bug in the event handler where request structures were
  freed even if the request reply was marked as a continuation reply.
  Continuation replies indicate that the controller still owns the
  request, and freeing these replies prematurely corrupted controller
  state.
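The flags-versus-enumeration point above is worth a small sketch. With an enumeration a request is in exactly one state, so "timed out" would erase "still on the controller"; flags compose. The flag names here are invented for illustration, not the driver's actual constants.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical request-state flags; the point is that they compose. */
#define REQ_STATE_FREE		0x00
#define REQ_STATE_ALLOCATED	0x01
#define REQ_STATE_QUEUED	0x02	/* in flight on the controller */
#define REQ_STATE_DONE		0x04
#define REQ_STATE_TIMEDOUT	0x08

typedef struct request {
	uint32_t state;
} request_t;

/* Mark a timeout without losing the fact that the controller still
 * owns the request -- impossible with a strict state enumeration. */
void
request_timed_out(request_t *req)
{
	req->state |= REQ_STATE_TIMEDOUT;
}

bool
request_in_flight(const request_t *req)
{
	return ((req->state & REQ_STATE_QUEUED) != 0);
}
```

A recovery thread can then find requests that are simultaneously QUEUED and TIMEDOUT and know it must abort them on the controller rather than simply freeing them.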
o Implement firmware upload and download. On controllers that do not
  have dedicated NVRAM (as in the Sun v20/v40z), the firmware image is
  downloaded to the controller by the system BIOS. This image occupies
  precious controller RAM space until the host driver fetches it,
  reducing the number of concurrent I/Os the controller can process.
  The uploaded image is used to re-program the controller during hard
  reset events, since the controller cannot fetch the firmware on its
  own. Implementing this feature allows much higher queue depths when
  RAID volumes are configured.
o Change configuration page accessors to allow threads to sleep
  rather than busy-wait for completion.
o Remove hard-coded data transfer sizes from configuration page
  routines so that RAID configuration page processing is possible.

mpt_reg.h:
o Move controller register definitions into a separate file.

mpt.h:
o Re-arrange includes to allow inlined functions to be defined in
  mpt.h.
o Add reply, event, and reset handler definitions.
o Add softc fields for handling timeout and controller reset recovery.

mpt_cam.c:
o Move mpt_freebsd.c to mpt_cam.c. Move all core functionality, such
  as event handling, into mpt.c, leaving only CAM SCSI support here.
o Revamp the completion handler to provide correct CAM status for all
  currently defined SCSI MPI message result codes.
o Register event and reset handlers with the MPT core. Modify the
  event handler to notify CAM of bus reset events. The controller
  reset handler will abort any transactions that have timed out. All
  other pending CAM transactions are correctly aborted by the core
  driver's reset handler.
o Allocate a single request up front to perform task management
  operations. This guarantees that we can always perform a TMF
  operation even when the controller is saturated with other
  operations.
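The pre-allocated TMF request idea can be sketched as follows. This is an assumed, simplified model (the names tmf_acquire/tmf_release are hypothetical, and real kernel code would guard the flag with a lock): setting aside one request at attach time both guarantees forward progress under resource exhaustion and enforces the one-TMF-at-a-time rule.

```c
#include <assert.h>
#include <stddef.h>

typedef struct request {
	int in_use;
} request_t;

/* One request reserved at attach time, outside the general pool. */
request_t tmf_request;

/* Always succeeds at most once until released: a TMF can be issued
 * even when the request pool is exhausted, and at most one TMF is
 * ever in flight. */
request_t *
tmf_acquire(void)
{
	if (tmf_request.in_use)
		return (NULL);	/* a TMF is already outstanding */
	tmf_request.in_use = 1;
	return (&tmf_request);
}

void
tmf_release(request_t *req)
{
	req->in_use = 0;
}
```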
  The single request also serves as a perfect mechanism for
  guaranteeing that only a single TMF is in flight at a time,
  something that is required according to the MPT Fusion
  documentation.
o Add a helper function for issuing task management requests to the
  controller. This is used to abort individual requests or perform a
  bus reset.
o Modify the CAM XPT_BUS_RESET ccb handler to wait for and properly
  handle the status of the bus reset task management frame used to
  reset the bus. The previous code assumed that the reset request
  would always succeed.
o Add timeout recovery support. When a timeout occurs, the timed-out
  request is added to a queue to be processed by our recovery thread,
  and the thread is woken up. The recovery thread processes timed-out
  commands serially, attempting first to abort them and then falling
  back to a bus reset if an abort fails.
o Add calls to mpt_reset() to reset the controller if any handshake
  command, bus reset attempt, or abort attempt fails due to a
  timeout.
o Export a secondary "bus" to CAM that exposes all volume drive
  members as pass-thru devices, allowing CAM to perform proper speed
  negotiation to hidden devices.
o Add a CAM async event handler tracking the AC_FOUND_DEVICE event.
  Use this to trigger calls to set the per-volume queue depth once
  the volume is fully registered with CAM. This is required to avoid
  hitting firmware limits on volume queue depth. Exceeding the limit
  causes the firmware to hang.

mpt_cam.h:
o Add several helper functions for interfacing to CAM and performing
  timeout recovery.

mpt_pci.c:
o Disable interrupts on the controller before registering and
  enabling interrupt delivery to the OS. Otherwise we risk receiving
  interrupts before the driver is ready for them.
o Make use of compatibility macros that allow the driver to be
  compiled under 4.x and 5.x.

mpt_raid.c:
o Add a per-controller-instance RAID thread to perform settings
  changes and query status (minimizes CPU busy-wait loops).
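The serial recovery policy (abort first, bus reset as fallback, full controller reset as the last resort) can be sketched like this. The helpers are stubs standing in for real task-management calls; all names are hypothetical, not the driver's.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the serial timeout-recovery policy. */
typedef struct cmd {
	int id;
} cmd_t;

/* Stubs for real TMF operations: here, even-numbered commands
 * pretend to abort cleanly and the bus reset always succeeds. */
bool try_abort(cmd_t *c) { return (c->id % 2 == 0); }
bool bus_reset(void)     { return (true); }

/* Process timed-out commands one at a time; returns false only when
 * even the bus reset fails, which is where a real driver would
 * escalate to a full controller reset (cf. mpt_reset()). */
bool
recover_timed_out(cmd_t *cmds, int n)
{
	for (int i = 0; i < n; i++) {
		if (try_abort(&cmds[i]))
			continue;	/* abort succeeded */
		if (!bus_reset())
			return (false);	/* escalate to controller reset */
	}
	return (true);
}
```

Doing this from a thread context, as the commit message notes, keeps potentially slow TMF handshakes out of the interrupt path.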
o Use a shutdown handler to disable the "Member Write Cache Enable"
  (MWCE) setting for RAID arrays set to enable MWCE During Rebuild.
o Change the reply handler function signature to allow handlers to
  defer the deletion of reply frames. Use this to allow the event
  reply handler to queue up events that need to be acked if no
  resources are available to immediately ack an event. Queued events
  are processed in mpt_free_request(), where resources are freed.
  This avoids a panic on resource shortage.
o Parse and print out RAID controller capabilities during driver
  probe.
o Define, allocate, and maintain RAID data structures for volumes,
  hidden member physical disks, and spare disks.
o Add dynamic sysctls for per-instance setting of the log level,
  array resync rate, array member cache enable, and volume queue
  depth.

mpt_debug.c:
o Add mpt_lprt and mpt_lprtc for printing diagnostics conditioned on
  a particular log level, to aid in tracking down driver issues.
o Add mpt_decode_value(), which parses the bits in an integer value
  based on a parsing table ((mask, value, name string) tuples).

mpilib/*:
o Update MPI library header files to the latest distribution from
  LSI.

Submitted by:	gibbs
Approved by:	re
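A table-driven decoder in the style mpt_decode_value() is described as using can be sketched as below. The table entries and the decode_value() helper are illustrative assumptions, not the driver's actual table or function.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of a (mask, value, name) parsing table; entries invented. */
struct parse_entry {
	uint32_t	 mask;
	uint32_t	 value;
	const char	*name;
};

const struct parse_entry port_flags[] = {
	{ 0x00000001, 0x00000001, "SCAN_HIGH_TO_LOW" },
	{ 0x00000060, 0x00000000, "FULL_DV" },
	{ 0x00000060, 0x00000020, "BASIC_DV_ONLY" },
	{ 0x00000060, 0x00000060, "OFF_DV" },
};

/* Append the names of all table entries matched by 'reg' to 'buf',
 * '|'-separated. Multi-bit masks let one field decode to exactly one
 * of several mutually exclusive names. */
void
decode_value(uint32_t reg, const struct parse_entry *tbl, int n,
    char *buf, size_t len)
{
	buf[0] = '\0';
	for (int i = 0; i < n; i++) {
		if ((reg & tbl[i].mask) == tbl[i].value) {
			if (buf[0] != '\0')
				strncat(buf, "|", len - strlen(buf) - 1);
			strncat(buf, tbl[i].name, len - strlen(buf) - 1);
		}
	}
}
```

Comparing `reg & mask` against a stored value (rather than testing single bits) is what lets the same table describe both flag bits and multi-bit mode fields.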
-rw-r--r--  sys/dev/mpt/mpilib/fc_log.h       43
-rw-r--r--  sys/dev/mpt/mpilib/mpi.h          63
-rw-r--r--  sys/dev/mpt/mpilib/mpi_cnfg.h    114
-rw-r--r--  sys/dev/mpt/mpilib/mpi_fc.h       59
-rw-r--r--  sys/dev/mpt/mpilib/mpi_init.h     48
-rw-r--r--  sys/dev/mpt/mpilib/mpi_ioc.h      49
-rw-r--r--  sys/dev/mpt/mpilib/mpi_lan.h      46
-rw-r--r--  sys/dev/mpt/mpilib/mpi_raid.h     46
-rw-r--r--  sys/dev/mpt/mpilib/mpi_targ.h     83
-rw-r--r--  sys/dev/mpt/mpilib/mpi_type.h     73
-rw-r--r--  sys/dev/mpt/mpt.c               2206
-rw-r--r--  sys/dev/mpt/mpt.h                913
-rw-r--r--  sys/dev/mpt/mpt_cam.c           1931
-rw-r--r--  sys/dev/mpt/mpt_cam.h            110
-rw-r--r--  sys/dev/mpt/mpt_debug.c          162
-rw-r--r--  sys/dev/mpt/mpt_freebsd.c       1530
-rw-r--r--  sys/dev/mpt/mpt_freebsd.h        357
-rw-r--r--  sys/dev/mpt/mpt_pci.c            369
-rw-r--r--  sys/dev/mpt/mpt_raid.c          1674
-rw-r--r--  sys/dev/mpt/mpt_raid.h            95
-rw-r--r--  sys/dev/mpt/mpt_reg.h            125
21 files changed, 7130 insertions, 2966 deletions
diff --git a/sys/dev/mpt/mpilib/fc_log.h b/sys/dev/mpt/mpilib/fc_log.h
index fd68bf6..23018ab 100644
--- a/sys/dev/mpt/mpilib/fc_log.h
+++ b/sys/dev/mpt/mpilib/fc_log.h
@@ -1,27 +1,34 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
*
* NAME: fc_log.h
* SUMMARY: MPI IocLogInfo definitions for the SYMFC9xx chips
diff --git a/sys/dev/mpt/mpilib/mpi.h b/sys/dev/mpt/mpilib/mpi.h
index 1044d82..3b3f6e9 100644
--- a/sys/dev/mpt/mpilib/mpi.h
+++ b/sys/dev/mpt/mpilib/mpi.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI.H
* Title: MPI Message independent structures and definitions
* Creation Date: July 27, 2000
*
- * MPI.H Version: 01.02.09
+ * MPI.H Version: 01.02.11
*
* Version History
* ---------------
@@ -73,6 +79,8 @@
* 11-15-02 01.02.08 Added define MPI_IOCSTATUS_TARGET_INVALID_IO_INDEX and
* obsoleted define MPI_IOCSTATUS_TARGET_INVALID_IOCINDEX.
* 04-01-03 01.02.09 New IOCStatus code: MPI_IOCSTATUS_FC_EXCHANGE_CANCELED
+ * 06-26-03 01.02.10 Bumped MPI_HEADER_VERSION_UNIT value.
+ * 01-16-04 01.02.11 Added define for MPI_IOCLOGINFO_TYPE_SHIFT.
* --------------------------------------------------------------------------
*/
@@ -101,7 +109,7 @@
/* Note: The major versions of 0xe0 through 0xff are reserved */
/* versioning for this MPI header set */
-#define MPI_HEADER_VERSION_UNIT (0x09)
+#define MPI_HEADER_VERSION_UNIT (0x0D)
#define MPI_HEADER_VERSION_DEV (0x00)
#define MPI_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI_HEADER_VERSION_UNIT_SHIFT (8)
@@ -318,7 +326,7 @@ typedef struct _SGE_SIMPLE_UNION
{
U32 Address32;
U64 Address64;
- } _u;
+ }u;
} SGESimpleUnion_t, MPI_POINTER pSGESimpleUnion_t,
SGE_SIMPLE_UNION, MPI_POINTER PTR_SGE_SIMPLE_UNION;
@@ -353,7 +361,7 @@ typedef struct _SGE_CHAIN_UNION
{
U32 Address32;
U64 Address64;
- } _u;
+ }u;
} SGE_CHAIN_UNION, MPI_POINTER PTR_SGE_CHAIN_UNION,
SGEChainUnion_t, MPI_POINTER pSGEChainUnion_t;
@@ -417,7 +425,7 @@ typedef struct _SGE_TRANSACTION_UNION
U32 TransactionContext64[2];
U32 TransactionContext96[3];
U32 TransactionContext128[4];
- } _u;
+ }u;
U32 TransactionDetails[1];
} SGE_TRANSACTION_UNION, MPI_POINTER PTR_SGE_TRANSACTION_UNION,
SGETransactionUnion_t, MPI_POINTER pSGETransactionUnion_t;
@@ -433,7 +441,7 @@ typedef struct _SGE_IO_UNION
{
SGE_SIMPLE_UNION Simple;
SGE_CHAIN_UNION Chain;
- } _u;
+ } u;
} SGE_IO_UNION, MPI_POINTER PTR_SGE_IO_UNION,
SGEIOUnion_t, MPI_POINTER pSGEIOUnion_t;
@@ -447,7 +455,7 @@ typedef struct _SGE_TRANS_SIMPLE_UNION
{
SGE_SIMPLE_UNION Simple;
SGE_TRANSACTION_UNION Transaction;
- } _u;
+ } u;
} SGE_TRANS_SIMPLE_UNION, MPI_POINTER PTR_SGE_TRANS_SIMPLE_UNION,
SGETransSimpleUnion_t, MPI_POINTER pSGETransSimpleUnion_t;
@@ -462,7 +470,7 @@ typedef struct _SGE_MPI_UNION
SGE_SIMPLE_UNION Simple;
SGE_CHAIN_UNION Chain;
SGE_TRANSACTION_UNION Transaction;
- } _u;
+ } u;
} SGE_MPI_UNION, MPI_POINTER PTR_SGE_MPI_UNION,
MPI_SGE_UNION_t, MPI_POINTER pMPI_SGE_UNION_t,
SGEAllUnion_t, MPI_POINTER pSGEAllUnion_t;
@@ -696,6 +704,7 @@ typedef struct _MSG_DEFAULT_REPLY
/****************************************************************************/
#define MPI_IOCLOGINFO_TYPE_MASK (0xF0000000)
+#define MPI_IOCLOGINFO_TYPE_SHIFT (28)
#define MPI_IOCLOGINFO_TYPE_NONE (0x0)
#define MPI_IOCLOGINFO_TYPE_SCSI (0x1)
#define MPI_IOCLOGINFO_TYPE_FC (0x2)
diff --git a/sys/dev/mpt/mpilib/mpi_cnfg.h b/sys/dev/mpt/mpilib/mpi_cnfg.h
index 356aee7..7ab0a9f 100644
--- a/sys/dev/mpt/mpilib/mpi_cnfg.h
+++ b/sys/dev/mpt/mpilib/mpi_cnfg.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_CNFG.H
* Title: MPI Config message, structures, and Pages
* Creation Date: July 27, 2000
*
- * MPI_CNFG.H Version: 01.02.11
+ * MPI_CNFG.H Version: 01.02.13
*
* Version History
* ---------------
@@ -158,6 +164,21 @@
* Added define MPI_FCPORTPAGE5_FLAGS_DISABLE to disable
* an alias.
* Added more device id defines.
+ * 06-26-03 01.02.12 Added MPI_IOUNITPAGE1_IR_USE_STATIC_VOLUME_ID define.
+ * Added TargetConfig and IDConfig fields to
+ * CONFIG_PAGE_SCSI_PORT_1.
+ * Added more PortFlags defines for CONFIG_PAGE_SCSI_PORT_2
+ * to control DV.
+ * Added more Flags defines for CONFIG_PAGE_FC_PORT_1.
+ * In CONFIG_PAGE_FC_DEVICE_0, replaced Reserved1 field
+ * with ADISCHardALPA.
+ * Added MPI_FC_DEVICE_PAGE0_PROT_FCP_RETRY define.
+ * 01-16-04 01.02.13 Added InitiatorDeviceTimeout and InitiatorIoPendTimeout
+ * fields and related defines to CONFIG_PAGE_FC_PORT_1.
+ * Added define for
+ * MPI_FCPORTPAGE1_FLAGS_SOFT_ALPA_FALLBACK.
+ * Added new fields to the substructures of
+ * CONFIG_PAGE_FC_PORT_10.
* --------------------------------------------------------------------------
*/
@@ -459,6 +480,7 @@ typedef struct _CONFIG_PAGE_IO_UNIT_1
#define MPI_IOUNITPAGE1_SINGLE_FUNCTION (0x00000001)
#define MPI_IOUNITPAGE1_MULTI_PATHING (0x00000002)
#define MPI_IOUNITPAGE1_SINGLE_PATHING (0x00000000)
+#define MPI_IOUNITPAGE1_IR_USE_STATIC_VOLUME_ID (0x00000004)
#define MPI_IOUNITPAGE1_DISABLE_IR (0x00000040)
#define MPI_IOUNITPAGE1_FORCE_32 (0x00000080)
@@ -742,14 +764,22 @@ typedef struct _CONFIG_PAGE_SCSI_PORT_1
CONFIG_PAGE_HEADER Header; /* 00h */
U32 Configuration; /* 04h */
U32 OnBusTimerValue; /* 08h */
+ U8 TargetConfig; /* 0Ch */
+ U8 Reserved1; /* 0Dh */
+ U16 IDConfig; /* 0Eh */
} CONFIG_PAGE_SCSI_PORT_1, MPI_POINTER PTR_CONFIG_PAGE_SCSI_PORT_1,
SCSIPortPage1_t, MPI_POINTER pSCSIPortPage1_t;
-#define MPI_SCSIPORTPAGE1_PAGEVERSION (0x02)
+#define MPI_SCSIPORTPAGE1_PAGEVERSION (0x03)
+/* Configuration values */
#define MPI_SCSIPORTPAGE1_CFG_PORT_SCSI_ID_MASK (0x000000FF)
#define MPI_SCSIPORTPAGE1_CFG_PORT_RESPONSE_ID_MASK (0xFFFF0000)
+/* TargetConfig values */
+#define MPI_SCSIPORTPAGE1_TARGCONFIG_TARG_ONLY (0x01)
+#define MPI_SCSIPORTPAGE1_TARGCONFIG_INIT_TARG (0x02)
+
typedef struct _MPI_DEVICE_INFO
{
@@ -768,13 +798,20 @@ typedef struct _CONFIG_PAGE_SCSI_PORT_2
} CONFIG_PAGE_SCSI_PORT_2, MPI_POINTER PTR_CONFIG_PAGE_SCSI_PORT_2,
SCSIPortPage2_t, MPI_POINTER pSCSIPortPage2_t;
-#define MPI_SCSIPORTPAGE2_PAGEVERSION (0x01)
+#define MPI_SCSIPORTPAGE2_PAGEVERSION (0x02)
+/* PortFlags values */
#define MPI_SCSIPORTPAGE2_PORT_FLAGS_SCAN_HIGH_TO_LOW (0x00000001)
#define MPI_SCSIPORTPAGE2_PORT_FLAGS_AVOID_SCSI_RESET (0x00000004)
#define MPI_SCSIPORTPAGE2_PORT_FLAGS_ALTERNATE_CHS (0x00000008)
#define MPI_SCSIPORTPAGE2_PORT_FLAGS_TERMINATION_DISABLE (0x00000010)
+#define MPI_SCSIPORTPAGE2_PORT_FLAGS_DV_MASK (0x00000060)
+#define MPI_SCSIPORTPAGE2_PORT_FLAGS_FULL_DV (0x00000000)
+#define MPI_SCSIPORTPAGE2_PORT_FLAGS_BASIC_DV_ONLY (0x00000020)
+#define MPI_SCSIPORTPAGE2_PORT_FLAGS_OFF_DV (0x00000060)
+
+/* PortSettings values */
#define MPI_SCSIPORTPAGE2_PORT_HOST_ID_MASK (0x0000000F)
#define MPI_SCSIPORTPAGE2_PORT_MASK_INIT_HBA (0x00000030)
#define MPI_SCSIPORTPAGE2_PORT_DISABLE_INIT_HBA (0x00000000)
@@ -1016,18 +1053,23 @@ typedef struct _CONFIG_PAGE_FC_PORT_1
U8 AltConnector; /* 1Bh */
U8 NumRequestedAliases; /* 1Ch */
U8 RR_TOV; /* 1Dh */
- U16 Reserved2; /* 1Eh */
+ U8 InitiatorDeviceTimeout; /* 1Eh */
+ U8 InitiatorIoPendTimeout; /* 1Fh */
} CONFIG_PAGE_FC_PORT_1, MPI_POINTER PTR_CONFIG_PAGE_FC_PORT_1,
FCPortPage1_t, MPI_POINTER pFCPortPage1_t;
-#define MPI_FCPORTPAGE1_PAGEVERSION (0x05)
+#define MPI_FCPORTPAGE1_PAGEVERSION (0x06)
#define MPI_FCPORTPAGE1_FLAGS_EXT_FCP_STATUS_EN (0x08000000)
#define MPI_FCPORTPAGE1_FLAGS_IMMEDIATE_ERROR_REPLY (0x04000000)
#define MPI_FCPORTPAGE1_FLAGS_FORCE_USE_NOSEEPROM_WWNS (0x02000000)
#define MPI_FCPORTPAGE1_FLAGS_VERBOSE_RESCAN_EVENTS (0x01000000)
#define MPI_FCPORTPAGE1_FLAGS_TARGET_MODE_OXID (0x00800000)
+#define MPI_FCPORTPAGE1_FLAGS_PORT_OFFLINE (0x00400000)
+#define MPI_FCPORTPAGE1_FLAGS_SOFT_ALPA_FALLBACK (0x00200000)
#define MPI_FCPORTPAGE1_FLAGS_MASK_RR_TOV_UNITS (0x00000070)
+#define MPI_FCPORTPAGE1_FLAGS_SUPPRESS_PROT_REG (0x00000008)
+#define MPI_FCPORTPAGE1_FLAGS_PLOGI_ON_LOGO (0x00000004)
#define MPI_FCPORTPAGE1_FLAGS_MAINTAIN_LOGINS (0x00000002)
#define MPI_FCPORTPAGE1_FLAGS_SORT_BY_DID (0x00000001)
#define MPI_FCPORTPAGE1_FLAGS_SORT_BY_WWN (0x00000000)
@@ -1060,6 +1102,9 @@ typedef struct _CONFIG_PAGE_FC_PORT_1
#define MPI_FCPORTPAGE1_ALT_CONN_UNKNOWN (0x00)
+#define MPI_FCPORTPAGE1_INITIATOR_DEV_TIMEOUT_MASK (0x7F)
+#define MPI_FCPORTPAGE1_INITIATOR_DEV_UNIT_16 (0x80)
+
typedef struct _CONFIG_PAGE_FC_PORT_2
{
@@ -1254,8 +1299,8 @@ typedef struct _CONFIG_PAGE_FC_PORT_10_BASE_SFP_DATA
U8 VendorOUI[3]; /* 35h */
U8 VendorPN[16]; /* 38h */
U8 VendorRev[4]; /* 48h */
- U16 Reserved4; /* 4Ch */
- U8 Reserved5; /* 4Eh */
+ U16 Wavelength; /* 4Ch */
+ U8 Reserved4; /* 4Eh */
U8 CC_BASE; /* 4Fh */
} CONFIG_PAGE_FC_PORT_10_BASE_SFP_DATA,
MPI_POINTER PTR_CONFIG_PAGE_FC_PORT_10_BASE_SFP_DATA,
@@ -1313,7 +1358,9 @@ typedef struct _CONFIG_PAGE_FC_PORT_10_EXTENDED_SFP_DATA
U8 BitRateMin; /* 53h */
U8 VendorSN[16]; /* 54h */
U8 DateCode[8]; /* 64h */
- U8 Reserved5[3]; /* 6Ch */
+ U8 DiagMonitoringType; /* 6Ch */
+ U8 EnhancedOptions; /* 6Dh */
+ U8 SFF8472Compliance; /* 6Eh */
U8 CC_EXT; /* 6Fh */
} CONFIG_PAGE_FC_PORT_10_EXTENDED_SFP_DATA,
MPI_POINTER PTR_CONFIG_PAGE_FC_PORT_10_EXTENDED_SFP_DATA,
@@ -1340,7 +1387,7 @@ typedef struct _CONFIG_PAGE_FC_PORT_10
} CONFIG_PAGE_FC_PORT_10, MPI_POINTER PTR_CONFIG_PAGE_FC_PORT_10,
FCPortPage10_t, MPI_POINTER pFCPortPage10_t;
-#define MPI_FCPORTPAGE10_PAGEVERSION (0x00)
+#define MPI_FCPORTPAGE10_PAGEVERSION (0x01)
/* standard MODDEF pin definitions (from GBIC spec.) */
#define MPI_FCPORTPAGE10_FLAGS_MODDEF_MASK (0x00000007)
@@ -1374,7 +1421,7 @@ typedef struct _CONFIG_PAGE_FC_DEVICE_0
U8 Flags; /* 19h */
U16 BBCredit; /* 1Ah */
U16 MaxRxFrameSize; /* 1Ch */
- U8 Reserved1; /* 1Eh */
+ U8 ADISCHardALPA; /* 1Eh */
U8 PortNumber; /* 1Fh */
U8 FcPhLowestVersion; /* 20h */
U8 FcPhHighestVersion; /* 21h */
@@ -1383,7 +1430,7 @@ typedef struct _CONFIG_PAGE_FC_DEVICE_0
} CONFIG_PAGE_FC_DEVICE_0, MPI_POINTER PTR_CONFIG_PAGE_FC_DEVICE_0,
FCDevicePage0_t, MPI_POINTER pFCDevicePage0_t;
-#define MPI_FC_DEVICE_PAGE0_PAGEVERSION (0x02)
+#define MPI_FC_DEVICE_PAGE0_PAGEVERSION (0x03)
#define MPI_FC_DEVICE_PAGE0_FLAGS_TARGETID_BUS_VALID (0x01)
#define MPI_FC_DEVICE_PAGE0_FLAGS_PLOGI_INVALID (0x02)
@@ -1392,6 +1439,7 @@ typedef struct _CONFIG_PAGE_FC_DEVICE_0
#define MPI_FC_DEVICE_PAGE0_PROT_IP (0x01)
#define MPI_FC_DEVICE_PAGE0_PROT_FCP_TARGET (0x02)
#define MPI_FC_DEVICE_PAGE0_PROT_FCP_INITIATOR (0x04)
+#define MPI_FC_DEVICE_PAGE0_PROT_FCP_RETRY (0x08)
#define MPI_FC_DEVICE_PAGE0_PGAD_PORT_MASK (MPI_FC_DEVICE_PGAD_PORT_MASK)
#define MPI_FC_DEVICE_PAGE0_PGAD_FORM_MASK (MPI_FC_DEVICE_PGAD_FORM_MASK)
@@ -1402,6 +1450,7 @@ typedef struct _CONFIG_PAGE_FC_DEVICE_0
#define MPI_FC_DEVICE_PAGE0_PGAD_BUS_SHIFT (MPI_FC_DEVICE_PGAD_BT_BUS_SHIFT)
#define MPI_FC_DEVICE_PAGE0_PGAD_TID_MASK (MPI_FC_DEVICE_PGAD_BT_TID_MASK)
+#define MPI_FC_DEVICE_PAGE0_HARD_ALPA_UNKNOWN (0xFF)
/****************************************************************************
* RAID Volume Config Pages
@@ -1487,8 +1536,9 @@ typedef struct _CONFIG_PAGE_RAID_VOL_0
U32 Reserved2; /* 1Ch */
U32 Reserved3; /* 20h */
U8 NumPhysDisks; /* 24h */
- U8 Reserved4; /* 25h */
- U16 Reserved5; /* 26h */
+ U8 DataScrubRate; /* 25h */
+ U8 ResyncRate; /* 26h */
+ U8 InactiveStatus; /* 27h */
RAID_VOL0_PHYS_DISK PhysDisk[MPI_RAID_VOL_PAGE_0_PHYSDISK_MAX];/* 28h */
} CONFIG_PAGE_RAID_VOL_0, MPI_POINTER PTR_CONFIG_PAGE_RAID_VOL_0,
RaidVolumePage0_t, MPI_POINTER pRaidVolumePage0_t;
diff --git a/sys/dev/mpt/mpilib/mpi_fc.h b/sys/dev/mpt/mpilib/mpi_fc.h
index 470d385..4067fc3 100644
--- a/sys/dev/mpt/mpilib/mpi_fc.h
+++ b/sys/dev/mpt/mpilib/mpi_fc.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_FC.H
* Title: MPI Fibre Channel messages and structures
* Creation Date: June 12, 2000
*
- * MPI Version: 01.02.02
+ * MPI_FC.H Version: 01.02.04
*
* Version History
* ---------------
@@ -57,6 +63,8 @@
* 08-08-01 01.02.01 Original release for v1.2 work.
* 09-28-01 01.02.02 Change name of reserved field in
* MSG_LINK_SERVICE_RSP_REPLY.
+ * 05-31-02 01.02.03 Adding AliasIndex to FC Direct Access requests.
+ * 01-16-04 01.02.04 Added define for MPI_FC_PRIM_SEND_FLAGS_ML_RESET_LINK.
* --------------------------------------------------------------------------
*/
@@ -215,7 +223,7 @@ typedef struct _MSG_LINK_SERVICE_RSP_REPLY
typedef struct _MSG_EXLINK_SERVICE_SEND_REQUEST
{
U8 SendFlags; /* 00h */
- U8 Reserved; /* 01h */
+ U8 AliasIndex; /* 01h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U32 MsgFlags_Did; /* 04h */
@@ -234,7 +242,8 @@ typedef struct _MSG_EXLINK_SERVICE_SEND_REQUEST
/* Extended Link Service Send Reply */
typedef struct _MSG_EXLINK_SERVICE_SEND_REPLY
{
- U16 Reserved; /* 00h */
+ U8 Reserved; /* 00h */
+ U8 AliasIndex; /* 01h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved1; /* 04h */
@@ -297,7 +306,7 @@ typedef struct _MSG_FC_ABORT_REPLY
typedef struct _MSG_FC_COMMON_TRANSPORT_SEND_REQUEST
{
U8 SendFlags; /* 00h */
- U8 Reserved; /* 01h */
+ U8 AliasIndex; /* 01h */
U8 ChainOffset; /* 02h */
U8 Function; /* 03h */
U32 MsgFlags_Did; /* 04h */
@@ -319,7 +328,8 @@ typedef struct _MSG_FC_COMMON_TRANSPORT_SEND_REQUEST
/* FC Common Transport Send Reply */
typedef struct _MSG_FC_COMMON_TRANSPORT_SEND_REPLY
{
- U16 Reserved; /* 00h */
+ U8 Reserved; /* 00h */
+ U8 AliasIndex; /* 01h */
U8 MsgLength; /* 02h */
U8 Function; /* 03h */
U16 Reserved1; /* 04h */
@@ -353,6 +363,7 @@ typedef struct _MSG_FC_PRIMITIVE_SEND_REQUEST
FcPrimitiveSendRequest_t, MPI_POINTER pFcPrimitiveSendRequest_t;
#define MPI_FC_PRIM_SEND_FLAGS_PORT_MASK (0x01)
+#define MPI_FC_PRIM_SEND_FLAGS_ML_RESET_LINK (0x02)
#define MPI_FC_PRIM_SEND_FLAGS_RESET_LINK (0x04)
#define MPI_FC_PRIM_SEND_FLAGS_STOP_SEND (0x08)
#define MPI_FC_PRIM_SEND_FLAGS_SEND_ONCE (0x10)
diff --git a/sys/dev/mpt/mpilib/mpi_init.h b/sys/dev/mpt/mpilib/mpi_init.h
index 527b1f9..a3dfa42 100644
--- a/sys/dev/mpt/mpilib/mpi_init.h
+++ b/sys/dev/mpt/mpilib/mpi_init.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_INIT.H
* Title: MPI initiator mode messages and structures
* Creation Date: June 8, 2000
*
- * MPI_INIT.H Version: 01.02.06
+ * MPI_INIT.H Version: 01.02.07
*
* Version History
* ---------------
@@ -54,6 +60,7 @@
* 05-31-02 01.02.05 Added MPI_SCSIIO_MSGFLGS_CMD_DETERMINES_DATA_DIR define
* for SCSI IO requests.
* 11-15-02 01.02.06 Added special extended SCSI Status defines for FCP.
+ * 06-26-03 01.02.07 Added MPI_SCSI_STATUS_FCPEXT_UNASSIGNED define.
* --------------------------------------------------------------------------
*/
@@ -178,6 +185,7 @@ typedef struct _MSG_SCSI_IO_REPLY
#define MPI_SCSI_STATUS_FCPEXT_DEVICE_LOGGED_OUT (0x80)
#define MPI_SCSI_STATUS_FCPEXT_NO_LINK (0x81)
+#define MPI_SCSI_STATUS_FCPEXT_UNASSIGNED (0x82)
/* SCSI IO Reply SCSIState values */
diff --git a/sys/dev/mpt/mpilib/mpi_ioc.h b/sys/dev/mpt/mpilib/mpi_ioc.h
index a8f8d3e..78d20bc 100644
--- a/sys/dev/mpt/mpilib/mpi_ioc.h
+++ b/sys/dev/mpt/mpilib/mpi_ioc.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_IOC.H
* Title: MPI IOC, Port, Event, FW Download, and FW Upload messages
* Creation Date: August 11, 2000
*
- * MPI_IOC.H Version: 01.02.07
+ * MPI_IOC.H Version: 01.02.08
*
* Version History
* ---------------
@@ -78,6 +84,7 @@
* MPI_IOCFACTS_EXCEPT_RAID_CONFIG_INVALID.
* Added AliasIndex to EVENT_DATA_LOGOUT structure.
* 04-01-03 01.02.07 Added defines for MPI_FW_HEADER_SIGNATURE_.
+ * 06-26-03 01.02.08 Added new values to the product family defines.
* --------------------------------------------------------------------------
*/
@@ -700,6 +707,8 @@ typedef struct _MPI_FW_HEADER
#define MPI_FW_HEADER_PID_FAMILY_1020C0_SCSI (0x0008)
#define MPI_FW_HEADER_PID_FAMILY_1035A0_SCSI (0x0009)
#define MPI_FW_HEADER_PID_FAMILY_1035B0_SCSI (0x000A)
+#define MPI_FW_HEADER_PID_FAMILY_1030TA0_SCSI (0x000B)
+#define MPI_FW_HEADER_PID_FAMILY_1020TA0_SCSI (0x000C)
#define MPI_FW_HEADER_PID_FAMILY_909_FC (0x0000)
#define MPI_FW_HEADER_PID_FAMILY_919_FC (0x0001)
#define MPI_FW_HEADER_PID_FAMILY_919X_FC (0x0002)
diff --git a/sys/dev/mpt/mpilib/mpi_lan.h b/sys/dev/mpt/mpilib/mpi_lan.h
index ceaf353..a7551c6 100644
--- a/sys/dev/mpt/mpilib/mpi_lan.h
+++ b/sys/dev/mpt/mpilib/mpi_lan.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_LAN.H
* Title: MPI LAN messages and structures
* Creation Date: June 30, 2000
*
- * MPI Version: 01.02.01
+ * MPI_LAN.H Version: 01.02.01
*
* Version History
* ---------------
diff --git a/sys/dev/mpt/mpilib/mpi_raid.h b/sys/dev/mpt/mpilib/mpi_raid.h
index 196f1ba..97dcd20 100644
--- a/sys/dev/mpt/mpilib/mpi_raid.h
+++ b/sys/dev/mpt/mpilib/mpi_raid.h
@@ -1,27 +1,33 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_RAID.H
@@ -106,6 +112,8 @@ typedef struct _MSG_RAID_ACTION
#define MPI_RAID_ACTION_REPLACE_PHYSDISK (0x10)
#define MPI_RAID_ACTION_ACTIVATE_VOLUME (0x11)
#define MPI_RAID_ACTION_INACTIVATE_VOLUME (0x12)
+#define MPI_RAID_ACTION_SET_RESYNC_RATE (0x13)
+#define MPI_RAID_ACTION_SET_DATA_SCRUB_RATE (0x14)
/* ActionDataWord defines for use with MPI_RAID_ACTION_CREATE_VOLUME action */
#define MPI_RAID_ACTION_ADATA_DO_NOT_SYNC (0x00000001)
diff --git a/sys/dev/mpt/mpilib/mpi_targ.h b/sys/dev/mpt/mpilib/mpi_targ.h
index ba8b2ca..2642a67 100644
--- a/sys/dev/mpt/mpilib/mpi_targ.h
+++ b/sys/dev/mpt/mpilib/mpi_targ.h
@@ -1,34 +1,40 @@
/* $FreeBSD$ */
/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_TARG.H
* Title: MPI Target mode messages and structures
* Creation Date: June 22, 2000
*
- * MPI Version: 01.02.04
+ * MPI_TARG.H Version: 01.02.09
*
* Version History
* ---------------
@@ -56,6 +62,15 @@
* of MPI.
* 10-04-01 01.02.03 Added PriorityReason to MSG_TARGET_ERROR_REPLY.
* 11-01-01 01.02.04 Added define for TARGET_STATUS_SEND_FLAGS_HIGH_PRIORITY.
+ * 03-14-02 01.02.05 Modified MPI_TARGET_FCP_RSP_BUFFER to get the proper
+ * byte ordering.
+ * 05-31-02 01.02.06 Modified TARGET_MODE_REPLY_ALIAS_MASK to only include
+ * one bit.
+ * Added AliasIndex field to MPI_TARGET_FCP_CMD_BUFFER.
+ * 09-16-02 01.02.07 Added flags for confirmed completion.
+ * Added PRIORITY_REASON_TARGET_BUSY.
+ * 11-15-02 01.02.08 Added AliasID field to MPI_TARGET_SCSI_SPI_CMD_BUFFER.
+ * 04-01-03 01.02.09 Added OptionalOxid field to MPI_TARGET_FCP_CMD_BUFFER.
* --------------------------------------------------------------------------
*/
@@ -77,7 +92,7 @@ typedef struct _CMD_BUFFER_DESCRIPTOR
{
U32 PhysicalAddress32;
U64 PhysicalAddress64;
- } _u;
+ } u;
} CMD_BUFFER_DESCRIPTOR, MPI_POINTER PTR_CMD_BUFFER_DESCRIPTOR,
CmdBufferDescriptor_t, MPI_POINTER pCmdBufferDescriptor_t;
@@ -155,6 +170,7 @@ typedef struct _MSG_PRIORITY_CMD_RECEIVED_REPLY
#define PRIORITY_REASON_PROTOCOL_ERR (0x06)
#define PRIORITY_REASON_DATA_OUT_PARITY_ERR (0x07)
#define PRIORITY_REASON_DATA_OUT_CRC_ERR (0x08)
+#define PRIORITY_REASON_TARGET_BUSY (0x09)
#define PRIORITY_REASON_UNKNOWN (0xFF)
@@ -183,6 +199,9 @@ typedef struct _MPI_TARGET_FCP_CMD_BUFFER
U8 FcpCntl[4]; /* 08h */
U8 FcpCdb[16]; /* 0Ch */
U32 FcpDl; /* 1Ch */
+ U8 AliasIndex; /* 20h */
+ U8 Reserved1; /* 21h */
+ U16 OptionalOxid; /* 22h */
} MPI_TARGET_FCP_CMD_BUFFER, MPI_POINTER PTR_MPI_TARGET_FCP_CMD_BUFFER,
MpiTargetFcpCmdBuffer, MPI_POINTER pMpiTargetFcpCmdBuffer;
@@ -201,6 +220,10 @@ typedef struct _MPI_TARGET_SCSI_SPI_CMD_BUFFER
U8 TaskManagementFlags; /* 12h */
U8 AdditionalCDBLength; /* 13h */
U8 CDB[16]; /* 14h */
+ /* Alias ID */
+ U8 AliasID; /* 24h */
+ U8 Reserved1; /* 25h */
+ U16 Reserved2; /* 26h */
} MPI_TARGET_SCSI_SPI_CMD_BUFFER,
MPI_POINTER PTR_MPI_TARGET_SCSI_SPI_CMD_BUFFER,
MpiTargetScsiSpiCmdBuffer, MPI_POINTER pMpiTargetScsiSpiCmdBuffer;
@@ -231,6 +254,7 @@ typedef struct _MSG_TARGET_ASSIST_REQUEST
#define TARGET_ASSIST_FLAGS_DATA_DIRECTION (0x01)
#define TARGET_ASSIST_FLAGS_AUTO_STATUS (0x02)
#define TARGET_ASSIST_FLAGS_HIGH_PRIORITY (0x04)
+#define TARGET_ASSIST_FLAGS_CONFIRMED (0x08)
#define TARGET_ASSIST_FLAGS_REPOST_CMD_BUFFER (0x80)
@@ -275,14 +299,19 @@ typedef struct _MSG_TARGET_STATUS_SEND_REQUEST
#define TARGET_STATUS_SEND_FLAGS_AUTO_GOOD_STATUS (0x01)
#define TARGET_STATUS_SEND_FLAGS_HIGH_PRIORITY (0x04)
+#define TARGET_STATUS_SEND_FLAGS_CONFIRMED (0x08)
#define TARGET_STATUS_SEND_FLAGS_REPOST_CMD_BUFFER (0x80)
+/*
+ * NOTE: FCP_RSP data is big-endian. When used on a little-endian system, this
+ * structure properly orders the bytes.
+ */
typedef struct _MPI_TARGET_FCP_RSP_BUFFER
{
U8 Reserved0[8]; /* 00h */
- U8 FcpStatus; /* 08h */
- U8 FcpFlags; /* 09h */
- U8 Reserved1[2]; /* 0Ah */
+ U8 Reserved1[2]; /* 08h */
+ U8 FcpFlags; /* 0Ah */
+ U8 FcpStatus; /* 0Bh */
U32 FcpResid; /* 0Ch */
U32 FcpSenseLength; /* 10h */
U32 FcpResponseLength; /* 14h */
@@ -291,6 +320,10 @@ typedef struct _MPI_TARGET_FCP_RSP_BUFFER
} MPI_TARGET_FCP_RSP_BUFFER, MPI_POINTER PTR_MPI_TARGET_FCP_RSP_BUFFER,
MpiTargetFcpRspBuffer, MPI_POINTER pMpiTargetFcpRspBuffer;
+/*
+ * NOTE: The SPI status IU is big-endian. When used on a little-endian system,
+ * this structure properly orders the bytes.
+ */
typedef struct _MPI_TARGET_SCSI_SPI_STATUS_IU
{
U8 Reserved0; /* 00h */
@@ -354,7 +387,7 @@ typedef struct _MSG_TARGET_MODE_ABORT_REPLY
#define TARGET_MODE_REPLY_IO_INDEX_SHIFT (0)
#define TARGET_MODE_REPLY_INITIATOR_INDEX_MASK (0x03FFC000)
#define TARGET_MODE_REPLY_INITIATOR_INDEX_SHIFT (14)
-#define TARGET_MODE_REPLY_ALIAS_MASK (0x0C000000)
+#define TARGET_MODE_REPLY_ALIAS_MASK (0x04000000)
#define TARGET_MODE_REPLY_ALIAS_SHIFT (26)
#define TARGET_MODE_REPLY_PORT_MASK (0x10000000)
#define TARGET_MODE_REPLY_PORT_SHIFT (28)
diff --git a/sys/dev/mpt/mpilib/mpi_type.h b/sys/dev/mpt/mpilib/mpi_type.h
index 2fc4832..c505388 100644
--- a/sys/dev/mpt/mpilib/mpi_type.h
+++ b/sys/dev/mpt/mpilib/mpi_type.h
@@ -1,27 +1,33 @@
/* $FreeBSD$ */
-/*-
- * Copyright (c) 2000, 2001 by LSI Logic Corporation
- *
+/*
+ * Copyright (c) 2000-2005, LSI Logic Corporation and its contributors.
+ * All rights reserved.
+ *
* Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
+ * modification, are permitted provided that the following conditions are
+ * met:
* 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*
* Name: MPI_TYPE.H
@@ -46,9 +52,6 @@
#ifndef MPI_TYPE_H
#define MPI_TYPE_H
-#ifndef _SYS_CDEFS_H_
-#error this file needs sys/cdefs.h as a prerequisite
-#endif
/*******************************************************************************
* Define MPI_POINTER if it hasn't already been defined. By default MPI_POINTER
@@ -66,24 +69,12 @@
*
*****************************************************************************/
-typedef signed char S8;
-typedef unsigned char U8;
-typedef signed short S16;
-typedef unsigned short U16;
-
-#if defined(unix) || defined(__arm) || defined(ALPHA) \
- || defined(__CC_INT_IS_32BIT)
-
- typedef signed int S32;
- typedef unsigned int U32;
-
-#else
-
- typedef signed long S32;
- typedef unsigned long U32;
-
-#endif
-
+typedef int8_t S8;
+typedef uint8_t U8;
+typedef int16_t S16;
+typedef uint16_t U16;
+typedef int32_t S32;
+typedef uint32_t U32;
typedef struct _S64
{
diff --git a/sys/dev/mpt/mpt.c b/sys/dev/mpt/mpt.c
index 9e6e144..3354397 100644
--- a/sys/dev/mpt/mpt.c
+++ b/sys/dev/mpt/mpt.c
@@ -24,15 +24,53 @@
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
+ *
+ * Additional Copyright (c) 2002 by Matthew Jacob under same license.
*/
/*
- * Additional Copyright (c) 2002 by Matthew Jacob under same license.
+ * Copyright (c) 2004, Avid Technology, Inc. and its contributors.
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
-#include <dev/mpt/mpt_freebsd.h>
+#include <dev/mpt/mpt.h>
+#include <dev/mpt/mpt_cam.h> /* XXX For static handler registration */
+#include <dev/mpt/mpt_raid.h> /* XXX For static handler registration */
+
+#include <dev/mpt/mpilib/mpi.h>
+#include <dev/mpt/mpilib/mpi_ioc.h>
+
+#include <sys/sysctl.h>
#define MPT_MAX_TRYS 3
#define MPT_MAX_WAIT 300000
@@ -41,24 +79,596 @@ static int maxwait_ack = 0;
static int maxwait_int = 0;
static int maxwait_state = 0;
-static INLINE u_int32_t mpt_rd_db(mpt_softc_t *mpt);
-static INLINE u_int32_t mpt_rd_intr(mpt_softc_t *mpt);
+TAILQ_HEAD(, mpt_softc) mpt_tailq = TAILQ_HEAD_INITIALIZER(mpt_tailq);
+mpt_reply_handler_t *mpt_reply_handlers[MPT_NUM_REPLY_HANDLERS];
+
+static mpt_reply_handler_t mpt_default_reply_handler;
+static mpt_reply_handler_t mpt_config_reply_handler;
+static mpt_reply_handler_t mpt_handshake_reply_handler;
+static mpt_reply_handler_t mpt_event_reply_handler;
+static void mpt_send_event_ack(struct mpt_softc *mpt, request_t *ack_req,
+ MSG_EVENT_NOTIFY_REPLY *msg, uint32_t context);
+static int mpt_soft_reset(struct mpt_softc *mpt);
+static void mpt_hard_reset(struct mpt_softc *mpt);
+static int mpt_configure_ioc(struct mpt_softc *mpt);
+static int mpt_enable_ioc(struct mpt_softc *mpt);
+
+/************************* Personality Module Support *************************/
+/*
+ * We include one extra entry that is guaranteed to be NULL
+ * to simplify our iterator.
+ */
+static struct mpt_personality *mpt_personalities[MPT_MAX_PERSONALITIES + 1];
+static __inline struct mpt_personality*
+ mpt_pers_find(struct mpt_softc *, u_int);
+static __inline struct mpt_personality*
+ mpt_pers_find_reverse(struct mpt_softc *, u_int);
+
+static __inline struct mpt_personality *
+mpt_pers_find(struct mpt_softc *mpt, u_int start_at)
+{
+ KASSERT(start_at <= MPT_MAX_PERSONALITIES,
+ ("mpt_pers_find: starting position out of range\n"));
+
+ while (start_at < MPT_MAX_PERSONALITIES
+ && (mpt->mpt_pers_mask & (0x1 << start_at)) == 0) {
+ start_at++;
+ }
+ return (mpt_personalities[start_at]);
+}
+
+/*
+ * Used infrequently, so there is no need to optimize as in the
+ * forward traversal, which relies on entry MAX+1 being guaranteed
+ * NULL to terminate the scan.
+ */
+static __inline struct mpt_personality *
+mpt_pers_find_reverse(struct mpt_softc *mpt, u_int start_at)
+{
+ while (start_at < MPT_MAX_PERSONALITIES
+ && (mpt->mpt_pers_mask & (0x1 << start_at)) == 0) {
+ start_at--;
+ }
+ if (start_at < MPT_MAX_PERSONALITIES)
+ return (mpt_personalities[start_at]);
+ return (NULL);
+}
+
+#define MPT_PERS_FOREACH(mpt, pers) \
+ for (pers = mpt_pers_find(mpt, /*start_at*/0); \
+ pers != NULL; \
+ pers = mpt_pers_find(mpt, /*start_at*/pers->id+1))
+
+#define MPT_PERS_FOREACH_REVERSE(mpt, pers) \
+ for (pers = mpt_pers_find_reverse(mpt, MPT_MAX_PERSONALITIES-1);\
+ pers != NULL; \
+ pers = mpt_pers_find_reverse(mpt, /*start_at*/pers->id-1))
+
+static mpt_load_handler_t mpt_stdload;
+static mpt_probe_handler_t mpt_stdprobe;
+static mpt_attach_handler_t mpt_stdattach;
+static mpt_event_handler_t mpt_stdevent;
+static mpt_reset_handler_t mpt_stdreset;
+static mpt_shutdown_handler_t mpt_stdshutdown;
+static mpt_detach_handler_t mpt_stddetach;
+static mpt_unload_handler_t mpt_stdunload;
+static struct mpt_personality mpt_default_personality =
+{
+ .load = mpt_stdload,
+ .probe = mpt_stdprobe,
+ .attach = mpt_stdattach,
+ .event = mpt_stdevent,
+ .reset = mpt_stdreset,
+ .shutdown = mpt_stdshutdown,
+ .detach = mpt_stddetach,
+ .unload = mpt_stdunload
+};
+
+static mpt_load_handler_t mpt_core_load;
+static mpt_attach_handler_t mpt_core_attach;
+static mpt_reset_handler_t mpt_core_ioc_reset;
+static mpt_event_handler_t mpt_core_event;
+static mpt_shutdown_handler_t mpt_core_shutdown;
+static mpt_detach_handler_t mpt_core_detach;
+static mpt_unload_handler_t mpt_core_unload;
+static struct mpt_personality mpt_core_personality =
+{
+ .name = "mpt_core",
+ .load = mpt_core_load,
+ .attach = mpt_core_attach,
+ .event = mpt_core_event,
+ .reset = mpt_core_ioc_reset,
+ .shutdown = mpt_core_shutdown,
+ .detach = mpt_core_detach,
+ .unload = mpt_core_unload,
+};
+
+/*
+ * Manual declaration so that DECLARE_MPT_PERSONALITY doesn't need
+ * ordering information. We want the core to always register FIRST.
+ * Other modules are set to SI_ORDER_SECOND.
+ */
+static moduledata_t mpt_core_mod = {
+ "mpt_core", mpt_modevent, &mpt_core_personality
+};
+DECLARE_MODULE(mpt_core, mpt_core_mod, SI_SUB_DRIVERS, SI_ORDER_FIRST);
+MODULE_VERSION(mpt_core, 1);
+
+#define MPT_PERS_ATACHED(pers, mpt) \
+ ((mpt)->mpt_pers_mask & (0x1 << (pers)->id))
+
+
+int
+mpt_modevent(module_t mod, int type, void *data)
+{
+ struct mpt_personality *pers;
+ int error;
+
+ pers = (struct mpt_personality *)data;
+
+ error = 0;
+ switch (type) {
+ case MOD_LOAD:
+ {
+ mpt_load_handler_t **def_handler;
+ mpt_load_handler_t **pers_handler;
+ int i;
+
+ for (i = 0; i < MPT_MAX_PERSONALITIES; i++) {
+ if (mpt_personalities[i] == NULL)
+ break;
+ }
+ if (i >= MPT_MAX_PERSONALITIES) {
+ error = ENOMEM;
+ break;
+ }
+ pers->id = i;
+ mpt_personalities[i] = pers;
+
+ /* Install standard/noop handlers for any NULL entries. */
+ def_handler = MPT_PERS_FIRST_HANDLER(&mpt_default_personality);
+ pers_handler = MPT_PERS_FIRST_HANDLER(pers);
+ while (pers_handler <= MPT_PERS_LAST_HANDLER(pers)) {
+ if (*pers_handler == NULL)
+ *pers_handler = *def_handler;
+ pers_handler++;
+ def_handler++;
+ }
+
+ error = (pers->load(pers));
+ if (error != 0)
+ mpt_personalities[i] = NULL;
+ break;
+ }
+ case MOD_SHUTDOWN:
+ break;
+ case MOD_QUIESCE:
+ break;
+ case MOD_UNLOAD:
+ error = pers->unload(pers);
+ mpt_personalities[pers->id] = NULL;
+ break;
+ default:
+ error = EINVAL;
+ break;
+ }
+ return (error);
+}
+
+int
+mpt_stdload(struct mpt_personality *pers)
+{
+ /* Load is always successful. */
+ return (0);
+}
+
+int
+mpt_stdprobe(struct mpt_softc *mpt)
+{
+ /* Probe is always successful. */
+ return (0);
+}
+
+int
+mpt_stdattach(struct mpt_softc *mpt)
+{
+ /* Attach is always successful. */
+ return (0);
+}
+
+int
+mpt_stdevent(struct mpt_softc *mpt, request_t *req, MSG_EVENT_NOTIFY_REPLY *rep)
+{
+ /* Event was not for us. */
+ return (0);
+}
+
+void
+mpt_stdreset(struct mpt_softc *mpt, int type)
+{
+}
+
+void
+mpt_stdshutdown(struct mpt_softc *mpt)
+{
+}
+
+void
+mpt_stddetach(struct mpt_softc *mpt)
+{
+}
+
+int
+mpt_stdunload(struct mpt_personality *pers)
+{
+ /* Unload is always successful. */
+ return (0);
+}
+
+/******************************* Bus DMA Support ******************************/
+void
+mpt_map_rquest(void *arg, bus_dma_segment_t *segs, int nseg, int error)
+{
+ struct mpt_map_info *map_info;
+
+ map_info = (struct mpt_map_info *)arg;
+ map_info->error = error;
+ map_info->phys = segs->ds_addr;
+}
+
+/**************************** Reply/Event Handling ****************************/
+int
+mpt_register_handler(struct mpt_softc *mpt, mpt_handler_type type,
+ mpt_handler_t handler, uint32_t *phandler_id)
+{
+
+ switch (type) {
+ case MPT_HANDLER_REPLY:
+ {
+ u_int cbi;
+ u_int free_cbi;
+
+ if (phandler_id == NULL)
+ return (EINVAL);
+
+ free_cbi = MPT_HANDLER_ID_NONE;
+ for (cbi = 0; cbi < MPT_NUM_REPLY_HANDLERS; cbi++) {
+ /*
+ * If the same handler is registered multiple
+ * times, don't error out. Just return the
+ * index of the original registration.
+ */
+ if (mpt_reply_handlers[cbi] == handler.reply_handler) {
+ *phandler_id = MPT_CBI_TO_HID(cbi);
+ return (0);
+ }
+
+ /*
+ * Fill from the front in the hope that
+ * all registered handlers consume only a
+ * single cache line.
+ *
+ * We don't break on the first empty slot so
+ * that the full table is checked to see if
+ * this handler was previously registered.
+ */
+ if (free_cbi == MPT_HANDLER_ID_NONE
+ && (mpt_reply_handlers[cbi]
+ == mpt_default_reply_handler))
+ free_cbi = cbi;
+ }
+ if (free_cbi == MPT_HANDLER_ID_NONE)
+ return (ENOMEM);
+ mpt_reply_handlers[free_cbi] = handler.reply_handler;
+ *phandler_id = MPT_CBI_TO_HID(free_cbi);
+ break;
+ }
+ default:
+ mpt_prt(mpt, "mpt_register_handler unknown type %d\n", type);
+ return (EINVAL);
+ }
+ return (0);
+}
+
+int
+mpt_deregister_handler(struct mpt_softc *mpt, mpt_handler_type type,
+ mpt_handler_t handler, uint32_t handler_id)
+{
+
+ switch (type) {
+ case MPT_HANDLER_REPLY:
+ {
+ u_int cbi;
+
+ cbi = MPT_CBI(handler_id);
+ if (cbi >= MPT_NUM_REPLY_HANDLERS
+ || mpt_reply_handlers[cbi] != handler.reply_handler)
+ return (ENOENT);
+ mpt_reply_handlers[cbi] = mpt_default_reply_handler;
+ break;
+ }
+ default:
+ mpt_prt(mpt, "mpt_deregister_handler unknown type %d\n", type);
+ return (EINVAL);
+ }
+ return (0);
+}
+
+static int
+mpt_default_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ mpt_prt(mpt, "XXXX Default Handler Called. Req %p, Frame %p\n",
+ req, reply_frame);
+
+ if (reply_frame != NULL)
+ mpt_dump_reply_frame(mpt, reply_frame);
+
+ mpt_prt(mpt, "XXXX Reply Frame Ignored\n");
+
+ return (/*free_reply*/TRUE);
+}
+
+static int
+mpt_config_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ if (req != NULL) {
+
+ if (reply_frame != NULL) {
+ MSG_CONFIG *cfgp;
+ MSG_CONFIG_REPLY *reply;
+
+ cfgp = (MSG_CONFIG *)req->req_vbuf;
+ reply = (MSG_CONFIG_REPLY *)reply_frame;
+ req->IOCStatus = le16toh(reply_frame->IOCStatus);
+ bcopy(&reply->Header, &cfgp->Header,
+ sizeof(cfgp->Header));
+ }
+ req->state &= ~REQ_STATE_QUEUED;
+ req->state |= REQ_STATE_DONE;
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+
+ if ((req->state & REQ_STATE_NEED_WAKEUP) != 0)
+ wakeup(req);
+ }
+
+ return (/*free_reply*/TRUE);
+}
+
+static int
+mpt_handshake_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ /* Nothing to be done. */
+ return (/*free_reply*/TRUE);
+}
+
+static int
+mpt_event_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ int free_reply;
+
+ if (reply_frame == NULL) {
+ mpt_prt(mpt, "Event Handler: req %p - Unexpected NULL reply\n", req);
+ return (/*free_reply*/TRUE);
+ }
+
+ free_reply = TRUE;
+ switch (reply_frame->Function) {
+ case MPI_FUNCTION_EVENT_NOTIFICATION:
+ {
+ MSG_EVENT_NOTIFY_REPLY *msg;
+ struct mpt_personality *pers;
+ u_int handled;
+
+ handled = 0;
+ msg = (MSG_EVENT_NOTIFY_REPLY *)reply_frame;
+ MPT_PERS_FOREACH(mpt, pers)
+ handled += pers->event(mpt, req, msg);
+
+ if (handled == 0)
+ mpt_prt(mpt,
+ "Unhandled Event Notify Frame. Event %#x.\n",
+ msg->Event);
+
+ if (msg->AckRequired) {
+ request_t *ack_req;
+ uint32_t context;
+
+ context = htole32(req->index|MPT_REPLY_HANDLER_EVENTS);
+ ack_req = mpt_get_request(mpt, /*sleep_ok*/FALSE);
+ if (ack_req == NULL) {
+ struct mpt_evtf_record *evtf;
+
+ evtf = (struct mpt_evtf_record *)reply_frame;
+ evtf->context = context;
+ LIST_INSERT_HEAD(&mpt->ack_frames, evtf, links);
+ free_reply = FALSE;
+ break;
+ }
+ mpt_send_event_ack(mpt, ack_req, msg, context);
+ }
+ break;
+ }
+ case MPI_FUNCTION_PORT_ENABLE:
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "enable port reply\n");
+ break;
+ case MPI_FUNCTION_EVENT_ACK:
+ break;
+ default:
+ mpt_prt(mpt, "Unknown Event Function: %x\n",
+ reply_frame->Function);
+ break;
+ }
+
+ if (req != NULL
+ && (reply_frame->MsgFlags & MPI_MSGFLAGS_CONTINUATION_REPLY) == 0) {
+
+ req->state &= ~REQ_STATE_QUEUED;
+ req->state |= REQ_STATE_DONE;
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+
+ if ((req->state & REQ_STATE_NEED_WAKEUP) != 0)
+ wakeup(req);
+ else
+ mpt_free_request(mpt, req);
+ }
+ return (free_reply);
+}
+
+/*
+ * Process an asynchronous event from the IOC.
+ */
+static int
+mpt_core_event(struct mpt_softc *mpt, request_t *req,
+ MSG_EVENT_NOTIFY_REPLY *msg)
+{
+ switch(msg->Event & 0xFF) {
+ case MPI_EVENT_NONE:
+ break;
+ case MPI_EVENT_LOG_DATA:
+ {
+ int i;
+
+ /* Some error occurred that LSI wants logged */
+ mpt_prt(mpt, "EvtLogData: IOCLogInfo: 0x%08x\n",
+ msg->IOCLogInfo);
+ mpt_prt(mpt, "\tEvtLogData: Event Data:");
+ for (i = 0; i < msg->EventDataLength; i++)
+ mpt_prtc(mpt, " %08x", msg->Data[i]);
+ mpt_prtc(mpt, "\n");
+ break;
+ }
+ case MPI_EVENT_EVENT_CHANGE:
+ /*
+ * This is just an acknowledgement
+ * of our mpt_send_event_request.
+ */
+ break;
+ default:
+ return (/*handled*/0);
+ break;
+ }
+ return (/*handled*/1);
+}
+
+static void
+mpt_send_event_ack(struct mpt_softc *mpt, request_t *ack_req,
+ MSG_EVENT_NOTIFY_REPLY *msg, uint32_t context)
+{
+ MSG_EVENT_ACK *ackp;
+
+ ackp = (MSG_EVENT_ACK *)ack_req->req_vbuf;
+ bzero(ackp, sizeof *ackp);
+ ackp->Function = MPI_FUNCTION_EVENT_ACK;
+ ackp->Event = msg->Event;
+ ackp->EventContext = msg->EventContext;
+ ackp->MsgContext = context;
+ mpt_check_doorbell(mpt);
+ mpt_send_cmd(mpt, ack_req);
+}
+
+/***************************** Interrupt Handling *****************************/
+void
+mpt_intr(void *arg)
+{
+ struct mpt_softc *mpt;
+ uint32_t reply_desc;
+
+ mpt = (struct mpt_softc *)arg;
+ while ((reply_desc = mpt_pop_reply_queue(mpt)) != MPT_REPLY_EMPTY) {
+ request_t *req;
+ MSG_DEFAULT_REPLY *reply_frame;
+ uint32_t reply_baddr;
+ u_int cb_index;
+ u_int req_index;
+ int free_rf;
+
+ req = NULL;
+ reply_frame = NULL;
+ reply_baddr = 0;
+ if ((reply_desc & MPI_ADDRESS_REPLY_A_BIT) != 0) {
+ u_int offset;
+
+ /*
+ * Ensure that the reply frame is coherent.
+ */
+ reply_baddr = (reply_desc << 1);
+ offset = reply_baddr - (mpt->reply_phys & 0xFFFFFFFF);
+ bus_dmamap_sync_range(mpt->reply_dmat, mpt->reply_dmap,
+ offset, MPT_REPLY_SIZE,
+ BUS_DMASYNC_POSTREAD);
+ reply_frame = MPT_REPLY_OTOV(mpt, offset);
+ reply_desc = le32toh(reply_frame->MsgContext);
+ }
+ cb_index = MPT_CONTEXT_TO_CBI(reply_desc);
+ req_index = MPT_CONTEXT_TO_REQI(reply_desc);
+ if (req_index < MPT_MAX_REQUESTS(mpt))
+ req = &mpt->request_pool[req_index];
+
+ free_rf = mpt_reply_handlers[cb_index](mpt, req, reply_frame);
+
+ if (reply_frame != NULL && free_rf)
+ mpt_free_reply(mpt, reply_baddr);
+ }
+}
+
+/******************************* Error Recovery *******************************/
+void
+mpt_complete_request_chain(struct mpt_softc *mpt, struct req_queue *chain,
+ u_int iocstatus)
+{
+ MSG_DEFAULT_REPLY ioc_status_frame;
+ request_t *req;
+
+ bzero(&ioc_status_frame, sizeof(ioc_status_frame));
+ ioc_status_frame.MsgLength = roundup2(sizeof(ioc_status_frame), 4);
+ ioc_status_frame.IOCStatus = iocstatus;
+ while ((req = TAILQ_FIRST(chain)) != NULL) {
+ MSG_REQUEST_HEADER *msg_hdr;
+ u_int cb_index;
+
+ msg_hdr = (MSG_REQUEST_HEADER *)req->req_vbuf;
+ ioc_status_frame.Function = msg_hdr->Function;
+ ioc_status_frame.MsgContext = msg_hdr->MsgContext;
+ cb_index = MPT_CONTEXT_TO_CBI(le32toh(msg_hdr->MsgContext));
+ mpt_reply_handlers[cb_index](mpt, req, &ioc_status_frame);
+ }
+}
+
+/********************************* Diagnostics ********************************/
+/*
+ * Perform a diagnostic dump of a reply frame.
+ */
+void
+mpt_dump_reply_frame(struct mpt_softc *mpt, MSG_DEFAULT_REPLY *reply_frame)
+{
+
+ mpt_prt(mpt, "Address Reply:\n");
+ mpt_print_reply(reply_frame);
+}
+
+/******************************* Doorbell Access ******************************/
+static __inline uint32_t mpt_rd_db(struct mpt_softc *mpt);
+static __inline uint32_t mpt_rd_intr(struct mpt_softc *mpt);
-static INLINE u_int32_t
-mpt_rd_db(mpt_softc_t *mpt)
+static __inline uint32_t
+mpt_rd_db(struct mpt_softc *mpt)
{
return mpt_read(mpt, MPT_OFFSET_DOORBELL);
}
-static INLINE u_int32_t
-mpt_rd_intr(mpt_softc_t *mpt)
+static __inline uint32_t
+mpt_rd_intr(struct mpt_softc *mpt)
{
return mpt_read(mpt, MPT_OFFSET_INTR_STATUS);
}
/* Busy wait for a door bell to be read by IOC */
static int
-mpt_wait_db_ack(mpt_softc_t *mpt)
+mpt_wait_db_ack(struct mpt_softc *mpt)
{
int i;
for (i=0; i < MPT_MAX_WAIT; i++) {
@@ -67,14 +677,14 @@ mpt_wait_db_ack(mpt_softc_t *mpt)
return MPT_OK;
}
- DELAY(100);
+ DELAY(1000);
}
return MPT_FAIL;
}
/* Busy wait for a door bell interrupt */
static int
-mpt_wait_db_int(mpt_softc_t *mpt)
+mpt_wait_db_int(struct mpt_softc *mpt)
{
int i;
for (i=0; i < MPT_MAX_WAIT; i++) {
@@ -89,23 +699,23 @@ mpt_wait_db_int(mpt_softc_t *mpt)
/* Check that the doorbell still reports a running IOC */
void
-mpt_check_doorbell(mpt_softc_t *mpt)
+mpt_check_doorbell(struct mpt_softc *mpt)
{
- u_int32_t db = mpt_rd_db(mpt);
+ uint32_t db = mpt_rd_db(mpt);
if (MPT_STATE(db) != MPT_DB_STATE_RUNNING) {
- mpt_prt(mpt, "Device not running");
+ mpt_prt(mpt, "Device not running\n");
mpt_print_db(db);
}
}
/* Wait for IOC to transition to a given state */
static int
-mpt_wait_state(mpt_softc_t *mpt, enum DB_STATE_BITS state)
+mpt_wait_state(struct mpt_softc *mpt, enum DB_STATE_BITS state)
{
int i;
for (i = 0; i < MPT_MAX_WAIT; i++) {
- u_int32_t db = mpt_rd_db(mpt);
+ uint32_t db = mpt_rd_db(mpt);
if (MPT_STATE(db) == state) {
maxwait_state = i > maxwait_state ? i : maxwait_state;
return (MPT_OK);
@@ -116,17 +726,18 @@ mpt_wait_state(mpt_softc_t *mpt, enum DB_STATE_BITS state)
}
+/************************ Initialization/Configuration ************************/
+static int mpt_download_fw(struct mpt_softc *mpt);
+
/* Issue the reset COMMAND to the IOC */
-int
-mpt_soft_reset(mpt_softc_t *mpt)
+static int
+mpt_soft_reset(struct mpt_softc *mpt)
{
- if (mpt->verbose) {
- mpt_prt(mpt, "soft reset");
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "soft reset\n");
/* Have to use hard reset if we are not in Running state */
if (MPT_STATE(mpt_rd_db(mpt)) != MPT_DB_STATE_RUNNING) {
- mpt_prt(mpt, "soft reset failed: device not running");
+ mpt_prt(mpt, "soft reset failed: device not running\n");
return MPT_FAIL;
}
@@ -135,7 +746,7 @@ mpt_soft_reset(mpt_softc_t *mpt)
* processing. So don't waste our time.
*/
if (MPT_DB_IS_IN_USE(mpt_rd_db(mpt))) {
- mpt_prt(mpt, "soft reset failed: doorbell wedged");
+ mpt_prt(mpt, "soft reset failed: doorbell wedged\n");
return MPT_FAIL;
}
@@ -143,60 +754,132 @@ mpt_soft_reset(mpt_softc_t *mpt)
mpt_write(mpt, MPT_OFFSET_DOORBELL,
MPI_FUNCTION_IOC_MESSAGE_UNIT_RESET << MPI_DOORBELL_FUNCTION_SHIFT);
if (mpt_wait_db_ack(mpt) != MPT_OK) {
- mpt_prt(mpt, "soft reset failed: ack timeout");
+ mpt_prt(mpt, "soft reset failed: ack timeout\n");
return MPT_FAIL;
}
/* Wait for the IOC to reload and come out of reset state */
if (mpt_wait_state(mpt, MPT_DB_STATE_READY) != MPT_OK) {
- mpt_prt(mpt, "soft reset failed: device did not start running");
+ mpt_prt(mpt, "soft reset failed: device did not restart\n");
return MPT_FAIL;
}
return MPT_OK;
}
+static int
+mpt_enable_diag_mode(struct mpt_softc *mpt)
+{
+ int try;
+
+ try = 20;
+ while (--try) {
+
+ if ((mpt_read(mpt, MPT_OFFSET_DIAGNOSTIC) & MPI_DIAG_DRWE) != 0)
+ break;
+
+ /* Enable diagnostic registers */
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, 0xFF);
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPI_WRSEQ_1ST_KEY_VALUE);
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPI_WRSEQ_2ND_KEY_VALUE);
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPI_WRSEQ_3RD_KEY_VALUE);
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPI_WRSEQ_4TH_KEY_VALUE);
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPI_WRSEQ_5TH_KEY_VALUE);
+
+ DELAY(100000);
+ }
+ if (try == 0)
+ return (EIO);
+ return (0);
+}
+
+static void
+mpt_disable_diag_mode(struct mpt_softc *mpt)
+{
+ mpt_write(mpt, MPT_OFFSET_SEQUENCE, 0xFFFFFFFF);
+}
+
/* This is a magic diagnostic reset that resets all the ARM
* processors in the chip.
*/
-void
-mpt_hard_reset(mpt_softc_t *mpt)
+static void
+mpt_hard_reset(struct mpt_softc *mpt)
{
- /* This extra read comes for the Linux source
- * released by LSI. It's function is undocumented!
- */
- if (mpt->verbose) {
- mpt_prt(mpt, "hard reset");
+ int error;
+ int wait;
+ uint32_t diagreg;
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "hard reset\n");
+
+ error = mpt_enable_diag_mode(mpt);
+ if (error) {
+ mpt_prt(mpt, "WARNING - Could not enter diagnostic mode !\n");
+ mpt_prt(mpt, "Trying to reset anyway.\n");
}
- mpt_read(mpt, MPT_OFFSET_FUBAR);
- /* Enable diagnostic registers */
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPT_DIAG_SEQUENCE_1);
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPT_DIAG_SEQUENCE_2);
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPT_DIAG_SEQUENCE_3);
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPT_DIAG_SEQUENCE_4);
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, MPT_DIAG_SEQUENCE_5);
+ diagreg = mpt_read(mpt, MPT_OFFSET_DIAGNOSTIC);
+
+ /*
+ * This appears to be a workaround required for some
+ * firmware or hardware revs.
+ */
+ mpt_write(mpt, MPT_OFFSET_DIAGNOSTIC, diagreg | MPI_DIAG_DISABLE_ARM);
+ DELAY(1000);
/* Diag. port is now active so we can now hit the reset bit */
- mpt_write(mpt, MPT_OFFSET_DIAGNOSTIC, MPT_DIAG_RESET_IOC);
+ mpt_write(mpt, MPT_OFFSET_DIAGNOSTIC, diagreg | MPI_DIAG_RESET_ADAPTER);
+
+ /*
+ * Ensure that the reset has finished. We delay 1ms
+ * prior to reading the register to make sure the chip
+ * has sufficiently completed its reset to handle register
+ * accesses.
+ */
+ wait = 5000;
+ do {
+ DELAY(1000);
+ diagreg = mpt_read(mpt, MPT_OFFSET_DIAGNOSTIC);
+ } while (--wait && (diagreg & MPI_DIAG_RESET_ADAPTER) == 0);
- DELAY(10000);
+ if (wait == 0) {
+ mpt_prt(mpt, "WARNING - Failed hard reset! "
+ "Trying to initialize anyway.\n");
+ }
- /* Disable Diagnostic Register */
- mpt_write(mpt, MPT_OFFSET_SEQUENCE, 0xFF);
+ /*
+ * If we have firmware to download, it must be loaded before
+ * the controller will become operational. Do so now.
+ */
+ if (mpt->fw_image != NULL) {
- /* Restore the config register values */
- /* Hard resets are known to screw up the BAR for diagnostic
- memory accesses (Mem1). */
- mpt_set_config_regs(mpt);
- if (mpt->mpt2 != NULL) {
- mpt_set_config_regs(mpt->mpt2);
+ error = mpt_download_fw(mpt);
+
+ if (error) {
+ mpt_prt(mpt, "WARNING - Firmware Download Failed!\n");
+ mpt_prt(mpt, "Trying to initialize anyway.\n");
+ }
}
- /* Note that if there is no valid firmware to run, the doorbell will
- remain in the reset state (0x00000000) */
+ /*
+ * Resetting the controller should have disabled write
+ * access to the diagnostic registers, but disable
+ * manually to be sure.
+ */
+ mpt_disable_diag_mode(mpt);
+}
+
+static void
+mpt_core_ioc_reset(struct mpt_softc *mpt, int type)
+{
+ /*
+ * Complete all pending requests with a status
+ * appropriate for an IOC reset.
+ */
+ mpt_complete_request_chain(mpt, &mpt->request_pending_list,
+ MPI_IOCSTATUS_INVALID_STATE);
}
+
/*
* Reset the IOC when needed. Try software command first then if needed
* poke at the magic diagnostic reset. Note that a hard reset resets
@@ -204,9 +887,10 @@ mpt_hard_reset(mpt_softc_t *mpt)
* fouls up the PCI configuration registers.
*/
int
-mpt_reset(mpt_softc_t *mpt)
+mpt_reset(struct mpt_softc *mpt, int reinit)
{
- int ret;
+ struct mpt_personality *pers;
+ int ret;
/* Try a soft reset */
if ((ret = mpt_soft_reset(mpt)) != MPT_OK) {
@@ -215,87 +899,162 @@ mpt_reset(mpt_softc_t *mpt)
/* Wait for the IOC to reload and come out of reset state */
ret = mpt_wait_state(mpt, MPT_DB_STATE_READY);
- if (ret != MPT_OK) {
- mpt_prt(mpt, "failed to reset device");
- }
+ if (ret != MPT_OK)
+ mpt_prt(mpt, "failed to reset device\n");
}
+ /*
+ * Invoke reset handlers. We bump the reset count so
+ * that mpt_wait_req() understands that regardless of
+ * the specified wait condition, it should stop its wait.
+ */
+ mpt->reset_cnt++;
+ MPT_PERS_FOREACH(mpt, pers)
+ pers->reset(mpt, ret);
+
+ if (reinit != 0)
+ mpt_enable_ioc(mpt);
+
return ret;
}
/* Return a command buffer to the free queue */
void
-mpt_free_request(mpt_softc_t *mpt, request_t *req)
+mpt_free_request(struct mpt_softc *mpt, request_t *req)
{
+ struct mpt_evtf_record *record;
+ uint32_t reply_baddr;
+
if (req == NULL || req != &mpt->request_pool[req->index]) {
panic("mpt_free_request bad req ptr\n");
return;
}
- req->sequence = 0;
req->ccb = NULL;
- req->debug = REQ_FREE;
- SLIST_INSERT_HEAD(&mpt->request_free_list, req, link);
+ req->state = REQ_STATE_FREE;
+ if (LIST_EMPTY(&mpt->ack_frames)) {
+ TAILQ_INSERT_HEAD(&mpt->request_free_list, req, links);
+ if (mpt->getreqwaiter != 0) {
+ mpt->getreqwaiter = 0;
+ wakeup(&mpt->request_free_list);
+ }
+ return;
+ }
+
+ /*
+ * Process an ack frame deferred due to resource shortage.
+ */
+ record = LIST_FIRST(&mpt->ack_frames);
+ LIST_REMOVE(record, links);
+ mpt_send_event_ack(mpt, req, &record->reply, record->context);
+ reply_baddr = (uint32_t)((uint8_t *)record - mpt->reply)
+ + (mpt->reply_phys & 0xFFFFFFFF);
+ mpt_free_reply(mpt, reply_baddr);
}
/* Get a command buffer from the free queue */
request_t *
-mpt_get_request(mpt_softc_t *mpt)
+mpt_get_request(struct mpt_softc *mpt, int sleep_ok)
{
request_t *req;
- req = SLIST_FIRST(&mpt->request_free_list);
+
+retry:
+ req = TAILQ_FIRST(&mpt->request_free_list);
if (req != NULL) {
- if (req != &mpt->request_pool[req->index]) {
- panic("mpt_get_request: corrupted request free list\n");
- }
- if (req->ccb != NULL) {
- panic("mpt_get_request: corrupted request free list (ccb)\n");
- }
- SLIST_REMOVE_HEAD(&mpt->request_free_list, link);
- req->debug = REQ_IN_PROGRESS;
+ KASSERT(req == &mpt->request_pool[req->index],
+ ("mpt_get_request: corrupted request free list\n"));
+ TAILQ_REMOVE(&mpt->request_free_list, req, links);
+ req->state = REQ_STATE_ALLOCATED;
+ } else if (sleep_ok != 0) {
+ mpt->getreqwaiter = 1;
+ mpt_sleep(mpt, &mpt->request_free_list, PUSER, "mptgreq", 0);
+ goto retry;
}
return req;
}
/* Pass the command to the IOC */
void
-mpt_send_cmd(mpt_softc_t *mpt, request_t *req)
-{
- req->sequence = mpt->sequence++;
- if (mpt->verbose > 1) {
- u_int32_t *pReq;
- pReq = req->req_vbuf;
- mpt_prt(mpt, "Send Request %d (0x%x):",
- req->index, req->req_pbuf);
- mpt_prt(mpt, "%08x %08x %08x %08x",
- pReq[0], pReq[1], pReq[2], pReq[3]);
- mpt_prt(mpt, "%08x %08x %08x %08x",
- pReq[4], pReq[5], pReq[6], pReq[7]);
- mpt_prt(mpt, "%08x %08x %08x %08x",
- pReq[8], pReq[9], pReq[10], pReq[11]);
- mpt_prt(mpt, "%08x %08x %08x %08x",
- pReq[12], pReq[13], pReq[14], pReq[15]);
- }
+mpt_send_cmd(struct mpt_softc *mpt, request_t *req)
+{
+ uint32_t *pReq;
+
+ pReq = req->req_vbuf;
+ mpt_lprt(mpt, MPT_PRT_TRACE, "Send Request %d (0x%x):\n",
+ req->index, req->req_pbuf);
+ mpt_lprt(mpt, MPT_PRT_TRACE, "%08x %08x %08x %08x\n",
+ pReq[0], pReq[1], pReq[2], pReq[3]);
+ mpt_lprt(mpt, MPT_PRT_TRACE, "%08x %08x %08x %08x\n",
+ pReq[4], pReq[5], pReq[6], pReq[7]);
+ mpt_lprt(mpt, MPT_PRT_TRACE, "%08x %08x %08x %08x\n",
+ pReq[8], pReq[9], pReq[10], pReq[11]);
+ mpt_lprt(mpt, MPT_PRT_TRACE, "%08x %08x %08x %08x\n",
+ pReq[12], pReq[13], pReq[14], pReq[15]);
+
bus_dmamap_sync(mpt->request_dmat, mpt->request_dmap,
BUS_DMASYNC_PREWRITE);
- req->debug = REQ_ON_CHIP;
- mpt_write(mpt, MPT_OFFSET_REQUEST_Q, (u_int32_t) req->req_pbuf);
+ req->state |= REQ_STATE_QUEUED;
+ TAILQ_INSERT_HEAD(&mpt->request_pending_list, req, links);
+ mpt_write(mpt, MPT_OFFSET_REQUEST_Q, (uint32_t) req->req_pbuf);
}
/*
- * Give the reply buffer back to the IOC after we have
- * finished processing it.
+ * Wait for a request to complete.
+ *
+ * Inputs:
+ * mpt softc of controller executing request
+ * req request to wait for
+ * sleep_ok nonzero implies may sleep in this context
+ * time_ms timeout in ms. 0 implies no timeout.
+ *
+ * Return Values:
+ * 0 Request completed
+ * non-0 Timeout fired before request completion.
*/
-void
-mpt_free_reply(mpt_softc_t *mpt, u_int32_t ptr)
+int
+mpt_wait_req(struct mpt_softc *mpt, request_t *req,
+ mpt_req_state_t state, mpt_req_state_t mask,
+ int sleep_ok, int time_ms)
{
- mpt_write(mpt, MPT_OFFSET_REPLY_Q, ptr);
-}
+ int error;
+ int timeout;
+ u_int saved_cnt;
-/* Get a reply from the IOC */
-u_int32_t
-mpt_pop_reply_queue(mpt_softc_t *mpt)
-{
- return mpt_read(mpt, MPT_OFFSET_REPLY_Q);
+ /*
+ * timeout is in ms. 0 indicates infinite wait.
+ * Convert to ticks or 500us units depending on
+ * our sleep mode.
+ */
+ if (sleep_ok != 0)
+ timeout = (time_ms * hz) / 1000;
+ else
+ timeout = time_ms * 2;
+ saved_cnt = mpt->reset_cnt;
+ req->state |= REQ_STATE_NEED_WAKEUP;
+ mask &= ~REQ_STATE_NEED_WAKEUP;
+ while ((req->state & mask) != state
+ && mpt->reset_cnt == saved_cnt) {
+
+ if (sleep_ok != 0) {
+ error = mpt_sleep(mpt, req, PUSER, "mptreq", timeout);
+ if (error == EWOULDBLOCK) {
+ timeout = 0;
+ break;
+ }
+ } else {
+ if (time_ms != 0 && --timeout == 0) {
+ mpt_prt(mpt, "mpt_wait_req timed out\n");
+ break;
+ }
+ DELAY(500);
+ mpt_intr(mpt);
+ }
+ }
+ req->state &= ~REQ_STATE_NEED_WAKEUP;
+ if (mpt->reset_cnt != saved_cnt)
+ return (EIO);
+ if (time_ms && timeout == 0)
+ return (ETIMEDOUT);
+ return (0);
}
/*
@@ -305,20 +1064,20 @@ mpt_pop_reply_queue(mpt_softc_t *mpt)
* commands such as device/bus reset as specified by LSI.
*/
int
-mpt_send_handshake_cmd(mpt_softc_t *mpt, size_t len, void *cmd)
+mpt_send_handshake_cmd(struct mpt_softc *mpt, size_t len, void *cmd)
{
int i;
- u_int32_t data, *data32;
+ uint32_t data, *data32;
/* Check condition of the IOC */
data = mpt_rd_db(mpt);
- if (((MPT_STATE(data) != MPT_DB_STATE_READY) &&
- (MPT_STATE(data) != MPT_DB_STATE_RUNNING) &&
- (MPT_STATE(data) != MPT_DB_STATE_FAULT)) ||
- ( MPT_DB_IS_IN_USE(data) )) {
- mpt_prt(mpt, "handshake aborted due to invalid doorbell state");
+ if ((MPT_STATE(data) != MPT_DB_STATE_READY
+ && MPT_STATE(data) != MPT_DB_STATE_RUNNING
+ && MPT_STATE(data) != MPT_DB_STATE_FAULT)
+ || MPT_DB_IS_IN_USE(data)) {
+ mpt_prt(mpt, "handshake aborted - invalid doorbell state\n");
mpt_print_db(data);
- return(EBUSY);
+ return (EBUSY);
}
/* We move things in 32 bit chunks */
@@ -339,25 +1098,26 @@ mpt_send_handshake_cmd(mpt_softc_t *mpt, size_t len, void *cmd)
/* Wait for the chip to notice */
if (mpt_wait_db_int(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_send_handshake_cmd timeout1");
- return ETIMEDOUT;
+ mpt_prt(mpt, "mpt_send_handshake_cmd timeout1\n");
+ return (ETIMEDOUT);
}
/* Clear the interrupt */
mpt_write(mpt, MPT_OFFSET_INTR_STATUS, 0);
if (mpt_wait_db_ack(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_send_handshake_cmd timeout2");
- return ETIMEDOUT;
+ mpt_prt(mpt, "mpt_send_handshake_cmd timeout2\n");
+ return (ETIMEDOUT);
}
/* Send the command */
for (i = 0; i < len; i++) {
mpt_write(mpt, MPT_OFFSET_DOORBELL, *data32++);
if (mpt_wait_db_ack(mpt) != MPT_OK) {
- mpt_prt(mpt,
- "mpt_send_handshake_cmd timeout! index = %d", i);
- return ETIMEDOUT;
+ mpt_prt(mpt,
+ "mpt_send_handshake_cmd timeout! index = %d\n",
+ i);
+ return (ETIMEDOUT);
}
}
return MPT_OK;
@@ -365,7 +1125,7 @@ mpt_send_handshake_cmd(mpt_softc_t *mpt, size_t len, void *cmd)
/* Get the response from the handshake register */
int
-mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
+mpt_recv_handshake_reply(struct mpt_softc *mpt, size_t reply_len, void *reply)
{
int left, reply_left;
u_int16_t *data16;
@@ -379,7 +1139,7 @@ mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
/* Get first word */
if (mpt_wait_db_int(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_recv_handshake_cmd timeout1");
+ mpt_prt(mpt, "mpt_recv_handshake_cmd timeout1\n");
return ETIMEDOUT;
}
*data16++ = mpt_read(mpt, MPT_OFFSET_DOORBELL) & MPT_DB_DATA_MASK;
@@ -387,16 +1147,16 @@ mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
/* Get Second Word */
if (mpt_wait_db_int(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_recv_handshake_cmd timeout2");
+ mpt_prt(mpt, "mpt_recv_handshake_cmd timeout2\n");
return ETIMEDOUT;
}
*data16++ = mpt_read(mpt, MPT_OFFSET_DOORBELL) & MPT_DB_DATA_MASK;
mpt_write(mpt, MPT_OFFSET_INTR_STATUS, 0);
/* With the second word, we can now look at the length */
- if (mpt->verbose > 1 && ((reply_len >> 1) != hdr->MsgLength)) {
+ if ((reply_len >> 1) != hdr->MsgLength) {
mpt_prt(mpt, "reply length does not match message length: "
- "got 0x%02x, expected 0x%02x",
+ "got 0x%02x, expected 0x%02x\n",
hdr->MsgLength << 2, reply_len << 1);
}
@@ -407,7 +1167,7 @@ mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
u_int16_t datum;
if (mpt_wait_db_int(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_recv_handshake_cmd timeout3");
+ mpt_prt(mpt, "mpt_recv_handshake_cmd timeout3\n");
return ETIMEDOUT;
}
datum = mpt_read(mpt, MPT_OFFSET_DOORBELL);
@@ -420,13 +1180,13 @@ mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
/* One more wait & clear at the end */
if (mpt_wait_db_int(mpt) != MPT_OK) {
- mpt_prt(mpt, "mpt_recv_handshake_cmd timeout4");
+ mpt_prt(mpt, "mpt_recv_handshake_cmd timeout4\n");
return ETIMEDOUT;
}
mpt_write(mpt, MPT_OFFSET_INTR_STATUS, 0);
if ((hdr->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
- if (mpt->verbose > 1)
+ if (mpt->verbose >= MPT_PRT_TRACE)
mpt_print_reply(hdr);
return (MPT_FAIL | hdr->IOCStatus);
}
@@ -435,14 +1195,14 @@ mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply)
}
static int
-mpt_get_iocfacts(mpt_softc_t *mpt, MSG_IOC_FACTS_REPLY *freplp)
+mpt_get_iocfacts(struct mpt_softc *mpt, MSG_IOC_FACTS_REPLY *freplp)
{
MSG_IOC_FACTS f_req;
int error;
bzero(&f_req, sizeof f_req);
f_req.Function = MPI_FUNCTION_IOC_FACTS;
- f_req.MsgContext = 0x12071942;
+ f_req.MsgContext = htole32(MPT_REPLY_HANDLER_HANDSHAKE);
error = mpt_send_handshake_cmd(mpt, sizeof f_req, &f_req);
if (error)
return(error);
@@ -451,15 +1211,15 @@ mpt_get_iocfacts(mpt_softc_t *mpt, MSG_IOC_FACTS_REPLY *freplp)
}
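As the commit message explains, every request's MsgContext now encodes a reply-handler ("callback") index so the interrupt path can dispatch a response without knowing the transaction type. A minimal sketch of such an encoding, with hypothetical field widths (the driver's real `MPT_REPLY_HANDLER_*` constants and `MPT_CBI()` macro define the actual layout):

```c
#include <stdint.h>

/*
 * Hypothetical layout: the high bits of MsgContext select the reply
 * handler (callback index); the low bits name the request slot.
 */
#define CBI_SHIFT 16u
#define REQI_MASK 0xffffu

static uint32_t msg_context_encode(uint32_t cbi, uint32_t req_index)
{
	return ((cbi << CBI_SHIFT) | (req_index & REQI_MASK));
}

static uint32_t msg_context_cbi(uint32_t ctx)
{
	return (ctx >> CBI_SHIFT);
}

static uint32_t msg_context_req_index(uint32_t ctx)
{
	return (ctx & REQI_MASK);
}
```

With this split, a handshake-only context like `MPT_REPLY_HANDLER_HANDSHAKE` simply uses a request index of zero, since no request structure is associated with it.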
static int
-mpt_get_portfacts(mpt_softc_t *mpt, MSG_PORT_FACTS_REPLY *freplp)
+mpt_get_portfacts(struct mpt_softc *mpt, MSG_PORT_FACTS_REPLY *freplp)
{
MSG_PORT_FACTS f_req;
int error;
/* XXX: Only getting PORT FACTS for Port 0 */
- bzero(&f_req, sizeof f_req);
+ memset(&f_req, 0, sizeof f_req);
f_req.Function = MPI_FUNCTION_PORT_FACTS;
- f_req.MsgContext = 0x12071943;
+ f_req.MsgContext = htole32(MPT_REPLY_HANDLER_HANDSHAKE);
error = mpt_send_handshake_cmd(mpt, sizeof f_req, &f_req);
if (error)
return(error);
@@ -474,7 +1234,7 @@ mpt_get_portfacts(mpt_softc_t *mpt, MSG_PORT_FACTS_REPLY *freplp)
* frames from the IOC that we will be allocating.
*/
static int
-mpt_send_ioc_init(mpt_softc_t *mpt, u_int32_t who)
+mpt_send_ioc_init(struct mpt_softc *mpt, uint32_t who)
{
int error = 0;
MSG_IOC_INIT init;
@@ -490,7 +1250,7 @@ mpt_send_ioc_init(mpt_softc_t *mpt, u_int32_t who)
}
init.MaxBuses = 1;
init.ReplyFrameSize = MPT_REPLY_SIZE;
- init.MsgContext = 0x12071941;
+ init.MsgContext = htole32(MPT_REPLY_HANDLER_HANDSHAKE);
if ((error = mpt_send_handshake_cmd(mpt, sizeof init, &init)) != 0) {
return(error);
@@ -504,215 +1264,311 @@ mpt_send_ioc_init(mpt_softc_t *mpt, u_int32_t who)
/*
* Utility routine to read configuration headers and pages
*/
-
-static int
-mpt_read_cfg_header(mpt_softc_t *, int, int, int, CONFIG_PAGE_HEADER *);
-
-static int
-mpt_read_cfg_header(mpt_softc_t *mpt, int PageType, int PageNumber,
- int PageAddress, CONFIG_PAGE_HEADER *rslt)
+int
+mpt_issue_cfg_req(struct mpt_softc *mpt, request_t *req, u_int Action,
+ u_int PageVersion, u_int PageLength, u_int PageNumber,
+ u_int PageType, uint32_t PageAddress, bus_addr_t addr,
+ bus_size_t len, int sleep_ok, int timeout_ms)
{
- int count;
- request_t *req;
MSG_CONFIG *cfgp;
- MSG_CONFIG_REPLY *reply;
-
- req = mpt_get_request(mpt);
+ SGE_SIMPLE32 *se;
cfgp = req->req_vbuf;
- bzero(cfgp, sizeof *cfgp);
-
- cfgp->Action = MPI_CONFIG_ACTION_PAGE_HEADER;
+ memset(cfgp, 0, sizeof *cfgp);
+ cfgp->Action = Action;
cfgp->Function = MPI_FUNCTION_CONFIG;
- cfgp->Header.PageNumber = (U8) PageNumber;
- cfgp->Header.PageType = (U8) PageType;
+ cfgp->Header.PageVersion = PageVersion;
+ cfgp->Header.PageLength = PageLength;
+ cfgp->Header.PageNumber = PageNumber;
+ cfgp->Header.PageType = PageType;
cfgp->PageAddress = PageAddress;
- MPI_pSGE_SET_FLAGS(((SGE_SIMPLE32 *) &cfgp->PageBufferSGE),
- (MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
- MPI_SGE_FLAGS_SIMPLE_ELEMENT | MPI_SGE_FLAGS_END_OF_LIST));
- cfgp->MsgContext = req->index | 0x80000000;
+ se = (SGE_SIMPLE32 *)&cfgp->PageBufferSGE;
+ se->Address = addr;
+ MPI_pSGE_SET_LENGTH(se, len);
+ MPI_pSGE_SET_FLAGS(se, (MPI_SGE_FLAGS_SIMPLE_ELEMENT |
+ MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
+ MPI_SGE_FLAGS_END_OF_LIST |
+ ((Action == MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT
+ || Action == MPI_CONFIG_ACTION_PAGE_WRITE_NVRAM)
+ ? MPI_SGE_FLAGS_HOST_TO_IOC : MPI_SGE_FLAGS_IOC_TO_HOST)));
+ cfgp->MsgContext = htole32(req->index | MPT_REPLY_HANDLER_CONFIG);
mpt_check_doorbell(mpt);
mpt_send_cmd(mpt, req);
- count = 0;
- do {
- DELAY(500);
- mpt_intr(mpt);
- if (++count == 1000) {
- mpt_prt(mpt, "read_cfg_header timed out");
- return (-1);
- }
- } while (req->debug == REQ_ON_CHIP);
+ return (mpt_wait_req(mpt, req, REQ_STATE_DONE, REQ_STATE_DONE,
+ sleep_ok, timeout_ms));
+}
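mpt_issue_cfg_req() above builds a single simple SGE whose direction flag depends on the config action: page writes move data host-to-IOC, everything else IOC-to-host. A condensed sketch of that flag selection, using stand-in values (the genuine `MPI_SGE_FLAGS_*` and `MPI_CONFIG_ACTION_*` constants come from the MPI headers):

```c
#include <stdint.h>

/* Stand-in flag values; the real ones live in the MPI headers. */
#define SGE_SIMPLE       0x10u
#define SGE_LAST         0x80u
#define SGE_EOB          0x40u
#define SGE_EOL          0x01u
#define SGE_HOST_TO_IOC  0x04u	/* data flows toward the controller */
#define SGE_IOC_TO_HOST  0x00u	/* data flows toward the host */

#define ACTION_PAGE_WRITE_CURRENT 0x02
#define ACTION_PAGE_WRITE_NVRAM   0x04

/* Compute the SGE flags for a config request, mirroring the
 * write-vs-read choice made in mpt_issue_cfg_req(). */
static uint32_t cfg_sge_flags(int action)
{
	uint32_t dir;

	dir = (action == ACTION_PAGE_WRITE_CURRENT ||
	    action == ACTION_PAGE_WRITE_NVRAM)
	    ? SGE_HOST_TO_IOC : SGE_IOC_TO_HOST;
	return (SGE_SIMPLE | SGE_LAST | SGE_EOB | SGE_EOL | dir);
}
```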
+
- reply = (MSG_CONFIG_REPLY *) MPT_REPLY_PTOV(mpt, req->sequence);
- if ((reply->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
- mpt_prt(mpt, "mpt_read_cfg_header: Config Info Status %x",
- reply->IOCStatus);
- mpt_free_reply(mpt, (req->sequence << 1));
+int
+mpt_read_cfg_header(struct mpt_softc *mpt, int PageType, int PageNumber,
+ uint32_t PageAddress, CONFIG_PAGE_HEADER *rslt,
+ int sleep_ok, int timeout_ms)
+{
+ request_t *req;
+ int error;
+
+ req = mpt_get_request(mpt, sleep_ok);
+ if (req == NULL) {
+ mpt_prt(mpt, "mpt_read_cfg_header: Get request failed!\n");
return (-1);
}
- bcopy(&reply->Header, rslt, sizeof (CONFIG_PAGE_HEADER));
- mpt_free_reply(mpt, (req->sequence << 1));
+
+ error = mpt_issue_cfg_req(mpt, req, MPI_CONFIG_ACTION_PAGE_HEADER,
+ /*PageVersion*/0, /*PageLength*/0, PageNumber,
+ PageType, PageAddress, /*addr*/0, /*len*/0,
+ sleep_ok, timeout_ms);
+ if (error != 0) {
+ mpt_prt(mpt, "read_cfg_header timed out\n");
+ return (-1);
+ }
+
+ if ((req->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
+ mpt_prt(mpt, "mpt_read_cfg_header: Config Info Status %x\n",
+ req->IOCStatus);
+ error = -1;
+ } else {
+ MSG_CONFIG *cfgp;
+
+ cfgp = req->req_vbuf;
+ bcopy(&cfgp->Header, rslt, sizeof(*rslt));
+ error = 0;
+ }
mpt_free_request(mpt, req);
- return (0);
+ return (error);
}
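The old busy-wait `DELAY()`/`mpt_intr()` loops are replaced throughout by mpt_wait_req(), which sleeps or polls until the request's state, masked, equals a wanted value (e.g. `REQ_STATE_DONE` against mask `REQ_STATE_DONE`). A sketch of that completion predicate, with hypothetical state bits:

```c
#include <stdint.h>

/* Hypothetical request-state bits; the driver's REQ_STATE_* set is richer. */
#define REQ_STATE_QUEUED   0x02u
#define REQ_STATE_DONE     0x08u
#define REQ_STATE_TIMEDOUT 0x10u

/* True once the masked state bits match the wanted value; this is the
 * loop-exit condition a routine like mpt_wait_req() would evaluate. */
static int req_state_matches(uint32_t cur, uint32_t want, uint32_t mask)
{
	return ((cur & mask) == want);
}
```

Passing both `want` and `mask` lets a caller wait for a bit to set, clear, or match exactly, which is why the driver's calls pass the pair `REQ_STATE_DONE, REQ_STATE_DONE`.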
#define CFG_DATA_OFF 128
int
-mpt_read_cfg_page(mpt_softc_t *mpt, int PageAddress, CONFIG_PAGE_HEADER *hdr)
+mpt_read_cfg_page(struct mpt_softc *mpt, int Action, uint32_t PageAddress,
+ CONFIG_PAGE_HEADER *hdr, size_t len, int sleep_ok,
+ int timeout_ms)
{
- int count;
- request_t *req;
- SGE_SIMPLE32 *se;
- MSG_CONFIG *cfgp;
- size_t amt;
- MSG_CONFIG_REPLY *reply;
-
- req = mpt_get_request(mpt);
-
- cfgp = req->req_vbuf;
- bzero(cfgp, MPT_REQUEST_AREA);
- cfgp->Action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT;
- cfgp->Function = MPI_FUNCTION_CONFIG;
- cfgp->Header = *hdr;
- amt = (cfgp->Header.PageLength * sizeof (u_int32_t));
- cfgp->Header.PageType &= MPI_CONFIG_PAGETYPE_MASK;
- cfgp->PageAddress = PageAddress;
- se = (SGE_SIMPLE32 *) &cfgp->PageBufferSGE;
- se->Address = req->req_pbuf + CFG_DATA_OFF;
- MPI_pSGE_SET_LENGTH(se, amt);
- MPI_pSGE_SET_FLAGS(se, (MPI_SGE_FLAGS_SIMPLE_ELEMENT |
- MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
- MPI_SGE_FLAGS_END_OF_LIST));
+ request_t *req;
+ int error;
- cfgp->MsgContext = req->index | 0x80000000;
+ req = mpt_get_request(mpt, sleep_ok);
+ if (req == NULL) {
+ mpt_prt(mpt, "mpt_read_cfg_page: Get request failed!\n");
+ return (-1);
+ }
- mpt_check_doorbell(mpt);
- mpt_send_cmd(mpt, req);
- count = 0;
- do {
- DELAY(500);
- mpt_intr(mpt);
- if (++count == 1000) {
- mpt_prt(mpt, "read_cfg_page timed out");
- return (-1);
- }
- } while (req->debug == REQ_ON_CHIP);
+ error = mpt_issue_cfg_req(mpt, req, Action, hdr->PageVersion,
+ hdr->PageLength, hdr->PageNumber,
+ hdr->PageType & MPI_CONFIG_PAGETYPE_MASK,
+ PageAddress, req->req_pbuf + CFG_DATA_OFF,
+ len, sleep_ok, timeout_ms);
+ if (error != 0) {
+ mpt_prt(mpt, "read_cfg_page(%d) timed out\n", Action);
+ return (-1);
+ }
- reply = (MSG_CONFIG_REPLY *) MPT_REPLY_PTOV(mpt, req->sequence);
- if ((reply->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
- mpt_prt(mpt, "mpt_read_cfg_page: Config Info Status %x",
- reply->IOCStatus);
- mpt_free_reply(mpt, (req->sequence << 1));
+ if ((req->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
+ mpt_prt(mpt, "mpt_read_cfg_page: Config Info Status %x\n",
+ req->IOCStatus);
+ mpt_free_request(mpt, req);
return (-1);
}
- mpt_free_reply(mpt, (req->sequence << 1));
bus_dmamap_sync(mpt->request_dmat, mpt->request_dmap,
BUS_DMASYNC_POSTREAD);
- if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 0) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_0);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 1) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_1);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 2) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_2);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_DEVICE &&
- cfgp->Header.PageNumber == 0) {
- amt = sizeof (CONFIG_PAGE_SCSI_DEVICE_0);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_DEVICE &&
- cfgp->Header.PageNumber == 1) {
- amt = sizeof (CONFIG_PAGE_SCSI_DEVICE_1);
- }
- bcopy(((caddr_t)req->req_vbuf)+CFG_DATA_OFF, hdr, amt);
+ memcpy(hdr, ((uint8_t *)req->req_vbuf)+CFG_DATA_OFF, len);
mpt_free_request(mpt, req);
return (0);
}
int
-mpt_write_cfg_page(mpt_softc_t *mpt, int PageAddress, CONFIG_PAGE_HEADER *hdr)
+mpt_write_cfg_page(struct mpt_softc *mpt, int Action, uint32_t PageAddress,
+ CONFIG_PAGE_HEADER *hdr, size_t len, int sleep_ok,
+ int timeout_ms)
{
- int count, hdr_attr;
- request_t *req;
- SGE_SIMPLE32 *se;
- MSG_CONFIG *cfgp;
- size_t amt;
- MSG_CONFIG_REPLY *reply;
-
- req = mpt_get_request(mpt);
-
- cfgp = req->req_vbuf;
- bzero(cfgp, sizeof *cfgp);
+ request_t *req;
+ u_int hdr_attr;
+ int error;
hdr_attr = hdr->PageType & MPI_CONFIG_PAGEATTR_MASK;
if (hdr_attr != MPI_CONFIG_PAGEATTR_CHANGEABLE &&
hdr_attr != MPI_CONFIG_PAGEATTR_PERSISTENT) {
- mpt_prt(mpt, "page type 0x%x not changeable",
- hdr->PageType & MPI_CONFIG_PAGETYPE_MASK);
+ mpt_prt(mpt, "page type 0x%x not changeable\n",
+ hdr->PageType & MPI_CONFIG_PAGETYPE_MASK);
return (-1);
}
- hdr->PageType &= MPI_CONFIG_PAGETYPE_MASK;
+ hdr->PageType &= MPI_CONFIG_PAGETYPE_MASK;
- cfgp->Action = MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT;
- cfgp->Function = MPI_FUNCTION_CONFIG;
- cfgp->Header = *hdr;
- amt = (cfgp->Header.PageLength * sizeof (u_int32_t));
- cfgp->PageAddress = PageAddress;
+ req = mpt_get_request(mpt, sleep_ok);
+ if (req == NULL)
+ return (-1);
- se = (SGE_SIMPLE32 *) &cfgp->PageBufferSGE;
- se->Address = req->req_pbuf + CFG_DATA_OFF;
- MPI_pSGE_SET_LENGTH(se, amt);
- MPI_pSGE_SET_FLAGS(se, (MPI_SGE_FLAGS_SIMPLE_ELEMENT |
- MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
- MPI_SGE_FLAGS_END_OF_LIST | MPI_SGE_FLAGS_HOST_TO_IOC));
-
- cfgp->MsgContext = req->index | 0x80000000;
-
- if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 0) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_0);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 1) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_1);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_PORT &&
- cfgp->Header.PageNumber == 2) {
- amt = sizeof (CONFIG_PAGE_SCSI_PORT_2);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_DEVICE &&
- cfgp->Header.PageNumber == 0) {
- amt = sizeof (CONFIG_PAGE_SCSI_DEVICE_0);
- } else if (cfgp->Header.PageType == MPI_CONFIG_PAGETYPE_SCSI_DEVICE &&
- cfgp->Header.PageNumber == 1) {
- amt = sizeof (CONFIG_PAGE_SCSI_DEVICE_1);
- }
- bcopy(hdr, ((caddr_t)req->req_vbuf)+CFG_DATA_OFF, amt);
+ memcpy(((caddr_t)req->req_vbuf)+CFG_DATA_OFF, hdr, len);
/* Restore stripped out attributes */
hdr->PageType |= hdr_attr;
- mpt_check_doorbell(mpt);
- mpt_send_cmd(mpt, req);
- count = 0;
- do {
- DELAY(500);
- mpt_intr(mpt);
- if (++count == 1000) {
- hdr->PageType |= hdr_attr;
- mpt_prt(mpt, "mpt_write_cfg_page timed out");
- return (-1);
+ error = mpt_issue_cfg_req(mpt, req, Action, hdr->PageVersion,
+ hdr->PageLength, hdr->PageNumber,
+ hdr->PageType & MPI_CONFIG_PAGETYPE_MASK,
+ PageAddress, req->req_pbuf + CFG_DATA_OFF,
+ len, sleep_ok, timeout_ms);
+ if (error != 0) {
+ mpt_prt(mpt, "mpt_write_cfg_page timed out\n");
+ return (-1);
+ }
+
+ if ((req->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
+ mpt_prt(mpt, "mpt_write_cfg_page: Config Info Status %x\n",
+ req->IOCStatus);
+ mpt_free_request(mpt, req);
+ return (-1);
+ }
+ mpt_free_request(mpt, req);
+ return (0);
+}
+
+/*
+ * Read IOC configuration information
+ */
+static int
+mpt_read_config_info_ioc(struct mpt_softc *mpt)
+{
+ CONFIG_PAGE_HEADER hdr;
+ struct mpt_raid_volume *mpt_raid;
+ int rv;
+ int i;
+ size_t len;
+
+ rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_IOC,
+ /*PageNumber*/2, /*PageAddress*/0, &hdr,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
+ return (EIO);
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "IOC Page 2 Header: ver %x, len %x, "
+ "num %x, type %x\n", hdr.PageVersion,
+ hdr.PageLength * sizeof(uint32_t),
+ hdr.PageNumber, hdr.PageType);
+
+ len = hdr.PageLength * sizeof(uint32_t);
+ mpt->ioc_page2 = malloc(len, M_DEVBUF, M_NOWAIT);
+ if (mpt->ioc_page2 == NULL)
+ return (ENOMEM);
+ memset(mpt->ioc_page2, 0, sizeof(*mpt->ioc_page2));
+ memcpy(&mpt->ioc_page2->Header, &hdr, sizeof(hdr));
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->ioc_page2->Header, len,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "failed to read IOC Page 2\n");
+ } else if (mpt->ioc_page2->CapabilitiesFlags != 0) {
+ uint32_t mask;
+
+ mpt_prt(mpt, "Capabilities: (");
+ for (mask = 1; mask != 0; mask <<= 1) {
+ if ((mpt->ioc_page2->CapabilitiesFlags & mask) == 0)
+ continue;
+
+ switch (mask) {
+ case MPI_IOCPAGE2_CAP_FLAGS_IS_SUPPORT:
+ mpt_prtc(mpt, " RAID-0");
+ break;
+ case MPI_IOCPAGE2_CAP_FLAGS_IME_SUPPORT:
+ mpt_prtc(mpt, " RAID-1E");
+ break;
+ case MPI_IOCPAGE2_CAP_FLAGS_IM_SUPPORT:
+ mpt_prtc(mpt, " RAID-1");
+ break;
+ case MPI_IOCPAGE2_CAP_FLAGS_SES_SUPPORT:
+ mpt_prtc(mpt, " SES");
+ break;
+ case MPI_IOCPAGE2_CAP_FLAGS_SAFTE_SUPPORT:
+ mpt_prtc(mpt, " SAFTE");
+ break;
+ case MPI_IOCPAGE2_CAP_FLAGS_CROSS_CHANNEL_SUPPORT:
+ mpt_prtc(mpt, " Multi-Channel-Arrays");
+ break;
+ default:
+ break;
+ }
+ }
+ mpt_prtc(mpt, " )\n");
+ if ((mpt->ioc_page2->CapabilitiesFlags
+ & (MPI_IOCPAGE2_CAP_FLAGS_IS_SUPPORT
+ | MPI_IOCPAGE2_CAP_FLAGS_IME_SUPPORT
+ | MPI_IOCPAGE2_CAP_FLAGS_IM_SUPPORT)) != 0) {
+ mpt_prt(mpt, "%d Active Volume%s(%d Max)\n",
+ mpt->ioc_page2->NumActiveVolumes,
+ mpt->ioc_page2->NumActiveVolumes != 1
+ ? "s " : " ",
+ mpt->ioc_page2->MaxVolumes);
+ mpt_prt(mpt, "%d Hidden Drive Member%s(%d Max)\n",
+ mpt->ioc_page2->NumActivePhysDisks,
+ mpt->ioc_page2->NumActivePhysDisks != 1
+ ? "s " : " ",
+ mpt->ioc_page2->MaxPhysDisks);
}
- } while (req->debug == REQ_ON_CHIP);
+ }
- reply = (MSG_CONFIG_REPLY *) MPT_REPLY_PTOV(mpt, req->sequence);
- if ((reply->IOCStatus & MPI_IOCSTATUS_MASK) != MPI_IOCSTATUS_SUCCESS) {
- mpt_prt(mpt, "mpt_write_cfg_page: Config Info Status %x",
- reply->IOCStatus);
- mpt_free_reply(mpt, (req->sequence << 1));
+ len = mpt->ioc_page2->MaxVolumes * sizeof(struct mpt_raid_volume);
+ mpt->raid_volumes = malloc(len, M_DEVBUF, M_NOWAIT);
+ if (mpt->raid_volumes == NULL) {
+ mpt_prt(mpt, "Could not allocate RAID volume data\n");
+ } else {
+ memset(mpt->raid_volumes, 0, len);
+ }
+
+ /*
+ * Copy critical data out of ioc_page2 so that we can
+ * safely refresh the page without windows of unreliable
+ * data.
+ */
+ mpt->raid_max_volumes = mpt->ioc_page2->MaxVolumes;
+
+ len = sizeof(*mpt->raid_volumes->config_page)
+ + (sizeof(RAID_VOL0_PHYS_DISK)*(mpt->ioc_page2->MaxPhysDisks - 1));
+ for (i = 0; i < mpt->ioc_page2->MaxVolumes; i++) {
+ mpt_raid = &mpt->raid_volumes[i];
+ mpt_raid->config_page = malloc(len, M_DEVBUF, M_NOWAIT);
+ if (mpt_raid->config_page == NULL) {
+ mpt_prt(mpt, "Could not allocate RAID page data\n");
+ break;
+ }
+ memset(mpt_raid->config_page, 0, len);
+ }
+ mpt->raid_page0_len = len;
+
+ len = mpt->ioc_page2->MaxPhysDisks * sizeof(struct mpt_raid_disk);
+ mpt->raid_disks = malloc(len, M_DEVBUF, M_NOWAIT);
+ if (mpt->raid_disks == NULL) {
+ mpt_prt(mpt, "Could not allocate RAID disk data\n");
+ } else {
+ memset(mpt->raid_disks, 0, len);
+ }
+
+ mpt->raid_max_disks = mpt->ioc_page2->MaxPhysDisks;
+
+ rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_IOC,
+ /*PageNumber*/3, /*PageAddress*/0, &hdr,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
+ return (EIO);
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "IOC Page 3 Header: %x %x %x %x\n",
+ hdr.PageVersion, hdr.PageLength, hdr.PageNumber, hdr.PageType);
+
+ if (mpt->ioc_page3 != NULL)
+ free(mpt->ioc_page3, M_DEVBUF);
+ len = hdr.PageLength * sizeof(uint32_t);
+ mpt->ioc_page3 = malloc(len, M_DEVBUF, M_NOWAIT);
+ if (mpt->ioc_page3 == NULL)
return (-1);
+ memset(mpt->ioc_page3, 0, sizeof(*mpt->ioc_page3));
+ memcpy(&mpt->ioc_page3->Header, &hdr, sizeof(hdr));
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->ioc_page3->Header, len,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "failed to read IOC Page 3\n");
}
- mpt_free_reply(mpt, (req->sequence << 1));
- mpt_free_request(mpt, req);
+ mpt_raid_wakeup(mpt);
+
return (0);
}
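The CapabilitiesFlags report above walks a single bit across the word and emits a name for each set capability. A condensed, testable version of the same walk, with stand-in names and flag values for the `MPI_IOCPAGE2_CAP_FLAGS_*` constants:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in capability bits; the real values come from the MPI headers. */
#define CAP_IS  0x01u	/* RAID-0 (integrated striping) */
#define CAP_IME 0x02u	/* RAID-1E (integrated mirroring enhanced) */
#define CAP_IM  0x04u	/* RAID-1 (integrated mirroring) */

/* Append a name for each capability bit set in 'flags'. */
static void decode_caps(uint32_t flags, char *buf, size_t len)
{
	uint32_t mask;

	buf[0] = '\0';
	for (mask = 1; mask != 0; mask <<= 1) {
		if ((flags & mask) == 0)
			continue;
		switch (mask) {
		case CAP_IS:
			strncat(buf, " RAID-0", len - strlen(buf) - 1);
			break;
		case CAP_IME:
			strncat(buf, " RAID-1E", len - strlen(buf) - 1);
			break;
		case CAP_IM:
			strncat(buf, " RAID-1", len - strlen(buf) - 1);
			break;
		default:
			break;
		}
	}
}
```

Walking `mask` rather than testing each constant in turn means unknown future capability bits are simply skipped instead of mis-printed.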
@@ -720,78 +1576,73 @@ mpt_write_cfg_page(mpt_softc_t *mpt, int PageAddress, CONFIG_PAGE_HEADER *hdr)
* Read SCSI configuration information
*/
static int
-mpt_read_config_info_spi(mpt_softc_t *mpt)
+mpt_read_config_info_spi(struct mpt_softc *mpt)
{
int rv, i;
rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_SCSI_PORT, 0,
- 0, &mpt->mpt_port_page0.Header);
- if (rv) {
+ 0, &mpt->mpt_port_page0.Header,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
return (-1);
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "SPI Port Page 0 Header: %x %x %x %x",
- mpt->mpt_port_page0.Header.PageVersion,
- mpt->mpt_port_page0.Header.PageLength,
- mpt->mpt_port_page0.Header.PageNumber,
- mpt->mpt_port_page0.Header.PageType);
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 0 Header: %x %x %x %x\n",
+ mpt->mpt_port_page0.Header.PageVersion,
+ mpt->mpt_port_page0.Header.PageLength,
+ mpt->mpt_port_page0.Header.PageNumber,
+ mpt->mpt_port_page0.Header.PageType);
rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_SCSI_PORT, 1,
- 0, &mpt->mpt_port_page1.Header);
- if (rv) {
+ 0, &mpt->mpt_port_page1.Header,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
return (-1);
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "SPI Port Page 1 Header: %x %x %x %x",
- mpt->mpt_port_page1.Header.PageVersion,
- mpt->mpt_port_page1.Header.PageLength,
- mpt->mpt_port_page1.Header.PageNumber,
- mpt->mpt_port_page1.Header.PageType);
- }
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "SPI Port Page 1 Header: %x %x %x %x\n",
+ mpt->mpt_port_page1.Header.PageVersion,
+ mpt->mpt_port_page1.Header.PageLength,
+ mpt->mpt_port_page1.Header.PageNumber,
+ mpt->mpt_port_page1.Header.PageType);
rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_SCSI_PORT, 2,
- 0, &mpt->mpt_port_page2.Header);
- if (rv) {
+ /*PageAddress*/0, &mpt->mpt_port_page2.Header,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
return (-1);
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "SPI Port Page 2 Header: %x %x %x %x",
- mpt->mpt_port_page1.Header.PageVersion,
- mpt->mpt_port_page1.Header.PageLength,
- mpt->mpt_port_page1.Header.PageNumber,
- mpt->mpt_port_page1.Header.PageType);
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 2 Header: %x %x %x %x\n",
+ mpt->mpt_port_page1.Header.PageVersion,
+ mpt->mpt_port_page1.Header.PageLength,
+ mpt->mpt_port_page1.Header.PageNumber,
+ mpt->mpt_port_page1.Header.PageType);
for (i = 0; i < 16; i++) {
rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_SCSI_DEVICE,
- 0, i, &mpt->mpt_dev_page0[i].Header);
- if (rv) {
+ 0, i, &mpt->mpt_dev_page0[i].Header,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
return (-1);
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Target %d Device Page 0 Header: %x %x %x %x",
- i, mpt->mpt_dev_page0[i].Header.PageVersion,
- mpt->mpt_dev_page0[i].Header.PageLength,
- mpt->mpt_dev_page0[i].Header.PageNumber,
- mpt->mpt_dev_page0[i].Header.PageType);
- }
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Target %d Device Page 0 Header: %x %x %x %x\n",
+ i, mpt->mpt_dev_page0[i].Header.PageVersion,
+ mpt->mpt_dev_page0[i].Header.PageLength,
+ mpt->mpt_dev_page0[i].Header.PageNumber,
+ mpt->mpt_dev_page0[i].Header.PageType);
rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_SCSI_DEVICE,
- 1, i, &mpt->mpt_dev_page1[i].Header);
- if (rv) {
+ 1, i, &mpt->mpt_dev_page1[i].Header,
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv)
return (-1);
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Target %d Device Page 1 Header: %x %x %x %x",
- i, mpt->mpt_dev_page1[i].Header.PageVersion,
- mpt->mpt_dev_page1[i].Header.PageLength,
- mpt->mpt_dev_page1[i].Header.PageNumber,
- mpt->mpt_dev_page1[i].Header.PageType);
- }
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Target %d Device Page 1 Header: %x %x %x %x\n",
+ i, mpt->mpt_dev_page1[i].Header.PageVersion,
+ mpt->mpt_dev_page1[i].Header.PageLength,
+ mpt->mpt_dev_page1[i].Header.PageNumber,
+ mpt->mpt_dev_page1[i].Header.PageType);
}
/*
@@ -800,37 +1651,46 @@ mpt_read_config_info_spi(mpt_softc_t *mpt)
* along.
*/
- rv = mpt_read_cfg_page(mpt, 0, &mpt->mpt_port_page0.Header);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->mpt_port_page0.Header,
+ sizeof(mpt->mpt_port_page0),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
if (rv) {
- mpt_prt(mpt, "failed to read SPI Port Page 0");
- } else if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Port Page 0: Capabilities %x PhysicalInterface %x",
+ mpt_prt(mpt, "failed to read SPI Port Page 0\n");
+ } else {
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 0: Capabilities %x PhysicalInterface %x\n",
mpt->mpt_port_page0.Capabilities,
mpt->mpt_port_page0.PhysicalInterface);
}
- rv = mpt_read_cfg_page(mpt, 0, &mpt->mpt_port_page1.Header);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->mpt_port_page1.Header,
+ sizeof(mpt->mpt_port_page1),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
if (rv) {
- mpt_prt(mpt, "failed to read SPI Port Page 1");
- } else if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Port Page 1: Configuration %x OnBusTimerValue %x",
+ mpt_prt(mpt, "failed to read SPI Port Page 1\n");
+ } else {
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 1: Configuration %x OnBusTimerValue %x\n",
mpt->mpt_port_page1.Configuration,
mpt->mpt_port_page1.OnBusTimerValue);
}
- rv = mpt_read_cfg_page(mpt, 0, &mpt->mpt_port_page2.Header);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->mpt_port_page2.Header,
+ sizeof(mpt->mpt_port_page2),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
if (rv) {
- mpt_prt(mpt, "failed to read SPI Port Page 2");
- } else if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Port Page 2: Flags %x Settings %x",
+ mpt_prt(mpt, "failed to read SPI Port Page 2\n");
+ } else {
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 2: Flags %x Settings %x\n",
mpt->mpt_port_page2.PortFlags,
mpt->mpt_port_page2.PortSettings);
for (i = 0; i < 16; i++) {
- mpt_prt(mpt,
- "SPI Port Page 2 Tgt %d: timo %x SF %x Flags %x",
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Port Page 2 Tgt %d: timo %x SF %x Flags %x\n",
i, mpt->mpt_port_page2.DeviceSettings[i].Timeout,
mpt->mpt_port_page2.DeviceSettings[i].SyncFactor,
mpt->mpt_port_page2.DeviceSettings[i].DeviceFlags);
@@ -838,28 +1698,35 @@ mpt_read_config_info_spi(mpt_softc_t *mpt)
}
for (i = 0; i < 16; i++) {
- rv = mpt_read_cfg_page(mpt, i, &mpt->mpt_dev_page0[i].Header);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/i,
+ &mpt->mpt_dev_page0[i].Header,
+ sizeof(*mpt->mpt_dev_page0),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
if (rv) {
- mpt_prt(mpt, "cannot read SPI Tgt %d Device Page 0", i);
- continue;
- }
- if (mpt->verbose > 1) {
mpt_prt(mpt,
- "SPI Tgt %d Page 0: NParms %x Information %x",
- i, mpt->mpt_dev_page0[i].NegotiatedParameters,
- mpt->mpt_dev_page0[i].Information);
- }
- rv = mpt_read_cfg_page(mpt, i, &mpt->mpt_dev_page1[i].Header);
- if (rv) {
- mpt_prt(mpt, "cannot read SPI Tgt %d Device Page 1", i);
+ "cannot read SPI Tgt %d Device Page 0\n", i);
continue;
}
- if (mpt->verbose > 1) {
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Tgt %d Page 0: NParms %x Information %x\n",
+ i, mpt->mpt_dev_page0[i].NegotiatedParameters,
+ mpt->mpt_dev_page0[i].Information);
+
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/i,
+ &mpt->mpt_dev_page1[i].Header,
+ sizeof(*mpt->mpt_dev_page1),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (rv) {
mpt_prt(mpt,
- "SPI Tgt %d Page 1: RParms %x Configuration %x",
- i, mpt->mpt_dev_page1[i].RequestedParameters,
- mpt->mpt_dev_page1[i].Configuration);
+ "cannot read SPI Tgt %d Device Page 1\n", i);
+ continue;
}
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Tgt %d Page 1: RParms %x Configuration %x\n",
+ i, mpt->mpt_dev_page1[i].RequestedParameters,
+ mpt->mpt_dev_page1[i].Configuration);
}
return (0);
}
@@ -870,29 +1737,37 @@ mpt_read_config_info_spi(mpt_softc_t *mpt)
* In particular, validate SPI Port Page 1.
*/
static int
-mpt_set_initial_config_spi(mpt_softc_t *mpt)
+mpt_set_initial_config_spi(struct mpt_softc *mpt)
{
int i, pp1val = ((1 << mpt->mpt_ini_id) << 16) | mpt->mpt_ini_id;
+ int error;
mpt->mpt_disc_enable = 0xff;
mpt->mpt_tag_enable = 0;
if (mpt->mpt_port_page1.Configuration != pp1val) {
CONFIG_PAGE_SCSI_PORT_1 tmp;
+
mpt_prt(mpt,
- "SPI Port Page 1 Config value bad (%x)- should be %x",
+ "SPI Port Page 1 Config value bad (%x)- should be %x\n",
mpt->mpt_port_page1.Configuration, pp1val);
tmp = mpt->mpt_port_page1;
tmp.Configuration = pp1val;
- if (mpt_write_cfg_page(mpt, 0, &tmp.Header)) {
+ error = mpt_write_cur_cfg_page(mpt, /*PageAddress*/0,
+ &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (error)
return (-1);
- }
- if (mpt_read_cfg_page(mpt, 0, &tmp.Header)) {
+ error = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (error)
return (-1);
- }
if (tmp.Configuration != pp1val) {
mpt_prt(mpt,
- "failed to reset SPI Port Page 1 Config value");
+ "failed to reset SPI Port Page 1 Config value\n");
return (-1);
}
mpt->mpt_port_page1 = tmp;
@@ -903,24 +1778,26 @@ mpt_set_initial_config_spi(mpt_softc_t *mpt)
tmp = mpt->mpt_dev_page1[i];
tmp.RequestedParameters = 0;
tmp.Configuration = 0;
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "Set Tgt %d SPI DevicePage 1 values to %x 0 %x",
- i, tmp.RequestedParameters, tmp.Configuration);
- }
- if (mpt_write_cfg_page(mpt, i, &tmp.Header)) {
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "Set Tgt %d SPI DevicePage 1 values to %x 0 %x\n",
+ i, tmp.RequestedParameters, tmp.Configuration);
+ error = mpt_write_cur_cfg_page(mpt, /*PageAddress*/i,
+ &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (error)
return (-1);
- }
- if (mpt_read_cfg_page(mpt, i, &tmp.Header)) {
+ error = mpt_read_cur_cfg_page(mpt, /*PageAddress*/i,
+ &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (error)
return (-1);
- }
mpt->mpt_dev_page1[i] = tmp;
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Tgt %d Page 1: RParm %x Configuration %x", i,
- mpt->mpt_dev_page1[i].RequestedParameters,
- mpt->mpt_dev_page1[i].Configuration);
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Tgt %d Page 1: RParm %x Configuration %x\n", i,
+ mpt->mpt_dev_page1[i].RequestedParameters,
+ mpt->mpt_dev_page1[i].Configuration);
}
return (0);
}
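mpt_set_initial_config_spi() validates SPI Port Page 1's Configuration word against `((1 << ini_id) << 16) | ini_id`: the initiator's ID bit in the high half, and the raw ID in the low half. A worked check of that computation:

```c
#include <stdint.h>

/*
 * Expected SPI Port Page 1 Configuration for a given initiator ID:
 * bit (16 + id) set in the high half, plus the id itself in the low
 * half, matching the pp1val expression in mpt_set_initial_config_spi().
 */
static uint32_t spi_pp1_config(unsigned int ini_id)
{
	return (((1u << ini_id) << 16) | ini_id);
}
```

For the common initiator ID 7 this yields `0x00800007`; a mismatch in the controller's page triggers the rewrite-and-verify sequence shown in the patch.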
@@ -929,36 +1806,33 @@ mpt_set_initial_config_spi(mpt_softc_t *mpt)
* Enable IOC port
*/
static int
-mpt_send_port_enable(mpt_softc_t *mpt, int port)
+mpt_send_port_enable(struct mpt_softc *mpt, int port)
{
- int count;
- request_t *req;
+ request_t *req;
MSG_PORT_ENABLE *enable_req;
+ int error;
- req = mpt_get_request(mpt);
+ req = mpt_get_request(mpt, /*sleep_ok*/FALSE);
+ if (req == NULL)
+ return (-1);
enable_req = req->req_vbuf;
bzero(enable_req, sizeof *enable_req);
enable_req->Function = MPI_FUNCTION_PORT_ENABLE;
- enable_req->MsgContext = req->index | 0x80000000;
+ enable_req->MsgContext = htole32(req->index | MPT_REPLY_HANDLER_CONFIG);
enable_req->PortNumber = port;
mpt_check_doorbell(mpt);
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "enabling port %d", port);
- }
- mpt_send_cmd(mpt, req);
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "enabling port %d\n", port);
- count = 0;
- do {
- DELAY(500);
- mpt_intr(mpt);
- if (++count == 100000) {
- mpt_prt(mpt, "port enable timed out");
- return (-1);
- }
- } while (req->debug == REQ_ON_CHIP);
+ mpt_send_cmd(mpt, req);
+ error = mpt_wait_req(mpt, req, REQ_STATE_DONE, REQ_STATE_DONE,
+ /*sleep_ok*/FALSE, /*time_ms*/500);
+ if (error != 0) {
+ mpt_prt(mpt, "port enable timed out\n");
+ return (-1);
+ }
mpt_free_request(mpt, req);
return (0);
}
@@ -970,24 +1844,23 @@ mpt_send_port_enable(mpt_softc_t *mpt, int port)
* instead of the handshake register.
*/
static int
-mpt_send_event_request(mpt_softc_t *mpt, int onoff)
+mpt_send_event_request(struct mpt_softc *mpt, int onoff)
{
request_t *req;
MSG_EVENT_NOTIFY *enable_req;
- req = mpt_get_request(mpt);
+ req = mpt_get_request(mpt, /*sleep_ok*/FALSE);
enable_req = req->req_vbuf;
bzero(enable_req, sizeof *enable_req);
enable_req->Function = MPI_FUNCTION_EVENT_NOTIFICATION;
- enable_req->MsgContext = req->index | 0x80000000;
+ enable_req->MsgContext = htole32(req->index | MPT_REPLY_HANDLER_EVENTS);
enable_req->Switch = onoff;
mpt_check_doorbell(mpt);
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "%sabling async events", onoff? "en" : "dis");
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "%sabling async events\n", onoff ? "en" : "dis");
mpt_send_cmd(mpt, req);
return (0);
@@ -997,7 +1870,7 @@ mpt_send_event_request(mpt_softc_t *mpt, int onoff)
* Un-mask the interrupts on the chip.
*/
void
-mpt_enable_ints(mpt_softc_t *mpt)
+mpt_enable_ints(struct mpt_softc *mpt)
{
/* Unmask everything except doorbell int */
mpt_write(mpt, MPT_OFFSET_INTR_MASK, MPT_INTR_DB_MASK);
@@ -1007,48 +1880,270 @@ mpt_enable_ints(mpt_softc_t *mpt)
* Mask the interrupts on the chip.
*/
void
-mpt_disable_ints(mpt_softc_t *mpt)
+mpt_disable_ints(struct mpt_softc *mpt)
{
/* Mask all interrupts */
mpt_write(mpt, MPT_OFFSET_INTR_MASK,
MPT_INTR_REPLY_MASK | MPT_INTR_DB_MASK);
}
-/* (Re)Initialize the chip for use */
+static void
+mpt_sysctl_attach(struct mpt_softc *mpt)
+{
+ struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(mpt->dev);
+ struct sysctl_oid *tree = device_get_sysctl_tree(mpt->dev);
+
+ SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(tree), OID_AUTO,
+ "debug", CTLFLAG_RW, &mpt->verbose, 0,
+ "Debugging/Verbose level");
+}
+
int
-mpt_init(mpt_softc_t *mpt, u_int32_t who)
+mpt_attach(struct mpt_softc *mpt)
+{
+ int i;
+
+ for (i = 0; i < MPT_MAX_PERSONALITIES; i++) {
+ struct mpt_personality *pers;
+ int error;
+
+ pers = mpt_personalities[i];
+ if (pers == NULL)
+ continue;
+
+ if (pers->probe(mpt) == 0) {
+ error = pers->attach(mpt);
+ if (error != 0) {
+ mpt_detach(mpt);
+ return (error);
+ }
+ mpt->mpt_pers_mask |= (0x1 << pers->id);
+ pers->use_count++;
+ }
+ }
+ return (0);
+}
+
+int
+mpt_shutdown(struct mpt_softc *mpt)
+{
+ struct mpt_personality *pers;
+
+ MPT_PERS_FOREACH_REVERSE(mpt, pers)
+ pers->shutdown(mpt);
+
+ mpt_reset(mpt, /*reinit*/FALSE);
+ return (0);
+}
+
+int
+mpt_detach(struct mpt_softc *mpt)
+{
+ struct mpt_personality *pers;
+
+ MPT_PERS_FOREACH_REVERSE(mpt, pers) {
+ pers->detach(mpt);
+ mpt->mpt_pers_mask &= ~(0x1 << pers->id);
+ pers->use_count--;
+ }
+
+ return (0);
+}
+
+int
+mpt_core_load(struct mpt_personality *pers)
+{
+ int i;
+
+ /*
+ * Setup core handlers and insert the default handler
+ * into all "empty slots".
+ */
+ for (i = 0; i < MPT_NUM_REPLY_HANDLERS; i++)
+ mpt_reply_handlers[i] = mpt_default_reply_handler;
+
+ mpt_reply_handlers[MPT_CBI(MPT_REPLY_HANDLER_EVENTS)] =
+ mpt_event_reply_handler;
+ mpt_reply_handlers[MPT_CBI(MPT_REPLY_HANDLER_CONFIG)] =
+ mpt_config_reply_handler;
+ mpt_reply_handlers[MPT_CBI(MPT_REPLY_HANDLER_HANDSHAKE)] =
+ mpt_handshake_reply_handler;
+
+ return (0);
+}
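mpt_core_load() seeds every slot of the reply-handler table with a default handler and then registers the core handlers, so the interrupt path can always dispatch by index without NULL checks. A toy version of that table setup (handler indices and signatures simplified, names hypothetical):

```c
#include <stddef.h>

#define NUM_HANDLERS 8

typedef int (*reply_handler_t)(int frame);

/* Catch-all for unregistered callback indices. */
static int default_handler(int frame)
{
	(void)frame;
	return (-1);
}

static int event_handler(int frame)
{
	return (frame + 1);	/* stand-in for real event processing */
}

static reply_handler_t handlers[NUM_HANDLERS];

/* Fill every "empty slot" with the default, then install specific
 * handlers, mirroring what mpt_core_load() does. */
static void handlers_init(void)
{
	size_t i;

	for (i = 0; i < NUM_HANDLERS; i++)
		handlers[i] = default_handler;
	handlers[1] = event_handler;
}
```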
+
+/*
+ * Initialize per-instance driver data and perform
+ * initial controller configuration.
+ */
+int
+mpt_core_attach(struct mpt_softc *mpt)
{
- int try;
- MSG_IOC_FACTS_REPLY facts;
- MSG_PORT_FACTS_REPLY pfp;
- u_int32_t pptr;
int val;
+ int error;
- /* Put all request buffers (back) on the free list */
- SLIST_INIT(&mpt->request_free_list);
- for (val = 0; val < MPT_MAX_REQUESTS(mpt); val++) {
+ LIST_INIT(&mpt->ack_frames);
+
+ /* Put all request buffers on the free list */
+ TAILQ_INIT(&mpt->request_pending_list);
+ TAILQ_INIT(&mpt->request_free_list);
+ for (val = 0; val < MPT_MAX_REQUESTS(mpt); val++)
mpt_free_request(mpt, &mpt->request_pool[val]);
+
+ mpt_sysctl_attach(mpt);
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "doorbell req = %s\n",
+ mpt_ioc_diag(mpt_read(mpt, MPT_OFFSET_DOORBELL)));
+
+ error = mpt_configure_ioc(mpt);
+
+ return (error);
+}
+
+void
+mpt_core_shutdown(struct mpt_softc *mpt)
+{
+}
+
+void
+mpt_core_detach(struct mpt_softc *mpt)
+{
+}
+
+int
+mpt_core_unload(struct mpt_personality *pers)
+{
+	/* Unload is always successful. */
+ return (0);
+}
+
+#define FW_UPLOAD_REQ_SIZE \
+ (sizeof(MSG_FW_UPLOAD) - sizeof(SGE_MPI_UNION) \
+ + sizeof(FW_UPLOAD_TCSGE) + sizeof(SGE_SIMPLE32))
+
+static int
+mpt_upload_fw(struct mpt_softc *mpt)
+{
+ uint8_t fw_req_buf[FW_UPLOAD_REQ_SIZE];
+ MSG_FW_UPLOAD_REPLY fw_reply;
+ MSG_FW_UPLOAD *fw_req;
+ FW_UPLOAD_TCSGE *tsge;
+ SGE_SIMPLE32 *sge;
+ uint32_t flags;
+ int error;
+
+ memset(&fw_req_buf, 0, sizeof(fw_req_buf));
+ fw_req = (MSG_FW_UPLOAD *)fw_req_buf;
+ fw_req->ImageType = MPI_FW_UPLOAD_ITYPE_FW_IOC_MEM;
+ fw_req->Function = MPI_FUNCTION_FW_UPLOAD;
+ fw_req->MsgContext = htole32(MPT_REPLY_HANDLER_HANDSHAKE);
+ tsge = (FW_UPLOAD_TCSGE *)&fw_req->SGL;
+ tsge->DetailsLength = 12;
+ tsge->Flags = MPI_SGE_FLAGS_TRANSACTION_ELEMENT;
+ tsge->ImageSize = htole32(mpt->fw_image_size);
+ sge = (SGE_SIMPLE32 *)(tsge + 1);
+ flags = (MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER
+ | MPI_SGE_FLAGS_END_OF_LIST | MPI_SGE_FLAGS_SIMPLE_ELEMENT
+ | MPI_SGE_FLAGS_32_BIT_ADDRESSING | MPI_SGE_FLAGS_IOC_TO_HOST);
+ flags <<= MPI_SGE_FLAGS_SHIFT;
+ sge->FlagsLength = htole32(flags | mpt->fw_image_size);
+ sge->Address = htole32(mpt->fw_phys);
+ error = mpt_send_handshake_cmd(mpt, sizeof(fw_req_buf), &fw_req_buf);
+ if (error)
+ return(error);
+ error = mpt_recv_handshake_reply(mpt, sizeof(fw_reply), &fw_reply);
+ return (error);
+}
+
+static void
+mpt_diag_outsl(struct mpt_softc *mpt, uint32_t addr,
+ uint32_t *data, bus_size_t len)
+{
+ uint32_t *data_end;
+
+ data_end = data + (roundup2(len, sizeof(uint32_t)) / 4);
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_ADDR, addr);
+ while (data != data_end) {
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_DATA, *data);
+ data++;
}
+}
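`mpt_diag_outsl` rounds the byte length up to a whole number of 32-bit words before streaming them through the diagnostic data port. The word-count arithmetic, restated as a standalone helper (the `ROUNDUP2` macro below is a stand-in for the kernel's `roundup2`):

```c
#include <stddef.h>
#include <stdint.h>

/* Round x up to a multiple of y; y must be a power of 2. */
#define ROUNDUP2(x, y)	(((x) + ((y) - 1)) & ~((y) - 1))

/* Number of 32-bit words needed to cover len bytes. */
static size_t
diag_word_count(size_t len)
{
	return (ROUNDUP2(len, sizeof(uint32_t)) / 4);
}
```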
+
+static int
+mpt_download_fw(struct mpt_softc *mpt)
+{
+ MpiFwHeader_t *fw_hdr;
+ int error;
+ uint32_t ext_offset;
+ uint32_t data;
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "doorbell req = %s",
- mpt_ioc_diag(mpt_read(mpt, MPT_OFFSET_DOORBELL)));
+ mpt_prt(mpt, "Downloading Firmware - Image Size %d\n",
+ mpt->fw_image_size);
+
+ error = mpt_enable_diag_mode(mpt);
+ if (error != 0) {
+ mpt_prt(mpt, "Could not enter diagnostic mode!\n");
+ return (EIO);
}
+ mpt_write(mpt, MPT_OFFSET_DIAGNOSTIC,
+ MPI_DIAG_RW_ENABLE|MPI_DIAG_DISABLE_ARM);
+
+ fw_hdr = (MpiFwHeader_t *)mpt->fw_image;
+ mpt_diag_outsl(mpt, fw_hdr->LoadStartAddress, (uint32_t*)fw_hdr,
+ fw_hdr->ImageSize);
+
+ ext_offset = fw_hdr->NextImageHeaderOffset;
+ while (ext_offset != 0) {
+ MpiExtImageHeader_t *ext;
+
+ ext = (MpiExtImageHeader_t *)((uintptr_t)fw_hdr + ext_offset);
+ ext_offset = ext->NextImageHeaderOffset;
+
+ mpt_diag_outsl(mpt, ext->LoadStartAddress, (uint32_t*)ext,
+ ext->ImageSize);
+ }
+
+ /* Setup the address to jump to on reset. */
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_ADDR, fw_hdr->IopResetRegAddr);
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_DATA, fw_hdr->IopResetVectorValue);
+
/*
- * Start by making sure we're not at FAULT or RESET state
+ * The controller sets the "flash bad" status after attempting
+ * to auto-boot from flash. Clear the status so that the controller
+ * will continue the boot process with our newly installed firmware.
*/
- switch (mpt_rd_db(mpt) & MPT_DB_STATE_MASK) {
- case MPT_DB_STATE_RESET:
- case MPT_DB_STATE_FAULT:
- if (mpt_reset(mpt) != MPT_OK) {
- return (EIO);
- }
- default:
- break;
- }
-
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_ADDR, MPT_DIAG_MEM_CFG_BASE);
+ data = mpt_pio_read(mpt, MPT_OFFSET_DIAG_DATA) | MPT_DIAG_MEM_CFG_BADFL;
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_ADDR, MPT_DIAG_MEM_CFG_BASE);
+ mpt_pio_write(mpt, MPT_OFFSET_DIAG_DATA, data);
+
+ /*
+ * Re-enable the processor and clear the boot halt flag.
+ */
+ data = mpt_read(mpt, MPT_OFFSET_DIAGNOSTIC);
+ data &= ~(MPI_DIAG_PREVENT_IOC_BOOT|MPI_DIAG_DISABLE_ARM);
+ mpt_write(mpt, MPT_OFFSET_DIAGNOSTIC, data);
+
+ mpt_disable_diag_mode(mpt);
+ return (0);
+}
+
+/*
+ * Allocate/Initialize data structures for the controller. Called
+ * once at instance startup.
+ */
+static int
+mpt_configure_ioc(struct mpt_softc *mpt)
+{
+ MSG_PORT_FACTS_REPLY pfp;
+ MSG_IOC_FACTS_REPLY facts;
+ int try;
+ int needreset;
+
+ needreset = 0;
for (try = 0; try < MPT_MAX_TRYS; try++) {
+
/*
* No need to reset if the IOC is already in the READY state.
*
@@ -1058,48 +2153,111 @@ mpt_init(mpt_softc_t *mpt, u_int32_t who)
* first channel is ok, the second will not require a hard
* reset.
*/
- if ((mpt_rd_db(mpt) & MPT_DB_STATE_MASK) !=
+ if (needreset || (mpt_rd_db(mpt) & MPT_DB_STATE_MASK) !=
MPT_DB_STATE_READY) {
- if (mpt_reset(mpt) != MPT_OK) {
- DELAY(10000);
+ if (mpt_reset(mpt, /*reinit*/FALSE) != MPT_OK)
continue;
- }
}
+ needreset = 0;
if (mpt_get_iocfacts(mpt, &facts) != MPT_OK) {
- mpt_prt(mpt, "mpt_get_iocfacts failed");
+ mpt_prt(mpt, "mpt_get_iocfacts failed\n");
+ needreset = 1;
continue;
}
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "IOCFACTS: GlobalCredits=%d BlockSize=%u "
- "Request Frame Size %u\n", facts.GlobalCredits,
- facts.BlockSize, facts.RequestFrameSize);
+ mpt->mpt_global_credits = le16toh(facts.GlobalCredits);
+ mpt->request_frame_size = le16toh(facts.RequestFrameSize);
+ mpt_prt(mpt, "MPI Version=%d.%d.%d.%d\n",
+ le16toh(facts.MsgVersion) >> 8,
+ le16toh(facts.MsgVersion) & 0xFF,
+ le16toh(facts.HeaderVersion) >> 8,
+ le16toh(facts.HeaderVersion) & 0xFF);
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "MsgLength=%u IOCNumber = %d\n",
+ facts.MsgLength, facts.IOCNumber);
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "IOCFACTS: GlobalCredits=%d BlockSize=%u "
+ "Request Frame Size %u\n", mpt->mpt_global_credits,
+ facts.BlockSize * 8, mpt->request_frame_size * 8);
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "IOCFACTS: Num Ports %d, FWImageSize %d, "
+ "Flags=%#x\n", facts.NumberOfPorts,
+ le32toh(facts.FWImageSize), facts.Flags);
+
+ if ((facts.Flags & MPI_IOCFACTS_FLAGS_FW_DOWNLOAD_BOOT) != 0) {
+ struct mpt_map_info mi;
+ int error;
+
+ /*
+ * In some configurations, the IOC's firmware is
+ * stored in a shared piece of system NVRAM that
+ * is only accessable via the BIOS. In this
+ * case, the firmware keeps a copy of firmware in
+ * RAM until the OS driver retrieves it. Once
+ * retrieved, we are responsible for re-downloading
+ * the firmware after any hard-reset.
+ */
+ mpt->fw_image_size = le32toh(facts.FWImageSize);
+ error = mpt_dma_tag_create(mpt, mpt->parent_dmat,
+ /*alignment*/1, /*boundary*/0,
+ /*lowaddr*/BUS_SPACE_MAXADDR_32BIT,
+ /*highaddr*/BUS_SPACE_MAXADDR, /*filter*/NULL,
+ /*filterarg*/NULL, mpt->fw_image_size,
+ /*nsegments*/1, /*maxsegsz*/mpt->fw_image_size,
+ /*flags*/0, &mpt->fw_dmat);
+ if (error != 0) {
+ mpt_prt(mpt, "cannot create fw dma tag\n");
+ return (ENOMEM);
+ }
+ error = bus_dmamem_alloc(mpt->fw_dmat,
+ (void **)&mpt->fw_image, BUS_DMA_NOWAIT,
+ &mpt->fw_dmap);
+ if (error != 0) {
+ mpt_prt(mpt, "cannot allocate fw mem.\n");
+ bus_dma_tag_destroy(mpt->fw_dmat);
+ return (ENOMEM);
+ }
+ mi.mpt = mpt;
+ mi.error = 0;
+ bus_dmamap_load(mpt->fw_dmat, mpt->fw_dmap,
+ mpt->fw_image, mpt->fw_image_size, mpt_map_rquest,
+ &mi, 0);
+ mpt->fw_phys = mi.phys;
+
+ error = mpt_upload_fw(mpt);
+ if (error != 0) {
+ mpt_prt(mpt, "fw upload failed.\n");
+ bus_dmamap_unload(mpt->fw_dmat, mpt->fw_dmap);
+ bus_dmamem_free(mpt->fw_dmat, mpt->fw_image,
+ mpt->fw_dmap);
+ bus_dma_tag_destroy(mpt->fw_dmat);
+ mpt->fw_image = NULL;
+ return (EIO);
+ }
}
- mpt->mpt_global_credits = facts.GlobalCredits;
- mpt->request_frame_size = facts.RequestFrameSize;
if (mpt_get_portfacts(mpt, &pfp) != MPT_OK) {
- mpt_prt(mpt, "mpt_get_portfacts failed");
+ mpt_prt(mpt, "mpt_get_portfacts failed\n");
+ needreset = 1;
continue;
}
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "PORTFACTS: Type %x PFlags %x IID %d MaxDev %d\n",
- pfp.PortType, pfp.ProtocolFlags, pfp.PortSCSIID,
- pfp.MaxDevices);
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "PORTFACTS: Type %x PFlags %x IID %d MaxDev %d\n",
+ pfp.PortType, pfp.ProtocolFlags, pfp.PortSCSIID,
+ pfp.MaxDevices);
+ mpt->mpt_port_type = pfp.PortType;
+ mpt->mpt_proto_flags = pfp.ProtocolFlags;
if (pfp.PortType != MPI_PORTFACTS_PORTTYPE_SCSI &&
pfp.PortType != MPI_PORTFACTS_PORTTYPE_FC) {
- mpt_prt(mpt, "Unsupported Port Type (%x)",
+ mpt_prt(mpt, "Unsupported Port Type (%x)\n",
pfp.PortType);
return (ENXIO);
}
if (!(pfp.ProtocolFlags & MPI_PORTFACTS_PROTOCOL_INITIATOR)) {
- mpt_prt(mpt, "initiator role unsupported");
+ mpt_prt(mpt, "initiator role unsupported\n");
return (ENXIO);
}
if (pfp.PortType == MPI_PORTFACTS_PORTTYPE_FC) {
@@ -1109,46 +2267,20 @@ mpt_init(mpt_softc_t *mpt, u_int32_t who)
}
mpt->mpt_ini_id = pfp.PortSCSIID;
- if (mpt_send_ioc_init(mpt, who) != MPT_OK) {
- mpt_prt(mpt, "mpt_send_ioc_init failed");
- continue;
- }
-
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "mpt_send_ioc_init ok");
- }
-
- if (mpt_wait_state(mpt, MPT_DB_STATE_RUNNING) != MPT_OK) {
- mpt_prt(mpt, "IOC failed to go to run state");
- continue;
- }
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "IOC now at RUNSTATE");
+ if (mpt_enable_ioc(mpt) != 0) {
+ mpt_prt(mpt, "Unable to initialize IOC\n");
+ return (ENXIO);
}
/*
- * Give it reply buffers
+ * Read and set up initial configuration information
+ * (IOC and SPI only for now)
*
- * Do *not* except global credits.
- */
- for (val = 0, pptr = mpt->reply_phys;
- (pptr + MPT_REPLY_SIZE) < (mpt->reply_phys + PAGE_SIZE);
- pptr += MPT_REPLY_SIZE) {
- mpt_free_reply(mpt, pptr);
- if (++val == mpt->mpt_global_credits - 1)
- break;
- }
-
- /*
- * Enable asynchronous event reporting
- */
- mpt_send_event_request(mpt, 1);
-
-
- /*
- * Read set up initial configuration information
- * (SPI only for now)
+ * XXX Should figure out what "personalities" are
+ * available and defer all initialization junk to
+ * them.
*/
+ mpt_read_config_info_ioc(mpt);
if (mpt->is_fc == 0) {
if (mpt_read_config_info_spi(mpt)) {
@@ -1159,18 +2291,6 @@ mpt_init(mpt_softc_t *mpt, u_int32_t who)
}
}
- /*
- * Now enable the port
- */
- if (mpt_send_port_enable(mpt, 0) != MPT_OK) {
- mpt_prt(mpt, "failed to enable port 0");
- continue;
- }
-
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "enabled port 0");
- }
-
/* Everything worked */
break;
}
@@ -1180,10 +2300,58 @@ mpt_init(mpt_softc_t *mpt, u_int32_t who)
return (EIO);
}
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "enabling interrupts");
- }
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "enabling interrupts\n");
mpt_enable_ints(mpt);
return (0);
}
+
+static int
+mpt_enable_ioc(struct mpt_softc *mpt)
+{
+ uint32_t pptr;
+ int val;
+
+ if (mpt_send_ioc_init(mpt, MPT_DB_INIT_HOST) != MPT_OK) {
+ mpt_prt(mpt, "mpt_send_ioc_init failed\n");
+ return (EIO);
+ }
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "mpt_send_ioc_init ok\n");
+
+ if (mpt_wait_state(mpt, MPT_DB_STATE_RUNNING) != MPT_OK) {
+ mpt_prt(mpt, "IOC failed to go to run state\n");
+ return (ENXIO);
+ }
+	mpt_lprt(mpt, MPT_PRT_DEBUG, "IOC now at RUNSTATE\n");
+
+ /*
+ * Give it reply buffers
+ *
+ * Do *not* exceed global credits.
+ */
+ for (val = 0, pptr = mpt->reply_phys;
+ (pptr + MPT_REPLY_SIZE) < (mpt->reply_phys + PAGE_SIZE);
+ pptr += MPT_REPLY_SIZE) {
+ mpt_free_reply(mpt, pptr);
+ if (++val == mpt->mpt_global_credits - 1)
+ break;
+ }
+
+ /*
+ * Enable asynchronous event reporting
+ */
+ mpt_send_event_request(mpt, 1);
+
+ /*
+ * Now enable the port
+ */
+ if (mpt_send_port_enable(mpt, 0) != MPT_OK) {
+ mpt_prt(mpt, "failed to enable port 0\n");
+ return (ENXIO);
+ }
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "enabled port 0\n");
+
+ return (0);
+}
diff --git a/sys/dev/mpt/mpt.h b/sys/dev/mpt/mpt.h
index 8161d24..4d6e0ba 100644
--- a/sys/dev/mpt/mpt.h
+++ b/sys/dev/mpt/mpt.h
@@ -25,155 +25,828 @@
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
+ *
+ * Additional Copyright (c) 2002 by Matthew Jacob under same license.
*/
/*
- * Additional Copyright (c) 2002 by Matthew Jacob under same license.
+ * Copyright (c) 2004, Avid Technology, Inc. and its contributors.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _MPT_H_
#define _MPT_H_
-#include <dev/mpt/mpt_freebsd.h>
+
+/********************************* OS Includes ********************************/
+#include <sys/types.h>
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/endian.h>
+#include <sys/eventhandler.h>
+#if __FreeBSD_version < 500000
+#include <sys/kernel.h>
+#include <sys/queue.h>
+#include <sys/malloc.h>
+#else
+#include <sys/lock.h>
+#include <sys/kernel.h>
+#include <sys/queue.h>
+#include <sys/malloc.h>
+#include <sys/mutex.h>
+#include <sys/condvar.h>
+#endif
+#include <sys/proc.h>
+#include <sys/bus.h>
+#include <sys/module.h>
+
+#include <machine/bus.h>
+#include <machine/clock.h>
+#include <machine/cpu.h>
+#include <machine/resource.h>
+
+#include <sys/rman.h>
+
+#include "opt_ddb.h"
+
+/**************************** Register Definitions ****************************/
+#include <dev/mpt/mpt_reg.h>
+
+/******************************* MPI Definitions ******************************/
+#include <dev/mpt/mpilib/mpi_type.h>
+#include <dev/mpt/mpilib/mpi.h>
+#include <dev/mpt/mpilib/mpi_cnfg.h>
+#include <dev/mpt/mpilib/mpi_ioc.h>
+#include <dev/mpt/mpilib/mpi_raid.h>
+
+/* XXX For mpt_debug.c */
+#include <dev/mpt/mpilib/mpi_init.h>
+
+/****************************** Misc Definitions ******************************/
#define MPT_OK (0)
#define MPT_FAIL (0x10000)
-/* Register Offset to chip registers */
-#define MPT_OFFSET_DOORBELL 0x00
-#define MPT_OFFSET_SEQUENCE 0x04
-#define MPT_OFFSET_DIAGNOSTIC 0x08
-#define MPT_OFFSET_TEST 0x0C
-#define MPT_OFFSET_INTR_STATUS 0x30
-#define MPT_OFFSET_INTR_MASK 0x34
-#define MPT_OFFSET_REQUEST_Q 0x40
-#define MPT_OFFSET_REPLY_Q 0x44
-#define MPT_OFFSET_HOST_INDEX 0x50
-#define MPT_OFFSET_FUBAR 0x90
-
-#define MPT_DIAG_SEQUENCE_1 0x04
-#define MPT_DIAG_SEQUENCE_2 0x0b
-#define MPT_DIAG_SEQUENCE_3 0x02
-#define MPT_DIAG_SEQUENCE_4 0x07
-#define MPT_DIAG_SEQUENCE_5 0x0d
-
-/* Bit Maps for DOORBELL register */
-enum DB_STATE_BITS {
- MPT_DB_STATE_RESET = 0x00000000,
- MPT_DB_STATE_READY = 0x10000000,
- MPT_DB_STATE_RUNNING = 0x20000000,
- MPT_DB_STATE_FAULT = 0x40000000,
- MPT_DB_STATE_MASK = 0xf0000000
+#define NUM_ELEMENTS(array) (sizeof(array) / sizeof(*array))
+
+/**************************** Forward Declarations ****************************/
+struct mpt_softc;
+struct mpt_personality;
+typedef struct req_entry request_t;
+
+/************************* Personality Module Support *************************/
+typedef int mpt_load_handler_t(struct mpt_personality *);
+typedef int mpt_probe_handler_t(struct mpt_softc *);
+typedef int mpt_attach_handler_t(struct mpt_softc *);
+typedef int mpt_event_handler_t(struct mpt_softc *, request_t *,
+ MSG_EVENT_NOTIFY_REPLY *);
+typedef void mpt_reset_handler_t(struct mpt_softc *, int /*type*/);
+/* XXX Add return value and use for veto? */
+typedef void mpt_shutdown_handler_t(struct mpt_softc *);
+typedef void mpt_detach_handler_t(struct mpt_softc *);
+typedef int mpt_unload_handler_t(struct mpt_personality *);
+
+struct mpt_personality
+{
+ const char *name;
+ uint32_t id; /* Assigned identifier. */
+ u_int use_count; /* Instances using personality*/
+	mpt_load_handler_t	*load;		/* configure personality */
+#define MPT_PERS_FIRST_HANDLER(pers) (&(pers)->load)
+	mpt_probe_handler_t	*probe;		/* probe for device support */
+ mpt_attach_handler_t *attach; /* initialize device instance */
+ mpt_event_handler_t *event; /* Handle MPI event. */
+ mpt_reset_handler_t *reset; /* Re-init after reset. */
+ mpt_shutdown_handler_t *shutdown; /* Shutdown instance. */
+ mpt_detach_handler_t *detach; /* release device instance */
+ mpt_unload_handler_t *unload; /* Shutdown personality */
+#define MPT_PERS_LAST_HANDLER(pers) (&(pers)->unload)
};
-#define MPT_STATE(v) ((enum DB_STATE_BITS)((v) & MPT_DB_STATE_MASK))
+int mpt_modevent(module_t, int, void *);
+
+/* Maximum supported number of personalities. */
+#define MPT_MAX_PERSONALITIES (15)
+
+#define MPT_PERSONALITY_DEPEND(name, dep, vmin, vpref, vmax) \
+ MODULE_DEPEND(name, dep, vmin, vpref, vmax)
+
+#define DECLARE_MPT_PERSONALITY(name, order) \
+ static moduledata_t name##_mod = { \
+ #name, mpt_modevent, &name##_personality \
+ }; \
+ DECLARE_MODULE(name, name##_mod, SI_SUB_DRIVERS, order); \
+ MODULE_VERSION(name, 1); \
+ MPT_PERSONALITY_DEPEND(name, mpt_core, 1, 1, 1)
+
+/******************************* Bus DMA Support ******************************/
+/* XXX Need to update bus_dmamap_sync to take a range argument. */
+#define bus_dmamap_sync_range(dma_tag, dmamap, offset, len, op) \
+ bus_dmamap_sync(dma_tag, dmamap, op)
+
+#if __FreeBSD_version >= 501102
+#define mpt_dma_tag_create(mpt, parent_tag, alignment, boundary, \
+ lowaddr, highaddr, filter, filterarg, \
+ maxsize, nsegments, maxsegsz, flags, \
+ dma_tagp) \
+ bus_dma_tag_create(parent_tag, alignment, boundary, \
+ lowaddr, highaddr, filter, filterarg, \
+ maxsize, nsegments, maxsegsz, flags, \
+ busdma_lock_mutex, &Giant, \
+ dma_tagp)
+#else
+#define mpt_dma_tag_create(mpt, parent_tag, alignment, boundary, \
+ lowaddr, highaddr, filter, filterarg, \
+ maxsize, nsegments, maxsegsz, flags, \
+ dma_tagp) \
+ bus_dma_tag_create(parent_tag, alignment, boundary, \
+ lowaddr, highaddr, filter, filterarg, \
+ maxsize, nsegments, maxsegsz, flags, \
+ dma_tagp)
+#endif
+
+struct mpt_map_info {
+ struct mpt_softc *mpt;
+ int error;
+ uint32_t phys;
+};
+
+void mpt_map_rquest(void *, bus_dma_segment_t *, int, int);
+
+/**************************** Kernel Thread Support ***************************/
+#if __FreeBSD_version > 500005
+#define mpt_kthread_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \
+ kthread_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg)
+#else
+#define mpt_kthread_create(func, farg, proc_ptr, flags, stackpgs, fmtstr, arg) \
+ kthread_create(func, farg, proc_ptr, fmtstr, arg)
+#endif
+
+/****************************** Timer Facilities ******************************/
+#if __FreeBSD_version > 500000
+#define mpt_callout_init(c) callout_init(c, /*mpsafe*/0);
+#else
+#define mpt_callout_init(c) callout_init(c);
+#endif
-#define MPT_DB_LENGTH_SHIFT (16)
-#define MPT_DB_DATA_MASK (0xffff)
+/********************************* Endianness *********************************/
+static __inline uint64_t
+u64toh(U64 s)
+{
+ uint64_t result;
-#define MPT_DB_DB_USED 0x08000000
-#define MPT_DB_IS_IN_USE(v) (((v) & MPT_DB_DB_USED) != 0)
+ result = le32toh(s.Low);
+ result |= ((uint64_t)le32toh(s.High)) << 32;
+ return (result);
+}
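The `u64toh` inline merges the two little-endian 32-bit halves of an MPI `U64` into a host-order 64-bit value. A portable restatement of the same combining step, with the byte-swapping already done (so it stands in for the result of the kernel's `le32toh` calls):

```c
#include <stdint.h>

/* Combine two already-host-order 32-bit halves into one uint64_t,
 * low half in bits 0-31, high half in bits 32-63. */
static uint64_t
u64_from_halves(uint32_t low, uint32_t high)
{
	uint64_t result;

	result = low;
	result |= ((uint64_t)high) << 32;
	return (result);
}
```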
+/**************************** MPI Transaction State ***************************/
+typedef enum {
+ REQ_STATE_FREE = 0x00,
+ REQ_STATE_ALLOCATED = 0x01,
+ REQ_STATE_QUEUED = 0x02,
+ REQ_STATE_DONE = 0x04,
+ REQ_STATE_TIMEDOUT = 0x08,
+ REQ_STATE_NEED_WAKEUP = 0x10,
+ REQ_STATE_MASK = 0xFF
+} mpt_req_state_t;
+
+struct req_entry {
+ TAILQ_ENTRY(req_entry) links; /* Pointer to next in list */
+ mpt_req_state_t state; /* Request State Information */
+ uint16_t index; /* Index of this entry */
+ uint16_t IOCStatus; /* Completion status */
+ union ccb *ccb; /* CAM request */
+ void *req_vbuf; /* Virtual Address of Entry */
+ void *sense_vbuf; /* Virtual Address of sense data */
+ bus_addr_t req_pbuf; /* Physical Address of Entry */
+ bus_addr_t sense_pbuf; /* Physical Address of sense data */
+ bus_dmamap_t dmap; /* DMA map for data buffer */
+};
+
+/**************************** Handler Registration ****************************/
/*
- * "Whom" initializor values
+ * Global table of registered reply handlers. The
+ * handler is indicated by byte 3 of the request
+ * index submitted to the IOC. This allows the
+ * driver core to perform generic processing without
+ * any knowledge of per-personality behavior.
+ *
+ * MPT_NUM_REPLY_HANDLERS must be a power of 2
+ * to allow the easy generation of a mask.
+ *
+ * The handler offsets used by the core are hard coded
+ * allowing faster code generation when assigning a handler
+ * to a request.  All "personalities" must use
+ * the handler registration mechanism.
+ *
+ * The IOC handlers that are rarely executed are placed
+ * at the tail of the table to make it more likely that
+ * all commonly executed handlers fit in a single cache
+ * line.
*/
-#define MPT_DB_INIT_NOONE 0x00
-#define MPT_DB_INIT_BIOS 0x01
-#define MPT_DB_INIT_ROMBIOS 0x02
-#define MPT_DB_INIT_PCIPEER 0x03
-#define MPT_DB_INIT_HOST 0x04
-#define MPT_DB_INIT_MANUFACTURE 0x05
-
-#define MPT_WHO(v) \
- ((v & MPI_DOORBELL_WHO_INIT_MASK) >> MPI_DOORBELL_WHO_INIT_SHIFT)
-
-/* Function Maps for DOORBELL register */
-enum DB_FUNCTION_BITS {
- MPT_FUNC_IOC_RESET = 0x40000000,
- MPT_FUNC_UNIT_RESET = 0x41000000,
- MPT_FUNC_HANDSHAKE = 0x42000000,
- MPT_FUNC_REPLY_REMOVE = 0x43000000,
- MPT_FUNC_MASK = 0xff000000
+#define MPT_NUM_REPLY_HANDLERS (16)
+#define MPT_REPLY_HANDLER_EVENTS MPT_CBI_TO_HID(0)
+#define MPT_REPLY_HANDLER_CONFIG MPT_CBI_TO_HID(MPT_NUM_REPLY_HANDLERS-1)
+#define MPT_REPLY_HANDLER_HANDSHAKE MPT_CBI_TO_HID(MPT_NUM_REPLY_HANDLERS-2)
+typedef int mpt_reply_handler_t(struct mpt_softc *mpt, request_t *request,
+ MSG_DEFAULT_REPLY *reply_frame);
+typedef union {
+ mpt_reply_handler_t *reply_handler;
+} mpt_handler_t;
+
+typedef enum {
+ MPT_HANDLER_REPLY,
+ MPT_HANDLER_EVENT,
+ MPT_HANDLER_RESET,
+ MPT_HANDLER_SHUTDOWN
+} mpt_handler_type;
+
+struct mpt_handler_record
+{
+ LIST_ENTRY(mpt_handler_record) links;
+ mpt_handler_t handler;
+};
+
+LIST_HEAD(mpt_handler_list, mpt_handler_record);
+
+/*
+ * The handler_id is currently unused but would contain the
+ * handler ID used in the MsgContext field to allow direction
+ * of replies to the handler. Registrations that don't require
+ * a handler id can pass in NULL for the handler_id.
+ *
+ * Deregistrations for handlers without a handler id should
+ * pass in MPT_HANDLER_ID_NONE.
+ */
+#define MPT_HANDLER_ID_NONE (0xFFFFFFFF)
+int mpt_register_handler(struct mpt_softc *, mpt_handler_type,
+ mpt_handler_t, uint32_t *);
+int mpt_deregister_handler(struct mpt_softc *, mpt_handler_type,
+ mpt_handler_t, uint32_t);
+
+/******************* Per-Controller Instance Data Structures ******************/
+TAILQ_HEAD(req_queue, req_entry);
+
+/* Structure for saving proper values for modifiable PCI config registers */
+struct mpt_pci_cfg {
+ uint16_t Command;
+ uint16_t LatencyTimer_LineSize;
+ uint32_t IO_BAR;
+ uint32_t Mem0_BAR[2];
+ uint32_t Mem1_BAR[2];
+ uint32_t ROM_BAR;
+ uint8_t IntLine;
+ uint32_t PMCSR;
};
-/* Function Maps for INTERRUPT request register */
-enum _MPT_INTR_REQ_BITS {
- MPT_INTR_DB_BUSY = 0x80000000,
- MPT_INTR_REPLY_READY = 0x00000008,
- MPT_INTR_DB_READY = 0x00000001
+typedef enum {
+ MPT_RVF_NONE = 0x0,
+ MPT_RVF_ACTIVE = 0x1,
+ MPT_RVF_ANNOUNCED = 0x2,
+ MPT_RVF_UP2DATE = 0x4,
+ MPT_RVF_REFERENCED = 0x8,
+ MPT_RVF_WCE_CHANGED = 0x10
+} mpt_raid_volume_flags;
+
+struct mpt_raid_volume {
+ CONFIG_PAGE_RAID_VOL_0 *config_page;
+ MPI_RAID_VOL_INDICATOR sync_progress;
+ mpt_raid_volume_flags flags;
+ u_int quieced_disks;
};
-#define MPT_DB_IS_BUSY(v) (((v) & MPT_INTR_DB_BUSY) != 0)
-#define MPT_DB_INTR(v) (((v) & MPT_INTR_DB_READY) != 0)
-#define MPT_REPLY_INTR(v) (((v) & MPT_INTR_REPLY_READY) != 0)
+typedef enum {
+ MPT_RDF_NONE = 0x00,
+ MPT_RDF_ACTIVE = 0x01,
+ MPT_RDF_ANNOUNCED = 0x02,
+ MPT_RDF_UP2DATE = 0x04,
+ MPT_RDF_REFERENCED = 0x08,
+ MPT_RDF_QUIESCING = 0x10,
+ MPT_RDF_QUIESCED = 0x20
+} mpt_raid_disk_flags;
-/* Function Maps for INTERRUPT make register */
-enum _MPT_INTR_MASK_BITS {
- MPT_INTR_REPLY_MASK = 0x00000008,
- MPT_INTR_DB_MASK = 0x00000001
+struct mpt_raid_disk {
+ CONFIG_PAGE_RAID_PHYS_DISK_0 config_page;
+ struct mpt_raid_volume *volume;
+ u_int member_number;
+ u_int pass_thru_active;
+ mpt_raid_disk_flags flags;
};
-/* Function Maps for DIAGNOSTIC make register */
-enum _MPT_DIAG_BITS {
- MPT_DIAG_ENABLED = 0x00000080,
- MPT_DIAG_FLASHBAD = 0x00000040,
- MPT_DIAG_RESET_HIST = 0x00000020,
- MPT_DIAG_TTLI = 0x00000008,
- MPT_DIAG_RESET_IOC = 0x00000004,
- MPT_DIAG_ARM_DISABLE = 0x00000002,
- MPT_DIAG_DME = 0x00000001
+struct mpt_evtf_record {
+ MSG_EVENT_NOTIFY_REPLY reply;
+ uint32_t context;
+ LIST_ENTRY(mpt_evtf_record) links;
};
-/* Magic addresses in diagnostic memory space */
-#define MPT_DIAG_IOP_BASE (0x00000000)
-#define MPT_DIAG_IOP_SIZE (0x00002000)
-#define MPT_DIAG_GPIO (0x00030010)
-#define MPT_DIAG_IOPQ_REG_BASE0 (0x00050004)
-#define MPT_DIAG_IOPQ_REG_BASE1 (0x00051004)
-#define MPT_DIAG_MEM_CFG_BASE (0x00040000)
-#define MPT_DIAG_CTX0_BASE (0x000E0000)
-#define MPT_DIAG_CTX0_SIZE (0x00002000)
-#define MPT_DIAG_CTX1_BASE (0x001E0000)
-#define MPT_DIAG_CTX1_SIZE (0x00002000)
-#define MPT_DIAG_FLASH_BASE (0x00800000)
-#define MPT_DIAG_RAM_BASE (0x01000000)
-#define MPT_DIAG_RAM_SIZE (0x00400000)
-
-/* GPIO bit assignments */
-#define MPT_DIAG_GPIO_SCL (0x00010000)
-#define MPT_DIAG_GPIO_SDA_OUT (0x00008000)
-#define MPT_DIAG_GPIO_SDA_IN (0x00004000)
-
-#define MPT_REPLY_EMPTY (0xffffffff) /* Reply Queue Empty Symbol */
-#define MPT_CONTEXT_REPLY (0x80000000)
-#define MPT_CONTEXT_MASK (~0xE0000000)
-
-#ifdef _KERNEL
-int mpt_soft_reset(mpt_softc_t *mpt);
-void mpt_hard_reset(mpt_softc_t *mpt);
-int mpt_recv_handshake_reply(mpt_softc_t *mpt, size_t reply_len, void *reply);
-
-void mpt_send_cmd(mpt_softc_t *mpt, request_t *req);
-void mpt_free_reply(mpt_softc_t *mpt, u_int32_t ptr);
-void mpt_enable_ints(mpt_softc_t *mpt);
-void mpt_disable_ints(mpt_softc_t *mpt);
-u_int32_t mpt_pop_reply_queue(mpt_softc_t *mpt);
-int mpt_init(mpt_softc_t *mpt, u_int32_t who);
-int mpt_reset(mpt_softc_t *mpt);
-int mpt_send_handshake_cmd(mpt_softc_t *mpt, size_t len, void *cmd);
-request_t * mpt_get_request(mpt_softc_t *mpt);
-void mpt_free_request(mpt_softc_t *mpt, request_t *req);
-int mpt_intr(void *dummy);
-void mpt_check_doorbell(mpt_softc_t * mpt);
-
-int mpt_read_cfg_page(mpt_softc_t *, int, CONFIG_PAGE_HEADER *);
-int mpt_write_cfg_page(mpt_softc_t *, int, CONFIG_PAGE_HEADER *);
+LIST_HEAD(mpt_evtf_list, mpt_evtf_record);
+
+struct mpt_softc {
+ device_t dev;
+#if __FreeBSD_version < 500000
+ int mpt_splsaved;
+ uint32_t mpt_islocked;
+#else
+ struct mtx mpt_lock;
+#endif
+ uint32_t mpt_pers_mask;
+ uint32_t : 15,
+ raid_mwce_set : 1,
+ getreqwaiter : 1,
+ shutdwn_raid : 1,
+ shutdwn_recovery: 1,
+ unit : 8,
+ outofbeer : 1,
+ mpt_locksetup : 1,
+ disabled : 1,
+ is_fc : 1,
+ bus : 1; /* FC929/1030 have two busses */
+
+ u_int verbose;
+
+ /*
+ * IOC Facts
+ */
+ uint16_t mpt_global_credits;
+ uint16_t request_frame_size;
+ uint8_t mpt_max_devices;
+ uint8_t mpt_max_buses;
+
+ /*
+ * Port Facts
+	 * XXX - Add multi-port support!
+ */
+ uint16_t mpt_ini_id;
+ uint16_t mpt_port_type;
+ uint16_t mpt_proto_flags;
+
+ /*
+ * Device Configuration Information
+ */
+ union {
+ struct mpt_spi_cfg {
+ CONFIG_PAGE_SCSI_PORT_0 _port_page0;
+ CONFIG_PAGE_SCSI_PORT_1 _port_page1;
+ CONFIG_PAGE_SCSI_PORT_2 _port_page2;
+ CONFIG_PAGE_SCSI_DEVICE_0 _dev_page0[16];
+ CONFIG_PAGE_SCSI_DEVICE_1 _dev_page1[16];
+ uint16_t _tag_enable;
+ uint16_t _disc_enable;
+ uint16_t _update_params0;
+ uint16_t _update_params1;
+ } spi;
+#define mpt_port_page0 cfg.spi._port_page0
+#define mpt_port_page1 cfg.spi._port_page1
+#define mpt_port_page2 cfg.spi._port_page2
+#define mpt_dev_page0 cfg.spi._dev_page0
+#define mpt_dev_page1 cfg.spi._dev_page1
+#define mpt_tag_enable cfg.spi._tag_enable
+#define mpt_disc_enable cfg.spi._disc_enable
+#define mpt_update_params0 cfg.spi._update_params0
+#define mpt_update_params1 cfg.spi._update_params1
+ struct mpi_fc_cfg {
+ uint8_t nada;
+ } fc;
+ } cfg;
+
+ /* Controller Info */
+ CONFIG_PAGE_IOC_2 * ioc_page2;
+ CONFIG_PAGE_IOC_3 * ioc_page3;
+
+ /* Raid Data */
+ struct mpt_raid_volume* raid_volumes;
+ struct mpt_raid_disk* raid_disks;
+ u_int raid_max_volumes;
+ u_int raid_max_disks;
+ u_int raid_page0_len;
+ u_int raid_wakeup;
+ u_int raid_rescan;
+ u_int raid_resync_rate;
+ u_int raid_mwce_setting;
+ u_int raid_queue_depth;
+ struct proc *raid_thread;
+ struct callout raid_timer;
+
+ /*
+ * PCI Hardware info
+ */
+ struct resource * pci_irq; /* Interrupt map for chip */
+	void *			 ih;		/* Interrupt handle */
+ struct mpt_pci_cfg pci_cfg; /* saved PCI conf registers */
+
+ /*
+ * DMA Mapping Stuff
+ */
+ struct resource * pci_reg; /* Register map for chip */
+ int pci_mem_rid; /* Resource ID */
+ bus_space_tag_t pci_st; /* Bus tag for registers */
+ bus_space_handle_t pci_sh; /* Bus handle for registers */
+ /* PIO versions of above. */
+ int pci_pio_rid;
+ struct resource * pci_pio_reg;
+ bus_space_tag_t pci_pio_st;
+ bus_space_handle_t pci_pio_sh;
+
+ bus_dma_tag_t parent_dmat; /* DMA tag for parent PCI bus */
+ bus_dma_tag_t reply_dmat; /* DMA tag for reply memory */
+ bus_dmamap_t reply_dmap; /* DMA map for reply memory */
+ uint8_t *reply; /* KVA of reply memory */
+ bus_addr_t reply_phys; /* BusAddr of reply memory */
+
+ bus_dma_tag_t buffer_dmat; /* DMA tag for buffers */
+	bus_dma_tag_t		 request_dmat;	/* DMA tag for request memory */
+	bus_dmamap_t		 request_dmap;	/* DMA map for request memory */
+ uint8_t *request; /* KVA of Request memory */
+	bus_addr_t		 request_phys;	/* BusAddr of request memory */
+
+ u_int reset_cnt;
+
+ /*
+ * CAM && Software Management
+ */
+ request_t *request_pool;
+ struct req_queue request_free_list;
+ struct req_queue request_pending_list;
+ struct req_queue request_timeout_list;
+
+ /*
+ * Deferred frame acks due to resource shortage.
+ */
+ struct mpt_evtf_list ack_frames;
+
+
+ struct cam_sim *sim;
+ struct cam_path *path;
+
+ struct cam_sim *phydisk_sim;
+ struct cam_path *phydisk_path;
+
+ struct proc *recovery_thread;
+ request_t *tmf_req;
+
+ uint32_t sequence; /* Sequence Number */
+ uint32_t timeouts; /* timeout count */
+	uint32_t		 success;	/* successes after timeout */
+
+ /* Opposing port in a 929 or 1030, or NULL */
+ struct mpt_softc * mpt2;
+
+ /* FW Image management */
+ uint32_t fw_image_size;
+ uint8_t *fw_image;
+ bus_dma_tag_t fw_dmat; /* DMA tag for firmware image */
+ bus_dmamap_t fw_dmap; /* DMA map for firmware image */
+ bus_addr_t fw_phys; /* BusAddr of firmware image */
+
+ /* Shutdown Event Handler. */
+ eventhandler_tag eh;
+
+ TAILQ_ENTRY(mpt_softc) links;
+};
+
+/***************************** Locking Primitives *****************************/
+#if __FreeBSD_version < 500000
+#define MPT_IFLAGS INTR_TYPE_CAM
+#define MPT_LOCK(mpt) mpt_lockspl(mpt)
+#define MPT_UNLOCK(mpt) mpt_unlockspl(mpt)
+#define MPTLOCK_2_CAMLOCK MPT_UNLOCK
+#define CAMLOCK_2_MPTLOCK MPT_LOCK
+#define MPT_LOCK_SETUP(mpt)
+#define MPT_LOCK_DESTROY(mpt)
+
+static __inline void mpt_lockspl(struct mpt_softc *mpt);
+static __inline void mpt_unlockspl(struct mpt_softc *mpt);
+
+static __inline void
+mpt_lockspl(struct mpt_softc *mpt)
+{
+ int s;
+
+ s = splcam();
+ if (mpt->mpt_islocked++ == 0) {
+ mpt->mpt_splsaved = s;
+ } else {
+ splx(s);
+ panic("Recursed lock with mask: 0x%x\n", s);
+ }
+}
+
+static __inline void
+mpt_unlockspl(struct mpt_softc *mpt)
+{
+ if (mpt->mpt_islocked) {
+ if (--mpt->mpt_islocked == 0) {
+ splx(mpt->mpt_splsaved);
+ }
+ } else
+ panic("Negative lock count\n");
+}
+
+static __inline int
+mpt_sleep(struct mpt_softc *mpt, void *ident, int priority,
+ const char *wmesg, int timo)
+{
+ int saved_cnt;
+ int saved_spl;
+ int error;
+
+ KASSERT(mpt->mpt_islocked <= 1, ("Invalid lock count on tsleep"));
+ saved_cnt = mpt->mpt_islocked;
+ saved_spl = mpt->mpt_splsaved;
+ mpt->mpt_islocked = 0;
+ error = tsleep(ident, priority, wmesg, timo);
+ KASSERT(mpt->mpt_islocked == 0, ("Invalid lock count on wakeup"));
+ mpt->mpt_islocked = saved_cnt;
+ mpt->mpt_splsaved = saved_spl;
+ return (error);
+}
+
+#else
+#if LOCKING_WORKED_AS_IT_SHOULD
+#error "Shouldn't Be Here!"
+#define MPT_IFLAGS INTR_TYPE_CAM | INTR_ENTROPY | INTR_MPSAFE
+#define MPT_LOCK_SETUP(mpt) \
+ mtx_init(&mpt->mpt_lock, "mpt", NULL, MTX_DEF); \
+ mpt->mpt_locksetup = 1
+#define MPT_LOCK_DESTROY(mpt) \
+ if (mpt->mpt_locksetup) { \
+ mtx_destroy(&mpt->mpt_lock); \
+ mpt->mpt_locksetup = 0; \
+ }
+
+#define MPT_LOCK(mpt) mtx_lock(&(mpt)->mpt_lock)
+#define MPT_UNLOCK(mpt) mtx_unlock(&(mpt)->mpt_lock)
+#define MPTLOCK_2_CAMLOCK(mpt) \
+ mtx_unlock(&(mpt)->mpt_lock); mtx_lock(&Giant)
+#define CAMLOCK_2_MPTLOCK(mpt) \
+ mtx_unlock(&Giant); mtx_lock(&(mpt)->mpt_lock)
+#define mpt_sleep(mpt, ident, priority, wmesg, timo) \
+ msleep(ident, &(mpt)->mpt_lock, priority, wmesg, timo)
+#else
+#define MPT_IFLAGS INTR_TYPE_CAM | INTR_ENTROPY
+#define MPT_LOCK_SETUP(mpt) do { } while (0)
+#define MPT_LOCK_DESTROY(mpt) do { } while (0)
+#define MPT_LOCK(mpt) do { } while (0)
+#define MPT_UNLOCK(mpt) do { } while (0)
+#define MPTLOCK_2_CAMLOCK(mpt) do { } while (0)
+#define CAMLOCK_2_MPTLOCK(mpt) do { } while (0)
+#define mpt_sleep(mpt, ident, priority, wmesg, timo) \
+ tsleep(ident, priority, wmesg, timo)
+#endif
+#endif
+
+/******************************* Register Access ******************************/
+static __inline void mpt_write(struct mpt_softc *, size_t, uint32_t);
+static __inline uint32_t mpt_read(struct mpt_softc *, int);
+static __inline void mpt_pio_write(struct mpt_softc *, size_t, uint32_t);
+static __inline uint32_t mpt_pio_read(struct mpt_softc *, int);
+
+static __inline void
+mpt_write(struct mpt_softc *mpt, size_t offset, uint32_t val)
+{
+ bus_space_write_4(mpt->pci_st, mpt->pci_sh, offset, val);
+}
+
+static __inline uint32_t
+mpt_read(struct mpt_softc *mpt, int offset)
+{
+ return (bus_space_read_4(mpt->pci_st, mpt->pci_sh, offset));
+}
+
+/*
+ * Some operations (e.g. diagnostic register writes while the ARM processor
+ * is disabled) must be performed using "PCI pio" operations. On non-PCI
+ * busses, these operations likely map to normal register accesses.
+ */
+static __inline void
+mpt_pio_write(struct mpt_softc *mpt, size_t offset, uint32_t val)
+{
+ bus_space_write_4(mpt->pci_pio_st, mpt->pci_pio_sh, offset, val);
+}
+
+static __inline uint32_t
+mpt_pio_read(struct mpt_softc *mpt, int offset)
+{
+ return (bus_space_read_4(mpt->pci_pio_st, mpt->pci_pio_sh, offset));
+}
+/*********************** Reply Frame/Request Management ***********************/
+/* Max MPT Reply we are willing to accept (must be power of 2) */
+#define MPT_REPLY_SIZE 128
+
+#define MPT_MAX_REQUESTS(mpt) ((mpt)->is_fc ? 1024 : 256)
+#define MPT_REQUEST_AREA 512
+#define MPT_SENSE_SIZE 32 /* included in MPT_REQUEST_SIZE */
+#define MPT_REQ_MEM_SIZE(mpt) (MPT_MAX_REQUESTS(mpt) * MPT_REQUEST_AREA)
+
+#define MPT_CONTEXT_CB_SHIFT (16)
+#define MPT_CBI(handle) (handle >> MPT_CONTEXT_CB_SHIFT)
+#define MPT_CBI_TO_HID(cbi) ((cbi) << MPT_CONTEXT_CB_SHIFT)
+#define MPT_CONTEXT_TO_CBI(x) \
+ (((x) >> MPT_CONTEXT_CB_SHIFT) & (MPT_NUM_REPLY_HANDLERS - 1))
+#define MPT_CONTEXT_REQI_MASK 0xFFFF
+#define MPT_CONTEXT_TO_REQI(x) \
+ ((x) & MPT_CONTEXT_REQI_MASK)
+
+/*
+ * Convert a 32bit physical address returned from IOC to an
+ * offset into our reply frame memory or the kvm address needed
+ * to access the data. The returned address is only the low
+ * 32 bits, so mask our base physical address accordingly.
+ */
+#define MPT_REPLY_BADDR(x) \
+ (x << 1)
+#define MPT_REPLY_OTOV(m, i) \
+ ((void *)(&m->reply[i]))
+
+#define MPT_DUMP_REPLY_FRAME(mpt, reply_frame) \
+do { \
+ if (mpt->verbose >= MPT_PRT_DEBUG) \
+ mpt_dump_reply_frame(mpt, reply_frame); \
+} while(0)
+
+static __inline uint32_t mpt_pop_reply_queue(struct mpt_softc *mpt);
+static __inline void mpt_free_reply(struct mpt_softc *mpt, uint32_t ptr);
+
+/*
+ * Give the reply buffer back to the IOC after we have
+ * finished processing it.
+ */
+static __inline void
+mpt_free_reply(struct mpt_softc *mpt, uint32_t ptr)
+{
+ mpt_write(mpt, MPT_OFFSET_REPLY_Q, ptr);
+}
+
+/* Get a reply from the IOC */
+static __inline uint32_t
+mpt_pop_reply_queue(struct mpt_softc *mpt)
+{
+ return mpt_read(mpt, MPT_OFFSET_REPLY_Q);
+}
+
+void mpt_complete_request_chain(struct mpt_softc *mpt,
+ struct req_queue *chain, u_int iocstatus);
+/************************** Scatter Gather Management **************************/
+/*
+ * We cannot tell prior to getting IOC facts how big the IOC's request
+ * area is. Because of this we cannot tell at compile time how many
+ * simple SG elements we can fit within an IOC request prior to having
+ * to put in a chain element.
+ *
+ * Experimentally we know that the Ultra4 parts have a 96 byte request
+ * element size and the Fibre Channel units have a 144 byte request
+ * element size. Therefore, if we have 512-32 (== 480) bytes of request
+ * area to play with, we have room for between 3 and 5 request sized
+ * regions- the first of which is the command plus a simple SG list,
+ * the rest of which are chained continuation SG lists. Given that the
+ * normal request we use is 48 bytes w/o the first SG element, we can
+ * assume we have 480-48 == 432 bytes to have simple SG elements and/or
+ * chain elements. If we assume 32 bit addressing, this works out to
+ * 54 SG or chain elements. If we assume 5 chain elements, then we have
+ * a maximum of 49 separate actual SG segments.
+ */
+#define MPT_SGL_MAX 49
+
+#define MPT_RQSL(mpt) (mpt->request_frame_size << 2)
+#define MPT_NSGL(mpt) (MPT_RQSL(mpt) / sizeof (SGE_SIMPLE32))
+
+#define MPT_NSGL_FIRST(mpt) \
+ (((mpt->request_frame_size << 2) - \
+ sizeof (MSG_SCSI_IO_REQUEST) - \
+ sizeof (SGE_IO_UNION)) / sizeof (SGE_SIMPLE32))
+
+/***************************** IOC Initialization *****************************/
+int mpt_reset(struct mpt_softc *, int /*reinit*/);
+
+/****************************** Debugging/Logging *****************************/
+typedef struct mpt_decode_entry {
+ char *name;
+ u_int value;
+ u_int mask;
+} mpt_decode_entry_t;
+
+int mpt_decode_value(mpt_decode_entry_t *table, u_int num_entries,
+ const char *name, u_int value, u_int *cur_column,
+ u_int wrap_point);
+
+enum {
+ MPT_PRT_ALWAYS,
+ MPT_PRT_FATAL,
+ MPT_PRT_ERROR,
+ MPT_PRT_WARN,
+ MPT_PRT_INFO,
+ MPT_PRT_DEBUG,
+ MPT_PRT_TRACE
+};
+
+#define mpt_lprt(mpt, level, ...) \
+do { \
+ if (level <= (mpt)->verbose) \
+ mpt_prt(mpt, __VA_ARGS__); \
+} while (0)
+
+#define mpt_lprtc(mpt, level, ...) \
+do { \
+ if (level <= (mpt)->debug_level) \
+ mpt_prtc(mpt, __VA_ARGS__); \
+} while (0)
+
+void mpt_prt(struct mpt_softc *, const char *, ...);
+void mpt_prtc(struct mpt_softc *, const char *, ...);
+
+/**************************** Unclassified Routines ***************************/
+void mpt_send_cmd(struct mpt_softc *mpt, request_t *req);
+int mpt_recv_handshake_reply(struct mpt_softc *mpt,
+ size_t reply_len, void *reply);
+int mpt_wait_req(struct mpt_softc *mpt, request_t *req,
+ mpt_req_state_t state, mpt_req_state_t mask,
+ int sleep_ok, int time_ms);
+void mpt_enable_ints(struct mpt_softc *mpt);
+void mpt_disable_ints(struct mpt_softc *mpt);
+int mpt_attach(struct mpt_softc *mpt);
+int mpt_shutdown(struct mpt_softc *mpt);
+int mpt_detach(struct mpt_softc *mpt);
+int mpt_send_handshake_cmd(struct mpt_softc *mpt,
+ size_t len, void *cmd);
+request_t * mpt_get_request(struct mpt_softc *mpt, int sleep_ok);
+void mpt_free_request(struct mpt_softc *mpt, request_t *req);
+void mpt_intr(void *arg);
+void mpt_check_doorbell(struct mpt_softc *mpt);
+void mpt_dump_reply_frame(struct mpt_softc *mpt,
+ MSG_DEFAULT_REPLY *reply_frame);
+
+void mpt_set_config_regs(struct mpt_softc *);
+int mpt_issue_cfg_req(struct mpt_softc */*mpt*/, request_t */*req*/,
+ u_int /*Action*/, u_int /*PageVersion*/,
+ u_int /*PageLength*/, u_int /*PageNumber*/,
+ u_int /*PageType*/, uint32_t /*PageAddress*/,
+ bus_addr_t /*addr*/, bus_size_t/*len*/,
+ int /*sleep_ok*/, int /*timeout_ms*/);
+int mpt_read_cfg_header(struct mpt_softc *, int /*PageType*/,
+ int /*PageNumber*/,
+ uint32_t /*PageAddress*/,
+ CONFIG_PAGE_HEADER *,
+ int /*sleep_ok*/, int /*timeout_ms*/);
+int mpt_read_cfg_page(struct mpt_softc *t, int /*Action*/,
+ uint32_t /*PageAddress*/,
+ CONFIG_PAGE_HEADER *, size_t /*len*/,
+ int /*sleep_ok*/, int /*timeout_ms*/);
+int mpt_write_cfg_page(struct mpt_softc *, int /*Action*/,
+ uint32_t /*PageAddress*/,
+ CONFIG_PAGE_HEADER *, size_t /*len*/,
+ int /*sleep_ok*/, int /*timeout_ms*/);
+static __inline int
+mpt_read_cur_cfg_page(struct mpt_softc *mpt, uint32_t PageAddress,
+ CONFIG_PAGE_HEADER *hdr, size_t len,
+ int sleep_ok, int timeout_ms)
+{
+ return (mpt_read_cfg_page(mpt, MPI_CONFIG_ACTION_PAGE_READ_CURRENT,
+ PageAddress, hdr, len, sleep_ok, timeout_ms));
+}
+
+static __inline int
+mpt_write_cur_cfg_page(struct mpt_softc *mpt, uint32_t PageAddress,
+ CONFIG_PAGE_HEADER *hdr, size_t len, int sleep_ok,
+ int timeout_ms)
+{
+ return (mpt_write_cfg_page(mpt, MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT,
+ PageAddress, hdr, len, sleep_ok,
+ timeout_ms));
+}
/* mpt_debug.c functions */
void mpt_print_reply(void *vmsg);
-void mpt_print_db(u_int32_t mb);
+void mpt_print_db(uint32_t mb);
void mpt_print_config_reply(void *vmsg);
-char *mpt_ioc_diag(u_int32_t diag);
-char *mpt_req_state(enum mpt_req_state state);
-void mpt_print_scsi_io_request(MSG_SCSI_IO_REQUEST *msg);
+char *mpt_ioc_diag(uint32_t diag);
+const char *mpt_req_state(mpt_req_state_t state);
void mpt_print_config_request(void *vmsg);
void mpt_print_request(void *vmsg);
-#endif
+void mpt_print_scsi_io_request(MSG_SCSI_IO_REQUEST *msg);
#endif /* _MPT_H_ */
diff --git a/sys/dev/mpt/mpt_cam.c b/sys/dev/mpt/mpt_cam.c
new file mode 100644
index 0000000..6277109
--- /dev/null
+++ b/sys/dev/mpt/mpt_cam.c
@@ -0,0 +1,1931 @@
+/*-
+ * FreeBSD/CAM specific routines for LSI '909 FC adapters.
+ * FreeBSD Version.
+ *
+ * Copyright (c) 2000, 2001 by Greg Ansley
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice immediately at the beginning of the file, without modification,
+ * this list of conditions, and the following disclaimer.
+ * 2. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * Additional Copyright (c) 2002 by Matthew Jacob under same license.
+ */
+/*-
+ * Copyright (c) 2004, Avid Technology, Inc. and its contributors.
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <dev/mpt/mpt.h>
+#include <dev/mpt/mpt_cam.h>
+#include <dev/mpt/mpt_raid.h>
+
+#include "dev/mpt/mpilib/mpi_ioc.h" /* XXX Fix Event Handling!!! */
+#include "dev/mpt/mpilib/mpi_init.h"
+#include "dev/mpt/mpilib/mpi_targ.h"
+
+#include <sys/callout.h>
+#include <sys/kthread.h>
+
+static void mpt_poll(struct cam_sim *);
+static timeout_t mpt_timeout;
+static void mpt_action(struct cam_sim *, union ccb *);
+static int mpt_setwidth(struct mpt_softc *, int, int);
+static int mpt_setsync(struct mpt_softc *, int, int, int);
+static void mpt_calc_geometry(struct ccb_calc_geometry *ccg, int extended);
+static mpt_reply_handler_t mpt_scsi_reply_handler;
+static mpt_reply_handler_t mpt_scsi_tmf_reply_handler;
+static int mpt_scsi_reply_frame_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame);
+static int mpt_bus_reset(struct mpt_softc *, int /*sleep_ok*/);
+
+static int mpt_spawn_recovery_thread(struct mpt_softc *mpt);
+static void mpt_terminate_recovery_thread(struct mpt_softc *mpt);
+static void mpt_recovery_thread(void *arg);
+static int mpt_scsi_send_tmf(struct mpt_softc *, u_int /*type*/,
+ u_int /*flags*/, u_int /*channel*/,
+ u_int /*target*/, u_int /*lun*/,
+ u_int /*abort_ctx*/, int /*sleep_ok*/);
+static void mpt_recover_commands(struct mpt_softc *mpt);
+
+static uint32_t scsi_io_handler_id = MPT_HANDLER_ID_NONE;
+static uint32_t scsi_tmf_handler_id = MPT_HANDLER_ID_NONE;
+
+static mpt_probe_handler_t mpt_cam_probe;
+static mpt_attach_handler_t mpt_cam_attach;
+static mpt_event_handler_t mpt_cam_event;
+static mpt_reset_handler_t mpt_cam_ioc_reset;
+static mpt_detach_handler_t mpt_cam_detach;
+
+static struct mpt_personality mpt_cam_personality =
+{
+ .name = "mpt_cam",
+ .probe = mpt_cam_probe,
+ .attach = mpt_cam_attach,
+ .event = mpt_cam_event,
+ .reset = mpt_cam_ioc_reset,
+ .detach = mpt_cam_detach,
+};
+
+DECLARE_MPT_PERSONALITY(mpt_cam, SI_ORDER_SECOND);
+
+int
+mpt_cam_probe(struct mpt_softc *mpt)
+{
+ /*
+ * Only attach to nodes that support the initiator
+ * role or have RAID physical devices that need
+ * CAM pass-thru support.
+ */
+ if ((mpt->mpt_proto_flags & MPI_PORTFACTS_PROTOCOL_INITIATOR) != 0
+ || (mpt->ioc_page2 != NULL && mpt->ioc_page2->MaxPhysDisks != 0))
+ return (0);
+ return (ENODEV);
+}
+
+int
+mpt_cam_attach(struct mpt_softc *mpt)
+{
+ struct cam_devq *devq;
+ mpt_handler_t handler;
+ int maxq;
+ int error;
+
+ MPTLOCK_2_CAMLOCK(mpt);
+ TAILQ_INIT(&mpt->request_timeout_list);
+ mpt->bus = 0;
+ maxq = (mpt->mpt_global_credits < MPT_MAX_REQUESTS(mpt))?
+ mpt->mpt_global_credits : MPT_MAX_REQUESTS(mpt);
+
+ handler.reply_handler = mpt_scsi_reply_handler;
+ error = mpt_register_handler(mpt, MPT_HANDLER_REPLY, handler,
+ &scsi_io_handler_id);
+ if (error != 0)
+ goto cleanup;
+ handler.reply_handler = mpt_scsi_tmf_reply_handler;
+ error = mpt_register_handler(mpt, MPT_HANDLER_REPLY, handler,
+ &scsi_tmf_handler_id);
+ if (error != 0)
+ goto cleanup;
+
+ /*
+ * We keep one request reserved for timeout TMF requests.
+ */
+ mpt->tmf_req = mpt_get_request(mpt, /*sleep_ok*/FALSE);
+ if (mpt->tmf_req == NULL) {
+ mpt_prt(mpt, "Unable to allocate dedicated TMF request!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Mark the request as free even though not on the free list.
+ * There is only one TMF request allowed to be outstanding at
+ * a time and the TMF routines perform their own allocation
+ * tracking using the standard state flags.
+ */
+ mpt->tmf_req->state = REQ_STATE_FREE;
+ maxq--;
+
+ if (mpt_spawn_recovery_thread(mpt) != 0) {
+ mpt_prt(mpt, "Unable to spawn recovery thread!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Create the device queue for our SIM(s).
+ */
+ devq = cam_simq_alloc(maxq);
+ if (devq == NULL) {
+ mpt_prt(mpt, "Unable to allocate CAM SIMQ!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Construct our SIM entry.
+ */
+ mpt->sim = cam_sim_alloc(mpt_action, mpt_poll, "mpt", mpt,
+ mpt->unit, 1, maxq, devq);
+ if (mpt->sim == NULL) {
+ mpt_prt(mpt, "Unable to allocate CAM SIM!\n");
+ cam_simq_free(devq);
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Register exactly this bus.
+ */
+ if (xpt_bus_register(mpt->sim, 0) != CAM_SUCCESS) {
+ mpt_prt(mpt, "Bus registration Failed!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ if (xpt_create_path(&mpt->path, NULL, cam_sim_path(mpt->sim),
+ CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
+ mpt_prt(mpt, "Unable to allocate Path!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Only register a second bus for RAID physical
+ * devices if the controller supports RAID.
+ */
+ if (mpt->ioc_page2 == NULL
+ || mpt->ioc_page2->MaxPhysDisks == 0)
+ return (0);
+
+ /*
+ * Create a "bus" to export all hidden disks to CAM.
+ */
+ mpt->phydisk_sim = cam_sim_alloc(mpt_action, mpt_poll, "mpt", mpt,
+ mpt->unit, 1, maxq, devq);
+ if (mpt->phydisk_sim == NULL) {
+ mpt_prt(mpt, "Unable to allocate Physical Disk CAM SIM!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * Register exactly this bus.
+ */
+ if (xpt_bus_register(mpt->phydisk_sim, 1) != CAM_SUCCESS) {
+ mpt_prt(mpt, "Physical Disk Bus registration Failed!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ if (xpt_create_path(&mpt->phydisk_path, NULL,
+ cam_sim_path(mpt->phydisk_sim),
+ CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
+ mpt_prt(mpt, "Unable to allocate Physical Disk Path!\n");
+ error = ENOMEM;
+ goto cleanup;
+ }
+
+ CAMLOCK_2_MPTLOCK(mpt);
+ return (0);
+cleanup:
+ CAMLOCK_2_MPTLOCK(mpt);
+ mpt_cam_detach(mpt);
+ return (error);
+}
+
+void
+mpt_cam_detach(struct mpt_softc *mpt)
+{
+ mpt_handler_t handler;
+
+ mpt_terminate_recovery_thread(mpt);
+
+ handler.reply_handler = mpt_scsi_reply_handler;
+ mpt_deregister_handler(mpt, MPT_HANDLER_REPLY, handler,
+ scsi_io_handler_id);
+ handler.reply_handler = mpt_scsi_tmf_reply_handler;
+ mpt_deregister_handler(mpt, MPT_HANDLER_REPLY, handler,
+ scsi_tmf_handler_id);
+
+ if (mpt->tmf_req != NULL) {
+ mpt_free_request(mpt, mpt->tmf_req);
+ mpt->tmf_req = NULL;
+ }
+
+ if (mpt->sim != NULL) {
+ xpt_free_path(mpt->path);
+ xpt_bus_deregister(cam_sim_path(mpt->sim));
+ cam_sim_free(mpt->sim, TRUE);
+ mpt->sim = NULL;
+ }
+
+ if (mpt->phydisk_sim != NULL) {
+ xpt_free_path(mpt->phydisk_path);
+ xpt_bus_deregister(cam_sim_path(mpt->phydisk_sim));
+ cam_sim_free(mpt->phydisk_sim, TRUE);
+ mpt->phydisk_sim = NULL;
+ }
+}
+
+/* This routine is used after a system crash to dump core onto the
+ * swap device.
+ */
+static void
+mpt_poll(struct cam_sim *sim)
+{
+ struct mpt_softc *mpt;
+
+ mpt = (struct mpt_softc *)cam_sim_softc(sim);
+ MPT_LOCK(mpt);
+ mpt_intr(mpt);
+ MPT_UNLOCK(mpt);
+}
+
+/*
+ * Watchdog timeout routine for SCSI requests.
+ */
+static void
+mpt_timeout(void *arg)
+{
+ union ccb *ccb;
+ struct mpt_softc *mpt;
+ request_t *req;
+
+ ccb = (union ccb *)arg;
+#if NOTYET
+ mpt = mpt_find_softc(mpt);
+ if (mpt == NULL)
+ return;
+#else
+ mpt = ccb->ccb_h.ccb_mpt_ptr;
+#endif
+
+ MPT_LOCK(mpt);
+ req = ccb->ccb_h.ccb_req_ptr;
+ mpt_prt(mpt, "Request %p Timed out.\n", req);
+ if ((req->state & REQ_STATE_QUEUED) == REQ_STATE_QUEUED) {
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+ TAILQ_INSERT_TAIL(&mpt->request_timeout_list, req, links);
+ req->state |= REQ_STATE_TIMEDOUT;
+ mpt_wakeup_recovery_thread(mpt);
+ }
+ MPT_UNLOCK(mpt);
+}
+
+/*
+ * Callback routine from "bus_dmamap_load" or, in simple cases, called directly.
+ *
+ * Takes a list of physical segments and builds the SGL for a SCSI IO
+ * command, then forwards the command to the IOC after one last check
+ * that CAM has not aborted the transaction.
+ */
+static void
+mpt_execute_req(void *arg, bus_dma_segment_t *dm_segs, int nseg, int error)
+{
+ request_t *req;
+ union ccb *ccb;
+ struct mpt_softc *mpt;
+ MSG_SCSI_IO_REQUEST *mpt_req;
+ SGE_SIMPLE32 *se;
+
+ req = (request_t *)arg;
+ ccb = req->ccb;
+
+ mpt = ccb->ccb_h.ccb_mpt_ptr;
+ req = ccb->ccb_h.ccb_req_ptr;
+ mpt_req = req->req_vbuf;
+
+ if (error == 0 && nseg > MPT_SGL_MAX) {
+ error = EFBIG;
+ }
+
+ if (error != 0) {
+ if (error != EFBIG)
+ mpt_prt(mpt, "bus_dmamap_load returned %d\n", error);
+ if (ccb->ccb_h.status == CAM_REQ_INPROG) {
+ xpt_freeze_devq(ccb->ccb_h.path, 1);
+ ccb->ccb_h.status = CAM_DEV_QFRZN;
+ if (error == EFBIG)
+ ccb->ccb_h.status |= CAM_REQ_TOO_BIG;
+ else
+ ccb->ccb_h.status |= CAM_REQ_CMP_ERR;
+ }
+ ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
+ xpt_done(ccb);
+ CAMLOCK_2_MPTLOCK(mpt);
+ mpt_free_request(mpt, req);
+ MPTLOCK_2_CAMLOCK(mpt);
+ return;
+ }
+
+ if (nseg > MPT_NSGL_FIRST(mpt)) {
+ int i, nleft = nseg;
+ uint32_t flags;
+ bus_dmasync_op_t op;
+ SGE_CHAIN32 *ce;
+
+ mpt_req->DataLength = ccb->csio.dxfer_len;
+ flags = MPI_SGE_FLAGS_SIMPLE_ELEMENT;
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
+ flags |= MPI_SGE_FLAGS_HOST_TO_IOC;
+
+ se = (SGE_SIMPLE32 *) &mpt_req->SGL;
+ for (i = 0; i < MPT_NSGL_FIRST(mpt) - 1; i++, se++, dm_segs++) {
+ uint32_t tf;
+
+ bzero(se, sizeof (*se));
+ se->Address = dm_segs->ds_addr;
+ MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
+ tf = flags;
+ if (i == MPT_NSGL_FIRST(mpt) - 2) {
+ tf |= MPI_SGE_FLAGS_LAST_ELEMENT;
+ }
+ MPI_pSGE_SET_FLAGS(se, tf);
+ nleft -= 1;
+ }
+
+ /*
+ * Tell the IOC where to find the first chain element
+ */
+ mpt_req->ChainOffset = ((char *)se - (char *)mpt_req) >> 2;
+
+ /*
+ * Until we're finished with all segments...
+ */
+ while (nleft) {
+ int ntodo;
+ /*
+ * Construct the chain element that points to the
+ * next segment.
+ */
+ ce = (SGE_CHAIN32 *) se++;
+ if (nleft > MPT_NSGL(mpt)) {
+ ntodo = MPT_NSGL(mpt) - 1;
+ ce->NextChainOffset = (MPT_RQSL(mpt) -
+ sizeof (SGE_SIMPLE32)) >> 2;
+ ce->Length = MPT_NSGL(mpt) *
+ sizeof (SGE_SIMPLE32);
+ } else {
+ ntodo = nleft;
+ ce->NextChainOffset = 0;
+ ce->Length = ntodo * sizeof (SGE_SIMPLE32);
+ }
+ ce->Address = req->req_pbuf +
+ ((char *)se - (char *)mpt_req);
+ ce->Flags = MPI_SGE_FLAGS_CHAIN_ELEMENT;
+ for (i = 0; i < ntodo; i++, se++, dm_segs++) {
+ uint32_t tf;
+
+ bzero(se, sizeof (*se));
+ se->Address = dm_segs->ds_addr;
+ MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
+ tf = flags;
+ if (i == ntodo - 1) {
+ tf |= MPI_SGE_FLAGS_LAST_ELEMENT;
+ if (ce->NextChainOffset == 0) {
+ tf |=
+ MPI_SGE_FLAGS_END_OF_LIST |
+ MPI_SGE_FLAGS_END_OF_BUFFER;
+ }
+ }
+ MPI_pSGE_SET_FLAGS(se, tf);
+ nleft -= 1;
+ }
+
+ }
+
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
+ op = BUS_DMASYNC_PREREAD;
+ else
+ op = BUS_DMASYNC_PREWRITE;
+ if (!(ccb->ccb_h.flags & (CAM_SG_LIST_PHYS|CAM_DATA_PHYS))) {
+ bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
+ }
+ } else if (nseg > 0) {
+ int i;
+ uint32_t flags;
+ bus_dmasync_op_t op;
+
+ mpt_req->DataLength = ccb->csio.dxfer_len;
+ flags = MPI_SGE_FLAGS_SIMPLE_ELEMENT;
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
+ flags |= MPI_SGE_FLAGS_HOST_TO_IOC;
+
+ /* Copy the segments into our SG list */
+ se = (SGE_SIMPLE32 *) &mpt_req->SGL;
+ for (i = 0; i < nseg; i++, se++, dm_segs++) {
+ uint32_t tf;
+
+ bzero(se, sizeof (*se));
+ se->Address = dm_segs->ds_addr;
+ MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
+ tf = flags;
+ if (i == nseg - 1) {
+ tf |=
+ MPI_SGE_FLAGS_LAST_ELEMENT |
+ MPI_SGE_FLAGS_END_OF_BUFFER |
+ MPI_SGE_FLAGS_END_OF_LIST;
+ }
+ MPI_pSGE_SET_FLAGS(se, tf);
+ }
+
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
+ op = BUS_DMASYNC_PREREAD;
+ else
+ op = BUS_DMASYNC_PREWRITE;
+ if (!(ccb->ccb_h.flags & (CAM_SG_LIST_PHYS|CAM_DATA_PHYS))) {
+ bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
+ }
+ } else {
+ se = (SGE_SIMPLE32 *) &mpt_req->SGL;
+ /*
+ * No data to transfer so we just make a single simple SGL
+ * with zero length.
+ */
+ MPI_pSGE_SET_FLAGS(se,
+ (MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
+ MPI_SGE_FLAGS_SIMPLE_ELEMENT | MPI_SGE_FLAGS_END_OF_LIST));
+ }
+
+ /*
+ * Last time we need to check if this CCB needs to be aborted.
+ */
+ if (ccb->ccb_h.status != CAM_REQ_INPROG) {
+ if (nseg && (ccb->ccb_h.flags & CAM_SG_LIST_PHYS) == 0)
+ bus_dmamap_unload(mpt->buffer_dmat, req->dmap);
+ CAMLOCK_2_MPTLOCK(mpt);
+ mpt_free_request(mpt, req);
+ MPTLOCK_2_CAMLOCK(mpt);
+ xpt_done(ccb);
+ return;
+ }
+
+ ccb->ccb_h.status |= CAM_SIM_QUEUED;
+ CAMLOCK_2_MPTLOCK(mpt);
+ if (ccb->ccb_h.timeout != CAM_TIME_INFINITY) {
+ ccb->ccb_h.timeout_ch =
+ timeout(mpt_timeout, (caddr_t)ccb,
+ (ccb->ccb_h.timeout * hz) / 1000);
+ } else {
+ callout_handle_init(&ccb->ccb_h.timeout_ch);
+ }
+ if (mpt->verbose >= MPT_PRT_DEBUG)
+ mpt_print_scsi_io_request(mpt_req);
+ mpt_send_cmd(mpt, req);
+ MPTLOCK_2_CAMLOCK(mpt);
+}
+
+static void
+mpt_start(struct cam_sim *sim, union ccb *ccb)
+{
+ request_t *req;
+ struct mpt_softc *mpt;
+ MSG_SCSI_IO_REQUEST *mpt_req;
+ struct ccb_scsiio *csio = &ccb->csio;
+ struct ccb_hdr *ccbh = &ccb->ccb_h;
+ int raid_passthru;
+
+ /* Get the pointer for the physical adapter */
+ mpt = ccb->ccb_h.ccb_mpt_ptr;
+ raid_passthru = (sim == mpt->phydisk_sim);
+
+ CAMLOCK_2_MPTLOCK(mpt);
+ /* Get a request structure off the free list */
+ if ((req = mpt_get_request(mpt, /*sleep_ok*/FALSE)) == NULL) {
+ if (mpt->outofbeer == 0) {
+ mpt->outofbeer = 1;
+ xpt_freeze_simq(mpt->sim, 1);
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "FREEZEQ\n");
+ }
+ MPTLOCK_2_CAMLOCK(mpt);
+ ccb->ccb_h.status = CAM_REQUEUE_REQ;
+ xpt_done(ccb);
+ return;
+ }
+
+ MPTLOCK_2_CAMLOCK(mpt);
+
+#if 0
+ COWWWWW
+ if (raid_passthru) {
+ status = mpt_raid_quiesce_disk(mpt, mpt->raid_disks + ccb->ccb_h.target_id,
+ request_t *req)
+#endif
+
+ /*
+ * Link the ccb and the request structure so we can find
+ * the other knowing either the request or the ccb
+ */
+ req->ccb = ccb;
+ ccb->ccb_h.ccb_req_ptr = req;
+
+ /* Now we build the command for the IOC */
+ mpt_req = req->req_vbuf;
+ bzero(mpt_req, sizeof *mpt_req);
+
+ mpt_req->Function = MPI_FUNCTION_SCSI_IO_REQUEST;
+ if (raid_passthru)
+ mpt_req->Function = MPI_FUNCTION_RAID_SCSI_IO_PASSTHROUGH;
+
+ mpt_req->Bus = mpt->bus;
+
+ mpt_req->SenseBufferLength =
+ (csio->sense_len < MPT_SENSE_SIZE) ?
+ csio->sense_len : MPT_SENSE_SIZE;
+
+ /*
+ * We use the message context to find the request structure when we
+ * get the command completion interrupt from the IOC.
+ */
+ mpt_req->MsgContext = htole32(req->index | scsi_io_handler_id);
+
+ /* Which physical device to do the I/O on */
+ mpt_req->TargetID = ccb->ccb_h.target_id;
+ /*
+ * XXX Assumes Single level, Single byte, CAM LUN type.
+ */
+ mpt_req->LUN[1] = ccb->ccb_h.target_lun;
+
+ /* Set the direction of the transfer */
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
+ mpt_req->Control = MPI_SCSIIO_CONTROL_READ;
+ else if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
+ mpt_req->Control = MPI_SCSIIO_CONTROL_WRITE;
+ else
+ mpt_req->Control = MPI_SCSIIO_CONTROL_NODATATRANSFER;
+
+ if ((ccb->ccb_h.flags & CAM_TAG_ACTION_VALID) != 0) {
+ switch(ccb->csio.tag_action) {
+ case MSG_HEAD_OF_Q_TAG:
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_HEADOFQ;
+ break;
+ case MSG_ACA_TASK:
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_ACAQ;
+ break;
+ case MSG_ORDERED_Q_TAG:
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_ORDEREDQ;
+ break;
+ case MSG_SIMPLE_Q_TAG:
+ default:
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_SIMPLEQ;
+ break;
+ }
+ } else {
+ if (mpt->is_fc)
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_SIMPLEQ;
+ else
+ /* XXX No such thing for a target doing packetized. */
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_UNTAGGED;
+ }
+
+ if (mpt->is_fc == 0) {
+ if (ccb->ccb_h.flags & CAM_DIS_DISCONNECT) {
+ mpt_req->Control |= MPI_SCSIIO_CONTROL_NO_DISCONNECT;
+ }
+ }
+
+ /* Copy the scsi command block into place */
+ if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0)
+ bcopy(csio->cdb_io.cdb_ptr, mpt_req->CDB, csio->cdb_len);
+ else
+ bcopy(csio->cdb_io.cdb_bytes, mpt_req->CDB, csio->cdb_len);
+
+ mpt_req->CDBLength = csio->cdb_len;
+ mpt_req->DataLength = csio->dxfer_len;
+ mpt_req->SenseBufferLowAddr = req->sense_pbuf;
+
+ /*
+ * If we have any data to send with this command,
+ * map it into bus space.
+ */
+
+ if ((ccbh->flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
+ if ((ccbh->flags & CAM_SCATTER_VALID) == 0) {
+ /*
+ * We've been given a pointer to a single buffer.
+ */
+ if ((ccbh->flags & CAM_DATA_PHYS) == 0) {
+ /*
+ * Virtual address that needs to be translated
+ * into one or more physical address ranges.
+ */
+ int error;
+
+ error = bus_dmamap_load(mpt->buffer_dmat,
+ req->dmap, csio->data_ptr, csio->dxfer_len,
+ mpt_execute_req, req, 0);
+ if (error == EINPROGRESS) {
+ /*
+ * So as to maintain ordering,
+ * freeze the controller queue
+ * until our mapping is
+ * returned.
+ */
+ xpt_freeze_simq(mpt->sim, 1);
+ ccbh->status |= CAM_RELEASE_SIMQ;
+ }
+ } else {
+ /*
+ * We have been given a pointer to single
+ * physical buffer.
+ */
+ struct bus_dma_segment seg;
+ seg.ds_addr =
+ (bus_addr_t)(vm_offset_t)csio->data_ptr;
+ seg.ds_len = csio->dxfer_len;
+ mpt_execute_req(req, &seg, 1, 0);
+ }
+ } else {
+ /*
+ * We have been given a list of addresses.
+ * This case could easily be supported, but such lists
+ * are not currently generated by the CAM subsystem,
+ * so there is no point in wasting time on it now.
+ */
+ struct bus_dma_segment *segs;
+ if ((ccbh->flags & CAM_SG_LIST_PHYS) == 0) {
+ mpt_execute_req(req, NULL, 0, EFAULT);
+ } else {
+ /* Just use the segments provided */
+ segs = (struct bus_dma_segment *)csio->data_ptr;
+ mpt_execute_req(req, segs, csio->sglist_cnt,
+ (csio->sglist_cnt < MPT_SGL_MAX)?
+ 0 : EFBIG);
+ }
+ }
+ } else {
+ mpt_execute_req(req, NULL, 0, 0);
+ }
+}
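The data-mapping tail above picks one of four paths based on the CCB flags. This Python sketch mirrors that decision tree; the path labels are invented for illustration (the real code calls bus_dmamap_load() or mpt_execute_req() directly):

```python
def plan_data_mapping(has_data, scatter_valid, data_phys, sg_list_phys):
    """Return which mapping path the tail of mpt_start() takes.

    Labels are illustrative only; they are not names from the driver.
    """
    if not has_data:
        # CAM_DIR_NONE: complete the request with no segments.
        return "no-data"
    if not scatter_valid:
        # Single buffer: virtual addresses go through bus_dmamap_load()
        # (which may defer with EINPROGRESS and freeze the SIM queue);
        # a physical address becomes one ready-made segment.
        return "dmamap-load" if not data_phys else "single-phys-seg"
    # S/G lists of virtual addresses are not generated by CAM, so the
    # driver fails those with EFAULT; physical S/G lists are used as-is.
    return "phys-sg-list" if sg_list_phys else "EFAULT"
```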
+
+static int
+mpt_bus_reset(struct mpt_softc *mpt, int sleep_ok)
+{
+ int error;
+ u_int status;
+
+ error = mpt_scsi_send_tmf(mpt, MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS,
+ mpt->is_fc ? MPI_SCSITASKMGMT_MSGFLAGS_LIP_RESET_OPTION : 0,
+ /*bus*/0, /*target_id*/0, /*target_lun*/0, /*abort_ctx*/0,
+ sleep_ok);
+
+ if (error != 0) {
+ /*
+ * mpt_scsi_send_tmf hard resets on failure, so no
+ * need to do so here.
+ */
+ mpt_prt(mpt,
+ "mpt_bus_reset: mpt_scsi_send_tmf returned %d\n", error);
+ return (EIO);
+ }
+
+ /* Wait for bus reset to be processed by the IOC. */
+ error = mpt_wait_req(mpt, mpt->tmf_req, REQ_STATE_DONE,
+ REQ_STATE_DONE, sleep_ok, /*time_ms*/5000);
+
+ status = mpt->tmf_req->IOCStatus;
+ mpt->tmf_req->state = REQ_STATE_FREE;
+ if (error) {
+ mpt_prt(mpt, "mpt_bus_reset: Reset timed out. "
+ "Resetting controller.\n");
+ mpt_reset(mpt, /*reinit*/TRUE);
+ return (ETIMEDOUT);
+ } else if ((status & MPI_IOCSTATUS_MASK) != MPI_SCSI_STATUS_SUCCESS) {
+ mpt_prt(mpt, "mpt_bus_reset: TMF Status %d. "
+ "Resetting controller.\n", status);
+ mpt_reset(mpt, /*reinit*/TRUE);
+ return (EIO);
+ }
+ return (0);
+}
+
+static int
+mpt_cam_event(struct mpt_softc *mpt, request_t *req,
+ MSG_EVENT_NOTIFY_REPLY *msg)
+{
+ switch(msg->Event & 0xFF) {
+ case MPI_EVENT_UNIT_ATTENTION:
+ mpt_prt(mpt, "Bus: 0x%02x TargetID: 0x%02x\n",
+ (msg->Data[0] >> 8) & 0xff, msg->Data[0] & 0xff);
+ break;
+
+ case MPI_EVENT_IOC_BUS_RESET:
+ /* We generated a bus reset */
+ mpt_prt(mpt, "IOC Bus Reset Port: %d\n",
+ (msg->Data[0] >> 8) & 0xff);
+ xpt_async(AC_BUS_RESET, mpt->path, NULL);
+ break;
+
+ case MPI_EVENT_EXT_BUS_RESET:
+ /* Someone else generated a bus reset */
+ mpt_prt(mpt, "Ext Bus Reset\n");
+ /*
+ * These replies don't return EventData like the MPI
+ * spec says they do
+ */
+ xpt_async(AC_BUS_RESET, mpt->path, NULL);
+ break;
+
+ case MPI_EVENT_RESCAN:
+ /*
+ * In general this means a device has been added
+ * to the loop.
+ */
+ mpt_prt(mpt, "Rescan Port: %d\n", (msg->Data[0] >> 8) & 0xff);
+/* xpt_async(AC_FOUND_DEVICE, path, NULL); */
+ break;
+
+ case MPI_EVENT_LINK_STATUS_CHANGE:
+ mpt_prt(mpt, "Port %d: LinkState: %s\n",
+ (msg->Data[1] >> 8) & 0xff,
+ ((msg->Data[0] & 0xff) == 0)? "Failed" : "Active");
+ break;
+
+ case MPI_EVENT_LOOP_STATE_CHANGE:
+ switch ((msg->Data[0] >> 16) & 0xff) {
+ case 0x01:
+ mpt_prt(mpt,
+ "Port 0x%x: FC LinkEvent: LIP(%02x,%02x) "
+ "(Loop Initialization)\n",
+ (msg->Data[1] >> 8) & 0xff,
+ (msg->Data[0] >> 8) & 0xff,
+ (msg->Data[0] ) & 0xff);
+ switch ((msg->Data[0] >> 8) & 0xff) {
+ case 0xF7:
+ if ((msg->Data[0] & 0xff) == 0xF7) {
+ printf("Device needs AL_PA\n");
+ } else {
+ printf("Device %02x doesn't like "
+ "FC performance\n",
+ msg->Data[0] & 0xFF);
+ }
+ break;
+ case 0xF8:
+ if ((msg->Data[0] & 0xff) == 0xF7) {
+ printf("Device had loop failure at its "
+ "receiver prior to acquiring "
+ "AL_PA\n");
+ } else {
+ printf("Device %02x detected loop "
+ "failure at its receiver\n",
+ msg->Data[0] & 0xFF);
+ }
+ break;
+ default:
+ printf("Device %02x requests that device "
+ "%02x reset itself\n",
+ msg->Data[0] & 0xFF,
+ (msg->Data[0] >> 8) & 0xFF);
+ break;
+ }
+ break;
+ case 0x02:
+ mpt_prt(mpt, "Port 0x%x: FC LinkEvent: "
+ "LPE(%02x,%02x) (Loop Port Enable)\n",
+ (msg->Data[1] >> 8) & 0xff, /* Port */
+ (msg->Data[0] >> 8) & 0xff, /* Character 3 */
+ (msg->Data[0] ) & 0xff /* Character 4 */);
+ break;
+ case 0x03:
+ mpt_prt(mpt, "Port 0x%x: FC LinkEvent: "
+ "LPB(%02x,%02x) (Loop Port Bypass)\n",
+ (msg->Data[1] >> 8) & 0xff, /* Port */
+ (msg->Data[0] >> 8) & 0xff, /* Character 3 */
+ (msg->Data[0] ) & 0xff /* Character 4 */);
+ break;
+ default:
+ mpt_prt(mpt, "Port 0x%x: FC LinkEvent: Unknown "
+ "FC event (%02x %02x %02x)\n",
+ (msg->Data[1] >> 8) & 0xff, /* Port */
+ (msg->Data[0] >> 16) & 0xff, /* Event */
+ (msg->Data[0] >> 8) & 0xff, /* Character 3 */
+ (msg->Data[0] ) & 0xff /* Character 4 */);
+ }
+ break;
+
+ case MPI_EVENT_LOGOUT:
+ mpt_prt(mpt, "FC Logout Port: %d N_PortID: %02x\n",
+ (msg->Data[1] >> 8) & 0xff, msg->Data[0]);
+ break;
+ default:
+ return (/*handled*/0);
+ }
+ return (/*handled*/1);
+}
+
+/*
+ * Reply path for all SCSI I/O requests, called from our
+ * interrupt handler by extracting our handler index from
+ * the MsgContext field of the reply from the IOC.
+ *
+ * This routine is optimized for the common case of a
+ * completion without error. All exception handling is
+ * offloaded to non-inlined helper routines to minimize
+ * cache footprint.
+ */
+static int
+mpt_scsi_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ MSG_SCSI_IO_REQUEST *scsi_req;
+ union ccb *ccb;
+
+ scsi_req = (MSG_SCSI_IO_REQUEST *)req->req_vbuf;
+ ccb = req->ccb;
+ if (ccb == NULL) {
+ mpt_prt(mpt, "Completion without CCB. State %#x, Func %#x\n",
+ req->state, scsi_req->Function);
+ mpt_print_scsi_io_request(scsi_req);
+ return (/*free_reply*/TRUE);
+ }
+
+ untimeout(mpt_timeout, ccb, ccb->ccb_h.timeout_ch);
+
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
+ bus_dmasync_op_t op;
+
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
+ op = BUS_DMASYNC_POSTREAD;
+ else
+ op = BUS_DMASYNC_POSTWRITE;
+ bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
+ bus_dmamap_unload(mpt->buffer_dmat, req->dmap);
+ }
+
+ if (reply_frame == NULL) {
+ /*
+ * Context-only reply, completion
+ * without error status.
+ */
+ ccb->csio.resid = 0;
+ mpt_set_ccb_status(ccb, CAM_REQ_CMP);
+ ccb->csio.scsi_status = SCSI_STATUS_OK;
+ } else {
+ mpt_scsi_reply_frame_handler(mpt, req, reply_frame);
+ }
+
+ if (mpt->outofbeer) {
+ ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
+ mpt->outofbeer = 0;
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "THAWQ\n");
+ }
+ ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
+ MPTLOCK_2_CAMLOCK(mpt);
+ if (scsi_req->Function == MPI_FUNCTION_RAID_SCSI_IO_PASSTHROUGH
+ && scsi_req->CDB[0] == INQUIRY
+ && (scsi_req->CDB[1] & SI_EVPD) == 0) {
+ struct scsi_inquiry_data *inq;
+
+ /*
+ * Fake out the device type so that only the
+ * pass-thru device will attach.
+ */
+ inq = (struct scsi_inquiry_data *)ccb->csio.data_ptr;
+ inq->device &= ~0x1F;
+ inq->device |= T_NODEVICE;
+ }
+ xpt_done(ccb);
+ CAMLOCK_2_MPTLOCK(mpt);
+ if ((req->state & REQ_STATE_TIMEDOUT) == 0)
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+ else
+ TAILQ_REMOVE(&mpt->request_timeout_list, req, links);
+
+ if ((req->state & REQ_STATE_NEED_WAKEUP) == 0) {
+ mpt_free_request(mpt, req);
+ return (/*free_reply*/TRUE);
+ }
+ req->state &= ~REQ_STATE_QUEUED;
+ req->state |= REQ_STATE_DONE;
+ wakeup(req);
+ return (/*free_reply*/TRUE);
+}
+
+static int
+mpt_scsi_tmf_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ MSG_SCSI_TASK_MGMT_REPLY *tmf_reply;
+ u_int status;
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "TMF Complete: req %p, reply %p\n",
+ req, reply_frame);
+ KASSERT(req == mpt->tmf_req, ("TMF Reply not using mpt->tmf_req"));
+
+ tmf_reply = (MSG_SCSI_TASK_MGMT_REPLY *)reply_frame;
+
+ /* Record status of TMF for any waiters. */
+ req->IOCStatus = tmf_reply->IOCStatus;
+ status = le16toh(tmf_reply->IOCStatus);
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "TMF Complete: status 0x%x\n", status);
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+ if ((req->state & REQ_STATE_NEED_WAKEUP) != 0) {
+ req->state |= REQ_STATE_DONE;
+ wakeup(req);
+ } else
+ mpt->tmf_req->state = REQ_STATE_FREE;
+
+ return (/*free_reply*/TRUE);
+}
+
+/*
+ * Clean up all SCSI Initiator personality state in response
+ * to a controller reset.
+ */
+static void
+mpt_cam_ioc_reset(struct mpt_softc *mpt, int type)
+{
+ /*
+ * The pending list is already run down by
+ * the generic handler. Perform the same
+ * operation on the timed out request list.
+ */
+ mpt_complete_request_chain(mpt, &mpt->request_timeout_list,
+ MPI_IOCSTATUS_INVALID_STATE);
+
+ /*
+ * Inform the XPT that a bus reset has occurred.
+ */
+ xpt_async(AC_BUS_RESET, mpt->path, NULL);
+}
+
+/*
+ * Parse additional completion information in the reply
+ * frame for SCSI I/O requests.
+ */
+static int
+mpt_scsi_reply_frame_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ union ccb *ccb;
+ MSG_SCSI_IO_REPLY *scsi_io_reply;
+ u_int ioc_status;
+ u_int sstate;
+ u_int loginfo;
+
+ MPT_DUMP_REPLY_FRAME(mpt, reply_frame);
+ KASSERT(reply_frame->Function == MPI_FUNCTION_SCSI_IO_REQUEST
+ || reply_frame->Function == MPI_FUNCTION_RAID_SCSI_IO_PASSTHROUGH,
+ ("MPT SCSI I/O Handler called with incorrect reply type"));
+ KASSERT((reply_frame->MsgFlags & MPI_MSGFLAGS_CONTINUATION_REPLY) == 0,
+ ("MPT SCSI I/O Handler called with continuation reply"));
+
+ scsi_io_reply = (MSG_SCSI_IO_REPLY *)reply_frame;
+ ioc_status = le16toh(scsi_io_reply->IOCStatus);
+ loginfo = ioc_status & MPI_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE;
+ ioc_status &= MPI_IOCSTATUS_MASK;
+ sstate = scsi_io_reply->SCSIState;
+
+ ccb = req->ccb;
+ ccb->csio.resid =
+ ccb->csio.dxfer_len - le32toh(scsi_io_reply->TransferCount);
+
+ if ((sstate & MPI_SCSI_STATE_AUTOSENSE_VALID) != 0
+ && (ccb->ccb_h.flags & (CAM_SENSE_PHYS | CAM_SENSE_PTR)) == 0) {
+ ccb->ccb_h.status |= CAM_AUTOSNS_VALID;
+ ccb->csio.sense_resid =
+ ccb->csio.sense_len - scsi_io_reply->SenseCount;
+ bcopy(req->sense_vbuf, &ccb->csio.sense_data,
+ min(ccb->csio.sense_len, scsi_io_reply->SenseCount));
+ }
+
+ if ((sstate & MPI_SCSI_STATE_QUEUE_TAG_REJECTED) != 0) {
+ /*
+ * Tag messages rejected, but non-tagged retry
+ * was successful.
+ * XXX
+ * mpt_set_tags(mpt, devinfo, MPT_QUEUE_NONE);
+ */
+ }
+
+ switch(ioc_status) {
+ case MPI_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
+ /*
+ * XXX
+ * The Linux driver indicates that a zero
+ * transfer length with this error code
+ * signifies a CRC error.
+ *
+ * No need to swap the bytes for checking
+ * against zero.
+ */
+ if (scsi_io_reply->TransferCount == 0) {
+ mpt_set_ccb_status(ccb, CAM_UNCOR_PARITY);
+ break;
+ }
+ /* FALLTHROUGH */
+ case MPI_IOCSTATUS_SCSI_DATA_UNDERRUN:
+ case MPI_IOCSTATUS_SUCCESS:
+ case MPI_IOCSTATUS_SCSI_RECOVERED_ERROR:
+ if ((sstate & MPI_SCSI_STATE_NO_SCSI_STATUS) != 0) {
+ /*
+ * Status was never returned for this transaction.
+ */
+ mpt_set_ccb_status(ccb, CAM_UNEXP_BUSFREE);
+ } else if (scsi_io_reply->SCSIStatus != SCSI_STATUS_OK) {
+ ccb->csio.scsi_status = scsi_io_reply->SCSIStatus;
+ mpt_set_ccb_status(ccb, CAM_SCSI_STATUS_ERROR);
+ if ((sstate & MPI_SCSI_STATE_AUTOSENSE_FAILED) != 0)
+ mpt_set_ccb_status(ccb, CAM_AUTOSENSE_FAIL);
+ } else if ((sstate & MPI_SCSI_STATE_RESPONSE_INFO_VALID) != 0) {
+
+ /* XXX Handle SPI-Packet and FCP-2 response info. */
+ mpt_set_ccb_status(ccb, CAM_REQ_CMP_ERR);
+ } else
+ mpt_set_ccb_status(ccb, CAM_REQ_CMP);
+ break;
+ case MPI_IOCSTATUS_SCSI_DATA_OVERRUN:
+ mpt_set_ccb_status(ccb, CAM_DATA_RUN_ERR);
+ break;
+ case MPI_IOCSTATUS_SCSI_IO_DATA_ERROR:
+ mpt_set_ccb_status(ccb, CAM_UNCOR_PARITY);
+ break;
+ case MPI_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
+ /*
+ * Since selection timeouts and "device really not
+ * there" are grouped into this error code, report
+ * selection timeout. Selection timeouts are
+ * typically retried before giving up on the device
+ * whereas "device not there" errors are considered
+ * unretryable.
+ */
+ mpt_set_ccb_status(ccb, CAM_SEL_TIMEOUT);
+ break;
+ case MPI_IOCSTATUS_SCSI_PROTOCOL_ERROR:
+ mpt_set_ccb_status(ccb, CAM_SEQUENCE_FAIL);
+ break;
+ case MPI_IOCSTATUS_SCSI_INVALID_BUS:
+ mpt_set_ccb_status(ccb, CAM_PATH_INVALID);
+ break;
+ case MPI_IOCSTATUS_SCSI_INVALID_TARGETID:
+ mpt_set_ccb_status(ccb, CAM_TID_INVALID);
+ break;
+ case MPI_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
+ ccb->ccb_h.status = CAM_UA_TERMIO;
+ break;
+ case MPI_IOCSTATUS_INVALID_STATE:
+ /*
+ * The IOC has been reset. Emulate a bus reset.
+ */
+ /* FALLTHROUGH */
+ case MPI_IOCSTATUS_SCSI_EXT_TERMINATED:
+ ccb->ccb_h.status = CAM_SCSI_BUS_RESET;
+ break;
+ case MPI_IOCSTATUS_SCSI_TASK_TERMINATED:
+ case MPI_IOCSTATUS_SCSI_IOC_TERMINATED:
+ /*
+ * Don't clobber any timeout status that has
+ * already been set for this transaction. We
+ * want the SCSI layer to be able to differentiate
+ * between the command we aborted due to timeout
+ * and any innocent bystanders.
+ */
+ if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_INPROG)
+ break;
+ mpt_set_ccb_status(ccb, CAM_REQ_TERMIO);
+ break;
+
+ case MPI_IOCSTATUS_INSUFFICIENT_RESOURCES:
+ mpt_set_ccb_status(ccb, CAM_RESRC_UNAVAIL);
+ break;
+ case MPI_IOCSTATUS_BUSY:
+ mpt_set_ccb_status(ccb, CAM_BUSY);
+ break;
+ case MPI_IOCSTATUS_INVALID_FUNCTION:
+ case MPI_IOCSTATUS_INVALID_SGL:
+ case MPI_IOCSTATUS_INTERNAL_ERROR:
+ case MPI_IOCSTATUS_INVALID_FIELD:
+ default:
+ /*
+ * XXX Some of the above may need to kick
+ * off a recovery action.
+ */
+ ccb->ccb_h.status = CAM_UNREC_HBA_ERROR;
+ break;
+ }
+
+ if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+ mpt_freeze_ccb(ccb);
+
+ return (/*free_reply*/TRUE);
+}
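Most arms of the IOCStatus switch above are simple one-to-one translations into CAM status codes, with unknown or internal errors collapsing to an unrecoverable HBA error. A Python sketch of that lookup (the names stand in for the real MPI/CAM constants; they are labels, not the actual numeric values):

```python
# Illustrative sketch of the one-to-one IOCStatus -> CAM status cases
# handled by mpt_scsi_reply_frame_handler(). Strings stand in for the
# real MPI_IOCSTATUS_* and CAM_* constants.
MPI_TO_CAM = {
    "SCSI_DATA_OVERRUN":      "CAM_DATA_RUN_ERR",
    "SCSI_IO_DATA_ERROR":     "CAM_UNCOR_PARITY",
    "SCSI_DEVICE_NOT_THERE":  "CAM_SEL_TIMEOUT",
    "SCSI_PROTOCOL_ERROR":    "CAM_SEQUENCE_FAIL",
    "SCSI_INVALID_BUS":       "CAM_PATH_INVALID",
    "SCSI_INVALID_TARGETID":  "CAM_TID_INVALID",
    "INSUFFICIENT_RESOURCES": "CAM_RESRC_UNAVAIL",
    "BUSY":                   "CAM_BUSY",
}

def translate(ioc_status):
    # Anything unrecognized falls through to the default: arm of the
    # switch and is reported as an unrecoverable HBA error.
    return MPI_TO_CAM.get(ioc_status, "CAM_UNREC_HBA_ERROR")
```

The cases that inspect SCSIState or preserve a previously set timeout status (underrun, task-terminated, etc.) need the extra logic shown in the C code and are not captured by a flat table.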
+
+static void
+mpt_action(struct cam_sim *sim, union ccb *ccb)
+{
+ struct mpt_softc *mpt;
+ struct ccb_trans_settings *cts;
+ u_int tgt;
+ int raid_passthru;
+
+ CAM_DEBUG(ccb->ccb_h.path, CAM_DEBUG_TRACE, ("mpt_action\n"));
+
+ mpt = (struct mpt_softc *)cam_sim_softc(sim);
+ raid_passthru = (sim == mpt->phydisk_sim);
+
+ tgt = ccb->ccb_h.target_id;
+ if (raid_passthru
+ && ccb->ccb_h.func_code != XPT_PATH_INQ
+ && ccb->ccb_h.func_code != XPT_RESET_BUS) {
+ CAMLOCK_2_MPTLOCK(mpt);
+ if (mpt_map_physdisk(mpt, ccb, &tgt) != 0) {
+ ccb->ccb_h.status = CAM_DEV_NOT_THERE;
+ MPTLOCK_2_CAMLOCK(mpt);
+ xpt_done(ccb);
+ return;
+ }
+ MPTLOCK_2_CAMLOCK(mpt);
+ }
+
+ ccb->ccb_h.ccb_mpt_ptr = mpt;
+
+ switch (ccb->ccb_h.func_code) {
+ case XPT_SCSI_IO: /* Execute the requested I/O operation */
+ /*
+ * Do a couple of preliminary checks...
+ */
+ if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0) {
+ if ((ccb->ccb_h.flags & CAM_CDB_PHYS) != 0) {
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ }
+ }
+ /* Max supported CDB length is 16 bytes */
+ /* XXX Unless we implement the new 32-byte message type */
+ if (ccb->csio.cdb_len >
+ sizeof (((PTR_MSG_SCSI_IO_REQUEST)0)->CDB)) {
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ return;
+ }
+ ccb->csio.scsi_status = SCSI_STATUS_OK;
+ mpt_start(sim, ccb);
+ break;
+
+ case XPT_RESET_BUS:
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "XPT_RESET_BUS\n");
+ if (!raid_passthru) {
+ CAMLOCK_2_MPTLOCK(mpt);
+ (void)mpt_bus_reset(mpt, /*sleep_ok*/FALSE);
+ MPTLOCK_2_CAMLOCK(mpt);
+ }
+ /*
+ * mpt_bus_reset is always successful in that it
+ * will fall back to a hard reset should a bus
+ * reset attempt fail.
+ */
+ mpt_set_ccb_status(ccb, CAM_REQ_CMP);
+ xpt_done(ccb);
+ break;
+
+ case XPT_ABORT:
+ /*
+ * XXX: Need to implement
+ */
+ ccb->ccb_h.status = CAM_UA_ABORT;
+ xpt_done(ccb);
+ break;
+
+#ifdef CAM_NEW_TRAN_CODE
+#define IS_CURRENT_SETTINGS(c) (c->type == CTS_TYPE_CURRENT_SETTINGS)
+#else
+#define IS_CURRENT_SETTINGS(c) (c->flags & CCB_TRANS_CURRENT_SETTINGS)
+#endif
+#define DP_DISC_ENABLE 0x1
+#define DP_DISC_DISABL 0x2
+#define DP_DISC (DP_DISC_ENABLE|DP_DISC_DISABL)
+
+#define DP_TQING_ENABLE 0x4
+#define DP_TQING_DISABL 0x8
+#define DP_TQING (DP_TQING_ENABLE|DP_TQING_DISABL)
+
+#define DP_WIDE 0x10
+#define DP_NARROW 0x20
+#define DP_WIDTH (DP_WIDE|DP_NARROW)
+
+#define DP_SYNC 0x40
+
+ case XPT_SET_TRAN_SETTINGS: /* Nexus Settings */
+ cts = &ccb->cts;
+ if (!IS_CURRENT_SETTINGS(cts)) {
+ mpt_prt(mpt, "Attempt to set User settings\n");
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ }
+ if (mpt->is_fc == 0) {
+ uint8_t dval = 0;
+ u_int period = 0, offset = 0;
+#ifndef CAM_NEW_TRAN_CODE
+ if (cts->valid & CCB_TRANS_DISC_VALID) {
+ dval |= DP_DISC_ENABLE;
+ }
+ if (cts->valid & CCB_TRANS_TQ_VALID) {
+ dval |= DP_TQING_ENABLE;
+ }
+ if (cts->valid & CCB_TRANS_BUS_WIDTH_VALID) {
+ if (cts->bus_width)
+ dval |= DP_WIDE;
+ else
+ dval |= DP_NARROW;
+ }
+ /*
+ * Any SYNC RATE of nonzero and SYNC_OFFSET
+ * of nonzero will cause us to go to the
+ * selected (from NVRAM) maximum value for
+ * this device. At a later point, we'll
+ * allow finer control.
+ */
+ if ((cts->valid & CCB_TRANS_SYNC_RATE_VALID) &&
+ (cts->valid & CCB_TRANS_SYNC_OFFSET_VALID)) {
+ dval |= DP_SYNC;
+ period = cts->sync_period;
+ offset = cts->sync_offset;
+ }
+#else
+ struct ccb_trans_settings_scsi *scsi =
+ &cts->proto_specific.scsi;
+ struct ccb_trans_settings_spi *spi =
+ &cts->xport_specific.spi;
+
+ if ((spi->valid & CTS_SPI_VALID_DISC) != 0) {
+ if ((spi->flags & CTS_SPI_FLAGS_DISC_ENB) != 0)
+ dval |= DP_DISC_ENABLE;
+ else
+ dval |= DP_DISC_DISABL;
+ }
+
+ if ((scsi->valid & CTS_SCSI_VALID_TQ) != 0) {
+ if ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0)
+ dval |= DP_TQING_ENABLE;
+ else
+ dval |= DP_TQING_DISABL;
+ }
+
+ if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) != 0) {
+ if (spi->bus_width == MSG_EXT_WDTR_BUS_16_BIT)
+ dval |= DP_WIDE;
+ else
+ dval |= DP_NARROW;
+ }
+
+ if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) &&
+ (spi->valid & CTS_SPI_VALID_SYNC_RATE) &&
+ (spi->sync_period && spi->sync_offset)) {
+ dval |= DP_SYNC;
+ period = spi->sync_period;
+ offset = spi->sync_offset;
+ }
+#endif
+ CAMLOCK_2_MPTLOCK(mpt);
+ if (dval & DP_DISC_ENABLE) {
+ mpt->mpt_disc_enable |= (1 << tgt);
+ } else if (dval & DP_DISC_DISABL) {
+ mpt->mpt_disc_enable &= ~(1 << tgt);
+ }
+ if (dval & DP_TQING_ENABLE) {
+ mpt->mpt_tag_enable |= (1 << tgt);
+ } else if (dval & DP_TQING_DISABL) {
+ mpt->mpt_tag_enable &= ~(1 << tgt);
+ }
+ if (dval & DP_WIDTH) {
+ if (mpt_setwidth(mpt, tgt, dval & DP_WIDE)) {
+ mpt_prt(mpt, "Set width Failed!\n");
+ ccb->ccb_h.status = CAM_REQ_CMP_ERR;
+ MPTLOCK_2_CAMLOCK(mpt);
+ xpt_done(ccb);
+ break;
+ }
+ }
+ if (dval & DP_SYNC) {
+ if (mpt_setsync(mpt, tgt, period, offset)) {
+ mpt_prt(mpt, "Set sync Failed!\n");
+ ccb->ccb_h.status = CAM_REQ_CMP_ERR;
+ MPTLOCK_2_CAMLOCK(mpt);
+ xpt_done(ccb);
+ break;
+ }
+ }
+ MPTLOCK_2_CAMLOCK(mpt);
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SET tgt %d flags %x period %x off %x\n",
+ tgt, dval, period, offset);
+ }
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+
+ case XPT_GET_TRAN_SETTINGS:
+ cts = &ccb->cts;
+ if (mpt->is_fc) {
+#ifndef CAM_NEW_TRAN_CODE
+ /*
+ * a lot of normal SCSI things don't make sense.
+ */
+ cts->flags = CCB_TRANS_TAG_ENB | CCB_TRANS_DISC_ENB;
+ cts->valid = CCB_TRANS_DISC_VALID | CCB_TRANS_TQ_VALID;
+ /*
+ * How do you measure the width of a high
+ * speed serial bus? Well, in bytes.
+ *
+ * Offset and period make no sense, though, so we set
+ * (above) a 'base' transfer speed to be gigabit.
+ */
+ cts->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
+#else
+ struct ccb_trans_settings_fc *fc =
+ &cts->xport_specific.fc;
+
+ cts->protocol = PROTO_SCSI;
+ cts->protocol_version = SCSI_REV_2;
+ cts->transport = XPORT_FC;
+ cts->transport_version = 0;
+
+ fc->valid = CTS_FC_VALID_SPEED;
+ fc->bitrate = 100000; /* XXX: Need for 2Gb/s */
+ /* XXX: need a port database for each target */
+#endif
+ } else {
+#ifdef CAM_NEW_TRAN_CODE
+ struct ccb_trans_settings_scsi *scsi =
+ &cts->proto_specific.scsi;
+ struct ccb_trans_settings_spi *spi =
+ &cts->xport_specific.spi;
+#endif
+ uint8_t dval, pval, oval;
+ int rv;
+
+ /*
+ * We aren't going off of Port PAGE2 params for
+ * tagged queuing or disconnect capabilities
+ * for current settings. For goal settings,
+ * we assert all capabilities; we've had some
+ * problems with reading NVRAM data.
+ */
+ if (IS_CURRENT_SETTINGS(cts)) {
+ CONFIG_PAGE_SCSI_DEVICE_0 tmp;
+ dval = 0;
+
+ tmp = mpt->mpt_dev_page0[tgt];
+ CAMLOCK_2_MPTLOCK(mpt);
+ rv = mpt_read_cur_cfg_page(mpt, tgt,
+ &tmp.Header,
+ sizeof(tmp),
+ /*sleep_ok*/FALSE,
+ /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt,
+ "cannot get target %d DP0\n", tgt);
+ }
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Tgt %d Page 0: NParms %x "
+ "Information %x\n", tgt,
+ tmp.NegotiatedParameters,
+ tmp.Information);
+ MPTLOCK_2_CAMLOCK(mpt);
+
+ if (tmp.NegotiatedParameters &
+ MPI_SCSIDEVPAGE0_NP_WIDE)
+ dval |= DP_WIDE;
+
+ if (mpt->mpt_disc_enable & (1 << tgt)) {
+ dval |= DP_DISC_ENABLE;
+ }
+ if (mpt->mpt_tag_enable & (1 << tgt)) {
+ dval |= DP_TQING_ENABLE;
+ }
+ oval = (tmp.NegotiatedParameters >> 16) & 0xff;
+ pval = (tmp.NegotiatedParameters >> 8) & 0xff;
+ } else {
+ /*
+ * XXX: Fix wrt NVRAM someday. Attempts
+ * XXX: to read port page2 device data
+ * XXX: just returns zero in these areas.
+ */
+ dval = DP_WIDE|DP_DISC|DP_TQING;
+ oval = (mpt->mpt_port_page0.Capabilities >> 16);
+ pval = (mpt->mpt_port_page0.Capabilities >> 8);
+ }
+#ifndef CAM_NEW_TRAN_CODE
+ cts->flags &= ~(CCB_TRANS_DISC_ENB|CCB_TRANS_TAG_ENB);
+ if (dval & DP_DISC_ENABLE) {
+ cts->flags |= CCB_TRANS_DISC_ENB;
+ }
+ if (dval & DP_TQING_ENABLE) {
+ cts->flags |= CCB_TRANS_TAG_ENB;
+ }
+ if (dval & DP_WIDE) {
+ cts->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
+ } else {
+ cts->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
+ }
+ cts->valid = CCB_TRANS_BUS_WIDTH_VALID |
+ CCB_TRANS_DISC_VALID | CCB_TRANS_TQ_VALID;
+ if (oval) {
+ cts->sync_period = pval;
+ cts->sync_offset = oval;
+ cts->valid |=
+ CCB_TRANS_SYNC_RATE_VALID |
+ CCB_TRANS_SYNC_OFFSET_VALID;
+ }
+#else
+ cts->protocol = PROTO_SCSI;
+ cts->protocol_version = SCSI_REV_2;
+ cts->transport = XPORT_SPI;
+ cts->transport_version = 2;
+
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
+ if (dval & DP_DISC_ENABLE) {
+ spi->flags |= CTS_SPI_FLAGS_DISC_ENB;
+ }
+ if (dval & DP_TQING_ENABLE) {
+ scsi->flags |= CTS_SCSI_FLAGS_TAG_ENB;
+ }
+ if (oval && pval) {
+ spi->sync_offset = oval;
+ spi->sync_period = pval;
+ spi->valid |= CTS_SPI_VALID_SYNC_OFFSET;
+ spi->valid |= CTS_SPI_VALID_SYNC_RATE;
+ }
+ spi->valid |= CTS_SPI_VALID_BUS_WIDTH;
+ if (dval & DP_WIDE) {
+ spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
+ } else {
+ spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
+ }
+ if (cts->ccb_h.target_lun != CAM_LUN_WILDCARD) {
+ scsi->valid = CTS_SCSI_VALID_TQ;
+ spi->valid |= CTS_SPI_VALID_DISC;
+ } else {
+ scsi->valid = 0;
+ }
+#endif
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "GET %s tgt %d flags %x period %x offset %x\n",
+ IS_CURRENT_SETTINGS(cts)
+ ? "ACTIVE" : "NVRAM",
+ tgt, dval, pval, oval);
+ }
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+
+ case XPT_CALC_GEOMETRY:
+ {
+ struct ccb_calc_geometry *ccg;
+
+ ccg = &ccb->ccg;
+ if (ccg->block_size == 0) {
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ }
+
+ mpt_calc_geometry(ccg, /*extended*/1);
+ xpt_done(ccb);
+ break;
+ }
+ case XPT_PATH_INQ: /* Path routing inquiry */
+ {
+ struct ccb_pathinq *cpi = &ccb->cpi;
+
+ cpi->version_num = 1;
+ cpi->target_sprt = 0;
+ cpi->hba_eng_cnt = 0;
+ cpi->max_lun = 7;
+ cpi->bus_id = cam_sim_bus(sim);
+ /* XXX Report base speed more accurately for FC/SAS, etc.*/
+ if (raid_passthru) {
+ cpi->max_target = mpt->ioc_page2->MaxPhysDisks;
+ cpi->hba_misc = PIM_NOBUSRESET;
+ cpi->initiator_id = cpi->max_target + 1;
+ cpi->hba_inquiry = PI_TAG_ABLE;
+ if (mpt->is_fc) {
+ cpi->base_transfer_speed = 100000;
+ } else {
+ cpi->base_transfer_speed = 3300;
+ cpi->hba_inquiry |=
+ PI_SDTR_ABLE|PI_TAG_ABLE|PI_WIDE_16;
+ }
+ } else if (mpt->is_fc) {
+ cpi->max_target = 255;
+ cpi->hba_misc = PIM_NOBUSRESET;
+ cpi->initiator_id = cpi->max_target + 1;
+ cpi->base_transfer_speed = 100000;
+ cpi->hba_inquiry = PI_TAG_ABLE;
+ } else {
+ cpi->initiator_id = mpt->mpt_ini_id;
+ cpi->base_transfer_speed = 3300;
+ cpi->hba_inquiry = PI_SDTR_ABLE|PI_TAG_ABLE|PI_WIDE_16;
+ cpi->hba_misc = 0;
+ cpi->max_target = 15;
+ }
+
+ strncpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN);
+ strncpy(cpi->hba_vid, "LSI", HBA_IDLEN);
+ strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
+ cpi->unit_number = cam_sim_unit(sim);
+ cpi->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ }
+ default:
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ }
+}
+
+static int
+mpt_setwidth(struct mpt_softc *mpt, int tgt, int onoff)
+{
+ CONFIG_PAGE_SCSI_DEVICE_1 tmp;
+ int rv;
+
+ tmp = mpt->mpt_dev_page1[tgt];
+ if (onoff) {
+ tmp.RequestedParameters |= MPI_SCSIDEVPAGE1_RP_WIDE;
+ } else {
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_WIDE;
+ }
+ rv = mpt_write_cur_cfg_page(mpt, tgt, &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_setwidth: write cur page failed\n");
+ return (-1);
+ }
+ rv = mpt_read_cur_cfg_page(mpt, tgt, &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_setwidth: read cur page failed\n");
+ return (-1);
+ }
+ mpt->mpt_dev_page1[tgt] = tmp;
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Target %d Page 1: RequestedParameters %x Config %x\n",
+ tgt, mpt->mpt_dev_page1[tgt].RequestedParameters,
+ mpt->mpt_dev_page1[tgt].Configuration);
+ return (0);
+}
+
+static int
+mpt_setsync(struct mpt_softc *mpt, int tgt, int period, int offset)
+{
+ CONFIG_PAGE_SCSI_DEVICE_1 tmp;
+ int rv;
+
+ tmp = mpt->mpt_dev_page1[tgt];
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_MIN_SYNC_PERIOD_MASK;
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_MAX_SYNC_OFFSET_MASK;
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_DT;
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_QAS;
+ tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_IU;
+ /*
+ * XXX: For now, we're ignoring specific settings
+ */
+ if (period && offset) {
+ int factor, offset, np;
+ factor = (mpt->mpt_port_page0.Capabilities >> 8) & 0xff;
+ offset = (mpt->mpt_port_page0.Capabilities >> 16) & 0xff;
+ np = 0;
+ if (factor < 0x9) {
+ np |= MPI_SCSIDEVPAGE1_RP_QAS;
+ np |= MPI_SCSIDEVPAGE1_RP_IU;
+ }
+ if (factor < 0xa) {
+ np |= MPI_SCSIDEVPAGE1_RP_DT;
+ }
+ np |= (factor << 8) | (offset << 16);
+ tmp.RequestedParameters |= np;
+ }
+ rv = mpt_write_cur_cfg_page(mpt, tgt, &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_setsync: write cur page failed\n");
+ return (-1);
+ }
+ rv = mpt_read_cur_cfg_page(mpt, tgt, &tmp.Header, sizeof(tmp),
+ /*sleep_ok*/FALSE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_setsync: read cur page failed\n");
+ return (-1);
+ }
+ mpt->mpt_dev_page1[tgt] = tmp;
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "SPI Target %d Page 1: RParams %x Config %x\n",
+ tgt, mpt->mpt_dev_page1[tgt].RequestedParameters,
+ mpt->mpt_dev_page1[tgt].Configuration);
+ return (0);
+}
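The flag logic in mpt_setsync() above enables QAS and IU for period factors below 0x9, DT for factors below 0xa, and packs the factor and offset into bits 8-15 and 16-23 of RequestedParameters. A Python sketch of that encoding (the flag bit positions here are placeholders; the authoritative MPI_SCSIDEVPAGE1_RP_* values live in the MPI headers):

```python
RP_IU  = 1 << 0   # placeholder bit positions; the real
RP_DT  = 1 << 1   # MPI_SCSIDEVPAGE1_RP_* constants come
RP_QAS = 1 << 2   # from the MPI headers

def requested_parameters(factor, offset):
    """Pack sync factor/offset and protocol options as mpt_setsync() does."""
    np = 0
    if factor < 0x9:          # Ultra320-class timing: QAS and IU allowed
        np |= RP_QAS | RP_IU
    if factor < 0xa:          # Ultra160 or faster: DT clocking
        np |= RP_DT
    # Factor occupies bits 8-15, offset bits 16-23.
    return np | (factor << 8) | (offset << 16)
```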
+
+static void
+mpt_calc_geometry(struct ccb_calc_geometry *ccg, int extended)
+{
+#if __FreeBSD_version >= 500000
+ cam_calc_geometry(ccg, extended);
+#else
+ uint32_t size_mb;
+ uint32_t secs_per_cylinder;
+
+ size_mb = ccg->volume_size / ((1024L * 1024L) / ccg->block_size);
+ if (size_mb > 1024 && extended) {
+ ccg->heads = 255;
+ ccg->secs_per_track = 63;
+ } else {
+ ccg->heads = 64;
+ ccg->secs_per_track = 32;
+ }
+ secs_per_cylinder = ccg->heads * ccg->secs_per_track;
+ ccg->cylinders = ccg->volume_size / secs_per_cylinder;
+ ccg->ccb_h.status = CAM_REQ_CMP;
+#endif
+}
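On pre-5.x systems without cam_calc_geometry(), the fallback above synthesizes a CHS geometry: 255 heads / 63 sectors per track for extended translation of volumes over 1024 MB, otherwise 64 / 32, with cylinders derived from the remaining capacity. A Python sketch of that arithmetic:

```python
def calc_geometry(volume_size, block_size, extended=True):
    """Mirror the pre-FreeBSD-5 fallback in mpt_calc_geometry().

    volume_size is in blocks of block_size bytes; returns
    (cylinders, heads, sectors_per_track).
    """
    size_mb = volume_size // ((1024 * 1024) // block_size)
    if size_mb > 1024 and extended:
        heads, secs_per_track = 255, 63   # extended translation
    else:
        heads, secs_per_track = 64, 32
    cylinders = volume_size // (heads * secs_per_track)
    return cylinders, heads, secs_per_track
```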
+
+/****************************** Timeout Recovery ******************************/
+static int
+mpt_spawn_recovery_thread(struct mpt_softc *mpt)
+{
+ int error;
+
+ error = mpt_kthread_create(mpt_recovery_thread, mpt,
+ &mpt->recovery_thread, /*flags*/0,
+ /*altstack*/0, "mpt_recovery%d", mpt->unit);
+ return (error);
+}
+
+/*
+ * Lock is not held on entry.
+ */
+static void
+mpt_terminate_recovery_thread(struct mpt_softc *mpt)
+{
+
+ MPT_LOCK(mpt);
+ if (mpt->recovery_thread == NULL) {
+ MPT_UNLOCK(mpt);
+ return;
+ }
+ mpt->shutdwn_recovery = 1;
+ wakeup(mpt);
+ /*
+ * Sleep on a slightly different location
+ * for this interlock just for added safety.
+ */
+ mpt_sleep(mpt, &mpt->recovery_thread, PUSER, "thtrm", 0);
+ MPT_UNLOCK(mpt);
+}
+
+static void
+mpt_recovery_thread(void *arg)
+{
+ struct mpt_softc *mpt;
+
+#if __FreeBSD_version >= 500000
+ mtx_lock(&Giant);
+#endif
+ mpt = (struct mpt_softc *)arg;
+ MPT_LOCK(mpt);
+ for (;;) {
+
+ if (TAILQ_EMPTY(&mpt->request_timeout_list) != 0
+ && mpt->shutdwn_recovery == 0)
+ mpt_sleep(mpt, mpt, PUSER, "idle", 0);
+
+ if (mpt->shutdwn_recovery != 0)
+ break;
+
+ MPT_UNLOCK(mpt);
+ mpt_recover_commands(mpt);
+ MPT_LOCK(mpt);
+ }
+ mpt->recovery_thread = NULL;
+ wakeup(&mpt->recovery_thread);
+ MPT_UNLOCK(mpt);
+#if __FreeBSD_version >= 500000
+ mtx_unlock(&Giant);
+#endif
+ kthread_exit(0);
+}
+
+static int
+mpt_scsi_send_tmf(struct mpt_softc *mpt, u_int type,
+ u_int flags, u_int channel, u_int target, u_int lun,
+ u_int abort_ctx, int sleep_ok)
+{
+ MSG_SCSI_TASK_MGMT *tmf_req;
+ int error;
+
+ /*
+ * Wait for any current TMF request to complete.
+ * We're only allowed to issue one TMF at a time.
+ */
+ error = mpt_wait_req(mpt, mpt->tmf_req, REQ_STATE_FREE, REQ_STATE_MASK,
+ sleep_ok, MPT_TMF_MAX_TIMEOUT);
+ if (error != 0) {
+ mpt_reset(mpt, /*reinit*/TRUE);
+ return (ETIMEDOUT);
+ }
+
+ mpt->tmf_req->state = REQ_STATE_ALLOCATED|REQ_STATE_QUEUED;
+ TAILQ_INSERT_HEAD(&mpt->request_pending_list, mpt->tmf_req, links);
+
+ tmf_req = (MSG_SCSI_TASK_MGMT *)mpt->tmf_req->req_vbuf;
+ bzero(tmf_req, sizeof(*tmf_req));
+ tmf_req->TargetID = target;
+ tmf_req->Bus = channel;
+ tmf_req->ChainOffset = 0;
+ tmf_req->Function = MPI_FUNCTION_SCSI_TASK_MGMT;
+ tmf_req->Reserved = 0;
+ tmf_req->TaskType = type;
+ tmf_req->Reserved1 = 0;
+ tmf_req->MsgFlags = flags;
+ tmf_req->MsgContext =
+ htole32(mpt->tmf_req->index | scsi_tmf_handler_id);
+ bzero(&tmf_req->LUN, sizeof(tmf_req->LUN) + sizeof(tmf_req->Reserved2));
+ tmf_req->LUN[1] = lun;
+ tmf_req->TaskMsgContext = abort_ctx;
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "Issuing TMF %p with MsgContext of 0x%x\n", tmf_req,
+ tmf_req->MsgContext);
+ if (mpt->verbose > MPT_PRT_DEBUG)
+ mpt_print_request(tmf_req);
+
+ error = mpt_send_handshake_cmd(mpt, sizeof(*tmf_req), tmf_req);
+ if (error != 0)
+ mpt_reset(mpt, /*reinit*/TRUE);
+ return (error);
+}
+
+/*
+ * When a command times out, it is placed on the request_timeout_list
+ * and we wake our recovery thread. The MPT-Fusion architecture supports
+ * only a single TMF operation at a time, so we serially abort/BDR, etc.,
+ * the timed-out transactions. The next TMF is issued either by the
+ * completion handler of the current TMF waking our recovery thread,
+ * or the TMF timeout handler causing a hard reset sequence.
+ */
+static void
+mpt_recover_commands(struct mpt_softc *mpt)
+{
+ request_t *req;
+ union ccb *ccb;
+ int error;
+
+ MPT_LOCK(mpt);
+
+ /*
+ * Flush any commands whose completion coincides
+ * with their timeout.
+ */
+ mpt_intr(mpt);
+
+ if (TAILQ_EMPTY(&mpt->request_timeout_list) != 0) {
+ /*
+ * The timed-out commands have already
+ * completed. This typically means
+ * that either the timeout value was on
+ * the hairy edge of what the device
+ * requires or - more likely - interrupts
+ * are not happening.
+ */
+ mpt_prt(mpt, "Timed-out requests already complete. "
+ "Interrupts may not be functioning.\n");
+ MPT_UNLOCK(mpt);
+ return;
+ }
+
+ /*
+ * We have no visibility into the current state of the
+ * controller, so attempt to abort the commands in the
+ * order in which they timed out.
+ */
+ while ((req = TAILQ_FIRST(&mpt->request_timeout_list)) != NULL) {
+ u_int status;
+
+ mpt_prt(mpt, "Attempting to Abort Req %p\n", req);
+
+ ccb = req->ccb;
+ mpt_set_ccb_status(ccb, CAM_CMD_TIMEOUT);
+ error = mpt_scsi_send_tmf(mpt,
+ MPI_SCSITASKMGMT_TASKTYPE_ABORT_TASK,
+ /*MsgFlags*/0, mpt->bus, ccb->ccb_h.target_id,
+ ccb->ccb_h.target_lun,
+ htole32(req->index | scsi_io_handler_id), /*sleep_ok*/TRUE);
+
+ if (error != 0) {
+ /*
+ * mpt_scsi_send_tmf hard resets on failure, so no
+ * need to do so here. Our queue should be emptied
+ * by the hard reset.
+ */
+ continue;
+ }
+
+ error = mpt_wait_req(mpt, mpt->tmf_req, REQ_STATE_DONE,
+ REQ_STATE_DONE, /*sleep_ok*/TRUE, /*time_ms*/5000);
+
+ status = mpt->tmf_req->IOCStatus;
+ if (error != 0) {
+ /*
+ * If we've errored out and the transaction is still
+ * pending, reset the controller.
+ */
+ mpt_prt(mpt, "mpt_recover_commands: Abort timed-out. "
+ "Resetting controller\n");
+ mpt_reset(mpt, /*reinit*/TRUE);
+ continue;
+ }
+
+ /*
+ * TMF is complete.
+ */
+ mpt->tmf_req->state = REQ_STATE_FREE;
+ if ((status & MPI_IOCSTATUS_MASK) == MPI_SCSI_STATUS_SUCCESS)
+ continue;
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG,
+ "mpt_recover_commands: Abort Failed "
+ "with status 0x%x. Resetting bus\n", status);
+
+ /*
+ * If the abort attempt fails for any reason, reset the bus.
+ * We should find all of the timed-out commands on our
+ * list are in the done state after this completes.
+ */
+ mpt_bus_reset(mpt, /*sleep_ok*/TRUE);
+ }
+
+ MPT_UNLOCK(mpt);
+}
diff --git a/sys/dev/mpt/mpt_cam.h b/sys/dev/mpt/mpt_cam.h
new file mode 100644
index 0000000..558b32d
--- /dev/null
+++ b/sys/dev/mpt/mpt_cam.h
@@ -0,0 +1,110 @@
+/* $FreeBSD$ */
+/*-
+ * LSI MPT Host Adapter FreeBSD Wrapper Definitions (CAM version)
+ *
+ * Copyright (c) 2000, 2001 by Greg Ansley, Adam Prewett
+ *
+ * Partially derived from Matty Jacobs' ISP driver.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice immediately at the beginning of the file, without modification,
+ * this list of conditions, and the following disclaimer.
+ * 2. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * Additional Copyright (c) 2002 by Matthew Jacob under same license.
+ */
+/*-
+ * Copyright (c) 2004, Avid Technology, Inc. and its contributors.
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _MPT_CAM_H_
+#define _MPT_CAM_H_
+
+#include <cam/cam.h>
+#include <cam/cam_debug.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_sim.h>
+#include <cam/cam_xpt.h>
+#include <cam/cam_xpt_sim.h>
+#include <cam/scsi/scsi_all.h>
+#include <cam/scsi/scsi_message.h>
+
+#define ccb_mpt_ptr sim_priv.entries[0].ptr
+#define ccb_req_ptr sim_priv.entries[1].ptr
+
+/************************** CCB Manipulation Routines *************************/
+static __inline void mpt_freeze_ccb(union ccb *ccb);
+static __inline void mpt_set_ccb_status(union ccb *ccb, cam_status status);
+
+static __inline void
+mpt_freeze_ccb(union ccb *ccb)
+{
+ if ((ccb->ccb_h.status & CAM_DEV_QFRZN) == 0) {
+ ccb->ccb_h.status |= CAM_DEV_QFRZN;
+ xpt_freeze_devq(ccb->ccb_h.path, /*count*/1);
+ }
+}
+
+static __inline void
+mpt_set_ccb_status(union ccb *ccb, cam_status status)
+{
+ ccb->ccb_h.status &= ~CAM_STATUS_MASK;
+ ccb->ccb_h.status |= status;
+}
+
+/****************************** Timeout Recovery ******************************/
+/*
+ * The longest timeout specified for a Task Management command.
+ */
+#define MPT_TMF_MAX_TIMEOUT (20000)
+
+static __inline void
+mpt_wakeup_recovery_thread(struct mpt_softc *mpt)
+{
+ wakeup(mpt);
+}
+
+#endif /* _MPT_CAM_H_ */
diff --git a/sys/dev/mpt/mpt_debug.c b/sys/dev/mpt/mpt_debug.c
index 84ea8dd..0dce5f7 100644
--- a/sys/dev/mpt/mpt_debug.c
+++ b/sys/dev/mpt/mpt_debug.c
@@ -24,15 +24,21 @@
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
- */
-/*
+ *
* Additional Copyright (c) 2002 by Matthew Jacob under same license.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
-#include <dev/mpt/mpt_freebsd.h>
+#include <dev/mpt/mpt.h>
+
+#include <dev/mpt/mpilib/mpi_ioc.h>
+#include <dev/mpt/mpilib/mpi_init.h>
+#include <dev/mpt/mpilib/mpi_fc.h>
+
+#include <cam/scsi/scsi_all.h>
+
#include <machine/stdarg.h> /* for use by mpt_prt below */
struct Error_Map {
@@ -100,6 +106,7 @@ static const struct Error_Map IOC_Func[] = {
{ MPI_FUNCTION_PORT_FACTS, "Port Facts" },
{ MPI_FUNCTION_PORT_ENABLE, "Port Enable" },
{ MPI_FUNCTION_EVENT_NOTIFICATION, "Event Notification" },
+{ MPI_FUNCTION_EVENT_ACK, "Event Ack" },
{ MPI_FUNCTION_FW_DOWNLOAD, "FW Download" },
{ MPI_FUNCTION_TARGET_CMD_BUFFER_POST, "SCSI Target Command Buffer" },
{ MPI_FUNCTION_TARGET_ASSIST, "Target Assist" },
@@ -109,9 +116,25 @@ static const struct Error_Map IOC_Func[] = {
{ MPI_FUNCTION_TARGET_FC_RSP_LINK_SRVC, "FC: Link Service Response" },
{ MPI_FUNCTION_TARGET_FC_EX_SEND_LINK_SRVC, "FC: Send Extended Link Service" },
{ MPI_FUNCTION_TARGET_FC_ABORT, "FC: Abort" },
+{ MPI_FUNCTION_FC_LINK_SRVC_BUF_POST, "FC: Link Service Buffers" },
+{ MPI_FUNCTION_FC_LINK_SRVC_RSP, "FC: Link Service Response" },
+{ MPI_FUNCTION_FC_EX_LINK_SRVC_SEND, "FC: Send Extended Link Service" },
+{ MPI_FUNCTION_FC_ABORT, "FC: Abort" },
+{ MPI_FUNCTION_FW_UPLOAD, "FW Upload" },
+{ MPI_FUNCTION_FC_COMMON_TRANSPORT_SEND, "FC: Send Common Transport" },
+{ MPI_FUNCTION_FC_PRIMITIVE_SEND, "FC: Send Primitive" },
+{ MPI_FUNCTION_RAID_ACTION, "RAID Action" },
+{ MPI_FUNCTION_RAID_SCSI_IO_PASSTHROUGH, "RAID SCSI Pass-Through" },
+{ MPI_FUNCTION_TOOLBOX, "Toolbox Command" },
+{ MPI_FUNCTION_SCSI_ENCLOSURE_PROCESSOR, "SCSI Enclosure Proc. Command" },
+{ MPI_FUNCTION_MAILBOX, "Mailbox Command" },
{ MPI_FUNCTION_LAN_SEND, "LAN Send" },
{ MPI_FUNCTION_LAN_RECEIVE, "LAN Receive" },
{ MPI_FUNCTION_LAN_RESET, "LAN Reset" },
+{ MPI_FUNCTION_IOC_MESSAGE_UNIT_RESET, "IOC Message Unit Reset" },
+{ MPI_FUNCTION_IO_UNIT_RESET, "IO Unit Reset" },
+{ MPI_FUNCTION_HANDSHAKE, "Handshake" },
+{ MPI_FUNCTION_REPLY_FRAME_REMOVAL, "Reply Frame Removal" },
{ -1, 0},
};
@@ -154,15 +177,23 @@ static const struct Error_Map IOC_SCSIStatus[] = {
};
static const struct Error_Map IOC_Diag[] = {
-{ MPT_DIAG_ENABLED, "DWE" },
-{ MPT_DIAG_FLASHBAD, "FLASH_Bad" },
-{ MPT_DIAG_TTLI, "TTLI" },
-{ MPT_DIAG_RESET_IOC, "Reset" },
-{ MPT_DIAG_ARM_DISABLE, "DisARM" },
-{ MPT_DIAG_DME, "DME" },
+{ MPI_DIAG_DRWE, "DWE" },
+{ MPI_DIAG_FLASH_BAD_SIG, "FLASH_Bad" },
+{ MPI_DIAGNOSTIC_OFFSET, "Offset" },
+{ MPI_DIAG_RESET_ADAPTER, "Reset" },
+{ MPI_DIAG_DISABLE_ARM, "DisARM" },
+{ MPI_DIAG_MEM_ENABLE, "DME" },
{ -1, 0 },
};
+static const struct Error_Map IOC_SCSITMType[] = {
+{ MPI_SCSITASKMGMT_TASKTYPE_ABORT_TASK, "Abort Task" },
+{ MPI_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET, "Abort Task Set" },
+{ MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET, "Target Reset" },
+{ MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS, "Reset Bus" },
+{ MPI_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, "Logical Unit Reset" },
+{ -1, 0 },
+};
static void mpt_dump_sgl(SGE_IO_UNION *sgl);
@@ -286,6 +317,20 @@ mpt_state(u_int32_t mb)
return text;
}
+static char *
+mpt_scsi_tm_type(int code)
+{
+ const struct Error_Map *status = IOC_SCSITMType;
+ static char buf[64];
+ while (status->Error_Code >= 0) {
+ if (status->Error_Code == code)
+ return status->Error_String;
+ status++;
+ }
+ snprintf(buf, sizeof buf, "Unknown (0x%08x)", code);
+ return buf;
+}
+
void
mpt_print_db(u_int32_t mb)
{
@@ -508,6 +553,15 @@ mpt_print_scsi_io_request(MSG_SCSI_IO_REQUEST *orig_msg)
mpt_dump_sgl(&orig_msg->SGL);
}
+static void
+mpt_print_scsi_tmf_request(MSG_SCSI_TASK_MGMT *msg)
+{
+ mpt_print_request_hdr((MSG_REQUEST_HEADER *)msg);
+ printf("\tLun 0x%02x\n", msg->LUN[1]);
+ printf("\tTaskType %s\n", mpt_scsi_tm_type(msg->TaskType));
+ printf("\tTaskMsgContext 0x%08x\n", msg->TaskMsgContext);
+}
+
void
mpt_print_request(void *vreq)
{
@@ -517,25 +571,81 @@ mpt_print_request(void *vreq)
case MPI_FUNCTION_SCSI_IO_REQUEST:
mpt_print_scsi_io_request((MSG_SCSI_IO_REQUEST *)req);
break;
+ case MPI_FUNCTION_SCSI_TASK_MGMT:
+ mpt_print_scsi_tmf_request((MSG_SCSI_TASK_MGMT *)req);
+ break;
default:
mpt_print_request_hdr(req);
break;
}
}
-char *
-mpt_req_state(enum mpt_req_state state)
+int
+mpt_decode_value(mpt_decode_entry_t *table, u_int num_entries,
+ const char *name, u_int value, u_int *cur_column,
+ u_int wrap_point)
{
- char *text;
+ int printed;
+ u_int printed_mask;
+ u_int dummy_column;
- switch (state) {
- case REQ_FREE: text = "Free"; break;
- case REQ_IN_PROGRESS: text = "In Progress"; break;
- case REQ_ON_CHIP: text = "On Chip"; break;
- case REQ_TIMEOUT: text = "Timeout"; break;
- default: text = "Unknown"; break;
+ if (cur_column == NULL) {
+ dummy_column = 0;
+ cur_column = &dummy_column;
}
- return text;
+
+ if (*cur_column >= wrap_point) {
+ printf("\n");
+ *cur_column = 0;
+ }
+ printed = printf("%s[0x%x]", name, value);
+ if (table == NULL) {
+ printed += printf(" ");
+ *cur_column += printed;
+ return (printed);
+ }
+ printed_mask = 0;
+ while (printed_mask != 0xFF) {
+ int entry;
+
+ for (entry = 0; entry < num_entries; entry++) {
+ if (((value & table[entry].mask)
+ != table[entry].value)
+ || ((printed_mask & table[entry].mask)
+ == table[entry].mask))
+ continue;
+
+ printed += printf("%s%s",
+ printed_mask == 0 ? ":(" : "|",
+ table[entry].name);
+ printed_mask |= table[entry].mask;
+ break;
+ }
+ if (entry >= num_entries)
+ break;
+ }
+ if (printed_mask != 0)
+ printed += printf(") ");
+ else
+ printed += printf(" ");
+ *cur_column += printed;
+ return (printed);
+}
+
+static mpt_decode_entry_t req_state_parse_table[] = {
+ { "REQ_FREE", 0x00, 0xff },
+ { "REQ_ALLOCATED", 0x01, 0x01 },
+ { "REQ_QUEUED", 0x02, 0x02 },
+ { "REQ_DONE", 0x04, 0x04 },
+ { "REQ_TIMEDOUT", 0x08, 0x08 },
+ { "REQ_NEED_WAKEUP", 0x10, 0x10 }
+};
+
+void
+mpt_req_state(mpt_req_state_t state)
+{
+ mpt_decode_value(req_state_parse_table,
+ NUM_ELEMENTS(req_state_parse_table),
+ "REQ_STATE", state, NULL, 80);
}
static void
@@ -597,12 +707,22 @@ mpt_dump_sgl(SGE_IO_UNION *su)
}
void
-mpt_prt(mpt_softc_t *mpt, const char *fmt, ...)
+mpt_prt(struct mpt_softc *mpt, const char *fmt, ...)
{
va_list ap;
+
printf("%s: ", device_get_nameunit(mpt->dev));
va_start(ap, fmt);
vprintf(fmt, ap);
va_end(ap);
- printf("\n");
+}
+
+void
+mpt_prtc(struct mpt_softc *mpt, const char *fmt, ...)
+{
+ va_list ap;
+
+ va_start(ap, fmt);
+ vprintf(fmt, ap);
+ va_end(ap);
}
diff --git a/sys/dev/mpt/mpt_freebsd.c b/sys/dev/mpt/mpt_freebsd.c
deleted file mode 100644
index d7bb430..0000000
--- a/sys/dev/mpt/mpt_freebsd.c
+++ /dev/null
@@ -1,1530 +0,0 @@
-/*-
- * FreeBSD/CAM specific routines for LSI '909 FC adapters.
- * FreeBSD Version.
- *
- * Copyright (c) 2000, 2001 by Greg Ansley
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-/*
- * Additional Copyright (c) 2002 by Matthew Jacob under same license.
- */
-
-#include <sys/cdefs.h>
-__FBSDID("$FreeBSD$");
-
-#include <dev/mpt/mpt_freebsd.h>
-
-static void mpt_poll(struct cam_sim *);
-static timeout_t mpttimeout;
-static timeout_t mpttimeout2;
-static void mpt_action(struct cam_sim *, union ccb *);
-static int mpt_setwidth(mpt_softc_t *, int, int);
-static int mpt_setsync(mpt_softc_t *, int, int, int);
-
-void
-mpt_cam_attach(mpt_softc_t *mpt)
-{
- struct cam_devq *devq;
- struct cam_sim *sim;
- int maxq;
-
- mpt->bus = 0;
- maxq = (mpt->mpt_global_credits < MPT_MAX_REQUESTS(mpt))?
- mpt->mpt_global_credits : MPT_MAX_REQUESTS(mpt);
-
-
- /*
- * Create the device queue for our SIM(s).
- */
-
- devq = cam_simq_alloc(maxq);
- if (devq == NULL) {
- return;
- }
-
- /*
- * Construct our SIM entry.
- */
- sim = cam_sim_alloc(mpt_action, mpt_poll, "mpt", mpt,
- mpt->unit, 1, maxq, devq);
- if (sim == NULL) {
- cam_simq_free(devq);
- return;
- }
-
- /*
- * Register exactly the bus.
- */
-
- if (xpt_bus_register(sim, 0) != CAM_SUCCESS) {
- cam_sim_free(sim, TRUE);
- return;
- }
-
- if (xpt_create_path(&mpt->path, NULL, cam_sim_path(sim),
- CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
- xpt_bus_deregister(cam_sim_path(sim));
- cam_sim_free(sim, TRUE);
- return;
- }
- mpt->sim = sim;
-}
-
-void
-mpt_cam_detach(mpt_softc_t *mpt)
-{
- if (mpt->sim != NULL) {
- xpt_free_path(mpt->path);
- xpt_bus_deregister(cam_sim_path(mpt->sim));
- cam_sim_free(mpt->sim, TRUE);
- mpt->sim = NULL;
- }
-}
-
-/* This routine is used after a system crash to dump core onto the
- * swap device.
- */
-static void
-mpt_poll(struct cam_sim *sim)
-{
- mpt_softc_t *mpt = (mpt_softc_t *) cam_sim_softc(sim);
- MPT_LOCK(mpt);
- mpt_intr(mpt);
- MPT_UNLOCK(mpt);
-}
-
-/*
- * This routine is called if the 9x9 does not return completion status
- * for a command after a CAM specified time.
- */
-static void
-mpttimeout(void *arg)
-{
- request_t *req;
- union ccb *ccb = arg;
- u_int32_t oseq;
- mpt_softc_t *mpt;
-
- mpt = ccb->ccb_h.ccb_mpt_ptr;
- MPT_LOCK(mpt);
- req = ccb->ccb_h.ccb_req_ptr;
- oseq = req->sequence;
- mpt->timeouts++;
- if (mpt_intr(mpt)) {
- if (req->sequence != oseq) {
- mpt_prt(mpt, "bullet missed in timeout");
- MPT_UNLOCK(mpt);
- return;
- }
- mpt_prt(mpt, "bullet U-turned in timeout: got us");
- }
- mpt_prt(mpt,
- "time out on request index = 0x%02x sequence = 0x%08x",
- req->index, req->sequence);
- mpt_check_doorbell(mpt);
- mpt_prt(mpt, "Status %08x; Mask %08x; Doorbell %08x",
- mpt_read(mpt, MPT_OFFSET_INTR_STATUS),
- mpt_read(mpt, MPT_OFFSET_INTR_MASK),
- mpt_read(mpt, MPT_OFFSET_DOORBELL) );
- printf("request state %s\n", mpt_req_state(req->debug));
- if (ccb != req->ccb) {
- printf("time out: ccb %p != req->ccb %p\n",
- ccb,req->ccb);
- }
- mpt_print_scsi_io_request((MSG_SCSI_IO_REQUEST *)req->req_vbuf);
- req->debug = REQ_TIMEOUT;
- req->ccb = NULL;
- req->link.sle_next = (void *) mpt;
- (void) timeout(mpttimeout2, (caddr_t)req, hz / 10);
- ccb->ccb_h.status = CAM_CMD_TIMEOUT;
- ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
- mpt->outofbeer = 0;
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- CAMLOCK_2_MPTLOCK(mpt);
- MPT_UNLOCK(mpt);
-}
-
-static void
-mpttimeout2(void *arg)
-{
- request_t *req = arg;
- if (req->debug == REQ_TIMEOUT) {
- mpt_softc_t *mpt = (mpt_softc_t *) req->link.sle_next;
- MPT_LOCK(mpt);
- mpt_free_request(mpt, req);
- MPT_UNLOCK(mpt);
- }
-}
-
-/*
- * Callback routine from "bus_dmamap_load" or in simple case called directly.
- *
- * Takes a list of physical segments and builds the SGL for SCSI IO command
- * and forwards the commard to the IOC after one last check that CAM has not
- * aborted the transaction.
- */
-static void
-mpt_execute_req(void *arg, bus_dma_segment_t *dm_segs, int nseg, int error)
-{
- request_t *req;
- union ccb *ccb;
- mpt_softc_t *mpt;
- MSG_SCSI_IO_REQUEST *mpt_req;
- SGE_SIMPLE32 *se;
-
- req = (request_t *)arg;
- ccb = req->ccb;
-
- mpt = ccb->ccb_h.ccb_mpt_ptr;
- req = ccb->ccb_h.ccb_req_ptr;
- mpt_req = req->req_vbuf;
-
- if (error == 0 && nseg > MPT_SGL_MAX) {
- error = EFBIG;
- }
-
- if (error != 0) {
- if (error != EFBIG)
- mpt_prt(mpt, "bus_dmamap_load returned %d", error);
- if (ccb->ccb_h.status == CAM_REQ_INPROG) {
- xpt_freeze_devq(ccb->ccb_h.path, 1);
- ccb->ccb_h.status = CAM_DEV_QFRZN;
- if (error == EFBIG)
- ccb->ccb_h.status |= CAM_REQ_TOO_BIG;
- else
- ccb->ccb_h.status |= CAM_REQ_CMP_ERR;
- }
- ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
- xpt_done(ccb);
- CAMLOCK_2_MPTLOCK(mpt);
- mpt_free_request(mpt, req);
- MPTLOCK_2_CAMLOCK(mpt);
- return;
- }
-
- if (nseg > MPT_NSGL_FIRST(mpt)) {
- int i, nleft = nseg;
- u_int32_t flags;
- bus_dmasync_op_t op;
- SGE_CHAIN32 *ce;
-
- mpt_req->DataLength = ccb->csio.dxfer_len;
- flags = MPI_SGE_FLAGS_SIMPLE_ELEMENT;
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
- flags |= MPI_SGE_FLAGS_HOST_TO_IOC;
-
- se = (SGE_SIMPLE32 *) &mpt_req->SGL;
- for (i = 0; i < MPT_NSGL_FIRST(mpt) - 1; i++, se++, dm_segs++) {
- u_int32_t tf;
-
- bzero(se, sizeof (*se));
- se->Address = dm_segs->ds_addr;
- MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
- tf = flags;
- if (i == MPT_NSGL_FIRST(mpt) - 2) {
- tf |= MPI_SGE_FLAGS_LAST_ELEMENT;
- }
- MPI_pSGE_SET_FLAGS(se, tf);
- nleft -= 1;
- }
-
- /*
- * Tell the IOC where to find the first chain element
- */
- mpt_req->ChainOffset = ((char *)se - (char *)mpt_req) >> 2;
-
- /*
- * Until we're finished with all segments...
- */
- while (nleft) {
- int ntodo;
- /*
- * Construct the chain element that points to the
- * next segment.
- */
- ce = (SGE_CHAIN32 *) se++;
- if (nleft > MPT_NSGL(mpt)) {
- ntodo = MPT_NSGL(mpt) - 1;
- ce->NextChainOffset = (MPT_RQSL(mpt) -
- sizeof (SGE_SIMPLE32)) >> 2;
- ce->Length = MPT_NSGL(mpt) *
- sizeof (SGE_SIMPLE32);
- } else {
- ntodo = nleft;
- ce->NextChainOffset = 0;
- ce->Length = ntodo * sizeof (SGE_SIMPLE32);
- }
- ce->Address = req->req_pbuf +
- ((char *)se - (char *)mpt_req);
- ce->Flags = MPI_SGE_FLAGS_CHAIN_ELEMENT;
- for (i = 0; i < ntodo; i++, se++, dm_segs++) {
- u_int32_t tf;
-
- bzero(se, sizeof (*se));
- se->Address = dm_segs->ds_addr;
- MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
- tf = flags;
- if (i == ntodo - 1) {
- tf |= MPI_SGE_FLAGS_LAST_ELEMENT;
- if (ce->NextChainOffset == 0) {
- tf |=
- MPI_SGE_FLAGS_END_OF_LIST |
- MPI_SGE_FLAGS_END_OF_BUFFER;
- }
- }
- MPI_pSGE_SET_FLAGS(se, tf);
- nleft -= 1;
- }
-
- }
-
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
- op = BUS_DMASYNC_PREREAD;
- else
- op = BUS_DMASYNC_PREWRITE;
- if (!(ccb->ccb_h.flags & (CAM_SG_LIST_PHYS|CAM_DATA_PHYS))) {
- bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
- }
- } else if (nseg > 0) {
- int i;
- u_int32_t flags;
- bus_dmasync_op_t op;
-
- mpt_req->DataLength = ccb->csio.dxfer_len;
- flags = MPI_SGE_FLAGS_SIMPLE_ELEMENT;
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
- flags |= MPI_SGE_FLAGS_HOST_TO_IOC;
-
- /* Copy the segments into our SG list */
- se = (SGE_SIMPLE32 *) &mpt_req->SGL;
- for (i = 0; i < nseg; i++, se++, dm_segs++) {
- u_int32_t tf;
-
- bzero(se, sizeof (*se));
- se->Address = dm_segs->ds_addr;
- MPI_pSGE_SET_LENGTH(se, dm_segs->ds_len);
- tf = flags;
- if (i == nseg - 1) {
- tf |=
- MPI_SGE_FLAGS_LAST_ELEMENT |
- MPI_SGE_FLAGS_END_OF_BUFFER |
- MPI_SGE_FLAGS_END_OF_LIST;
- }
- MPI_pSGE_SET_FLAGS(se, tf);
- }
-
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
- op = BUS_DMASYNC_PREREAD;
- else
- op = BUS_DMASYNC_PREWRITE;
- if (!(ccb->ccb_h.flags & (CAM_SG_LIST_PHYS|CAM_DATA_PHYS))) {
- bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
- }
- } else {
- se = (SGE_SIMPLE32 *) &mpt_req->SGL;
- /*
- * No data to transfer so we just make a single simple SGL
- * with zero length.
- */
- MPI_pSGE_SET_FLAGS(se,
- (MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
- MPI_SGE_FLAGS_SIMPLE_ELEMENT | MPI_SGE_FLAGS_END_OF_LIST));
- }
-
- /*
- * Last time we need to check if this CCB needs to be aborted.
- */
- if (ccb->ccb_h.status != CAM_REQ_INPROG) {
- if (nseg && (ccb->ccb_h.flags & CAM_SG_LIST_PHYS) == 0)
- bus_dmamap_unload(mpt->buffer_dmat, req->dmap);
- CAMLOCK_2_MPTLOCK(mpt);
- mpt_free_request(mpt, req);
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- return;
- }
-
- ccb->ccb_h.status |= CAM_SIM_QUEUED;
- MPTLOCK_2_CAMLOCK(mpt);
- if (ccb->ccb_h.timeout != CAM_TIME_INFINITY) {
- ccb->ccb_h.timeout_ch =
- timeout(mpttimeout, (caddr_t)ccb,
- (ccb->ccb_h.timeout * hz) / 1000);
- } else {
- callout_handle_init(&ccb->ccb_h.timeout_ch);
- }
- if (mpt->verbose > 1)
- mpt_print_scsi_io_request(mpt_req);
- mpt_send_cmd(mpt, req);
- MPTLOCK_2_CAMLOCK(mpt);
-}
-
-static void
-mpt_start(union ccb *ccb)
-{
- request_t *req;
- struct mpt_softc *mpt;
- MSG_SCSI_IO_REQUEST *mpt_req;
- struct ccb_scsiio *csio = &ccb->csio;
- struct ccb_hdr *ccbh = &ccb->ccb_h;
-
- /* Get the pointer for the physical adapter */
- mpt = ccb->ccb_h.ccb_mpt_ptr;
-
- CAMLOCK_2_MPTLOCK(mpt);
- /* Get a request structure off the free list */
- if ((req = mpt_get_request(mpt)) == NULL) {
- if (mpt->outofbeer == 0) {
- mpt->outofbeer = 1;
- xpt_freeze_simq(mpt->sim, 1);
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "FREEZEQ");
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
- ccb->ccb_h.status = CAM_REQUEUE_REQ;
- xpt_done(ccb);
- return;
- }
- MPTLOCK_2_CAMLOCK(mpt);
-
- /* Link the ccb and the request structure so we can find */
- /* the other knowing either the request or the ccb */
- req->ccb = ccb;
- ccb->ccb_h.ccb_req_ptr = req;
-
- /* Now we build the command for the IOC */
- mpt_req = req->req_vbuf;
- bzero(mpt_req, sizeof *mpt_req);
-
- mpt_req->Function = MPI_FUNCTION_SCSI_IO_REQUEST;
- mpt_req->Bus = mpt->bus;
-
- mpt_req->SenseBufferLength =
- (csio->sense_len < MPT_SENSE_SIZE) ?
- csio->sense_len : MPT_SENSE_SIZE;
-
- /* We use the message context to find the request structure when we */
- /* get the command completion interrupt from the FC IOC. */
- mpt_req->MsgContext = req->index;
-
- /* Which physical device to do the I/O on */
- mpt_req->TargetID = ccb->ccb_h.target_id;
- mpt_req->LUN[1] = ccb->ccb_h.target_lun;
-
- /* Set the direction of the transfer */
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN)
- mpt_req->Control = MPI_SCSIIO_CONTROL_READ;
- else if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_OUT)
- mpt_req->Control = MPI_SCSIIO_CONTROL_WRITE;
- else
- mpt_req->Control = MPI_SCSIIO_CONTROL_NODATATRANSFER;
-
- if ((ccb->ccb_h.flags & CAM_TAG_ACTION_VALID) != 0) {
- switch(ccb->csio.tag_action) {
- case MSG_HEAD_OF_Q_TAG:
- mpt_req->Control |= MPI_SCSIIO_CONTROL_HEADOFQ;
- break;
- case MSG_ACA_TASK:
- mpt_req->Control |= MPI_SCSIIO_CONTROL_ACAQ;
- break;
- case MSG_ORDERED_Q_TAG:
- mpt_req->Control |= MPI_SCSIIO_CONTROL_ORDEREDQ;
- break;
- case MSG_SIMPLE_Q_TAG:
- default:
- mpt_req->Control |= MPI_SCSIIO_CONTROL_SIMPLEQ;
- break;
- }
- } else {
- if (mpt->is_fc)
- mpt_req->Control |= MPI_SCSIIO_CONTROL_SIMPLEQ;
- else
- mpt_req->Control |= MPI_SCSIIO_CONTROL_UNTAGGED;
- }
-
- if (mpt->is_fc == 0) {
- if (ccb->ccb_h.flags & CAM_DIS_DISCONNECT) {
- mpt_req->Control |= MPI_SCSIIO_CONTROL_NO_DISCONNECT;
- }
- }
-
- /* Copy the scsi command block into place */
- if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0)
- bcopy(csio->cdb_io.cdb_ptr, mpt_req->CDB, csio->cdb_len);
- else
- bcopy(csio->cdb_io.cdb_bytes, mpt_req->CDB, csio->cdb_len);
-
- mpt_req->CDBLength = csio->cdb_len;
- mpt_req->DataLength = csio->dxfer_len;
- mpt_req->SenseBufferLowAddr = req->sense_pbuf;
-
- /*
- * If we have any data to send with this command,
- * map it into bus space.
- */
-
- if ((ccbh->flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
- if ((ccbh->flags & CAM_SCATTER_VALID) == 0) {
- /*
- * We've been given a pointer to a single buffer.
- */
- if ((ccbh->flags & CAM_DATA_PHYS) == 0) {
- /*
- * Virtual address that needs to translated into
- * one or more physical pages.
- */
- int error;
-
- error = bus_dmamap_load(mpt->buffer_dmat,
- req->dmap, csio->data_ptr, csio->dxfer_len,
- mpt_execute_req, req, 0);
- if (error == EINPROGRESS) {
- /*
- * So as to maintain ordering,
- * freeze the controller queue
- * until our mapping is
- * returned.
- */
- xpt_freeze_simq(mpt->sim, 1);
- ccbh->status |= CAM_RELEASE_SIMQ;
- }
- } else {
- /*
- * We have been given a pointer to single
- * physical buffer.
- */
- struct bus_dma_segment seg;
- seg.ds_addr =
- (bus_addr_t)(vm_offset_t)csio->data_ptr;
- seg.ds_len = csio->dxfer_len;
- mpt_execute_req(req, &seg, 1, 0);
- }
- } else {
- /*
- * We have been given a list of addresses.
- * These cases could be easily handled but they are not
- * currently generated by the CAM subsystem so there
- * is no point in wasting the time right now.
- */
- struct bus_dma_segment *segs;
- if ((ccbh->flags & CAM_SG_LIST_PHYS) == 0) {
- mpt_execute_req(req, NULL, 0, EFAULT);
- } else {
- /* Just use the segments provided */
- segs = (struct bus_dma_segment *)csio->data_ptr;
- mpt_execute_req(req, segs, csio->sglist_cnt,
- (csio->sglist_cnt < MPT_SGL_MAX)?
- 0 : EFBIG);
- }
- }
- } else {
- mpt_execute_req(req, NULL, 0, 0);
- }
-}
-
-static int
-mpt_bus_reset(union ccb *ccb)
-{
- int error;
- request_t *req;
- mpt_softc_t *mpt;
- MSG_SCSI_TASK_MGMT *reset_req;
-
- /* Get the pointer for the physical adapter */
- mpt = ccb->ccb_h.ccb_mpt_ptr;
-
- /* Get a request structure off the free list */
- if ((req = mpt_get_request(mpt)) == NULL) {
- return (CAM_REQUEUE_REQ);
- }
-
- /* Link the ccb and the request structure so we can find */
- /* the other knowing either the request or the ccb */
- req->ccb = ccb;
- ccb->ccb_h.ccb_req_ptr = req;
-
- reset_req = req->req_vbuf;
- bzero(reset_req, sizeof *reset_req);
-
- reset_req->Function = MPI_FUNCTION_SCSI_TASK_MGMT;
- reset_req->MsgContext = req->index;
- reset_req->TaskType = MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS;
- if (mpt->is_fc) {
- /*
- * Should really be TARGET_RESET_OPTION
- */
- reset_req->MsgFlags =
- MPI_SCSITASKMGMT_MSGFLAGS_LIP_RESET_OPTION;
- }
- /* Which physical device Reset */
- reset_req->TargetID = ccb->ccb_h.target_id;
- reset_req->LUN[1] = ccb->ccb_h.target_lun;
-
- ccb->ccb_h.status |= CAM_SIM_QUEUED;
-
- error = mpt_send_handshake_cmd(mpt,
- sizeof (MSG_SCSI_TASK_MGMT), reset_req);
- if (error) {
- mpt_prt(mpt,
- "mpt_bus_reset: mpt_send_handshake return %d", error);
- return (CAM_REQ_CMP_ERR);
- } else {
- return (CAM_REQ_CMP);
- }
-}
-
-/*
- * Process an asynchronous event from the IOC.
- */
-static void mpt_ctlop(mpt_softc_t *, void *, u_int32_t);
-static void mpt_event_notify_reply(mpt_softc_t *mpt, MSG_EVENT_NOTIFY_REPLY *);
-
-static void
-mpt_ctlop(mpt_softc_t *mpt, void *vmsg, u_int32_t reply)
-{
- MSG_DEFAULT_REPLY *dmsg = vmsg;
-
- if (dmsg->Function == MPI_FUNCTION_EVENT_NOTIFICATION) {
- mpt_event_notify_reply(mpt, vmsg);
- mpt_free_reply(mpt, (reply << 1));
- } else if (dmsg->Function == MPI_FUNCTION_EVENT_ACK) {
- mpt_free_reply(mpt, (reply << 1));
- } else if (dmsg->Function == MPI_FUNCTION_PORT_ENABLE) {
- MSG_PORT_ENABLE_REPLY *msg = vmsg;
- int index = msg->MsgContext & ~0x80000000;
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "enable port reply idx %d", index);
- }
- if (index >= 0 && index < MPT_MAX_REQUESTS(mpt)) {
- request_t *req = &mpt->request_pool[index];
- req->debug = REQ_DONE;
- }
- mpt_free_reply(mpt, (reply << 1));
- } else if (dmsg->Function == MPI_FUNCTION_CONFIG) {
- MSG_CONFIG_REPLY *msg = vmsg;
- int index = msg->MsgContext & ~0x80000000;
- if (index >= 0 && index < MPT_MAX_REQUESTS(mpt)) {
- request_t *req = &mpt->request_pool[index];
- req->debug = REQ_DONE;
- req->sequence = reply;
- } else {
- mpt_free_reply(mpt, (reply << 1));
- }
- } else {
- mpt_prt(mpt, "unknown mpt_ctlop: %x", dmsg->Function);
- }
-}
-
-static void
-mpt_event_notify_reply(mpt_softc_t *mpt, MSG_EVENT_NOTIFY_REPLY *msg)
-{
- switch(msg->Event) {
- case MPI_EVENT_LOG_DATA:
- /* Some error occurred that LSI wants logged */
- printf("\tEvtLogData: IOCLogInfo: 0x%08x\n", msg->IOCLogInfo);
- printf("\tEvtLogData: Event Data:");
- {
- int i;
- for (i = 0; i < msg->EventDataLength; i++) {
- printf(" %08x", msg->Data[i]);
- }
- }
- printf("\n");
- break;
-
- case MPI_EVENT_UNIT_ATTENTION:
- mpt_prt(mpt, "Bus: 0x%02x TargetID: 0x%02x",
- (msg->Data[0] >> 8) & 0xff, msg->Data[0] & 0xff);
- break;
-
- case MPI_EVENT_IOC_BUS_RESET:
- /* We generated a bus reset */
- mpt_prt(mpt, "IOC Bus Reset Port: %d",
- (msg->Data[0] >> 8) & 0xff);
- break;
-
- case MPI_EVENT_EXT_BUS_RESET:
- /* Someone else generated a bus reset */
- mpt_prt(mpt, "Ext Bus Reset");
- /*
- * These replies don't return EventData like the MPI
- * spec says they do
- */
-/* xpt_async(AC_BUS_RESET, path, NULL); */
- break;
-
- case MPI_EVENT_RESCAN:
- /*
- * In general this means a device has been added
- * to the loop.
- */
- mpt_prt(mpt, "Rescan Port: %d", (msg->Data[0] >> 8) & 0xff);
-/* xpt_async(AC_FOUND_DEVICE, path, NULL); */
- break;
-
- case MPI_EVENT_LINK_STATUS_CHANGE:
- mpt_prt(mpt, "Port %d: LinkState: %s",
- (msg->Data[1] >> 8) & 0xff,
- ((msg->Data[0] & 0xff) == 0)? "Failed" : "Active");
- break;
-
- case MPI_EVENT_LOOP_STATE_CHANGE:
- switch ((msg->Data[0] >> 16) & 0xff) {
- case 0x01:
- mpt_prt(mpt,
- "Port 0x%x: FC LinkEvent: LIP(%02x,%02x) (Loop Initialization)\n",
- (msg->Data[1] >> 8) & 0xff,
- (msg->Data[0] >> 8) & 0xff,
- (msg->Data[0] ) & 0xff);
- switch ((msg->Data[0] >> 8) & 0xff) {
- case 0xF7:
- if ((msg->Data[0] & 0xff) == 0xF7) {
- printf("Device needs AL_PA\n");
- } else {
- printf("Device %02x doesn't like FC performance\n",
- msg->Data[0] & 0xFF);
- }
- break;
- case 0xF8:
- if ((msg->Data[0] & 0xff) == 0xF7) {
- printf("Device had loop failure at its receiver prior to acquiring AL_PA\n");
- } else {
- printf("Device %02x detected loop failure at its receiver\n",
- msg->Data[0] & 0xFF);
- }
- break;
- default:
- printf("Device %02x requests that device %02x reset itself\n",
- msg->Data[0] & 0xFF,
- (msg->Data[0] >> 8) & 0xFF);
- break;
- }
- break;
- case 0x02:
- mpt_prt(mpt, "Port 0x%x: FC LinkEvent: LPE(%02x,%02x) (Loop Port Enable)",
- (msg->Data[1] >> 8) & 0xff, /* Port */
- (msg->Data[0] >> 8) & 0xff, /* Character 3 */
- (msg->Data[0] ) & 0xff /* Character 4 */
- );
- break;
- case 0x03:
- mpt_prt(mpt, "Port 0x%x: FC LinkEvent: LPB(%02x,%02x) (Loop Port Bypass)",
- (msg->Data[1] >> 8) & 0xff, /* Port */
- (msg->Data[0] >> 8) & 0xff, /* Character 3 */
- (msg->Data[0] ) & 0xff /* Character 4 */
- );
- break;
- default:
- mpt_prt(mpt, "Port 0x%x: FC LinkEvent: Unknown FC event (%02x %02x %02x)",
- (msg->Data[1] >> 8) & 0xff, /* Port */
- (msg->Data[0] >> 16) & 0xff, /* Event */
- (msg->Data[0] >> 8) & 0xff, /* Character 3 */
- (msg->Data[0] ) & 0xff /* Character 4 */
- );
- }
- break;
-
- case MPI_EVENT_LOGOUT:
- mpt_prt(mpt, "FC Logout Port: %d N_PortID: %02x",
- (msg->Data[1] >> 8) & 0xff, msg->Data[0]);
- break;
- case MPI_EVENT_EVENT_CHANGE:
- /* This is just an acknowledgement of our
- mpt_send_event_request */
- break;
- default:
- mpt_prt(mpt, "Unknown event 0x%x\n", msg->Event);
- }
- if (msg->AckRequired) {
- MSG_EVENT_ACK *ackp;
- request_t *req;
- if ((req = mpt_get_request(mpt)) == NULL) {
- panic("unable to get request to acknowledge notify");
- }
- ackp = (MSG_EVENT_ACK *) req->req_vbuf;
- bzero(ackp, sizeof *ackp);
- ackp->Function = MPI_FUNCTION_EVENT_ACK;
- ackp->Event = msg->Event;
- ackp->EventContext = msg->EventContext;
- ackp->MsgContext = req->index | 0x80000000;
- mpt_check_doorbell(mpt);
- mpt_send_cmd(mpt, req);
- }
-}
-
-void
-mpt_done(mpt_softc_t *mpt, u_int32_t reply)
-{
- int index;
- request_t *req;
- union ccb *ccb;
- MSG_REQUEST_HEADER *mpt_req;
- MSG_SCSI_IO_REPLY *mpt_reply;
-
-	index = -1; /* Shut up the compiler */
-
- if ((reply & MPT_CONTEXT_REPLY) == 0) {
- /* context reply */
- mpt_reply = NULL;
- index = reply & MPT_CONTEXT_MASK;
- } else {
- unsigned *pReply;
-
- bus_dmamap_sync(mpt->reply_dmat, mpt->reply_dmap,
- BUS_DMASYNC_POSTREAD);
- /* address reply (Error) */
- mpt_reply = MPT_REPLY_PTOV(mpt, reply);
- if (mpt->verbose > 1) {
- pReply = (unsigned *) mpt_reply;
- mpt_prt(mpt, "Address Reply (index %u)",
- mpt_reply->MsgContext & 0xffff);
- printf("%08x %08x %08x %08x\n",
- pReply[0], pReply[1], pReply[2], pReply[3]);
- printf("%08x %08x %08x %08x\n",
- pReply[4], pReply[5], pReply[6], pReply[7]);
- printf("%08x %08x %08x %08x\n\n",
- pReply[8], pReply[9], pReply[10], pReply[11]);
- }
- index = mpt_reply->MsgContext;
- }
-
- /*
- * Address reply with MessageContext high bit set
- * This is most likely a notify message so we try
- * to process it then free it
- */
- if ((index & 0x80000000) != 0) {
- if (mpt_reply != NULL) {
- mpt_ctlop(mpt, mpt_reply, reply);
- } else {
- mpt_prt(mpt, "mpt_done: index 0x%x, NULL reply", index);
- }
- return;
- }
-
- /* Did we end up with a valid index into the table? */
- if (index < 0 || index >= MPT_MAX_REQUESTS(mpt)) {
- mpt_prt(mpt, "mpt_done: invalid index (%x) in reply", index);
- return;
- }
-
- req = &mpt->request_pool[index];
-
- /* Make sure memory hasn't been trashed */
- if (req->index != index) {
- printf("mpt_done: corrupted request struct");
- return;
- }
-
-	/* Short cut for task management replies; nothing more for us to do */
- mpt_req = req->req_vbuf;
- if (mpt_req->Function == MPI_FUNCTION_SCSI_TASK_MGMT) {
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "mpt_done: TASK MGMT");
- }
- goto done;
- }
-
- if (mpt_req->Function == MPI_FUNCTION_PORT_ENABLE) {
- goto done;
- }
-
- /*
- * At this point it better be a SCSI IO command, but don't
- * crash if it isn't
- */
- if (mpt_req->Function != MPI_FUNCTION_SCSI_IO_REQUEST) {
- goto done;
- }
-
- /* Recover the CAM control block from the request structure */
- ccb = req->ccb;
-
-	/* Can't have had a SCSI command without a CAM control block */
- if (ccb == NULL || (ccb->ccb_h.status & CAM_SIM_QUEUED) == 0) {
- mpt_prt(mpt,
- "mpt_done: corrupted ccb, index = 0x%02x seq = 0x%08x",
- req->index, req->sequence);
- printf(" request state %s\nmpt_request:\n",
- mpt_req_state(req->debug));
- mpt_print_scsi_io_request((MSG_SCSI_IO_REQUEST *)req->req_vbuf);
-
- if (mpt_reply != NULL) {
- printf("\nmpt_done: reply:\n");
- mpt_print_reply(MPT_REPLY_PTOV(mpt, reply));
- } else {
- printf("\nmpt_done: context reply: 0x%08x\n", reply);
- }
- goto done;
- }
-
- untimeout(mpttimeout, ccb, ccb->ccb_h.timeout_ch);
-
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
- bus_dmasync_op_t op;
-
- if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_IN) {
- op = BUS_DMASYNC_POSTREAD;
- } else {
- op = BUS_DMASYNC_POSTWRITE;
- }
- bus_dmamap_sync(mpt->buffer_dmat, req->dmap, op);
- bus_dmamap_unload(mpt->buffer_dmat, req->dmap);
- }
- ccb->csio.resid = 0;
-
- if (mpt_reply == NULL) {
-		/* Context reply; report that the command was successful */
- ccb->ccb_h.status = CAM_REQ_CMP;
- ccb->csio.scsi_status = SCSI_STATUS_OK;
- ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
- if (mpt->outofbeer) {
- ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
- mpt->outofbeer = 0;
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "THAWQ");
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- CAMLOCK_2_MPTLOCK(mpt);
- goto done;
- }
-
- ccb->csio.scsi_status = mpt_reply->SCSIStatus;
- switch(mpt_reply->IOCStatus) {
- case MPI_IOCSTATUS_SCSI_DATA_OVERRUN:
- ccb->ccb_h.status = CAM_DATA_RUN_ERR;
- break;
-
- case MPI_IOCSTATUS_SCSI_DATA_UNDERRUN:
- /*
- * Yikes, Tagged queue full comes through this path!
- *
- * So we'll change it to a status error and anything
- * that returns status should probably be a status
- * error as well.
- */
- ccb->csio.resid =
- ccb->csio.dxfer_len - mpt_reply->TransferCount;
- if (mpt_reply->SCSIState & MPI_SCSI_STATE_NO_SCSI_STATUS) {
- ccb->ccb_h.status = CAM_DATA_RUN_ERR;
- break;
- }
- /* Fall through */
- case MPI_IOCSTATUS_SUCCESS:
- case MPI_IOCSTATUS_SCSI_RECOVERED_ERROR:
- switch (ccb->csio.scsi_status) {
- case SCSI_STATUS_OK:
- ccb->ccb_h.status = CAM_REQ_CMP;
- break;
- default:
- ccb->ccb_h.status = CAM_SCSI_STATUS_ERROR;
- break;
- }
- break;
- case MPI_IOCSTATUS_BUSY:
- case MPI_IOCSTATUS_INSUFFICIENT_RESOURCES:
- ccb->ccb_h.status = CAM_BUSY;
- break;
-
- case MPI_IOCSTATUS_SCSI_INVALID_BUS:
- case MPI_IOCSTATUS_SCSI_INVALID_TARGETID:
- case MPI_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
- ccb->ccb_h.status = CAM_DEV_NOT_THERE;
- break;
-
- case MPI_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
- ccb->ccb_h.status = CAM_DATA_RUN_ERR;
- break;
-
- case MPI_IOCSTATUS_SCSI_PROTOCOL_ERROR:
- case MPI_IOCSTATUS_SCSI_IO_DATA_ERROR:
- ccb->ccb_h.status = CAM_UNCOR_PARITY;
- break;
-
- case MPI_IOCSTATUS_SCSI_TASK_TERMINATED:
- ccb->ccb_h.status = CAM_REQ_CMP;
- break;
-
- case MPI_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
- ccb->ccb_h.status = CAM_UA_TERMIO;
- break;
-
- case MPI_IOCSTATUS_SCSI_IOC_TERMINATED:
- ccb->ccb_h.status = CAM_REQ_TERMIO;
- break;
-
- case MPI_IOCSTATUS_SCSI_EXT_TERMINATED:
- ccb->ccb_h.status = CAM_SCSI_BUS_RESET;
- break;
-
- default:
- ccb->ccb_h.status = CAM_UNREC_HBA_ERROR;
- break;
- }
-
- if ((mpt_reply->SCSIState & MPI_SCSI_STATE_AUTOSENSE_VALID) != 0) {
- if (ccb->ccb_h.flags & (CAM_SENSE_PHYS | CAM_SENSE_PTR)) {
- ccb->ccb_h.status |= CAM_AUTOSENSE_FAIL;
- } else {
- ccb->ccb_h.status |= CAM_AUTOSNS_VALID;
- ccb->csio.sense_resid = mpt_reply->SenseCount;
- bcopy(req->sense_vbuf, &ccb->csio.sense_data,
- ccb->csio.sense_len);
- }
- } else if (mpt_reply->SCSIState & MPI_SCSI_STATE_AUTOSENSE_FAILED) {
- ccb->ccb_h.status &= ~CAM_STATUS_MASK;
- ccb->ccb_h.status |= CAM_AUTOSENSE_FAIL;
- }
-
- if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- if ((ccb->ccb_h.status & CAM_DEV_QFRZN) == 0) {
- ccb->ccb_h.status |= CAM_DEV_QFRZN;
- xpt_freeze_devq(ccb->ccb_h.path, 1);
- }
- }
-
-
- ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
- if (mpt->outofbeer) {
- ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
- mpt->outofbeer = 0;
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "THAWQ");
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- CAMLOCK_2_MPTLOCK(mpt);
-
-done:
-	/* If the IOC is done with this request, free it up */
- if (mpt_reply == NULL || (mpt_reply->MsgFlags & 0x80) == 0)
- mpt_free_request(mpt, req);
-
-	/* If this was an address reply, give the buffer back to the IOC */
- if (mpt_reply != NULL)
- mpt_free_reply(mpt, (reply << 1));
-}
-
-static void
-mpt_action(struct cam_sim *sim, union ccb *ccb)
-{
- int tgt, error;
- mpt_softc_t *mpt;
- struct ccb_trans_settings *cts;
-
- CAM_DEBUG(ccb->ccb_h.path, CAM_DEBUG_TRACE, ("mpt_action\n"));
-
- mpt = (mpt_softc_t *)cam_sim_softc(sim);
-
- ccb->ccb_h.ccb_mpt_ptr = mpt;
-
- switch (ccb->ccb_h.func_code) {
- case XPT_RESET_BUS:
- if (mpt->verbose > 1)
- mpt_prt(mpt, "XPT_RESET_BUS");
- CAMLOCK_2_MPTLOCK(mpt);
- error = mpt_bus_reset(ccb);
- switch (error) {
- case CAM_REQ_INPROG:
- MPTLOCK_2_CAMLOCK(mpt);
- break;
- case CAM_REQUEUE_REQ:
- if (mpt->outofbeer == 0) {
- mpt->outofbeer = 1;
- xpt_freeze_simq(sim, 1);
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "FREEZEQ");
- }
- }
- ccb->ccb_h.status = CAM_REQUEUE_REQ;
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- break;
-
- case CAM_REQ_CMP:
- ccb->ccb_h.status &= ~CAM_SIM_QUEUED;
- ccb->ccb_h.status |= CAM_REQ_CMP;
- if (mpt->outofbeer) {
- ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
- mpt->outofbeer = 0;
- if (mpt->verbose > 1) {
- mpt_prt(mpt, "THAWQ");
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- break;
-
- default:
- ccb->ccb_h.status = CAM_REQ_CMP_ERR;
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- }
- break;
-
- case XPT_SCSI_IO: /* Execute the requested I/O operation */
- /*
- * Do a couple of preliminary checks...
- */
- if ((ccb->ccb_h.flags & CAM_CDB_POINTER) != 0) {
- if ((ccb->ccb_h.flags & CAM_CDB_PHYS) != 0) {
- ccb->ccb_h.status = CAM_REQ_INVALID;
- xpt_done(ccb);
- break;
- }
- }
- /* Max supported CDB length is 16 bytes */
- if (ccb->csio.cdb_len >
- sizeof (((PTR_MSG_SCSI_IO_REQUEST)0)->CDB)) {
- ccb->ccb_h.status = CAM_REQ_INVALID;
- xpt_done(ccb);
- return;
- }
- ccb->csio.scsi_status = SCSI_STATUS_OK;
- mpt_start(ccb);
- break;
-
- case XPT_ABORT:
- /*
- * XXX: Need to implement
- */
- ccb->ccb_h.status = CAM_UA_ABORT;
- xpt_done(ccb);
- break;
-
-#ifdef CAM_NEW_TRAN_CODE
-#define IS_CURRENT_SETTINGS(c) (c->type == CTS_TYPE_CURRENT_SETTINGS)
-#else
-#define IS_CURRENT_SETTINGS(c) (c->flags & CCB_TRANS_CURRENT_SETTINGS)
-#endif
-#define DP_DISC_ENABLE 0x1
-#define DP_DISC_DISABL 0x2
-#define DP_DISC (DP_DISC_ENABLE|DP_DISC_DISABL)
-
-#define DP_TQING_ENABLE 0x4
-#define DP_TQING_DISABL 0x8
-#define DP_TQING (DP_TQING_ENABLE|DP_TQING_DISABL)
-
-#define DP_WIDE 0x10
-#define DP_NARROW 0x20
-#define DP_WIDTH (DP_WIDE|DP_NARROW)
-
-#define DP_SYNC 0x40
-
- case XPT_SET_TRAN_SETTINGS: /* Nexus Settings */
- cts = &ccb->cts;
- if (!IS_CURRENT_SETTINGS(cts)) {
- ccb->ccb_h.status = CAM_REQ_INVALID;
- xpt_done(ccb);
- break;
- }
- tgt = cts->ccb_h.target_id;
- if (mpt->is_fc == 0) {
- u_int8_t dval = 0;
- u_int period = 0, offset = 0;
-#ifndef CAM_NEW_TRAN_CODE
- if (cts->valid & CCB_TRANS_DISC_VALID) {
- dval |= DP_DISC_ENABLE;
- }
- if (cts->valid & CCB_TRANS_TQ_VALID) {
- dval |= DP_TQING_ENABLE;
- }
- if (cts->valid & CCB_TRANS_BUS_WIDTH_VALID) {
- if (cts->bus_width)
- dval |= DP_WIDE;
- else
- dval |= DP_NARROW;
- }
- /*
- * Any SYNC RATE of nonzero and SYNC_OFFSET
- * of nonzero will cause us to go to the
- * selected (from NVRAM) maximum value for
- * this device. At a later point, we'll
- * allow finer control.
- */
- if ((cts->valid & CCB_TRANS_SYNC_RATE_VALID) &&
- (cts->valid & CCB_TRANS_SYNC_OFFSET_VALID)) {
- dval |= DP_SYNC;
- period = cts->sync_period;
- offset = cts->sync_offset;
- }
-#else
- struct ccb_trans_settings_scsi *scsi =
- &cts->proto_specific.scsi;
- struct ccb_trans_settings_spi *spi =
- &cts->xport_specific.spi;
-
- if ((spi->valid & CTS_SPI_VALID_DISC) != 0) {
- if ((spi->flags & CTS_SPI_FLAGS_DISC_ENB) != 0)
- dval |= DP_DISC_ENABLE;
- else
- dval |= DP_DISC_DISABL;
- }
-
- if ((scsi->valid & CTS_SCSI_VALID_TQ) != 0) {
- if ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0)
- dval |= DP_TQING_ENABLE;
- else
- dval |= DP_TQING_DISABL;
- }
-
- if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) != 0) {
- if (spi->bus_width == MSG_EXT_WDTR_BUS_16_BIT)
- dval |= DP_WIDE;
- else
- dval |= DP_NARROW;
- }
-
- if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) &&
- (spi->valid & CTS_SPI_VALID_SYNC_RATE) &&
- (spi->sync_period && spi->sync_offset)) {
- dval |= DP_SYNC;
- period = spi->sync_period;
- offset = spi->sync_offset;
- }
-#endif
- CAMLOCK_2_MPTLOCK(mpt);
- if (dval & DP_DISC_ENABLE) {
- mpt->mpt_disc_enable |= (1 << tgt);
- } else if (dval & DP_DISC_DISABL) {
- mpt->mpt_disc_enable &= ~(1 << tgt);
- }
- if (dval & DP_TQING_ENABLE) {
- mpt->mpt_tag_enable |= (1 << tgt);
- } else if (dval & DP_TQING_DISABL) {
- mpt->mpt_tag_enable &= ~(1 << tgt);
- }
- if (dval & DP_WIDTH) {
- if (mpt_setwidth(mpt, tgt, dval & DP_WIDE)) {
- ccb->ccb_h.status = CAM_REQ_CMP_ERR;
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- break;
- }
- }
- if (dval & DP_SYNC) {
- if (mpt_setsync(mpt, tgt, period, offset)) {
- ccb->ccb_h.status = CAM_REQ_CMP_ERR;
- MPTLOCK_2_CAMLOCK(mpt);
- xpt_done(ccb);
- break;
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SET tgt %d flags %x period %x off %x",
- tgt, dval, period, offset);
- }
- }
- ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_done(ccb);
- break;
-
- case XPT_GET_TRAN_SETTINGS:
- cts = &ccb->cts;
- tgt = cts->ccb_h.target_id;
- if (mpt->is_fc) {
-#ifndef CAM_NEW_TRAN_CODE
- /*
- * a lot of normal SCSI things don't make sense.
- */
- cts->flags = CCB_TRANS_TAG_ENB | CCB_TRANS_DISC_ENB;
- cts->valid = CCB_TRANS_DISC_VALID | CCB_TRANS_TQ_VALID;
- /*
- * How do you measure the width of a high
- * speed serial bus? Well, in bytes.
- *
- * Offset and period make no sense, though, so we set
- * (above) a 'base' transfer speed to be gigabit.
- */
- cts->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
-#else
- struct ccb_trans_settings_fc *fc =
- &cts->xport_specific.fc;
-
- cts->protocol = PROTO_SCSI;
- cts->protocol_version = SCSI_REV_2;
- cts->transport = XPORT_FC;
- cts->transport_version = 0;
-
- fc->valid = CTS_FC_VALID_SPEED;
- fc->bitrate = 100000; /* XXX: Need for 2Gb/s */
- /* XXX: need a port database for each target */
-#endif
- } else {
-#ifdef CAM_NEW_TRAN_CODE
- struct ccb_trans_settings_scsi *scsi =
- &cts->proto_specific.scsi;
- struct ccb_trans_settings_spi *spi =
- &cts->xport_specific.spi;
-#endif
- u_int8_t dval, pval, oval;
-
- /*
- * We aren't going off of Port PAGE2 params for
- * tagged queuing or disconnect capabilities
- * for current settings. For goal settings,
- * we assert all capabilities- we've had some
- * problems with reading NVRAM data.
- */
- if (IS_CURRENT_SETTINGS(cts)) {
- CONFIG_PAGE_SCSI_DEVICE_0 tmp;
- dval = 0;
-
- tmp = mpt->mpt_dev_page0[tgt];
- CAMLOCK_2_MPTLOCK(mpt);
- if (mpt_read_cfg_page(mpt, tgt, &tmp.Header)) {
- mpt_prt(mpt,
- "cannot get target %d DP0", tgt);
- } else {
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Tgt %d Page 0: NParms %x Information %x",
- tgt,
- tmp.NegotiatedParameters,
- tmp.Information);
- }
- }
- MPTLOCK_2_CAMLOCK(mpt);
-
- if (tmp.NegotiatedParameters &
- MPI_SCSIDEVPAGE0_NP_WIDE)
- dval |= DP_WIDE;
-
- if (mpt->mpt_disc_enable & (1 << tgt)) {
- dval |= DP_DISC_ENABLE;
- }
- if (mpt->mpt_tag_enable & (1 << tgt)) {
- dval |= DP_TQING_ENABLE;
- }
- oval = (tmp.NegotiatedParameters >> 16) & 0xff;
- pval = (tmp.NegotiatedParameters >> 8) & 0xff;
- } else {
- /*
- * XXX: Fix wrt NVRAM someday. Attempts
- * XXX: to read port page2 device data
- * XXX: just returns zero in these areas.
- */
- dval = DP_WIDE|DP_DISC|DP_TQING;
- oval = (mpt->mpt_port_page0.Capabilities >> 16);
- pval = (mpt->mpt_port_page0.Capabilities >> 8);
- }
-#ifndef CAM_NEW_TRAN_CODE
- cts->flags &= ~(CCB_TRANS_DISC_ENB|CCB_TRANS_TAG_ENB);
- if (dval & DP_DISC_ENABLE) {
- cts->flags |= CCB_TRANS_DISC_ENB;
- }
- if (dval & DP_TQING_ENABLE) {
- cts->flags |= CCB_TRANS_TAG_ENB;
- }
- if (dval & DP_WIDE) {
- cts->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
- } else {
- cts->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
- }
- cts->valid = CCB_TRANS_BUS_WIDTH_VALID |
- CCB_TRANS_DISC_VALID | CCB_TRANS_TQ_VALID;
- if (oval) {
- cts->sync_period = pval;
- cts->sync_offset = oval;
- cts->valid |=
- CCB_TRANS_SYNC_RATE_VALID |
- CCB_TRANS_SYNC_OFFSET_VALID;
- }
-#else
- cts->protocol = PROTO_SCSI;
- cts->protocol_version = SCSI_REV_2;
- cts->transport = XPORT_SPI;
- cts->transport_version = 2;
-
- scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
- spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
- if (dval & DP_DISC_ENABLE) {
- spi->flags |= CTS_SPI_FLAGS_DISC_ENB;
- }
- if (dval & DP_TQING_ENABLE) {
- scsi->flags |= CTS_SCSI_FLAGS_TAG_ENB;
- }
- if (oval && pval) {
- spi->sync_offset = oval;
- spi->sync_period = pval;
- spi->valid |= CTS_SPI_VALID_SYNC_OFFSET;
- spi->valid |= CTS_SPI_VALID_SYNC_RATE;
- }
- spi->valid |= CTS_SPI_VALID_BUS_WIDTH;
- if (dval & DP_WIDE) {
- spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
- } else {
- spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
- }
- if (cts->ccb_h.target_lun != CAM_LUN_WILDCARD) {
- scsi->valid = CTS_SCSI_VALID_TQ;
- spi->valid |= CTS_SPI_VALID_DISC;
- } else {
- scsi->valid = 0;
- }
-#endif
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "GET %s tgt %d flags %x period %x off %x",
- IS_CURRENT_SETTINGS(cts)? "ACTIVE" :
- "NVRAM", tgt, dval, pval, oval);
- }
- }
- ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_done(ccb);
- break;
-
- case XPT_CALC_GEOMETRY:
- {
- struct ccb_calc_geometry *ccg;
-
- ccg = &ccb->ccg;
- if (ccg->block_size == 0) {
- ccb->ccb_h.status = CAM_REQ_INVALID;
- xpt_done(ccb);
- break;
- }
-
- cam_calc_geometry(ccg, /*extended*/1);
- xpt_done(ccb);
- break;
- }
- case XPT_PATH_INQ: /* Path routing inquiry */
- {
- struct ccb_pathinq *cpi = &ccb->cpi;
-
- cpi->version_num = 1;
- cpi->target_sprt = 0;
- cpi->hba_eng_cnt = 0;
- cpi->max_lun = 7;
- cpi->bus_id = cam_sim_bus(sim);
- if (mpt->is_fc) {
- cpi->max_target = 255;
- cpi->hba_misc = PIM_NOBUSRESET;
- cpi->initiator_id = cpi->max_target + 1;
- cpi->base_transfer_speed = 100000;
- cpi->hba_inquiry = PI_TAG_ABLE;
- } else {
- cpi->initiator_id = mpt->mpt_ini_id;
- cpi->base_transfer_speed = 3300;
- cpi->hba_inquiry = PI_SDTR_ABLE|PI_TAG_ABLE|PI_WIDE_16;
- cpi->hba_misc = 0;
- cpi->max_target = 15;
- }
-
- strncpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN);
- strncpy(cpi->hba_vid, "LSI", HBA_IDLEN);
- strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
- cpi->unit_number = cam_sim_unit(sim);
- cpi->ccb_h.status = CAM_REQ_CMP;
- xpt_done(ccb);
- break;
- }
- default:
- ccb->ccb_h.status = CAM_REQ_INVALID;
- xpt_done(ccb);
- break;
- }
-}
-
-static int
-mpt_setwidth(mpt_softc_t *mpt, int tgt, int onoff)
-{
- CONFIG_PAGE_SCSI_DEVICE_1 tmp;
- tmp = mpt->mpt_dev_page1[tgt];
- if (onoff) {
- tmp.RequestedParameters |= MPI_SCSIDEVPAGE1_RP_WIDE;
- } else {
- tmp.RequestedParameters &= ~MPI_SCSIDEVPAGE1_RP_WIDE;
- }
- if (mpt_write_cfg_page(mpt, tgt, &tmp.Header)) {
- return (-1);
- }
- if (mpt_read_cfg_page(mpt, tgt, &tmp.Header)) {
- return (-1);
- }
- mpt->mpt_dev_page1[tgt] = tmp;
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Target %d Page 1: RequestedParameters %x Config %x",
- tgt, mpt->mpt_dev_page1[tgt].RequestedParameters,
- mpt->mpt_dev_page1[tgt].Configuration);
- }
- return (0);
-}
-
-static int
-mpt_setsync(mpt_softc_t *mpt, int tgt, int period, int offset)
-{
- CONFIG_PAGE_SCSI_DEVICE_1 tmp;
- tmp = mpt->mpt_dev_page1[tgt];
- tmp.RequestedParameters &=
- ~MPI_SCSIDEVPAGE1_RP_MIN_SYNC_PERIOD_MASK;
- tmp.RequestedParameters &=
- ~MPI_SCSIDEVPAGE1_RP_MAX_SYNC_OFFSET_MASK;
- tmp.RequestedParameters &=
- ~MPI_SCSIDEVPAGE1_RP_DT;
- tmp.RequestedParameters &=
- ~MPI_SCSIDEVPAGE1_RP_QAS;
- tmp.RequestedParameters &=
- ~MPI_SCSIDEVPAGE1_RP_IU;
- /*
- * XXX: For now, we're ignoring specific settings
- */
- if (period && offset) {
- int factor, offset, np;
- factor = (mpt->mpt_port_page0.Capabilities >> 8) & 0xff;
- offset = (mpt->mpt_port_page0.Capabilities >> 16) & 0xff;
- np = 0;
- if (factor < 0x9) {
- np |= MPI_SCSIDEVPAGE1_RP_QAS;
- np |= MPI_SCSIDEVPAGE1_RP_IU;
- }
- if (factor < 0xa) {
- np |= MPI_SCSIDEVPAGE1_RP_DT;
- }
- np |= (factor << 8) | (offset << 16);
- tmp.RequestedParameters |= np;
- }
- if (mpt_write_cfg_page(mpt, tgt, &tmp.Header)) {
- return (-1);
- }
- if (mpt_read_cfg_page(mpt, tgt, &tmp.Header)) {
- return (-1);
- }
- mpt->mpt_dev_page1[tgt] = tmp;
- if (mpt->verbose > 1) {
- mpt_prt(mpt,
- "SPI Target %d Page 1: RParams %x Config %x",
- tgt, mpt->mpt_dev_page1[tgt].RequestedParameters,
- mpt->mpt_dev_page1[tgt].Configuration);
- }
- return (0);
-}
diff --git a/sys/dev/mpt/mpt_freebsd.h b/sys/dev/mpt/mpt_freebsd.h
deleted file mode 100644
index 50cd282..0000000
--- a/sys/dev/mpt/mpt_freebsd.h
+++ /dev/null
@@ -1,357 +0,0 @@
-/* $FreeBSD$ */
-/*-
- * LSI MPT Host Adapter FreeBSD Wrapper Definitions (CAM version)
- *
- * Copyright (c) 2000, 2001 by Greg Ansley, Adam Prewett
- *
- * Partially derived from Matthew Jacob's ISP driver.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice immediately at the beginning of the file, without modification,
- * this list of conditions, and the following disclaimer.
- * 2. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- */
-/*
- * Additional Copyright (c) 2002 by Matthew Jacob under same license.
- */
-
-#ifndef _MPT_FREEBSD_H_
-#define _MPT_FREEBSD_H_
-
-/* #define RELENG_4 1 */
-
-#include <sys/param.h>
-#include <sys/systm.h>
-#ifdef RELENG_4
-#include <sys/kernel.h>
-#include <sys/queue.h>
-#include <sys/malloc.h>
-#else
-#include <sys/endian.h>
-#include <sys/lock.h>
-#include <sys/kernel.h>
-#include <sys/queue.h>
-#include <sys/malloc.h>
-#include <sys/mutex.h>
-#include <sys/condvar.h>
-#endif
-#include <sys/proc.h>
-#include <sys/bus.h>
-
-#include <machine/bus.h>
-#include <machine/clock.h>
-#include <machine/cpu.h>
-
-#include <cam/cam.h>
-#include <cam/cam_debug.h>
-#include <cam/cam_ccb.h>
-#include <cam/cam_sim.h>
-#include <cam/cam_xpt.h>
-#include <cam/cam_xpt_sim.h>
-#include <cam/cam_debug.h>
-#include <cam/scsi/scsi_all.h>
-#include <cam/scsi/scsi_message.h>
-
-#include "opt_ddb.h"
-
-#include "dev/mpt/mpilib/mpi_type.h"
-#include "dev/mpt/mpilib/mpi.h"
-#include "dev/mpt/mpilib/mpi_cnfg.h"
-#include "dev/mpt/mpilib/mpi_fc.h"
-#include "dev/mpt/mpilib/mpi_init.h"
-#include "dev/mpt/mpilib/mpi_ioc.h"
-#include "dev/mpt/mpilib/mpi_lan.h"
-#include "dev/mpt/mpilib/mpi_targ.h"
-
-
-#define INLINE __inline
-
-#ifdef RELENG_4
-#define MPT_IFLAGS INTR_TYPE_CAM
-#define MPT_LOCK(mpt) mpt_lockspl(mpt)
-#define MPT_UNLOCK(mpt) mpt_unlockspl(mpt)
-#define MPTLOCK_2_CAMLOCK MPT_UNLOCK
-#define CAMLOCK_2_MPTLOCK MPT_LOCK
-#define MPT_LOCK_SETUP(mpt)
-#define MPT_LOCK_DESTROY(mpt)
-#else
-#if LOCKING_WORKED_AS_IT_SHOULD
-#define MPT_IFLAGS INTR_TYPE_CAM | INTR_ENTROPY | INTR_MPSAFE
-#define MPT_LOCK_SETUP(mpt) \
- mtx_init(&mpt->mpt_lock, "mpt", NULL, MTX_DEF); \
- mpt->mpt_locksetup = 1
-#define MPT_LOCK_DESTROY(mpt) \
- if (mpt->mpt_locksetup) { \
- mtx_destroy(&mpt->mpt_lock); \
- mpt->mpt_locksetup = 0; \
- }
-
-#define MPT_LOCK(mpt) mtx_lock(&(mpt)->mpt_lock)
-#define MPT_UNLOCK(mpt) mtx_unlock(&(mpt)->mpt_lock)
-#define MPTLOCK_2_CAMLOCK(mpt) \
- mtx_unlock(&(mpt)->mpt_lock); mtx_lock(&Giant)
-#define CAMLOCK_2_MPTLOCK(mpt) \
- mtx_unlock(&Giant); mtx_lock(&(mpt)->mpt_lock)
-#else
-#define MPT_IFLAGS INTR_TYPE_CAM | INTR_ENTROPY
-#define MPT_LOCK_SETUP(mpt) do { } while (0)
-#define MPT_LOCK_DESTROY(mpt) do { } while (0)
-#define MPT_LOCK(mpt) do { } while (0)
-#define MPT_UNLOCK(mpt) do { } while (0)
-#define MPTLOCK_2_CAMLOCK(mpt) do { } while (0)
-#define CAMLOCK_2_MPTLOCK(mpt) do { } while (0)
-#endif
-#endif
-
-
-
-/* Max MPT Reply we are willing to accept (must be power of 2) */
-#define MPT_REPLY_SIZE 128
-
-#define MPT_MAX_REQUESTS(mpt) ((mpt)->is_fc? 1024 : 256)
-#define MPT_REQUEST_AREA 512
-#define MPT_SENSE_SIZE	32	/* included in MPT_REQUEST_AREA */
-#define MPT_REQ_MEM_SIZE(mpt) (MPT_MAX_REQUESTS(mpt) * MPT_REQUEST_AREA)
-
-/*
- * We cannot tell prior to getting IOC facts how big the IOC's request
- * area is. Because of this we cannot tell at compile time how many
- * simple SG elements we can fit within an IOC request prior to having
- * to put in a chain element.
- *
- * Experimentally we know that the Ultra4 parts have a 96 byte request
- * element size and the Fibre Channel units have a 144 byte request
- * element size. Therefore, if we have 512-32 (== 480) bytes of request
- * area to play with, we have room for between 3 and 5 request sized
- * regions- the first of which is the command plus a simple SG list,
- * the rest of which are chained continuation SG lists. Given that the
- * normal request we use is 48 bytes w/o the first SG element, we can
- * assume we have 480-48 == 432 bytes to have simple SG elements and/or
- * chain elements. If we assume 32 bit addressing, this works out to
- * 54 SG or chain elements. If we assume 5 chain elements, then we have
- * a maximum of 49 separate actual SG segments.
- */
-
-#define MPT_SGL_MAX 49
-
-#define MPT_RQSL(mpt) (mpt->request_frame_size << 2)
-#define MPT_NSGL(mpt) (MPT_RQSL(mpt) / sizeof (SGE_SIMPLE32))
-
-#define MPT_NSGL_FIRST(mpt) \
- (((mpt->request_frame_size << 2) - \
- sizeof (MSG_SCSI_IO_REQUEST) - \
- sizeof (SGE_IO_UNION)) / sizeof (SGE_SIMPLE32))
-
-/*
- * Convert a physical address returned from IOC to kvm address
- * needed to access the data.
- */
-#define MPT_REPLY_PTOV(m, x) \
- ((void *)(&m->reply[((x << 1) - m->reply_phys)]))
-
-#define ccb_mpt_ptr sim_priv.entries[0].ptr
-#define ccb_req_ptr sim_priv.entries[1].ptr
-
-enum mpt_req_state {
- REQ_FREE, REQ_IN_PROGRESS, REQ_TIMEOUT, REQ_ON_CHIP, REQ_DONE
-};
-typedef struct req_entry {
- u_int16_t index; /* Index of this entry */
- union ccb * ccb; /* CAM request */
- void * req_vbuf; /* Virtual Address of Entry */
- void * sense_vbuf; /* Virtual Address of sense data */
- bus_addr_t req_pbuf; /* Physical Address of Entry */
- bus_addr_t sense_pbuf; /* Physical Address of sense data */
- bus_dmamap_t dmap; /* DMA map for data buffer */
- SLIST_ENTRY(req_entry) link; /* Pointer to next in list */
- enum mpt_req_state debug; /* Debugging */
- u_int32_t sequence; /* Sequence Number */
-} request_t;
-
-
-/* Structure for saving proper values for modifiable PCI configuration registers */
-struct mpt_pci_cfg {
- u_int16_t Command;
- u_int16_t LatencyTimer_LineSize;
- u_int32_t IO_BAR;
- u_int32_t Mem0_BAR[2];
- u_int32_t Mem1_BAR[2];
- u_int32_t ROM_BAR;
- u_int8_t IntLine;
- u_int32_t PMCSR;
-};
-
-typedef struct mpt_softc {
- device_t dev;
-#ifdef RELENG_4
- int mpt_splsaved;
- u_int32_t mpt_islocked;
-#else
- struct mtx mpt_lock;
-#endif
- u_int32_t : 16,
- unit : 8,
- verbose : 3,
- outofbeer : 1,
- mpt_locksetup : 1,
- disabled : 1,
- is_fc : 1,
- bus : 1; /* FC929/1030 have two busses */
-
- /*
- * IOC Facts
- */
- u_int16_t mpt_global_credits;
- u_int16_t request_frame_size;
- u_int8_t mpt_max_devices;
- u_int8_t mpt_max_buses;
-
- /*
- * Port Facts
- */
- u_int16_t mpt_ini_id;
-
-
- /*
- * Device Configuration Information
- */
- union {
- struct mpt_spi_cfg {
- CONFIG_PAGE_SCSI_PORT_0 _port_page0;
- CONFIG_PAGE_SCSI_PORT_1 _port_page1;
- CONFIG_PAGE_SCSI_PORT_2 _port_page2;
- CONFIG_PAGE_SCSI_DEVICE_0 _dev_page0[16];
- CONFIG_PAGE_SCSI_DEVICE_1 _dev_page1[16];
- uint16_t _tag_enable;
- uint16_t _disc_enable;
- uint16_t _update_params0;
- uint16_t _update_params1;
- } spi;
-#define mpt_port_page0 cfg.spi._port_page0
-#define mpt_port_page1 cfg.spi._port_page1
-#define mpt_port_page2 cfg.spi._port_page2
-#define mpt_dev_page0 cfg.spi._dev_page0
-#define mpt_dev_page1 cfg.spi._dev_page1
-#define mpt_tag_enable cfg.spi._tag_enable
-#define mpt_disc_enable cfg.spi._disc_enable
-#define mpt_update_params0 cfg.spi._update_params0
-#define mpt_update_params1 cfg.spi._update_params1
- struct mpi_fc_cfg {
- u_int8_t nada;
- } fc;
- } cfg;
-
- /*
- * PCI Hardware info
- */
- struct resource * pci_irq; /* Interrupt map for chip */
-	void *			ih;	 /* Interrupt handle */
- struct mpt_pci_cfg pci_cfg; /* saved PCI conf registers */
-
- /*
- * DMA Mapping Stuff
- */
-
- struct resource * pci_reg; /* Register map for chip */
- int pci_reg_id; /* Resource ID */
- bus_space_tag_t pci_st; /* Bus tag for registers */
- bus_space_handle_t pci_sh; /* Bus handle for registers */
- vm_offset_t pci_pa; /* Physical Address */
-
- bus_dma_tag_t parent_dmat; /* DMA tag for parent PCI bus */
- bus_dma_tag_t reply_dmat; /* DMA tag for reply memory */
- bus_dmamap_t reply_dmap; /* DMA map for reply memory */
- char * reply; /* KVA of reply memory */
- bus_addr_t reply_phys; /* BusAddr of reply memory */
-
-
- bus_dma_tag_t buffer_dmat; /* DMA tag for buffers */
-	bus_dma_tag_t		request_dmat;	/* DMA tag for request memory */
-	bus_dmamap_t		request_dmap;	/* DMA map for request memory */
- char * request; /* KVA of Request memory */
-	bus_addr_t		request_phys;	/* BusAddr of request memory */
-
- /*
- * CAM && Software Management
- */
-
- request_t * request_pool;
- SLIST_HEAD(req_queue, req_entry) request_free_list;
-
- struct cam_sim * sim;
- struct cam_path * path;
-
- u_int32_t sequence; /* Sequence Number */
- u_int32_t timeouts; /* timeout count */
-	u_int32_t		success;/* successes after timeout */
-
- /* Opposing port in a 929 or 1030, or NULL */
- struct mpt_softc * mpt2;
-
-} mpt_softc_t;
-
-#include <dev/mpt/mpt.h>
-
-
-static INLINE void mpt_write(mpt_softc_t *, size_t, u_int32_t);
-static INLINE u_int32_t mpt_read(mpt_softc_t *, int);
-
-static INLINE void
-mpt_write(mpt_softc_t *mpt, size_t offset, u_int32_t val)
-{
- bus_space_write_4(mpt->pci_st, mpt->pci_sh, offset, val);
-}
-
-static INLINE u_int32_t
-mpt_read(mpt_softc_t *mpt, int offset)
-{
- return (bus_space_read_4(mpt->pci_st, mpt->pci_sh, offset));
-}
-
-void mpt_cam_attach(mpt_softc_t *);
-void mpt_cam_detach(mpt_softc_t *);
-void mpt_done(mpt_softc_t *, u_int32_t);
-void mpt_prt(mpt_softc_t *, const char *, ...);
-void mpt_set_config_regs(mpt_softc_t *);
-
-#ifdef RELENG_4
-static INLINE void mpt_lockspl(mpt_softc_t *);
-static INLINE void mpt_unlockspl(mpt_softc_t *);
-
-static INLINE void
-mpt_lockspl(mpt_softc_t *mpt)
-{
- int s = splcam();
- if (mpt->mpt_islocked++ == 0) {
- mpt->mpt_splsaved = s;
- } else {
- splx(s);
- }
-}
-
-static INLINE void
-mpt_unlockspl(mpt_softc_t *mpt)
-{
- if (mpt->mpt_islocked) {
- if (--mpt->mpt_islocked == 0) {
- splx(mpt->mpt_splsaved);
- }
- }
-}
-#endif
-
-#endif	/* _MPT_FREEBSD_H_ */
diff --git a/sys/dev/mpt/mpt_pci.c b/sys/dev/mpt/mpt_pci.c
index 828d6e1..fba176c 100644
--- a/sys/dev/mpt/mpt_pci.c
+++ b/sys/dev/mpt/mpt_pci.c
@@ -29,25 +29,53 @@
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
+/*
+ * Copyright (c) 2004, Avid Technology, Inc. and its contributors.
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
-#include <sys/param.h>
-#include <sys/systm.h>
-#include <sys/kernel.h>
-#include <sys/module.h>
-#include <sys/bus.h>
+#include <dev/mpt/mpt.h>
+#include <dev/mpt/mpt_cam.h>
+#include <dev/mpt/mpt_raid.h>
+#if __FreeBSD_version < 500000
+#include <pci/pcireg.h>
+#include <pci/pcivar.h>
+#else
#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
-
-#include <machine/bus.h>
-#include <machine/resource.h>
-#include <sys/rman.h>
-#include <sys/malloc.h>
-
-#include <dev/mpt/mpt_freebsd.h>
+#endif
#ifndef PCI_VENDOR_LSI
#define PCI_VENDOR_LSI 0x1000
@@ -69,10 +97,6 @@ __FBSDID("$FreeBSD$");
#define PCI_PRODUCT_LSI_FC929 0x0622
#endif
-#ifndef PCI_PRODUCT_LSI_FC929X
-#define PCI_PRODUCT_LSI_FC929X 0x0626
-#endif
-
#ifndef PCI_PRODUCT_LSI_1030
#define PCI_PRODUCT_LSI_1030 0x0030
#endif
@@ -83,64 +107,37 @@ __FBSDID("$FreeBSD$");
-#define MEM_MAP_REG 0x14
-#define MEM_MAP_SRAM 0x1C
+#define MPT_IO_BAR 0
+#define MPT_MEM_BAR 1
-static int mpt_probe(device_t);
-static int mpt_attach(device_t);
-static void mpt_free_bus_resources(mpt_softc_t *mpt);
-static int mpt_detach(device_t);
-static int mpt_shutdown(device_t);
-static int mpt_dma_mem_alloc(mpt_softc_t *mpt);
-static void mpt_dma_mem_free(mpt_softc_t *mpt);
-static void mpt_read_config_regs(mpt_softc_t *mpt);
+static int mpt_pci_probe(device_t);
+static int mpt_pci_attach(device_t);
+static void mpt_free_bus_resources(struct mpt_softc *mpt);
+static int mpt_pci_detach(device_t);
+static int mpt_pci_shutdown(device_t);
+static int mpt_dma_mem_alloc(struct mpt_softc *mpt);
+static void mpt_dma_mem_free(struct mpt_softc *mpt);
+static void mpt_read_config_regs(struct mpt_softc *mpt);
static void mpt_pci_intr(void *);
static device_method_t mpt_methods[] = {
/* Device interface */
- DEVMETHOD(device_probe, mpt_probe),
- DEVMETHOD(device_attach, mpt_attach),
- DEVMETHOD(device_detach, mpt_detach),
- DEVMETHOD(device_shutdown, mpt_shutdown),
+ DEVMETHOD(device_probe, mpt_pci_probe),
+ DEVMETHOD(device_attach, mpt_pci_attach),
+ DEVMETHOD(device_detach, mpt_pci_detach),
+ DEVMETHOD(device_shutdown, mpt_pci_shutdown),
{ 0, 0 }
};
static driver_t mpt_driver = {
- "mpt", mpt_methods, sizeof (mpt_softc_t)
+ "mpt", mpt_methods, sizeof(struct mpt_softc)
};
static devclass_t mpt_devclass;
DRIVER_MODULE(mpt, pci, mpt_driver, mpt_devclass, 0, 0);
MODULE_VERSION(mpt, 1);
-int
-mpt_intr(void *dummy)
-{
- int nrepl = 0;
- u_int32_t reply;
- mpt_softc_t *mpt = (mpt_softc_t *)dummy;
-
- if ((mpt_read(mpt, MPT_OFFSET_INTR_STATUS) & MPT_INTR_REPLY_READY) == 0)
- return (0);
- reply = mpt_pop_reply_queue(mpt);
- while (reply != MPT_REPLY_EMPTY) {
- nrepl++;
- if (mpt->verbose > 1) {
- if ((reply & MPT_CONTEXT_REPLY) != 0) {
- /* Address reply; IOC has something to say */
- mpt_print_reply(MPT_REPLY_PTOV(mpt, reply));
- } else {
- /* Context reply ; all went well */
- mpt_prt(mpt, "context %u reply OK", reply);
- }
- }
- mpt_done(mpt, reply);
- reply = mpt_pop_reply_queue(mpt);
- }
- return (nrepl != 0);
-}
-
static int
-mpt_probe(device_t dev)
+mpt_pci_probe(device_t dev)
{
char *desc;
@@ -160,9 +157,6 @@ mpt_probe(device_t dev)
case PCI_PRODUCT_LSI_FC929:
desc = "LSILogic FC929 FC Adapter";
break;
- case PCI_PRODUCT_LSI_FC929X:
- desc = "LSILogic FC929X FC Adapter";
- break;
case PCI_PRODUCT_LSI_1030:
desc = "LSILogic 1030 Ultra4 Adapter";
break;
@@ -171,12 +165,12 @@ mpt_probe(device_t dev)
}
device_set_desc(dev, desc);
- return (BUS_PROBE_DEFAULT);
+ return (0);
}
#ifdef RELENG_4
static void
-mpt_set_options(mpt_softc_t *mpt)
+mpt_set_options(struct mpt_softc *mpt)
{
int bitmap;
@@ -190,14 +184,14 @@ mpt_set_options(mpt_softc_t *mpt)
bitmap = 0;
if (getenv_int("mpt_debug", &bitmap)) {
if (bitmap & (1 << mpt->unit)) {
- mpt->verbose = 2;
+ mpt->verbose = MPT_PRT_DEBUG;
}
}
}
#else
static void
-mpt_set_options(mpt_softc_t *mpt)
+mpt_set_options(struct mpt_softc *mpt)
{
int tval;
@@ -216,18 +210,17 @@ mpt_set_options(mpt_softc_t *mpt)
static void
-mpt_link_peer(mpt_softc_t *mpt)
+mpt_link_peer(struct mpt_softc *mpt)
{
- mpt_softc_t *mpt2;
+ struct mpt_softc *mpt2;
- if (mpt->unit == 0) {
+ if (mpt->unit == 0)
return;
- }
/*
* XXX: depends on probe order
*/
- mpt2 = (mpt_softc_t *) devclass_get_softc(mpt_devclass, mpt->unit-1);
+ mpt2 = (struct mpt_softc *)devclass_get_softc(mpt_devclass,mpt->unit-1);
if (mpt2 == NULL) {
return;
@@ -240,27 +233,27 @@ mpt_link_peer(mpt_softc_t *mpt)
}
mpt->mpt2 = mpt2;
mpt2->mpt2 = mpt;
- if (mpt->verbose) {
- mpt_prt(mpt, "linking with peer (mpt%d)",
+ if (mpt->verbose >= MPT_PRT_DEBUG) {
+ mpt_prt(mpt, "linking with peer (mpt%d)\n",
device_get_unit(mpt2->dev));
}
}
static int
-mpt_attach(device_t dev)
+mpt_pci_attach(device_t dev)
{
- int iqd;
- u_int32_t data, cmd;
- mpt_softc_t *mpt;
+ struct mpt_softc *mpt;
+ int iqd;
+ uint32_t data, cmd;
/* Allocate the softc structure */
- mpt = (mpt_softc_t*) device_get_softc(dev);
+ mpt = (struct mpt_softc*)device_get_softc(dev);
if (mpt == NULL) {
device_printf(dev, "cannot allocate softc\n");
return (ENOMEM);
}
- bzero(mpt, sizeof (mpt_softc_t));
+ bzero(mpt, sizeof(struct mpt_softc));
switch ((pci_get_device(dev) & ~1)) {
case PCI_PRODUCT_LSI_FC909:
case PCI_PRODUCT_LSI_FC909A:
@@ -273,7 +266,11 @@ mpt_attach(device_t dev)
}
mpt->dev = dev;
mpt->unit = device_get_unit(dev);
+ mpt->raid_resync_rate = MPT_RAID_RESYNC_RATE_DEFAULT;
+ mpt->raid_mwce_setting = MPT_RAID_MWCE_DEFAULT;
+ mpt->raid_queue_depth = MPT_RAID_QUEUE_DEPTH_DEFAULT;
mpt_set_options(mpt);
+ mpt->verbose = MPT_PRT_INFO;
mpt->verbose += (bootverbose != 0)? 1 : 0;
/* Make sure memory access decoders are enabled */
@@ -298,7 +295,6 @@ mpt_attach(device_t dev)
data &= ~1;
pci_write_config(dev, PCIR_BIOS, data, 4);
-
/*
* Is this part a dual?
* If so, link with our partner (around yet)
@@ -308,29 +304,53 @@ mpt_attach(device_t dev)
mpt_link_peer(mpt);
}
- /* Set up the memory regions */
+ /*
+ * Set up register access. PIO mode is required for
+ * certain reset operations.
+ */
+ mpt->pci_pio_rid = PCIR_BAR(MPT_IO_BAR);
+ mpt->pci_pio_reg = bus_alloc_resource(dev, SYS_RES_IOPORT,
+ &mpt->pci_pio_rid, 0, ~0, 0, RF_ACTIVE);
+ if (mpt->pci_pio_reg == NULL) {
+ device_printf(dev, "unable to map registers in PIO mode\n");
+ goto bad;
+ }
+ mpt->pci_pio_st = rman_get_bustag(mpt->pci_pio_reg);
+ mpt->pci_pio_sh = rman_get_bushandle(mpt->pci_pio_reg);
+
/* Allocate kernel virtual memory for the 9x9's Mem0 region */
- mpt->pci_reg_id = MEM_MAP_REG;
+ mpt->pci_mem_rid = PCIR_BAR(MPT_MEM_BAR);
mpt->pci_reg = bus_alloc_resource(dev, SYS_RES_MEMORY,
- &mpt->pci_reg_id, 0, ~0, 0, RF_ACTIVE);
+ &mpt->pci_mem_rid, 0, ~0, 0, RF_ACTIVE);
if (mpt->pci_reg == NULL) {
- device_printf(dev, "unable to map any ports\n");
- goto bad;
+ device_printf(dev, "Unable to memory map registers.\n");
+ device_printf(dev, "Falling back to PIO mode.\n");
+ mpt->pci_st = mpt->pci_pio_st;
+ mpt->pci_sh = mpt->pci_pio_sh;
+ } else {
+ mpt->pci_st = rman_get_bustag(mpt->pci_reg);
+ mpt->pci_sh = rman_get_bushandle(mpt->pci_reg);
}
- mpt->pci_st = rman_get_bustag(mpt->pci_reg);
- mpt->pci_sh = rman_get_bushandle(mpt->pci_reg);
- /* Get the Physical Address */
- mpt->pci_pa = rman_get_start(mpt->pci_reg);
/* Get a handle to the interrupt */
iqd = 0;
+#if __FreeBSD_version < 500000
+ mpt->pci_irq = bus_alloc_resource(dev, SYS_RES_IRQ, &iqd, 0, ~0, 1,
+ RF_ACTIVE | RF_SHAREABLE);
+#else
mpt->pci_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &iqd,
RF_ACTIVE | RF_SHAREABLE);
+#endif
if (mpt->pci_irq == NULL) {
device_printf(dev, "could not allocate interrupt\n");
goto bad;
}
+ MPT_LOCK_SETUP(mpt);
+
+ /* Disable interrupts at the part */
+ mpt_disable_ints(mpt);
+
/* Register the interrupt handler */
if (bus_setup_intr(dev, mpt->pci_irq, MPT_IFLAGS, mpt_pci_intr,
mpt, &mpt->ih)) {
@@ -338,12 +358,8 @@ mpt_attach(device_t dev)
goto bad;
}
- MPT_LOCK_SETUP(mpt);
-
- /* Disable interrupts at the part */
- mpt_disable_ints(mpt);
-
/* Allocate dma memory */
+ /* XXX JGibbs - Should really be done based on IOCFacts. */
if (mpt_dma_mem_alloc(mpt)) {
device_printf(dev, "Could not allocate DMA memory\n");
goto bad;
@@ -364,18 +380,10 @@ mpt_attach(device_t dev)
/* Initialize the hardware */
if (mpt->disabled == 0) {
MPT_LOCK(mpt);
- if (mpt_init(mpt, MPT_DB_INIT_HOST) != 0) {
+ if (mpt_attach(mpt) != 0) {
MPT_UNLOCK(mpt);
goto bad;
}
-
- /*
- * Attach to CAM
- */
- MPTLOCK_2_CAMLOCK(mpt);
- mpt_cam_attach(mpt);
- CAMLOCK_2_MPTLOCK(mpt);
- MPT_UNLOCK(mpt);
}
return (0);
@@ -394,7 +402,7 @@ bad:
* Free bus resources
*/
static void
-mpt_free_bus_resources(mpt_softc_t *mpt)
+mpt_free_bus_resources(struct mpt_softc *mpt)
{
if (mpt->ih) {
bus_teardown_intr(mpt->dev, mpt->pci_irq, mpt->ih);
@@ -406,8 +414,13 @@ mpt_free_bus_resources(mpt_softc_t *mpt)
mpt->pci_irq = 0;
}
+ if (mpt->pci_pio_reg) {
+ bus_release_resource(mpt->dev, SYS_RES_IOPORT, mpt->pci_pio_rid,
+ mpt->pci_pio_reg);
+ mpt->pci_pio_reg = 0;
+ }
if (mpt->pci_reg) {
- bus_release_resource(mpt->dev, SYS_RES_MEMORY, mpt->pci_reg_id,
+ bus_release_resource(mpt->dev, SYS_RES_MEMORY, mpt->pci_mem_rid,
mpt->pci_reg);
mpt->pci_reg = 0;
}
@@ -419,19 +432,41 @@ mpt_free_bus_resources(mpt_softc_t *mpt)
* Disconnect ourselves from the system.
*/
static int
-mpt_detach(device_t dev)
+mpt_pci_detach(device_t dev)
{
- mpt_softc_t *mpt;
- mpt = (mpt_softc_t*) device_get_softc(dev);
+ struct mpt_softc *mpt;
- mpt_prt(mpt, "mpt_detach");
+ mpt = (struct mpt_softc*)device_get_softc(dev);
if (mpt) {
+ mpt_prt(mpt, "mpt_detach\n");

mpt_disable_ints(mpt);
- mpt_cam_detach(mpt);
- mpt_reset(mpt);
+ mpt_detach(mpt);
+ mpt_reset(mpt, /*reinit*/FALSE);
mpt_dma_mem_free(mpt);
mpt_free_bus_resources(mpt);
+ if (mpt->raid_volumes != NULL
+ && mpt->ioc_page2 != NULL) {
+ int i;
+
+ for (i = 0; i < mpt->ioc_page2->MaxVolumes; i++) {
+ struct mpt_raid_volume *mpt_vol;
+
+ mpt_vol = &mpt->raid_volumes[i];
+ if (mpt_vol->config_page)
+ free(mpt_vol->config_page, M_DEVBUF);
+ }
+ }
+ if (mpt->ioc_page2 != NULL)
+ free(mpt->ioc_page2, M_DEVBUF);
+ if (mpt->ioc_page3 != NULL)
+ free(mpt->ioc_page3, M_DEVBUF);
+ if (mpt->raid_volumes != NULL)
+ free(mpt->raid_volumes, M_DEVBUF);
+ if (mpt->raid_disks != NULL)
+ free(mpt->raid_disks, M_DEVBUF);
+ if (mpt->eh != NULL)
+ EVENTHANDLER_DEREGISTER(shutdown_final, mpt->eh);
}
return(0);
}
@@ -439,43 +474,27 @@ mpt_detach(device_t dev)
/*
* Disable the hardware
+ * XXX - Called too early by New Bus!!! ???
*/
static int
-mpt_shutdown(device_t dev)
+mpt_pci_shutdown(device_t dev)
{
- mpt_softc_t *mpt;
- mpt = (mpt_softc_t*) device_get_softc(dev);
+ struct mpt_softc *mpt;
- if (mpt) {
- mpt_reset(mpt);
- }
+ mpt = (struct mpt_softc *)device_get_softc(dev);
+ if (mpt)
+ return (mpt_shutdown(mpt));
return(0);
}
-
-struct imush {
- mpt_softc_t *mpt;
- int error;
- u_int32_t phys;
-};
-
-static void
-mpt_map_rquest(void *arg, bus_dma_segment_t *segs, int nseg, int error)
-{
- struct imush *imushp = (struct imush *) arg;
- imushp->error = error;
- imushp->phys = segs->ds_addr;
-}
-
-
static int
-mpt_dma_mem_alloc(mpt_softc_t *mpt)
+mpt_dma_mem_alloc(struct mpt_softc *mpt)
{
int i, error;
- u_char *vptr;
- u_int32_t pptr, end;
+ uint8_t *vptr;
+ uint32_t pptr, end;
size_t len;
- struct imush im;
+ struct mpt_map_info mi;
device_t dev = mpt->dev;
 /* Check if we already have allocated the reply memory */
@@ -483,17 +502,16 @@ mpt_dma_mem_alloc(mpt_softc_t *mpt)
return 0;
}
- len = sizeof (request_t *) * MPT_REQ_MEM_SIZE(mpt);
+ len = sizeof (request_t) * MPT_MAX_REQUESTS(mpt);
#ifdef RELENG_4
- mpt->request_pool = (request_t *) malloc(len, M_DEVBUF, M_WAITOK);
+ mpt->request_pool = (request_t *)malloc(len, M_DEVBUF, M_WAITOK);
if (mpt->request_pool == NULL) {
device_printf(dev, "cannot allocate request pool\n");
return (1);
}
bzero(mpt->request_pool, len);
#else
- mpt->request_pool = (request_t *)
- malloc(len, M_DEVBUF, M_WAITOK | M_ZERO);
+ mpt->request_pool = (request_t *)malloc(len, M_DEVBUF, M_WAITOK|M_ZERO);
if (mpt->request_pool == NULL) {
device_printf(dev, "cannot allocate request pool\n");
return (1);
@@ -501,24 +519,27 @@ mpt_dma_mem_alloc(mpt_softc_t *mpt)
#endif
/*
- * Create a dma tag for this device
+ * Create a parent dma tag for this device
*
- * Align at page boundaries, limit to 32-bit addressing
+ * Align at byte boundaries, limit to 32-bit addressing
* (The chip supports 64-bit addressing, but this driver doesn't)
*/
- if (bus_dma_tag_create(NULL, PAGE_SIZE, 0, BUS_SPACE_MAXADDR_32BIT,
- BUS_SPACE_MAXADDR, NULL, NULL, BUS_SPACE_MAXSIZE_32BIT,
- BUS_SPACE_MAXSIZE_32BIT, BUS_SPACE_UNRESTRICTED, 0,
- busdma_lock_mutex, &Giant, &mpt->parent_dmat) != 0) {
+ if (mpt_dma_tag_create(mpt, /*parent*/NULL, /*alignment*/1,
+ /*boundary*/0, /*lowaddr*/BUS_SPACE_MAXADDR_32BIT,
+ /*highaddr*/BUS_SPACE_MAXADDR, /*filter*/NULL, /*filterarg*/NULL,
+ /*maxsize*/BUS_SPACE_MAXSIZE_32BIT,
+ /*nsegments*/BUS_SPACE_UNRESTRICTED,
+ /*maxsegsz*/BUS_SPACE_MAXSIZE_32BIT, /*flags*/0,
+ &mpt->parent_dmat) != 0) {
device_printf(dev, "cannot create parent dma tag\n");
return (1);
}
/* Create a child tag for reply buffers */
- if (bus_dma_tag_create(mpt->parent_dmat, PAGE_SIZE,
+ if (mpt_dma_tag_create(mpt, mpt->parent_dmat, PAGE_SIZE,
0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
NULL, NULL, PAGE_SIZE, 1, BUS_SPACE_MAXSIZE_32BIT, 0,
- busdma_lock_mutex, &Giant, &mpt->reply_dmat) != 0) {
+ &mpt->reply_dmat) != 0) {
device_printf(dev, "cannot create a dma tag for replies\n");
return (1);
}
@@ -531,35 +552,35 @@ mpt_dma_mem_alloc(mpt_softc_t *mpt)
return (1);
}
- im.mpt = mpt;
- im.error = 0;
+ mi.mpt = mpt;
+ mi.error = 0;
/* Load and lock it into "bus space" */
bus_dmamap_load(mpt->reply_dmat, mpt->reply_dmap, mpt->reply,
- PAGE_SIZE, mpt_map_rquest, &im, 0);
+ PAGE_SIZE, mpt_map_rquest, &mi, 0);
- if (im.error) {
+ if (mi.error) {
device_printf(dev,
- "error %d loading dma map for DMA reply queue\n", im.error);
+ "error %d loading dma map for DMA reply queue\n", mi.error);
return (1);
}
- mpt->reply_phys = im.phys;
+ mpt->reply_phys = mi.phys;
/* Create a child tag for data buffers */
- if (bus_dma_tag_create(mpt->parent_dmat, PAGE_SIZE,
+ if (mpt_dma_tag_create(mpt, mpt->parent_dmat, 1,
0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
NULL, NULL, MAXBSIZE, MPT_SGL_MAX, BUS_SPACE_MAXSIZE_32BIT, 0,
- busdma_lock_mutex, &Giant, &mpt->buffer_dmat) != 0) {
+ &mpt->buffer_dmat) != 0) {
device_printf(dev,
"cannot create a dma tag for data buffers\n");
return (1);
}
/* Create a child tag for request buffers */
- if (bus_dma_tag_create(mpt->parent_dmat, PAGE_SIZE,
+ if (mpt_dma_tag_create(mpt, mpt->parent_dmat, PAGE_SIZE,
0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
NULL, NULL, MPT_REQ_MEM_SIZE(mpt), 1, BUS_SPACE_MAXSIZE_32BIT, 0,
- busdma_lock_mutex, &Giant, &mpt->request_dmat) != 0) {
+ &mpt->request_dmat) != 0) {
device_printf(dev, "cannot create a dma tag for requests\n");
return (1);
}
@@ -573,20 +594,20 @@ mpt_dma_mem_alloc(mpt_softc_t *mpt)
return (1);
}
- im.mpt = mpt;
- im.error = 0;
+ mi.mpt = mpt;
+ mi.error = 0;
/* Load and lock it into "bus space" */
bus_dmamap_load(mpt->request_dmat, mpt->request_dmap, mpt->request,
- MPT_REQ_MEM_SIZE(mpt), mpt_map_rquest, &im, 0);
+ MPT_REQ_MEM_SIZE(mpt), mpt_map_rquest, &mi, 0);
- if (im.error) {
+ if (mi.error) {
device_printf(dev,
"error %d loading dma map for DMA request queue\n",
- im.error);
+ mi.error);
return (1);
}
- mpt->request_phys = im.phys;
+ mpt->request_phys = mi.phys;
i = 0;
pptr = mpt->request_phys;
@@ -621,13 +642,13 @@ mpt_dma_mem_alloc(mpt_softc_t *mpt)
/* Deallocate memory that was allocated by mpt_dma_mem_alloc
*/
static void
-mpt_dma_mem_free(mpt_softc_t *mpt)
+mpt_dma_mem_free(struct mpt_softc *mpt)
{
int i;
/* Make sure we aren't double destroying */
if (mpt->reply_dmat == 0) {
- if (mpt->verbose)
+ if (mpt->verbose >= MPT_PRT_DEBUG)
device_printf(mpt->dev,"Already released dma memory\n");
return;
}
@@ -653,7 +674,7 @@ mpt_dma_mem_free(mpt_softc_t *mpt)
/* Reads modifiable (via PCI transactions) config registers */
static void
-mpt_read_config_regs(mpt_softc_t *mpt)
+mpt_read_config_regs(struct mpt_softc *mpt)
{
mpt->pci_cfg.Command = pci_read_config(mpt->dev, PCIR_COMMAND, 2);
mpt->pci_cfg.LatencyTimer_LineSize =
@@ -670,9 +691,9 @@ mpt_read_config_regs(mpt_softc_t *mpt)
/* Sets modifiable config registers */
void
-mpt_set_config_regs(mpt_softc_t *mpt)
+mpt_set_config_regs(struct mpt_softc *mpt)
{
- u_int32_t val;
+ uint32_t val;
#define MPT_CHECK(reg, offset, size) \
val = pci_read_config(mpt->dev, offset, size); \
@@ -682,7 +703,7 @@ mpt_set_config_regs(mpt_softc_t *mpt)
mpt->pci_cfg.reg, val); \
}
- if (mpt->verbose) {
+ if (mpt->verbose >= MPT_PRT_DEBUG) {
MPT_CHECK(Command, PCIR_COMMAND, 2);
MPT_CHECK(LatencyTimer_LineSize, PCIR_CACHELNSZ, 2);
MPT_CHECK(IO_BAR, PCIR_BAR(0), 4);
@@ -712,8 +733,10 @@ mpt_set_config_regs(mpt_softc_t *mpt)
static void
mpt_pci_intr(void *arg)
{
- mpt_softc_t *mpt = arg;
+ struct mpt_softc *mpt;
+
+ mpt = (struct mpt_softc *)arg;
MPT_LOCK(mpt);
- (void) mpt_intr(mpt);
+ mpt_intr(mpt);
MPT_UNLOCK(mpt);
}
diff --git a/sys/dev/mpt/mpt_raid.c b/sys/dev/mpt/mpt_raid.c
new file mode 100644
index 0000000..3c4a35c
--- /dev/null
+++ b/sys/dev/mpt/mpt_raid.c
@@ -0,0 +1,1674 @@
+/*-
+ * Routines for handling the integrated RAID features of LSI MPT Fusion adapters.
+ *
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2005 Justin T. Gibbs.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <dev/mpt/mpt.h>
+#include <dev/mpt/mpt_raid.h>
+
+#include "dev/mpt/mpilib/mpi_ioc.h" /* XXX Fix Event Handling!!! */
+#include "dev/mpt/mpilib/mpi_raid.h"
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_sim.h>
+#include <cam/cam_xpt_sim.h>
+
+#include <cam/cam_periph.h>
+
+#include <sys/callout.h>
+#include <sys/kthread.h>
+#include <sys/sysctl.h>
+
+#include <machine/stdarg.h>
+
+struct mpt_raid_action_result
+{
+ union {
+ MPI_RAID_VOL_INDICATOR indicator_struct;
+ uint32_t new_settings;
+ uint8_t phys_disk_num;
+ } action_data;
+ uint16_t action_status;
+};
+
+#define REQ_TO_RAID_ACTION_RESULT(req) ((struct mpt_raid_action_result *) \
+ (((MSG_RAID_ACTION_REQUEST *)(req->req_vbuf)) + 1))
+
+#define REQ_IOCSTATUS(req) ((req)->IOCStatus & MPI_IOCSTATUS_MASK)
+
+
+static mpt_probe_handler_t mpt_raid_probe;
+static mpt_attach_handler_t mpt_raid_attach;
+static mpt_event_handler_t mpt_raid_event;
+static mpt_shutdown_handler_t mpt_raid_shutdown;
+static mpt_reset_handler_t mpt_raid_ioc_reset;
+static mpt_detach_handler_t mpt_raid_detach;
+
+static struct mpt_personality mpt_raid_personality =
+{
+ .name = "mpt_raid",
+ .probe = mpt_raid_probe,
+ .attach = mpt_raid_attach,
+ .event = mpt_raid_event,
+ .reset = mpt_raid_ioc_reset,
+ .shutdown = mpt_raid_shutdown,
+ .detach = mpt_raid_detach,
+};
+
+DECLARE_MPT_PERSONALITY(mpt_raid, SI_ORDER_THIRD);
+MPT_PERSONALITY_DEPEND(mpt_raid, mpt_cam, 1, 1, 1);
+
+static mpt_reply_handler_t mpt_raid_reply_handler;
+static int mpt_raid_reply_frame_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame);
+static int mpt_spawn_raid_thread(struct mpt_softc *mpt);
+static void mpt_terminate_raid_thread(struct mpt_softc *mpt);
+static void mpt_raid_thread(void *arg);
+static timeout_t mpt_raid_timer;
+static timeout_t mpt_raid_quiesce_timeout;
+#if UNUSED
+static void mpt_enable_vol(struct mpt_softc *mpt,
+ struct mpt_raid_volume *mpt_vol, int enable);
+#endif
+static void mpt_verify_mwce(struct mpt_softc *mpt,
+ struct mpt_raid_volume *mpt_vol);
+static void mpt_adjust_queue_depth(struct mpt_softc *mpt,
+ struct mpt_raid_volume *mpt_vol,
+ struct cam_path *path);
+static void mpt_raid_sysctl_attach(struct mpt_softc *mpt);
+
+static uint32_t raid_handler_id = MPT_HANDLER_ID_NONE;
+
+const char *
+mpt_vol_type(struct mpt_raid_volume *vol)
+{
+ switch (vol->config_page->VolumeType) {
+ case MPI_RAID_VOL_TYPE_IS:
+ return ("RAID-0");
+ case MPI_RAID_VOL_TYPE_IME:
+ return ("RAID-1E");
+ case MPI_RAID_VOL_TYPE_IM:
+ return ("RAID-1");
+ default:
+ return ("Unknown");
+ }
+}
+
+const char *
+mpt_vol_state(struct mpt_raid_volume *vol)
+{
+ switch (vol->config_page->VolumeStatus.State) {
+ case MPI_RAIDVOL0_STATUS_STATE_OPTIMAL:
+ return ("Optimal");
+ case MPI_RAIDVOL0_STATUS_STATE_DEGRADED:
+ return ("Degraded");
+ case MPI_RAIDVOL0_STATUS_STATE_FAILED:
+ return ("Failed");
+ default:
+ return ("Unknown");
+ }
+}
+
+const char *
+mpt_disk_state(struct mpt_raid_disk *disk)
+{
+ switch (disk->config_page.PhysDiskStatus.State) {
+ case MPI_PHYSDISK0_STATUS_ONLINE:
+ return ("Online");
+ case MPI_PHYSDISK0_STATUS_MISSING:
+ return ("Missing");
+ case MPI_PHYSDISK0_STATUS_NOT_COMPATIBLE:
+ return ("Incompatible");
+ case MPI_PHYSDISK0_STATUS_FAILED:
+ return ("Failed");
+ case MPI_PHYSDISK0_STATUS_INITIALIZING:
+ return ("Initializing");
+ case MPI_PHYSDISK0_STATUS_OFFLINE_REQUESTED:
+ return ("Offline Requested");
+ case MPI_PHYSDISK0_STATUS_FAILED_REQUESTED:
+ return ("Failed per Host Request");
+ case MPI_PHYSDISK0_STATUS_OTHER_OFFLINE:
+ return ("Offline");
+ default:
+ return ("Unknown");
+ }
+}
+
+void
+mpt_vol_prt(struct mpt_softc *mpt, struct mpt_raid_volume *vol,
+ const char *fmt, ...)
+{
+ va_list ap;
+
+ printf("%s:vol%d(%s:%d:%d): ", device_get_nameunit(mpt->dev),
+ (u_int)(vol - mpt->raid_volumes), device_get_nameunit(mpt->dev),
+ vol->config_page->VolumeBus, vol->config_page->VolumeID);
+ va_start(ap, fmt);
+ vprintf(fmt, ap);
+ va_end(ap);
+}
+
+void
+mpt_disk_prt(struct mpt_softc *mpt, struct mpt_raid_disk *disk,
+ const char *fmt, ...)
+{
+ va_list ap;
+
+ if (disk->volume != NULL) {
+ printf("(%s:vol%d:%d): ",
+ device_get_nameunit(mpt->dev),
+ disk->volume->config_page->VolumeID,
+ disk->member_number);
+ } else {
+ printf("(%s:%d:%d): ", device_get_nameunit(mpt->dev),
+ disk->config_page.PhysDiskBus,
+ disk->config_page.PhysDiskID);
+ }
+ va_start(ap, fmt);
+ vprintf(fmt, ap);
+ va_end(ap);
+}
+
+static void
+mpt_raid_async(void *callback_arg, u_int32_t code,
+ struct cam_path *path, void *arg)
+{
+ struct mpt_softc *mpt;
+
+ mpt = (struct mpt_softc*)callback_arg;
+ switch (code) {
+ case AC_FOUND_DEVICE:
+ {
+ struct ccb_getdev *cgd;
+ struct mpt_raid_volume *mpt_vol;
+
+ cgd = (struct ccb_getdev *)arg;
+ if (cgd == NULL)
+ break;
+
+ mpt_lprt(mpt, MPT_PRT_DEBUG, " Callback for %d\n",
+ cgd->ccb_h.target_id);
+
+ RAID_VOL_FOREACH(mpt, mpt_vol) {
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+
+ if (mpt_vol->config_page->VolumeID
+ == cgd->ccb_h.target_id) {
+ mpt_adjust_queue_depth(mpt, mpt_vol, path);
+ break;
+ }
+ }
+ }
+ default:
+ break;
+ }
+}
+
+int
+mpt_raid_probe(struct mpt_softc *mpt)
+{
+ if (mpt->ioc_page2 == NULL
+ || mpt->ioc_page2->MaxPhysDisks == 0)
+ return (ENODEV);
+ return (0);
+}
+
+int
+mpt_raid_attach(struct mpt_softc *mpt)
+{
+ struct ccb_setasync csa;
+ mpt_handler_t handler;
+ int error;
+
+ mpt_callout_init(&mpt->raid_timer);
+
+ handler.reply_handler = mpt_raid_reply_handler;
+ error = mpt_register_handler(mpt, MPT_HANDLER_REPLY, handler,
+ &raid_handler_id);
+ if (error != 0)
+ goto cleanup;
+
+ error = mpt_spawn_raid_thread(mpt);
+ if (error != 0) {
+ mpt_prt(mpt, "Unable to spawn RAID thread!\n");
+ goto cleanup;
+ }
+
+ xpt_setup_ccb(&csa.ccb_h, mpt->path, /*priority*/5);
+ csa.ccb_h.func_code = XPT_SASYNC_CB;
+ csa.event_enable = AC_FOUND_DEVICE;
+ csa.callback = mpt_raid_async;
+ csa.callback_arg = mpt;
+ xpt_action((union ccb *)&csa);
+ if (csa.ccb_h.status != CAM_REQ_CMP) {
+ mpt_prt(mpt, "mpt_raid_attach: Unable to register "
+ "CAM async handler.\n");
+ }
+
+ mpt_raid_sysctl_attach(mpt);
+ return (0);
+cleanup:
+ mpt_raid_detach(mpt);
+ return (error);
+}
+
+void
+mpt_raid_detach(struct mpt_softc *mpt)
+{
+ struct ccb_setasync csa;
+ mpt_handler_t handler;
+
+ callout_stop(&mpt->raid_timer);
+ mpt_terminate_raid_thread(mpt);
+
+ handler.reply_handler = mpt_raid_reply_handler;
+ mpt_deregister_handler(mpt, MPT_HANDLER_REPLY, handler,
+ raid_handler_id);
+ xpt_setup_ccb(&csa.ccb_h, mpt->path, /*priority*/5);
+ csa.ccb_h.func_code = XPT_SASYNC_CB;
+ csa.event_enable = 0;
+ csa.callback = mpt_raid_async;
+ csa.callback_arg = mpt;
+ xpt_action((union ccb *)&csa);
+}
+
+static void
+mpt_raid_ioc_reset(struct mpt_softc *mpt, int type)
+{
+ /* Nothing to do yet. */
+}
+
+static const char *raid_event_txt[] =
+{
+ "Volume Created",
+ "Volume Deleted",
+ "Volume Settings Changed",
+ "Volume Status Changed",
+ "Volume Physical Disk Membership Changed",
+ "Physical Disk Created",
+ "Physical Disk Deleted",
+ "Physical Disk Settings Changed",
+ "Physical Disk Status Changed",
+ "Domain Validation Required",
+ "SMART Data Received",
+ "Replace Action Started",
+};
+
+static int
+mpt_raid_event(struct mpt_softc *mpt, request_t *req,
+ MSG_EVENT_NOTIFY_REPLY *msg)
+{
+ EVENT_DATA_RAID *raid_event;
+ struct mpt_raid_volume *mpt_vol;
+ struct mpt_raid_disk *mpt_disk;
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ int i;
+ int print_event;
+
+ if (msg->Event != MPI_EVENT_INTEGRATED_RAID)
+ return (/*handled*/0);
+
+ raid_event = (EVENT_DATA_RAID *)&msg->Data;
+
+ mpt_vol = NULL;
+ vol_pg = NULL;
+ if (mpt->raid_volumes != NULL && mpt->ioc_page2 != NULL) {
+ for (i = 0; i < mpt->ioc_page2->MaxVolumes; i++) {
+ mpt_vol = &mpt->raid_volumes[i];
+ vol_pg = mpt_vol->config_page;
+
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+
+ if (vol_pg->VolumeID == raid_event->VolumeID
+ && vol_pg->VolumeBus == raid_event->VolumeBus)
+ break;
+ }
+ if (i >= mpt->ioc_page2->MaxVolumes) {
+ mpt_vol = NULL;
+ vol_pg = NULL;
+ }
+ }
+
+ mpt_disk = NULL;
+ if (raid_event->PhysDiskNum != 0xFF
+ && mpt->raid_disks != NULL) {
+ mpt_disk = mpt->raid_disks
+ + raid_event->PhysDiskNum;
+ if ((mpt_disk->flags & MPT_RDF_ACTIVE) == 0)
+ mpt_disk = NULL;
+ }
+
+ print_event = 1;
+ switch(raid_event->ReasonCode) {
+ case MPI_EVENT_RAID_RC_VOLUME_CREATED:
+ case MPI_EVENT_RAID_RC_VOLUME_DELETED:
+ break;
+ case MPI_EVENT_RAID_RC_VOLUME_STATUS_CHANGED:
+ if (mpt_vol != NULL) {
+ if ((mpt_vol->flags & MPT_RVF_UP2DATE) != 0) {
+ mpt_vol->flags &= ~MPT_RVF_UP2DATE;
+ } else {
+ /*
+ * Coalesce status messages into one
+ * per background run of our RAID thread.
+ * This removes "spurious" status messages
+ * from our output.
+ */
+ print_event = 0;
+ }
+ }
+ break;
+ case MPI_EVENT_RAID_RC_VOLUME_SETTINGS_CHANGED:
+ case MPI_EVENT_RAID_RC_VOLUME_PHYSDISK_CHANGED:
+ mpt->raid_rescan++;
+ if (mpt_vol != NULL)
+ mpt_vol->flags &= ~(MPT_RVF_UP2DATE|MPT_RVF_ANNOUNCED);
+ break;
+ case MPI_EVENT_RAID_RC_PHYSDISK_CREATED:
+ case MPI_EVENT_RAID_RC_PHYSDISK_DELETED:
+ mpt->raid_rescan++;
+ break;
+ case MPI_EVENT_RAID_RC_PHYSDISK_SETTINGS_CHANGED:
+ case MPI_EVENT_RAID_RC_PHYSDISK_STATUS_CHANGED:
+ mpt->raid_rescan++;
+ if (mpt_disk != NULL)
+ mpt_disk->flags &= ~MPT_RDF_UP2DATE;
+ break;
+ case MPI_EVENT_RAID_RC_DOMAIN_VAL_NEEDED:
+ mpt->raid_rescan++;
+ break;
+ case MPI_EVENT_RAID_RC_SMART_DATA:
+ case MPI_EVENT_RAID_RC_REPLACE_ACTION_STARTED:
+ break;
+ }
+
+ if (print_event) {
+ if (mpt_disk != NULL) {
+ mpt_disk_prt(mpt, mpt_disk, "");
+ } else if (mpt_vol != NULL) {
+ mpt_vol_prt(mpt, mpt_vol, "");
+ } else {
+ mpt_prt(mpt, "Volume(%d:%d", raid_event->VolumeBus,
+ raid_event->VolumeID);
+
+ if (raid_event->PhysDiskNum != 0xFF)
+ mpt_prtc(mpt, ":%d): ",
+ raid_event->PhysDiskNum);
+ else
+ mpt_prtc(mpt, "): ");
+ }
+
+ if (raid_event->ReasonCode >= NUM_ELEMENTS(raid_event_txt))
+ mpt_prtc(mpt, "Unhandled RaidEvent %#x\n",
+ raid_event->ReasonCode);
+ else
+ mpt_prtc(mpt, "%s\n",
+ raid_event_txt[raid_event->ReasonCode]);
+ }
+
+ if (raid_event->ReasonCode == MPI_EVENT_RAID_RC_SMART_DATA) {
+ /* XXX Use CAM's print sense for this... */
+ if (mpt_disk != NULL)
+ mpt_disk_prt(mpt, mpt_disk, "");
+ else
+ mpt_prt(mpt, "Volume(%d:%d:%d): ", raid_event->VolumeBus,
+ raid_event->VolumeID, raid_event->PhysDiskNum);
+ mpt_prtc(mpt, "ASC 0x%x, ASCQ 0x%x\n",
+ raid_event->ASC, raid_event->ASCQ);
+ }
+
+ mpt_raid_wakeup(mpt);
+ return (/*handled*/1);
+}
+
+static void
+mpt_raid_shutdown(struct mpt_softc *mpt)
+{
+ struct mpt_raid_volume *mpt_vol;
+
+ if (mpt->raid_mwce_setting != MPT_RAID_MWCE_REBUILD_ONLY)
+ return;
+
+ mpt->raid_mwce_setting = MPT_RAID_MWCE_OFF;
+ RAID_VOL_FOREACH(mpt, mpt_vol) {
+ mpt_verify_mwce(mpt, mpt_vol);
+ }
+}
+
+static int
+mpt_raid_reply_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ int free_req;
+
+ if (req == NULL)
+ return (/*free_reply*/TRUE);
+
+ free_req = TRUE;
+ if (reply_frame != NULL)
+ free_req = mpt_raid_reply_frame_handler(mpt, req, reply_frame);
+#if NOTYET
+ else if (req->ccb != NULL) {
+ /* Complete Quiesce CCB with error... */
+ }
+#endif
+
+ req->state &= ~REQ_STATE_QUEUED;
+ req->state |= REQ_STATE_DONE;
+ TAILQ_REMOVE(&mpt->request_pending_list, req, links);
+
+ if ((req->state & REQ_STATE_NEED_WAKEUP) != 0) {
+ wakeup(req);
+ } else if (free_req) {
+ mpt_free_request(mpt, req);
+ }
+
+ return (/*free_reply*/TRUE);
+}
+
+/*
+ * Parse additional completion information in the reply
+ * frame for RAID I/O requests.
+ */
+static int
+mpt_raid_reply_frame_handler(struct mpt_softc *mpt, request_t *req,
+ MSG_DEFAULT_REPLY *reply_frame)
+{
+ MSG_RAID_ACTION_REPLY *reply;
+ struct mpt_raid_action_result *action_result;
+ MSG_RAID_ACTION_REQUEST *rap;
+
+ reply = (MSG_RAID_ACTION_REPLY *)reply_frame;
+ req->IOCStatus = le16toh(reply->IOCStatus);
+ rap = (MSG_RAID_ACTION_REQUEST *)req->req_vbuf;
+
+ switch (rap->Action) {
+ case MPI_RAID_ACTION_QUIESCE_PHYS_IO:
+ /*
+ * Parse result, call mpt_start with ccb,
+ * release device queue.
+ * COWWWWW
+ */
+ break;
+ case MPI_RAID_ACTION_ENABLE_PHYS_IO:
+ /*
+ * Need additional state for transition to enabled to
+ * protect against attempts to disable??
+ */
+ break;
+ default:
+ action_result = REQ_TO_RAID_ACTION_RESULT(req);
+ memcpy(&action_result->action_data, &reply->ActionData,
+ sizeof(action_result->action_data));
+ action_result->action_status = reply->ActionStatus;
+ break;
+ }
+
+ return (/*Free Request*/TRUE);
+}
+
+/*
+ * Utility routine to perform a RAID action command.
+ */
+int
+mpt_issue_raid_req(struct mpt_softc *mpt, struct mpt_raid_volume *vol,
+ struct mpt_raid_disk *disk, request_t *req, u_int Action,
+ uint32_t ActionDataWord, bus_addr_t addr, bus_size_t len,
+ int write, int wait)
+{
+ MSG_RAID_ACTION_REQUEST *rap;
+ SGE_SIMPLE32 *se;
+
+ rap = req->req_vbuf;
+ memset(rap, 0, sizeof *rap);
+ rap->Action = Action;
+ rap->ActionDataWord = ActionDataWord;
+ rap->Function = MPI_FUNCTION_RAID_ACTION;
+ rap->VolumeID = vol->config_page->VolumeID;
+ rap->VolumeBus = vol->config_page->VolumeBus;
+ if (disk != NULL)
+ rap->PhysDiskNum = disk->config_page.PhysDiskNum;
+ else
+ rap->PhysDiskNum = 0xFF;
+ se = (SGE_SIMPLE32 *)&rap->ActionDataSGE;
+ se->Address = addr;
+ MPI_pSGE_SET_LENGTH(se, len);
+ MPI_pSGE_SET_FLAGS(se, (MPI_SGE_FLAGS_SIMPLE_ELEMENT |
+ MPI_SGE_FLAGS_LAST_ELEMENT | MPI_SGE_FLAGS_END_OF_BUFFER |
+ MPI_SGE_FLAGS_END_OF_LIST |
+ (write ? MPI_SGE_FLAGS_HOST_TO_IOC : MPI_SGE_FLAGS_IOC_TO_HOST)));
+ rap->MsgContext = htole32(req->index | raid_handler_id);
+
+ mpt_check_doorbell(mpt);
+ mpt_send_cmd(mpt, req);
+
+ if (wait) {
+ return (mpt_wait_req(mpt, req, REQ_STATE_DONE, REQ_STATE_DONE,
+ /*sleep_ok*/FALSE, /*time_ms*/2000));
+ } else {
+ return (0);
+ }
+}
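A note on the parenthesized conditional in the flags expression above: `|` binds tighter than `?:`, so without parentheses the entire OR-chain becomes the ternary's condition and the accumulated flags are discarded. A minimal user-space sketch (flag values here are assumed stand-ins, not the real MPI constants):

```c
#include <assert.h>

/* Illustrative flag values only; not the MPI SGE definitions. */
#define FLAG_SIMPLE  0x10
#define FLAG_LAST    0x80
#define DIR_WRITE    0x04
#define DIR_READ     0x00

/* Buggy: '|' binds tighter than '?:', so the whole OR-chain is
 * evaluated as the condition and every flag bit is lost. */
static unsigned
combine_buggy(int write)
{
	return (FLAG_SIMPLE | FLAG_LAST | write ? DIR_WRITE : DIR_READ);
}

/* Fixed: parenthesize the conditional so it contributes one operand
 * to the OR-chain, as mpt_issue_raid_req() now does. */
static unsigned
combine_fixed(int write)
{
	return (FLAG_SIMPLE | FLAG_LAST |
	    (write ? DIR_WRITE : DIR_READ));
}
```

Even a read request gets the write direction from the buggy form, because the nonzero OR-chain always selects the first arm of the ternary.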
+
+/*************************** RAID Status Monitoring ***************************/
+static int
+mpt_spawn_raid_thread(struct mpt_softc *mpt)
+{
+ int error;
+
+ /*
+ * Freeze out any CAM transactions until our thread
+ * is able to run at least once. We need to update
+ * our RAID pages before accepting I/O or we may
+ * reject I/O to an ID we later determine is for a
+ * hidden physdisk.
+ */
+ xpt_freeze_simq(mpt->phydisk_sim, 1);
+ error = mpt_kthread_create(mpt_raid_thread, mpt,
+ &mpt->raid_thread, /*flags*/0, /*altstack*/0,
+ "mpt_raid%d", mpt->unit);
+ if (error != 0)
+ xpt_release_simq(mpt->phydisk_sim, /*run_queue*/FALSE);
+ return (error);
+}
+
+/*
+ * Lock is not held on entry.
+ */
+static void
+mpt_terminate_raid_thread(struct mpt_softc *mpt)
+{
+
+ MPT_LOCK(mpt);
+ if (mpt->raid_thread == NULL) {
+ MPT_UNLOCK(mpt);
+ return;
+ }
+ mpt->shutdwn_raid = 1;
+ wakeup(mpt->raid_volumes);
+ /*
+ * Sleep on a slightly different location
+ * for this interlock just for added safety.
+ */
+ mpt_sleep(mpt, &mpt->raid_thread, PUSER, "thtrm", 0);
+ MPT_UNLOCK(mpt);
+}
+
+static void
+mpt_cam_rescan_callback(struct cam_periph *periph, union ccb *ccb)
+{
+ xpt_free_path(ccb->ccb_h.path);
+ free(ccb, M_DEVBUF);
+}
+
+static void
+mpt_raid_thread(void *arg)
+{
+ struct mpt_softc *mpt;
+ int firstrun;
+
+#if __FreeBSD_version >= 500000
+ mtx_lock(&Giant);
+#endif
+ mpt = (struct mpt_softc *)arg;
+ firstrun = 1;
+ MPT_LOCK(mpt);
+ while (mpt->shutdwn_raid == 0) {
+
+ if (mpt->raid_wakeup == 0) {
+ mpt_sleep(mpt, &mpt->raid_volumes, PUSER, "idle", 0);
+ continue;
+ }
+
+ mpt->raid_wakeup = 0;
+
+ mpt_refresh_raid_data(mpt);
+
+ /*
+ * Now that we have our first snapshot of RAID data,
+ * allow CAM to access our physical disk bus.
+ */
+ if (firstrun) {
+ firstrun = 0;
+ xpt_release_simq(mpt->phydisk_sim, /*run_queue*/TRUE);
+ }
+
+ if (mpt->raid_rescan != 0) {
+ union ccb *ccb;
+ struct cam_path *path;
+ int error;
+
+ mpt->raid_rescan = 0;
+
+ ccb = malloc(sizeof(*ccb), M_DEVBUF, M_WAITOK);
+ error = xpt_create_path(&path, xpt_periph,
+ cam_sim_path(mpt->phydisk_sim),
+ CAM_TARGET_WILDCARD,
+ CAM_LUN_WILDCARD);
+ if (error != CAM_REQ_CMP) {
+ free(ccb, M_DEVBUF);
+ mpt_prt(mpt, "Unable to rescan RAID Bus!\n");
+ } else {
+ xpt_setup_ccb(&ccb->ccb_h, path, /*priority*/5);
+ ccb->ccb_h.func_code = XPT_SCAN_BUS;
+ ccb->ccb_h.cbfcnp = mpt_cam_rescan_callback;
+ ccb->crcn.flags = CAM_FLAG_NONE;
+ xpt_action(ccb);
+ }
+ }
+ }
+ mpt->raid_thread = NULL;
+ wakeup(&mpt->raid_thread);
+ MPT_UNLOCK(mpt);
+#if __FreeBSD_version >= 500000
+ mtx_unlock(&Giant);
+#endif
+ kthread_exit(0);
+}
+
+cam_status
+mpt_raid_quiesce_disk(struct mpt_softc *mpt, struct mpt_raid_disk *mpt_disk,
+ request_t *req)
+{
+ union ccb *ccb;
+
+ ccb = req->ccb;
+ if ((mpt_disk->flags & MPT_RDF_QUIESCED) != 0)
+ return (CAM_REQ_CMP);
+
+ if ((mpt_disk->flags & MPT_RDF_QUIESCING) == 0) {
+ int rv;
+
+ mpt_disk->flags |= MPT_RDF_QUIESCING;
+ xpt_freeze_devq(ccb->ccb_h.path, 1);
+
+ rv = mpt_issue_raid_req(mpt, mpt_disk->volume, mpt_disk, req,
+ MPI_RAID_ACTION_QUIESCE_PHYS_IO,
+ /*ActionData*/0, /*addr*/0,
+ /*len*/0, /*write*/FALSE,
+ /*wait*/FALSE);
+ if (rv != 0)
+ return (CAM_REQ_CMP_ERR);
+
+ ccb->ccb_h.timeout_ch =
+ timeout(mpt_raid_quiesce_timeout, (caddr_t)ccb, 5 * hz);
+#if 0
+ if (rv == ETIMEDOUT) {
+ mpt_disk_prt(mpt, mpt_disk, "mpt_raid_quiesce_disk: "
+ "Quiece Timed-out\n");
+ xpt_release_devq(ccb->ccb_h.path, 1, /*run*/0);
+ return (CAM_REQ_CMP_ERR);
+ }
+
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv != 0
+ || REQ_IOCSTATUS(req) != MPI_IOCSTATUS_SUCCESS
+ || (ar->action_status != MPI_RAID_ACTION_ASTATUS_SUCCESS)) {
+ mpt_disk_prt(mpt, mpt_disk, "Quiece Failed"
+ "%d:%x:%x\n", rv, req->IOCStatus,
+ ar->action_status);
+ xpt_release_devq(ccb->ccb_h.path, 1, /*run*/0);
+ return (CAM_REQ_CMP_ERR);
+ }
+#endif
+ return (CAM_REQ_INPROG);
+ }
+ return (CAM_REQUEUE_REQ);
+}
+
+/* XXX Ignores that there may be multiple busses/IOCs involved. */
+cam_status
+mpt_map_physdisk(struct mpt_softc *mpt, union ccb *ccb, u_int *tgt)
+{
+ struct mpt_raid_disk *mpt_disk;
+
+ mpt_disk = mpt->raid_disks + ccb->ccb_h.target_id;
+ if (ccb->ccb_h.target_id < mpt->raid_max_disks
+ && (mpt_disk->flags & MPT_RDF_ACTIVE) != 0) {
+
+ *tgt = mpt_disk->config_page.PhysDiskID;
+ return (0);
+ }
+ mpt_lprt(mpt, MPT_PRT_DEBUG, "mpt_map_physdisk(%d) - Not Active\n",
+ ccb->ccb_h.target_id);
+ return (-1);
+}
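The pass-thru lookup above can be sketched outside the kernel: translate a CAM target id on the RAID bus into the member disk's real SCSI id, failing when the slot is out of range or inactive. Structure and names below are simplified stand-ins for the driver's, not its actual types:

```c
#include <assert.h>

#define RDF_ACTIVE 0x01		/* stand-in for MPT_RDF_ACTIVE */
#define MAX_DISKS  8		/* stand-in for mpt->raid_max_disks */

struct pdisk {
	unsigned flags;		/* RDF_ACTIVE when the slot is populated */
	int	 phys_id;	/* real SCSI id of the member disk */
};

/* Map a target id on the pass-thru bus to the physical disk id.
 * Returns 0 on success, -1 when the slot is invalid, mirroring the
 * cam_status convention used by mpt_map_physdisk(). */
static int
map_physdisk(const struct pdisk *disks, unsigned target, int *tgt)
{
	if (target < MAX_DISKS && (disks[target].flags & RDF_ACTIVE) != 0) {
		*tgt = disks[target].phys_id;
		return (0);
	}
	return (-1);
}
```

As in the driver, the range check short-circuits before the flags word is examined, so an out-of-range target never dereferences past the array.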
+
+#if UNUSED
+static void
+mpt_enable_vol(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol,
+ int enable)
+{
+ request_t *req;
+ struct mpt_raid_action_result *ar;
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ int enabled;
+ int rv;
+
+ vol_pg = mpt_vol->config_page;
+ enabled = vol_pg->VolumeStatus.Flags & MPI_RAIDVOL0_STATUS_FLAG_ENABLED;
+
+ /*
+ * If the setting matches the configuration,
+ * there is nothing to do.
+ */
+ if ((enabled && enable)
+ || (!enabled && !enable))
+ return;
+
+ req = mpt_get_request(mpt, /*sleep_ok*/TRUE);
+ if (req == NULL) {
+ mpt_vol_prt(mpt, mpt_vol,
+ "mpt_enable_vol: Get request failed!\n");
+ return;
+ }
+
+ rv = mpt_issue_raid_req(mpt, mpt_vol, /*disk*/NULL, req,
+ enable ? MPI_RAID_ACTION_ENABLE_VOLUME
+ : MPI_RAID_ACTION_DISABLE_VOLUME,
+ /*data*/0, /*addr*/0, /*len*/0,
+ /*write*/FALSE, /*wait*/TRUE);
+ if (rv == ETIMEDOUT) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_enable_vol: "
+ "%s Volume Timed-out\n",
+ enable ? "Enable" : "Disable");
+ return;
+ }
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv != 0
+ || REQ_IOCSTATUS(req) != MPI_IOCSTATUS_SUCCESS
+ || (ar->action_status != MPI_RAID_ACTION_ASTATUS_SUCCESS)) {
+ mpt_vol_prt(mpt, mpt_vol, "%s Volume Failed: %d:%x:%x\n",
+ enable ? "Enable" : "Disable",
+ rv, req->IOCStatus, ar->action_status);
+ }
+
+ mpt_free_request(mpt, req);
+}
+#endif
+
+static void
+mpt_verify_mwce(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol)
+{
+ request_t *req;
+ struct mpt_raid_action_result *ar;
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ uint32_t data;
+ int rv;
+ int resyncing;
+ int mwce;
+
+ vol_pg = mpt_vol->config_page;
+ resyncing = vol_pg->VolumeStatus.Flags
+ & MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS;
+ mwce = vol_pg->VolumeSettings.Settings
+ & MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE;
+
+ /*
+ * If the setting matches the configuration,
+ * there is nothing to do.
+ */
+ switch (mpt->raid_mwce_setting) {
+ case MPT_RAID_MWCE_REBUILD_ONLY:
+ if ((resyncing && mwce)
+ || (!resyncing && !mwce))
+ return;
+
+ mpt_vol->flags ^= MPT_RVF_WCE_CHANGED;
+ if ((mpt_vol->flags & MPT_RVF_WCE_CHANGED) == 0) {
+ /*
+ * Wait one more status update to see if
+ * resyncing gets enabled. It gets disabled
+ * temporarily when WCE is changed.
+ */
+ return;
+ }
+ break;
+ case MPT_RAID_MWCE_ON:
+ if (mwce)
+ return;
+ break;
+ case MPT_RAID_MWCE_OFF:
+ if (!mwce)
+ return;
+ break;
+ case MPT_RAID_MWCE_NC:
+ return;
+ }
+
+ req = mpt_get_request(mpt, /*sleep_ok*/TRUE);
+ if (req == NULL) {
+ mpt_vol_prt(mpt, mpt_vol,
+ "mpt_verify_mwce: Get request failed!\n");
+ return;
+ }
+
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE;
+ memcpy(&data, &vol_pg->VolumeSettings, sizeof(data));
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE;
+ rv = mpt_issue_raid_req(mpt, mpt_vol, /*disk*/NULL, req,
+ MPI_RAID_ACTION_CHANGE_VOLUME_SETTINGS,
+ data, /*addr*/0, /*len*/0,
+ /*write*/FALSE, /*wait*/TRUE);
+ if (rv == ETIMEDOUT) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_verify_mwce: "
+ "Write Cache Enable Timed-out\n");
+ return;
+ }
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv != 0
+ || REQ_IOCSTATUS(req) != MPI_IOCSTATUS_SUCCESS
+ || (ar->action_status != MPI_RAID_ACTION_ASTATUS_SUCCESS)) {
+ mpt_vol_prt(mpt, mpt_vol, "Write Cache Enable Failed: "
+ "%d:%x:%x\n", rv, req->IOCStatus,
+ ar->action_status);
+ } else {
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE;
+ }
+
+ mpt_free_request(mpt, req);
+}
+
+static void
+mpt_verify_resync_rate(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol)
+{
+ request_t *req;
+ struct mpt_raid_action_result *ar;
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ u_int prio;
+ int rv;
+
+ vol_pg = mpt_vol->config_page;
+
+ if (mpt->raid_resync_rate == MPT_RAID_RESYNC_RATE_NC)
+ return;
+
+ /*
+ * If the current RAID resync rate does not
+ * match our configured rate, update it.
+ */
+ prio = vol_pg->VolumeSettings.Settings
+ & MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC;
+ if (vol_pg->ResyncRate != 0
+ && vol_pg->ResyncRate != mpt->raid_resync_rate) {
+
+ req = mpt_get_request(mpt, /*sleep_ok*/TRUE);
+ if (req == NULL) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_verify_resync_rate: "
+ "Get request failed!\n");
+ return;
+ }
+
+ rv = mpt_issue_raid_req(mpt, mpt_vol, /*disk*/NULL, req,
+ MPI_RAID_ACTION_SET_RESYNC_RATE,
+ mpt->raid_resync_rate, /*addr*/0,
+ /*len*/0, /*write*/FALSE, /*wait*/TRUE);
+ if (rv == ETIMEDOUT) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_data: "
+ "Resync Rate Setting Timed-out\n");
+ return;
+ }
+
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv != 0
+ || REQ_IOCSTATUS(req) != MPI_IOCSTATUS_SUCCESS
+ || (ar->action_status != MPI_RAID_ACTION_ASTATUS_SUCCESS)) {
+ mpt_vol_prt(mpt, mpt_vol, "Resync Rate Setting Failed: "
+ "%d:%x:%x\n", rv, req->IOCStatus,
+ ar->action_status);
+ } else
+ vol_pg->ResyncRate = mpt->raid_resync_rate;
+ mpt_free_request(mpt, req);
+ } else if ((prio && mpt->raid_resync_rate < 128)
+ || (!prio && mpt->raid_resync_rate >= 128)) {
+ uint32_t data;
+
+ req = mpt_get_request(mpt, /*sleep_ok*/TRUE);
+ if (req == NULL) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_verify_resync_rate: "
+ "Get request failed!\n");
+ return;
+ }
+
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC;
+ memcpy(&data, &vol_pg->VolumeSettings, sizeof(data));
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC;
+ rv = mpt_issue_raid_req(mpt, mpt_vol, /*disk*/NULL, req,
+ MPI_RAID_ACTION_CHANGE_VOLUME_SETTINGS,
+ data, /*addr*/0, /*len*/0,
+ /*write*/FALSE, /*wait*/TRUE);
+ if (rv == ETIMEDOUT) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_data: "
+ "Resync Rate Setting Timed-out\n");
+ return;
+ }
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv != 0
+ || REQ_IOCSTATUS(req) != MPI_IOCSTATUS_SUCCESS
+ || (ar->action_status != MPI_RAID_ACTION_ASTATUS_SUCCESS)) {
+ mpt_vol_prt(mpt, mpt_vol, "Resync Rate Setting Failed: "
+ "%d:%x:%x\n", rv, req->IOCStatus,
+ ar->action_status);
+ } else {
+ vol_pg->VolumeSettings.Settings ^=
+ MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC;
+ }
+
+ mpt_free_request(mpt, req);
+ }
+}
+
+static void
+mpt_adjust_queue_depth(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol,
+ struct cam_path *path)
+{
+ struct ccb_relsim crs;
+
+ xpt_setup_ccb(&crs.ccb_h, path, /*priority*/5);
+ crs.ccb_h.func_code = XPT_REL_SIMQ;
+ crs.release_flags = RELSIM_ADJUST_OPENINGS;
+ crs.openings = mpt->raid_queue_depth;
+ xpt_action((union ccb *)&crs);
+ if (crs.ccb_h.status != CAM_REQ_CMP)
+ mpt_vol_prt(mpt, mpt_vol, "mpt_adjust_queue_depth failed "
+ "with CAM status %#x\n", crs.ccb_h.status);
+}
+
+static void
+mpt_announce_vol(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol)
+{
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ u_int i;
+
+ vol_pg = mpt_vol->config_page;
+ mpt_vol_prt(mpt, mpt_vol, "Settings (");
+ for (i = 1; i <= 0x8000; i <<= 1) {
+ switch (vol_pg->VolumeSettings.Settings & i) {
+ case MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE:
+ mpt_prtc(mpt, " Member-WCE");
+ break;
+ case MPI_RAIDVOL0_SETTING_OFFLINE_ON_SMART:
+ mpt_prtc(mpt, " Offline-On-SMART-Err");
+ break;
+ case MPI_RAIDVOL0_SETTING_AUTO_CONFIGURE:
+ mpt_prtc(mpt, " Hot-Plug-Spares");
+ break;
+ case MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC:
+ mpt_prtc(mpt, " High-Priority-ReSync");
+ break;
+ default:
+ break;
+ }
+ }
+ mpt_prtc(mpt, " )\n");
+ if (vol_pg->VolumeSettings.HotSparePool != 0) {
+ mpt_vol_prt(mpt, mpt_vol, "Using Spare Pool%s",
+ powerof2(vol_pg->VolumeSettings.HotSparePool)
+ ? ":" : "s:");
+ for (i = 0; i < 8; i++) {
+ u_int mask;
+
+ mask = 0x1 << i;
+ if ((vol_pg->VolumeSettings.HotSparePool & mask) == 0)
+ continue;
+ mpt_prtc(mpt, " %d", i);
+ }
+ mpt_prtc(mpt, "\n");
+ }
+ mpt_vol_prt(mpt, mpt_vol, "%d Members:\n", vol_pg->NumPhysDisks);
+ for (i = 0; i < vol_pg->NumPhysDisks; i++){
+ struct mpt_raid_disk *mpt_disk;
+ CONFIG_PAGE_RAID_PHYS_DISK_0 *disk_pg;
+
+ mpt_disk = mpt->raid_disks
+ + vol_pg->PhysDisk[i].PhysDiskNum;
+ disk_pg = &mpt_disk->config_page;
+ mpt_prtc(mpt, " ");
+ mpt_prtc(mpt, "(%s:%d:%d): ", device_get_nameunit(mpt->dev),
+ disk_pg->PhysDiskBus, disk_pg->PhysDiskID);
+ if (vol_pg->VolumeType == MPI_RAID_VOL_TYPE_IM)
+ mpt_prtc(mpt, "%s\n",
+ mpt_disk->member_number == 0
+ ? "Primary" : "Secondary");
+ else
+ mpt_prtc(mpt, "Stripe Position %d\n",
+ mpt_disk->member_number);
+ }
+}
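The settings dump in mpt_announce_vol() walks the word one bit at a time so each flag can be named independently. A self-contained sketch of the same pattern (the bit values and labels here are assumed stand-ins, not the MPI constants):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for two MPI volume-settings bits. */
#define SET_WCE   0x0001
#define SET_PRIO  0x0008

/* Walk one bit at a time, switching on (settings & i) so only the
 * currently tested bit can match a case label, and append a name per
 * set bit - the same shape as the loop in mpt_announce_vol(). */
static const char *
decode_settings(unsigned settings)
{
	static char buf[128];
	unsigned i;

	buf[0] = '\0';
	for (i = 1; i <= 0x8000; i <<= 1) {
		switch (settings & i) {
		case SET_WCE:
			strcat(buf, " Member-WCE");
			break;
		case SET_PRIO:
			strcat(buf, " High-Priority-ReSync");
			break;
		default:
			break;
		}
	}
	return (buf);
}
```

Masking with the loop variable before switching is what makes the case labels unambiguous; switching on the raw settings word would match only when exactly one bit is set.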
+
+static void
+mpt_announce_disk(struct mpt_softc *mpt, struct mpt_raid_disk *mpt_disk)
+{
+ CONFIG_PAGE_RAID_PHYS_DISK_0 *disk_pg;
+ u_int i;
+
+ disk_pg = &mpt_disk->config_page;
+ mpt_disk_prt(mpt, mpt_disk,
+ "Physical (%s:%d:%d), Pass-thru (%s:%d:%d)\n",
+ device_get_nameunit(mpt->dev), disk_pg->PhysDiskBus,
+ disk_pg->PhysDiskID, device_get_nameunit(mpt->dev),
+ /*bus*/1, mpt_disk - mpt->raid_disks);
+
+ if (disk_pg->PhysDiskSettings.HotSparePool == 0)
+ return;
+ mpt_disk_prt(mpt, mpt_disk, "Member of Hot Spare Pool%s",
+ powerof2(disk_pg->PhysDiskSettings.HotSparePool)
+ ? ":" : "s:");
+ for (i = 0; i < 8; i++) {
+ u_int mask;
+
+ mask = 0x1 << i;
+ if ((disk_pg->PhysDiskSettings.HotSparePool & mask) == 0)
+ continue;
+ mpt_prtc(mpt, " %d", i);
+ }
+ mpt_prtc(mpt, "\n");
+}
+
+static void
+mpt_refresh_raid_disk(struct mpt_softc *mpt, struct mpt_raid_disk *mpt_disk,
+ IOC_3_PHYS_DISK *ioc_disk)
+{
+ int rv;
+
+ rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_RAID_PHYSDISK,
+ /*PageNumber*/0, ioc_disk->PhysDiskNum,
+ &mpt_disk->config_page.Header,
+ /*sleep_ok*/TRUE, /*timeout_ms*/5000);
+ if (rv != 0) {
+ mpt_prt(mpt, "mpt_refresh_raid_disk: "
+ "Failed to read RAID Disk Hdr(%d)\n",
+ ioc_disk->PhysDiskNum);
+ return;
+ }
+ rv = mpt_read_cur_cfg_page(mpt, ioc_disk->PhysDiskNum,
+ &mpt_disk->config_page.Header,
+ sizeof(mpt_disk->config_page),
+ /*sleep_ok*/TRUE, /*timeout_ms*/5000);
+ if (rv != 0)
+ mpt_prt(mpt, "mpt_refresh_raid_disk: "
+ "Failed to read RAID Disk Page(%d)\n",
+ ioc_disk->PhysDiskNum);
+}
+
+static void
+mpt_refresh_raid_vol(struct mpt_softc *mpt, struct mpt_raid_volume *mpt_vol,
+ CONFIG_PAGE_IOC_2_RAID_VOL *ioc_vol)
+{
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ struct mpt_raid_action_result *ar;
+ request_t *req;
+ int rv;
+ int i;
+
+ vol_pg = mpt_vol->config_page;
+ mpt_vol->flags &= ~MPT_RVF_UP2DATE;
+ rv = mpt_read_cfg_header(mpt, MPI_CONFIG_PAGETYPE_RAID_VOLUME,
+ /*PageNumber*/0, ioc_vol->VolumePageNumber,
+ &vol_pg->Header, /*sleep_ok*/TRUE,
+ /*timeout_ms*/5000);
+ if (rv != 0) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_vol: "
+ "Failed to read RAID Vol Hdr(%d)\n",
+ ioc_vol->VolumePageNumber);
+ return;
+ }
+ rv = mpt_read_cur_cfg_page(mpt, ioc_vol->VolumePageNumber,
+ &vol_pg->Header, mpt->raid_page0_len,
+ /*sleep_ok*/TRUE, /*timeout_ms*/5000);
+ if (rv != 0) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_vol: "
+ "Failed to read RAID Vol Page(%d)\n",
+ ioc_vol->VolumePageNumber);
+ return;
+ }
+ mpt_vol->flags |= MPT_RVF_ACTIVE;
+
+ /* Update disk entry array data. */
+ for (i = 0; i < vol_pg->NumPhysDisks; i++) {
+ struct mpt_raid_disk *mpt_disk;
+
+ mpt_disk = mpt->raid_disks + vol_pg->PhysDisk[i].PhysDiskNum;
+ mpt_disk->volume = mpt_vol;
+ mpt_disk->member_number = vol_pg->PhysDisk[i].PhysDiskMap;
+ if (vol_pg->VolumeType == MPI_RAID_VOL_TYPE_IM)
+ mpt_disk->member_number--;
+ }
+
+ if ((vol_pg->VolumeStatus.Flags
+ & MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS) == 0)
+ return;
+
+ req = mpt_get_request(mpt, /*sleep_ok*/TRUE);
+ if (req == NULL) {
+ mpt_vol_prt(mpt, mpt_vol,
+ "mpt_refresh_raid_vol: Get request failed!\n");
+ return;
+ }
+ rv = mpt_issue_raid_req(mpt, mpt_vol, /*disk*/NULL, req,
+ MPI_RAID_ACTION_INDICATOR_STRUCT,
+ /*ActionWord*/0, /*addr*/0, /*len*/0,
+ /*write*/FALSE, /*wait*/TRUE);
+ if (rv == ETIMEDOUT) {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_vol: "
+ "Progress indicator fetch timedout!\n");
+ return;
+ }
+
+ ar = REQ_TO_RAID_ACTION_RESULT(req);
+ if (rv == 0
+ && ar->action_status == MPI_RAID_ACTION_ASTATUS_SUCCESS
+ && REQ_IOCSTATUS(req) == MPI_IOCSTATUS_SUCCESS) {
+ memcpy(&mpt_vol->sync_progress,
+ &ar->action_data.indicator_struct,
+ sizeof(mpt_vol->sync_progress));
+ } else {
+ mpt_vol_prt(mpt, mpt_vol, "mpt_refresh_raid_vol: "
+ "Progress indicator fetch failed!\n");
+ }
+ mpt_free_request(mpt, req);
+}
+
+/*
+ * Update in-core information about RAID support. We update any entries
+ * that didn't previously exist or have been marked as needing to
+ * be updated by our event handler. Interesting changes are displayed
+ * to the console.
+ */
+void
+mpt_refresh_raid_data(struct mpt_softc *mpt)
+{
+ CONFIG_PAGE_IOC_2_RAID_VOL *ioc_vol;
+ CONFIG_PAGE_IOC_2_RAID_VOL *ioc_last_vol;
+ IOC_3_PHYS_DISK *ioc_disk;
+ IOC_3_PHYS_DISK *ioc_last_disk;
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ size_t len;
+ int rv;
+ int i;
+
+ if (mpt->ioc_page2 == NULL || mpt->ioc_page3 == NULL)
+ return;
+
+ /*
+ * Mark all items as unreferenced by the configuration.
+ * This allows us to find, report, and discard stale
+ * entries.
+ */
+ for (i = 0; i < mpt->ioc_page2->MaxPhysDisks; i++)
+ mpt->raid_disks[i].flags &= ~MPT_RDF_REFERENCED;
+ for (i = 0; i < mpt->ioc_page2->MaxVolumes; i++)
+ mpt->raid_volumes[i].flags &= ~MPT_RVF_REFERENCED;
+
+ /*
+ * Get Physical Disk information.
+ */
+ len = mpt->ioc_page3->Header.PageLength * sizeof(uint32_t);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->ioc_page3->Header, len,
+ /*sleep_ok*/TRUE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_refresh_raid_data: "
+ "Failed to read IOC Page 3\n");
+ return;
+ }
+
+ ioc_disk = mpt->ioc_page3->PhysDisk;
+ ioc_last_disk = ioc_disk + mpt->ioc_page3->NumPhysDisks;
+ for (; ioc_disk != ioc_last_disk; ioc_disk++) {
+ struct mpt_raid_disk *mpt_disk;
+
+ mpt_disk = mpt->raid_disks + ioc_disk->PhysDiskNum;
+ mpt_disk->flags |= MPT_RDF_REFERENCED;
+ if ((mpt_disk->flags & (MPT_RDF_ACTIVE|MPT_RDF_UP2DATE))
+ != (MPT_RDF_ACTIVE|MPT_RDF_UP2DATE)) {
+
+ mpt_refresh_raid_disk(mpt, mpt_disk, ioc_disk);
+
+ }
+ mpt_disk->flags |= MPT_RDF_ACTIVE;
+ mpt->raid_rescan++;
+ }
+
+ /*
+ * Refresh volume data.
+ */
+ len = mpt->ioc_page2->Header.PageLength * sizeof(uint32_t);
+ rv = mpt_read_cur_cfg_page(mpt, /*PageAddress*/0,
+ &mpt->ioc_page2->Header, len,
+ /*sleep_ok*/TRUE, /*timeout_ms*/5000);
+ if (rv) {
+ mpt_prt(mpt, "mpt_refresh_raid_data: "
+ "Failed to read IOC Page 2\n");
+ return;
+ }
+
+ ioc_vol = mpt->ioc_page2->RaidVolume;
+ ioc_last_vol = ioc_vol + mpt->ioc_page2->NumActiveVolumes;
+ for (; ioc_vol != ioc_last_vol; ioc_vol++) {
+ struct mpt_raid_volume *mpt_vol;
+
+ mpt_vol = mpt->raid_volumes + ioc_vol->VolumePageNumber;
+ mpt_vol->flags |= MPT_RVF_REFERENCED;
+ vol_pg = mpt_vol->config_page;
+ if (vol_pg == NULL)
+ continue;
+ if (((mpt_vol->flags & (MPT_RVF_ACTIVE|MPT_RVF_UP2DATE))
+ != (MPT_RVF_ACTIVE|MPT_RVF_UP2DATE))
+ || (vol_pg->VolumeStatus.Flags
+ & MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS) != 0) {
+
+ mpt_refresh_raid_vol(mpt, mpt_vol, ioc_vol);
+ }
+ mpt_vol->flags |= MPT_RVF_ACTIVE;
+ }
+
+ for (i = 0; i < mpt->ioc_page2->MaxVolumes; i++) {
+ struct mpt_raid_volume *mpt_vol;
+ uint64_t total;
+ uint64_t left;
+ int m;
+ u_int prio;
+
+ mpt_vol = &mpt->raid_volumes[i];
+
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+
+ vol_pg = mpt_vol->config_page;
+ if ((mpt_vol->flags & (MPT_RVF_REFERENCED|MPT_RVF_ANNOUNCED))
+ == MPT_RVF_ANNOUNCED) {
+ mpt_vol_prt(mpt, mpt_vol, "No longer configured\n");
+ mpt_vol->flags = 0;
+ continue;
+ }
+
+ if ((mpt_vol->flags & MPT_RVF_ANNOUNCED) == 0) {
+
+ mpt_announce_vol(mpt, mpt_vol);
+ mpt_vol->flags |= MPT_RVF_ANNOUNCED;
+ }
+
+ if ((mpt_vol->flags & MPT_RVF_UP2DATE) != 0)
+ continue;
+
+ mpt_vol->flags |= MPT_RVF_UP2DATE;
+ mpt_vol_prt(mpt, mpt_vol, "%s - %s\n",
+ mpt_vol_type(mpt_vol), mpt_vol_state(mpt_vol));
+ mpt_verify_mwce(mpt, mpt_vol);
+
+ if (vol_pg->VolumeStatus.Flags == 0)
+ continue;
+
+ mpt_vol_prt(mpt, mpt_vol, "Status (");
+ for (m = 1; m <= 0x80; m <<= 1) {
+ switch (vol_pg->VolumeStatus.Flags & m) {
+ case MPI_RAIDVOL0_STATUS_FLAG_ENABLED:
+ mpt_prtc(mpt, " Enabled");
+ break;
+ case MPI_RAIDVOL0_STATUS_FLAG_QUIESCED:
+ mpt_prtc(mpt, " Quiesced");
+ break;
+ case MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS:
+ mpt_prtc(mpt, " Re-Syncing");
+ break;
+ case MPI_RAIDVOL0_STATUS_FLAG_VOLUME_INACTIVE:
+ mpt_prtc(mpt, " Inactive");
+ break;
+ default:
+ break;
+ }
+ }
+ mpt_prtc(mpt, " )\n");
+
+ if ((vol_pg->VolumeStatus.Flags
+ & MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS) == 0)
+ continue;
+
+ mpt_verify_resync_rate(mpt, mpt_vol);
+
+ left = u64toh(mpt_vol->sync_progress.BlocksRemaining);
+ total = u64toh(mpt_vol->sync_progress.TotalBlocks);
+ if (vol_pg->ResyncRate != 0) {
+
+ prio = ((u_int)vol_pg->ResyncRate * 100000) / 0xFF;
+ mpt_vol_prt(mpt, mpt_vol, "Rate %d.%d%%\n",
+ prio / 1000, prio % 1000);
+ } else {
+ prio = vol_pg->VolumeSettings.Settings
+ & MPI_RAIDVOL0_SETTING_PRIORITY_RESYNC;
+ mpt_vol_prt(mpt, mpt_vol, "%s Priority Re-Sync\n",
+ prio ? "High" : "Low");
+ }
+ mpt_vol_prt(mpt, mpt_vol, "%ju of %ju "
+ "blocks remaining\n", (uintmax_t)left,
+ (uintmax_t)total);
+
+ /* Periodically report on sync progress. */
+ mpt_schedule_raid_refresh(mpt);
+ }
+
+ for (i = 0; i < mpt->ioc_page2->MaxPhysDisks; i++) {
+ struct mpt_raid_disk *mpt_disk;
+ CONFIG_PAGE_RAID_PHYS_DISK_0 *disk_pg;
+ int m;
+
+ mpt_disk = &mpt->raid_disks[i];
+ disk_pg = &mpt_disk->config_page;
+
+ if ((mpt_disk->flags & MPT_RDF_ACTIVE) == 0)
+ continue;
+
+ if ((mpt_disk->flags & (MPT_RDF_REFERENCED|MPT_RDF_ANNOUNCED))
+ == MPT_RDF_ANNOUNCED) {
+ mpt_disk_prt(mpt, mpt_disk, "No longer configured\n");
+ mpt_disk->flags = 0;
+ mpt->raid_rescan++;
+ continue;
+ }
+
+ if ((mpt_disk->flags & MPT_RDF_ANNOUNCED) == 0) {
+
+ mpt_announce_disk(mpt, mpt_disk);
+ mpt_disk->flags |= MPT_RDF_ANNOUNCED;
+ }
+
+ if ((mpt_disk->flags & MPT_RDF_UP2DATE) != 0)
+ continue;
+
+ mpt_disk->flags |= MPT_RDF_UP2DATE;
+ mpt_disk_prt(mpt, mpt_disk, "%s\n", mpt_disk_state(mpt_disk));
+ if (disk_pg->PhysDiskStatus.Flags == 0)
+ continue;
+
+ mpt_disk_prt(mpt, mpt_disk, "Status (");
+ for (m = 1; m <= 0x80; m <<= 1) {
+ switch (disk_pg->PhysDiskStatus.Flags & m) {
+ case MPI_PHYSDISK0_STATUS_FLAG_OUT_OF_SYNC:
+ mpt_prtc(mpt, " Out-Of-Sync");
+ break;
+ case MPI_PHYSDISK0_STATUS_FLAG_QUIESCED:
+ mpt_prtc(mpt, " Quiesced");
+ break;
+ default:
+ break;
+ }
+ }
+ mpt_prtc(mpt, " )\n");
+ }
+}
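The resync progress report above converts the one-byte ResyncRate (0-255) to a percentage with integer math only, since the kernel avoids floating point: the rate is scaled to thousandths of a percent and split with division and modulus for printing. A sketch of that fixed-point step:

```c
#include <assert.h>

/* Scale a one-byte resync rate to thousandths of a percent, as the
 * "Rate %d.%d%%" report does: 255 maps to 100000 (i.e. 100.000%).
 * Integer-only, so it is safe in kernel context. */
static unsigned
resync_rate_milli_pct(unsigned rate)
{
	return ((rate * 100000u) / 0xFF);
}
```

The caller then prints `milli / 1000` and `milli % 1000` as the integer and fractional parts.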
+
+static void
+mpt_raid_timer(void *arg)
+{
+ struct mpt_softc *mpt;
+
+ mpt = (struct mpt_softc *)arg;
+ MPT_LOCK(mpt);
+ mpt_raid_wakeup(mpt);
+ MPT_UNLOCK(mpt);
+}
+
+static void
+mpt_raid_quiesce_timeout(void *arg)
+{
+ /* Complete the CCB with error */
+ /* COWWWW */
+}
+
+void
+mpt_schedule_raid_refresh(struct mpt_softc *mpt)
+{
+ callout_reset(&mpt->raid_timer, MPT_RAID_SYNC_REPORT_INTERVAL,
+ mpt_raid_timer, mpt);
+}
+
+static int
+mpt_raid_set_vol_resync_rate(struct mpt_softc *mpt, u_int rate)
+{
+ struct mpt_raid_volume *mpt_vol;
+
+ if ((rate > MPT_RAID_RESYNC_RATE_MAX
+ || rate < MPT_RAID_RESYNC_RATE_MIN)
+ && rate != MPT_RAID_RESYNC_RATE_NC)
+ return (EINVAL);
+
+ MPT_LOCK(mpt);
+ mpt->raid_resync_rate = rate;
+ RAID_VOL_FOREACH(mpt, mpt_vol) {
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+ mpt_verify_resync_rate(mpt, mpt_vol);
+ }
+ MPT_UNLOCK(mpt);
+ return (0);
+}
+
+static int
+mpt_raid_set_vol_queue_depth(struct mpt_softc *mpt, u_int vol_queue_depth)
+{
+ struct mpt_raid_volume *mpt_vol;
+
+ if (vol_queue_depth > 255
+ || vol_queue_depth < 1)
+ return (EINVAL);
+
+ MPT_LOCK(mpt);
+ mpt->raid_queue_depth = vol_queue_depth;
+ RAID_VOL_FOREACH(mpt, mpt_vol) {
+ struct cam_path *path;
+ int error;
+
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+
+ mpt->raid_rescan = 0;
+
+ error = xpt_create_path(&path, xpt_periph,
+ cam_sim_path(mpt->sim),
+ mpt_vol->config_page->VolumeID,
+ /*lun*/0);
+ if (error != CAM_REQ_CMP) {
+ mpt_vol_prt(mpt, mpt_vol, "Unable to allocate path!\n");
+ continue;
+ }
+ mpt_adjust_queue_depth(mpt, mpt_vol, path);
+ xpt_free_path(path);
+ }
+ MPT_UNLOCK(mpt);
+ return (0);
+}
+
+static int
+mpt_raid_set_vol_mwce(struct mpt_softc *mpt, mpt_raid_mwce_t mwce)
+{
+ struct mpt_raid_volume *mpt_vol;
+ int force_full_resync;
+
+ MPT_LOCK(mpt);
+ if (mwce == mpt->raid_mwce_setting) {
+ MPT_UNLOCK(mpt);
+ return (0);
+ }
+
+ /*
+ * Catch MWCE being left on due to a failed shutdown. Since
+ * sysctls cannot be set by the loader, we treat the first
+ * setting of this variable specially and force a full volume
+ * resync if MWCE is enabled and a resync is in progress.
+ */
+ force_full_resync = 0;
+ if (mpt->raid_mwce_set == 0
+ && mpt->raid_mwce_setting == MPT_RAID_MWCE_NC
+ && mwce == MPT_RAID_MWCE_REBUILD_ONLY)
+ force_full_resync = 1;
+
+ mpt->raid_mwce_setting = mwce;
+ RAID_VOL_FOREACH(mpt, mpt_vol) {
+ CONFIG_PAGE_RAID_VOL_0 *vol_pg;
+ int resyncing;
+ int vol_mwce;
+
+ if ((mpt_vol->flags & MPT_RVF_ACTIVE) == 0)
+ continue;
+
+ vol_pg = mpt_vol->config_page;
+ resyncing = vol_pg->VolumeStatus.Flags
+ & MPI_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS;
+ vol_mwce = vol_pg->VolumeSettings.Settings
+ & MPI_RAIDVOL0_SETTING_WRITE_CACHING_ENABLE;
+ if (force_full_resync && resyncing && vol_mwce) {
+
+ /*
+ * XXX disable/enable volume should force a resync,
+ * but we'll need to quiesce, drain, and restart
+ * I/O to do that.
+ */
+ mpt_vol_prt(mpt, mpt_vol, "WARNING - Unsafe shutdown "
+ "detected. Suggest full resync.\n");
+ }
+ mpt_verify_mwce(mpt, mpt_vol);
+ }
+ mpt->raid_mwce_set = 1;
+ MPT_UNLOCK(mpt);
+ return (0);
+}
+
+const char *mpt_vol_mwce_strs[] =
+{
+ "On",
+ "Off",
+ "On-During-Rebuild",
+ "NC"
+};
+
+static int
+mpt_raid_sysctl_vol_member_wce(SYSCTL_HANDLER_ARGS)
+{
+ char inbuf[20];
+ struct mpt_softc *mpt;
+ const char *str;
+ int error;
+ u_int size;
+ u_int i;
+
+ GIANT_REQUIRED;
+ mpt = (struct mpt_softc *)arg1;
+ str = mpt_vol_mwce_strs[mpt->raid_mwce_setting];
+ error = SYSCTL_OUT(req, str, strlen(str) + 1);
+ if (error || !req->newptr)
+ return (error);
+
+ size = req->newlen - req->newidx;
+ if (size >= sizeof(inbuf))
+ return (EINVAL);
+
+ error = SYSCTL_IN(req, inbuf, size);
+ if (error)
+ return (error);
+ inbuf[size] = '\0';
+ for (i = 0; i < NUM_ELEMENTS(mpt_vol_mwce_strs); i++) {
+
+ if (strcmp(mpt_vol_mwce_strs[i], inbuf) == 0)
+ return (mpt_raid_set_vol_mwce(mpt, i));
+ }
+ return (EINVAL);
+}
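The string sysctl handler above maps the user-supplied value onto the mpt_raid_mwce_t enum simply by its index in the name table. The same lookup in isolation (table contents taken from mpt_vol_mwce_strs; the function name is illustrative):

```c
#include <assert.h>
#include <string.h>

static const char *wce_strs[] = {
	"On",
	"Off",
	"On-During-Rebuild",
	"NC"
};

/* Return the enum index whose name matches, or -1 when nothing
 * matches - the case where the sysctl handler returns EINVAL.
 * Relies on the table order matching the enum declaration order. */
static int
wce_lookup(const char *inbuf)
{
	unsigned i;

	for (i = 0; i < sizeof(wce_strs) / sizeof(wce_strs[0]); i++) {
		if (strcmp(wce_strs[i], inbuf) == 0)
			return ((int)i);
	}
	return (-1);
}
```

Keeping the strings in enum order means no explicit mapping table is needed, but it also means reordering either one silently breaks the other.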
+
+static int
+mpt_raid_sysctl_vol_resync_rate(SYSCTL_HANDLER_ARGS)
+{
+ struct mpt_softc *mpt;
+ u_int raid_resync_rate;
+ int error;
+
+ GIANT_REQUIRED;
+ mpt = (struct mpt_softc *)arg1;
+ raid_resync_rate = mpt->raid_resync_rate;
+
+ error = sysctl_handle_int(oidp, &raid_resync_rate, 0, req);
+ if (error || !req->newptr)
+ return error;
+
+ return (mpt_raid_set_vol_resync_rate(mpt, raid_resync_rate));
+}
+
+static int
+mpt_raid_sysctl_vol_queue_depth(SYSCTL_HANDLER_ARGS)
+{
+ struct mpt_softc *mpt;
+ u_int raid_queue_depth;
+ int error;
+
+ GIANT_REQUIRED;
+ mpt = (struct mpt_softc *)arg1;
+ raid_queue_depth = mpt->raid_queue_depth;
+
+ error = sysctl_handle_int(oidp, &raid_queue_depth, 0, req);
+ if (error || !req->newptr)
+ return error;
+
+ return (mpt_raid_set_vol_queue_depth(mpt, raid_queue_depth));
+}
+
+static void
+mpt_raid_sysctl_attach(struct mpt_softc *mpt)
+{
+ struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(mpt->dev);
+ struct sysctl_oid *tree = device_get_sysctl_tree(mpt->dev);
+
+ SYSCTL_ADD_PROC(ctx, SYSCTL_CHILDREN(tree), OID_AUTO,
+ "vol_member_wce", CTLTYPE_STRING | CTLFLAG_RW, mpt, 0,
+ mpt_raid_sysctl_vol_member_wce, "A",
+ "volume member WCE(On,Off,On-During-Rebuild,NC)");
+
+ SYSCTL_ADD_PROC(ctx, SYSCTL_CHILDREN(tree), OID_AUTO,
+ "vol_queue_depth", CTLTYPE_INT | CTLFLAG_RW, mpt, 0,
+ mpt_raid_sysctl_vol_queue_depth, "I",
+ "default volume queue depth");
+
+ SYSCTL_ADD_PROC(ctx, SYSCTL_CHILDREN(tree), OID_AUTO,
+ "vol_resync_rate", CTLTYPE_INT | CTLFLAG_RW, mpt, 0,
+ mpt_raid_sysctl_vol_resync_rate, "I",
+ "volume resync priority (0 == NC, 1 - 255)");
+}
diff --git a/sys/dev/mpt/mpt_raid.h b/sys/dev/mpt/mpt_raid.h
new file mode 100644
index 0000000..939a9f0
--- /dev/null
+++ b/sys/dev/mpt/mpt_raid.h
@@ -0,0 +1,95 @@
+/* $FreeBSD$ */
+/*-
+ * Definitions for the integrated RAID features of LSI MPT Fusion adapters.
+ *
+ * Copyright (c) 2005, WHEEL Sp. z o.o.
+ * Copyright (c) 2004, 2005 Justin T. Gibbs
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon including
+ * a substantially similar Disclaimer requirement for further binary
+ * redistribution.
+ * 3. Neither the name of the LSI Logic Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF THE COPYRIGHT
+ * OWNER OR CONTRIBUTOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _MPT_RAID_H_
+#define _MPT_RAID_H_
+
+#include <cam/cam.h>
+union ccb;
+
+typedef enum {
+ MPT_RAID_MWCE_ON,
+ MPT_RAID_MWCE_OFF,
+ MPT_RAID_MWCE_REBUILD_ONLY,
+ MPT_RAID_MWCE_NC
+} mpt_raid_mwce_t;
+
+const char *
+ mpt_vol_type(struct mpt_raid_volume *);
+const char *
+ mpt_vol_state(struct mpt_raid_volume *);
+const char *
+ mpt_disk_state(struct mpt_raid_disk *);
+void mpt_vol_prt(struct mpt_softc *, struct mpt_raid_volume *,
+ const char *fmt, ...);
+void mpt_disk_prt(struct mpt_softc *mpt, struct mpt_raid_disk *disk,
+ const char *fmt, ...);
+
+int mpt_issue_raid_req(struct mpt_softc *mpt, struct mpt_raid_volume *vol,
+ struct mpt_raid_disk *disk, request_t *req,
+ u_int Action, uint32_t ActionDataWord,
+ bus_addr_t addr, bus_size_t len, int write,
+ int wait);
+cam_status
+ mpt_map_physdisk(struct mpt_softc *mpt, union ccb *, u_int *tgt);
+cam_status
+ mpt_raid_quiesce_disk(struct mpt_softc *mpt,
+ struct mpt_raid_disk *mpt_disk,
+ request_t *req);
+void mpt_refresh_raid_data(struct mpt_softc *);
+void mpt_schedule_raid_refresh(struct mpt_softc *mpt);
+
+static __inline void
+mpt_raid_wakeup(struct mpt_softc *mpt)
+{
+ mpt->raid_wakeup++;
+ wakeup(&mpt->raid_volumes);
+}
+
+#define MPT_RAID_SYNC_REPORT_INTERVAL (15 * 60 * hz)
+#define MPT_RAID_RESYNC_RATE_MAX (255)
+#define MPT_RAID_RESYNC_RATE_MIN (1)
+#define MPT_RAID_RESYNC_RATE_NC (0)
+#define MPT_RAID_RESYNC_RATE_DEFAULT MPT_RAID_RESYNC_RATE_NC
+
+#define MPT_RAID_QUEUE_DEPTH_DEFAULT (128)
+
+#define MPT_RAID_MWCE_DEFAULT MPT_RAID_MWCE_NC
+
+#define RAID_VOL_FOREACH(mpt, mpt_vol) \
+ for (mpt_vol = (mpt)->raid_volumes; \
+ mpt_vol != (mpt)->raid_volumes + (mpt)->raid_max_volumes; \
+ mpt_vol++)
+
+#endif /* _MPT_RAID_H_ */
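`RAID_VOL_FOREACH` walks the softc's volume array by pointer, from `raid_volumes` up to (but not including) `raid_volumes + raid_max_volumes`. A self-contained sketch with toy stand-ins for the driver structures (the real `struct mpt_softc` and `struct mpt_raid_volume` are far larger) shows the iteration pattern:

```c
#include <assert.h>

/* Toy stand-ins for the driver structures, just to exercise the
 * pointer-range iteration that RAID_VOL_FOREACH performs. */
struct mpt_raid_volume { int resync_rate; };
struct mpt_softc {
	struct mpt_raid_volume	raid_volumes[4];
	unsigned		raid_max_volumes;
};

/* Same shape as the macro in mpt_raid.h: iterate every slot in the
 * volume array, stopping at raid_volumes + raid_max_volumes. */
#define RAID_VOL_FOREACH(mpt, mpt_vol)					\
	for (mpt_vol = (mpt)->raid_volumes;				\
	     mpt_vol != (mpt)->raid_volumes + (mpt)->raid_max_volumes;	\
	     mpt_vol++)

static int
count_volumes(struct mpt_softc *mpt)
{
	struct mpt_raid_volume *vol;
	int n = 0;

	RAID_VOL_FOREACH(mpt, vol)
		n++;
	return (n);
}
```

Because the loop is a half-open pointer range, `raid_max_volumes == 0` yields zero iterations with no special-casing.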
diff --git a/sys/dev/mpt/mpt_reg.h b/sys/dev/mpt/mpt_reg.h
new file mode 100644
index 0000000..6403b52
--- /dev/null
+++ b/sys/dev/mpt/mpt_reg.h
@@ -0,0 +1,125 @@
+/* $FreeBSD$ */
+/*-
+ * Generic defines for LSI '909 FC adapters.
+ * FreeBSD Version.
+ *
+ * Copyright (c) 2000, 2001 by Greg Ansley
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice immediately at the beginning of the file, without modification,
+ * this list of conditions, and the following disclaimer.
+ * 2. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * Additional Copyright (c) 2002 by Matthew Jacob under same license.
+ */
+#ifndef _MPT_REG_H_
+#define _MPT_REG_H_
+
+#define MPT_OFFSET_DOORBELL 0x00
+#define MPT_OFFSET_SEQUENCE 0x04
+#define MPT_OFFSET_DIAGNOSTIC 0x08
+#define MPT_OFFSET_TEST 0x0C
+#define MPT_OFFSET_DIAG_DATA 0x10
+#define MPT_OFFSET_DIAG_ADDR 0x14
+#define MPT_OFFSET_INTR_STATUS 0x30
+#define MPT_OFFSET_INTR_MASK 0x34
+#define MPT_OFFSET_REQUEST_Q 0x40
+#define MPT_OFFSET_REPLY_Q 0x44
+#define MPT_OFFSET_HOST_INDEX 0x50
+#define MPT_OFFSET_FUBAR 0x90
+
+/* Bit Maps for DOORBELL register */
+enum DB_STATE_BITS {
+ MPT_DB_STATE_RESET = 0x00000000,
+ MPT_DB_STATE_READY = 0x10000000,
+ MPT_DB_STATE_RUNNING = 0x20000000,
+ MPT_DB_STATE_FAULT = 0x40000000,
+ MPT_DB_STATE_MASK = 0xf0000000
+};
+
+#define MPT_STATE(v) ((enum DB_STATE_BITS)((v) & MPT_DB_STATE_MASK))
+
+#define MPT_DB_LENGTH_SHIFT (16)
+#define MPT_DB_DATA_MASK (0xffff)
+
+#define MPT_DB_DB_USED 0x08000000
+#define MPT_DB_IS_IN_USE(v) (((v) & MPT_DB_DB_USED) != 0)
+
+/*
+ * "Whom" initializer values
+ */
+#define MPT_DB_INIT_NOONE 0x00
+#define MPT_DB_INIT_BIOS 0x01
+#define MPT_DB_INIT_ROMBIOS 0x02
+#define MPT_DB_INIT_PCIPEER 0x03
+#define MPT_DB_INIT_HOST 0x04
+#define MPT_DB_INIT_MANUFACTURE 0x05
+
+#define MPT_WHO(v) \
+ ((v & MPI_DOORBELL_WHO_INIT_MASK) >> MPI_DOORBELL_WHO_INIT_SHIFT)
+
+/* Function Maps for DOORBELL register */
+enum DB_FUNCTION_BITS {
+ MPT_FUNC_IOC_RESET = 0x40000000,
+ MPT_FUNC_UNIT_RESET = 0x41000000,
+ MPT_FUNC_HANDSHAKE = 0x42000000,
+ MPT_FUNC_REPLY_REMOVE = 0x43000000,
+ MPT_FUNC_MASK = 0xff000000
+};
+
+/* Function Maps for INTERRUPT request register */
+enum _MPT_INTR_REQ_BITS {
+ MPT_INTR_DB_BUSY = 0x80000000,
+ MPT_INTR_REPLY_READY = 0x00000008,
+ MPT_INTR_DB_READY = 0x00000001
+};
+
+#define MPT_DB_IS_BUSY(v) (((v) & MPT_INTR_DB_BUSY) != 0)
+#define MPT_DB_INTR(v) (((v) & MPT_INTR_DB_READY) != 0)
+#define MPT_REPLY_INTR(v) (((v) & MPT_INTR_REPLY_READY) != 0)
+
+/* Function Maps for INTERRUPT mask register */
+enum _MPT_INTR_MASK_BITS {
+ MPT_INTR_REPLY_MASK = 0x00000008,
+ MPT_INTR_DB_MASK = 0x00000001
+};
+
+/* Magic addresses in diagnostic memory space */
+#define MPT_DIAG_IOP_BASE (0x00000000)
+#define MPT_DIAG_IOP_SIZE (0x00002000)
+#define MPT_DIAG_GPIO (0x00030010)
+#define MPT_DIAG_IOPQ_REG_BASE0 (0x00050004)
+#define MPT_DIAG_IOPQ_REG_BASE1 (0x00051004)
+#define MPT_DIAG_CTX0_BASE (0x000E0000)
+#define MPT_DIAG_CTX0_SIZE (0x00002000)
+#define MPT_DIAG_CTX1_BASE (0x001E0000)
+#define MPT_DIAG_CTX1_SIZE (0x00002000)
+#define MPT_DIAG_FLASH_BASE (0x00800000)
+#define MPT_DIAG_RAM_BASE (0x01000000)
+#define MPT_DIAG_RAM_SIZE (0x00400000)
+#define MPT_DIAG_MEM_CFG_BASE (0x3F000000)
+#define MPT_DIAG_MEM_CFG_BADFL (0x04000000)
+
+/* GPIO bit assignments */
+#define MPT_DIAG_GPIO_SCL (0x00010000)
+#define MPT_DIAG_GPIO_SDA_OUT (0x00008000)
+#define MPT_DIAG_GPIO_SDA_IN (0x00004000)
+
+#define MPT_REPLY_EMPTY (0xFFFFFFFF) /* Reply Queue Empty Symbol */
+#endif /* _MPT_REG_H_ */
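The doorbell register packs the IOC state into its top nibble, which `MPT_STATE()` extracts by masking with `MPT_DB_STATE_MASK`, while `MPT_DB_IS_IN_USE()` tests the separate "doorbell in use" bit. A standalone sketch reproducing those macros (so the decode can be compiled outside the driver) illustrates the bit layout:

```c
#include <assert.h>
#include <stdint.h>

/* Doorbell state/usage bits as defined in mpt_reg.h, reproduced here
 * so the decode can be exercised standalone. */
#define MPT_DB_STATE_RUNNING	0x20000000u
#define MPT_DB_STATE_FAULT	0x40000000u
#define MPT_DB_STATE_MASK	0xf0000000u
#define MPT_DB_DB_USED		0x08000000u

#define MPT_STATE(v)		((v) & MPT_DB_STATE_MASK)
#define MPT_DB_IS_IN_USE(v)	(((v) & MPT_DB_DB_USED) != 0)

/* Decode a raw doorbell read: nonzero if the IOC is operational. */
static int
db_is_running(uint32_t db)
{
	return (MPT_STATE(db) == MPT_DB_STATE_RUNNING);
}
```

Note that only the high nibble carries state, so the low 16 data bits (`MPT_DB_DATA_MASK`) never affect the state decode.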