author | Dan Williams <dan.j.williams@intel.com> | 2007-01-02 11:10:44 -0700
committer | Dan Williams <dan.j.williams@intel.com> | 2007-07-13 08:06:14 -0700
commit | 9bc89cd82d6f88fb0ca39b30445c329a430fd66b (patch)
tree | 7bd0e856abd359f84edea1bacfd1dd32edd93fbb /crypto/async_tx/async_xor.c
parent | 685784aaf3cd0e3ff5e36c7ecf6f441cdbf57f73 (diff)
async_tx: add the async_tx api
The async_tx api provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is written to the api can optimize for asynchronous operation and the
api will fit the chain of operations to the available offload resources.
"I imagine that any piece of ADMA hardware would register with the
'async_*' subsystem, and a call to async_X would be routed as
appropriate, or be run in-line." - Neil Brown
async_tx exploits the capabilities of struct dma_async_tx_descriptor to
provide an api of the following general format:
	struct dma_async_tx_descriptor *
	async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			dma_async_tx_callback cb_fn, void *cb_param)
	{
		struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
		struct dma_device *device = chan ? chan->device : NULL;
		int int_en = cb_fn ? 1 : 0;
		struct dma_async_tx_descriptor *tx = device ?
			device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

		if (tx) { /* run <operation> asynchronously */
			...
			tx->tx_set_dest(addr, tx, index);
			...
			tx->tx_set_src(addr, tx, index);
			...
			async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
		} else { /* run <operation> synchronously */
			...
			<operation>
			...
			async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
		}

		return tx;
	}
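For example, a client following this format might chain a copy into an
xor (a minimal sketch; async_memcpy and async_xor are frontends added in
this series, while dest_pg, src_pg, blocks, src_cnt, complete_fn, and
complete_arg are placeholder names):

	struct dma_async_tx_descriptor *tx;

	/* copy fresh data into place; leave the descriptor un-acked so
	 * the xor below may depend on it
	 */
	tx = async_memcpy(dest_pg, src_pg, 0, 0, PAGE_SIZE, 0,
			  NULL, NULL, NULL);

	/* xor the data blocks into the parity page once the copy
	 * completes; ASYNC_TX_DEP_ACK retires the hidden copy descriptor
	 * and ASYNC_TX_XOR_ZERO_DST zeroes the destination on the
	 * synchronous fallback path
	 */
	tx = async_xor(parity_pg, blocks, 0, src_cnt, PAGE_SIZE,
		       ASYNC_TX_XOR_ZERO_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
		       tx, complete_fn, complete_arg);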
async_tx_find_channel() returns a capable channel from its pool. The
channel pool is organized as a per-cpu array of channel pointers. The
async_tx_rebalance() routine is tasked with managing these arrays. In the
uniprocessor case async_tx_rebalance() tries to spread responsibility
evenly over channels of similar capabilities. For example, if there are two
copy+xor channels, one will handle copy operations and the other will
handle xor. In the SMP case async_tx_rebalance() attempts to spread the
operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
channel0 while cpu1 gets copy channel1 and xor channel1. When a
dependency is specified, async_tx_find_channel defaults to keeping the
operation on the same channel. An xor->copy->xor chain will stay on one
channel if it supports both operation types; otherwise the transaction will
transition between a copy and an xor resource.
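Concretely, the fast path boils down to a per-cpu table lookup, along
these lines (a sketch modeled on the async_tx core in this series;
channel_table, channel_table_initialized, and struct dma_chan_ref are its
internals):

	struct dma_chan *
	async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
		enum dma_transaction_type tx_type)
	{
		/* keep a dependency chain on one channel when it is capable */
		if (depend_tx &&
			dma_has_cap(tx_type, depend_tx->chan->device->cap_mask))
			return depend_tx->chan;
		else if (likely(channel_table_initialized)) {
			struct dma_chan_ref *ref;
			int cpu = get_cpu();

			ref = per_cpu_ptr(channel_table[tx_type], cpu)->ref;
			put_cpu();

			return ref ? ref->chan : NULL;
		} else
			return NULL;
	}

With CONFIG_DMA_ENGINE=n this routine collapses to a static inline
returning NULL, so only the synchronous path remains (see the details
below).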
Currently the raid5 implementation in the MD raid456 driver has been
converted to the async_tx api. A driver for the offload engines on the
Intel Xscale series of I/O processors, iop-adma, is provided in a later
commit. With the iop-adma driver and async_tx, raid456 is able to offload
copy, xor, and xor-zero-sum operations to hardware engines.
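For illustration, the two xor-class operations might be issued as follows
(a hedged sketch, not code from the raid456 conversion; disks, the page
arrays, and the callbacks are placeholder names):

	struct dma_async_tx_descriptor *tx;
	u32 result;

	/* generate parity from the data blocks; the engine treats the
	 * destination as write-only, and ASYNC_TX_XOR_ZERO_DST keeps the
	 * synchronous fallback equivalent by zeroing it first
	 */
	tx = async_xor(parity_pg, data_pgs, 0, disks - 1, PAGE_SIZE,
		       ASYNC_TX_XOR_ZERO_DST | ASYNC_TX_ACK,
		       NULL, gen_done, ctx);

	/* check parity: the parity page sits at index zero of the list,
	 * and result is set non-zero when the blocks do not xor to zero
	 */
	tx = async_xor_zero_sum(check_pgs[0], check_pgs, 0, disks, PAGE_SIZE,
				&result, ASYNC_TX_ACK, NULL, check_done, ctx);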
On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
improvement) and sequential reads to a degraded array (40 - 55%
improvement). For the other cases performance was roughly equal, +/- a few
percentage points. On an x86-smp platform the performance of the async_tx
implementation (in synchronous mode) was also within a few percentage points
of the original implementation. According to 'top' on iop342 CPU
utilization drops from ~50% to ~15% during a 'resync' while the speed
according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
The tiobench command line used for testing was: tiobench --size 2048
--block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available
Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will
fire at operation completion time and the callback will occur in a
tasklet. If the channel does not support interrupts then a live
polling wait will be performed
* the api is written as a dmaengine client that requests all available
channels
* In support of dependencies the api implicitly schedules channel-switch
interrupts. The interrupt triggers the cleanup tasklet which causes
pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software
xor routine. To the software routine the destination address is an implied
source, whereas engines treat it as a write-only destination. This patch
modifies the xor_blocks routine to take an explicit destination address
to mirror the hardware, as sketched below.
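For example, the synchronous path can reproduce engine-equivalent results
like this (dest, srcs, src_cnt, and len are placeholders):

	/* software convention after this patch: the destination is
	 * explicit, but xor_blocks still xors into it (an implied
	 * source), so zero it first to get just the sum of the sources
	 */
	memset(dest, 0, len);
	xor_blocks(src_cnt, len, dest, srcs);

A hardware engine, by contrast, writes the destination without reading it,
so the destination page must also appear in the source list when the old
in-place semantics are wanted.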
Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and
interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities
evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to
the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and
the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
Diffstat (limited to 'crypto/async_tx/async_xor.c')
-rw-r--r-- | crypto/async_tx/async_xor.c | 327
1 file changed, 327 insertions, 0 deletions
diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c
new file mode 100644
index 0000000..2575f67
--- /dev/null
+++ b/crypto/async_tx/async_xor.c
@@ -0,0 +1,327 @@
+/*
+ * xor offload engine api
+ *
+ * Copyright © 2006, Intel Corporation.
+ *
+ * Dan Williams <dan.j.williams@intel.com>
+ *
+ * with architecture considerations by:
+ * Neil Brown <neilb@suse.de>
+ * Jeff Garzik <jeff@garzik.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/raid/xor.h>
+#include <linux/async_tx.h>
+
+static void
+do_async_xor(struct dma_async_tx_descriptor *tx, struct dma_device *device,
+	struct dma_chan *chan, struct page *dest, struct page **src_list,
+	unsigned int offset, unsigned int src_cnt, size_t len,
+	enum async_tx_flags flags, struct dma_async_tx_descriptor *depend_tx,
+	dma_async_tx_callback cb_fn, void *cb_param)
+{
+	dma_addr_t dma_addr;
+	enum dma_data_direction dir;
+	int i;
+
+	pr_debug("%s: len: %zu\n", __FUNCTION__, len);
+
+	dir = (flags & ASYNC_TX_ASSUME_COHERENT) ?
+		DMA_NONE : DMA_FROM_DEVICE;
+
+	dma_addr = dma_map_page(device->dev, dest, offset, len, dir);
+	tx->tx_set_dest(dma_addr, tx, 0);
+
+	dir = (flags & ASYNC_TX_ASSUME_COHERENT) ?
+		DMA_NONE : DMA_TO_DEVICE;
+
+	for (i = 0; i < src_cnt; i++) {
+		dma_addr = dma_map_page(device->dev, src_list[i],
+			offset, len, dir);
+		tx->tx_set_src(dma_addr, tx, i);
+	}
+
+	async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
+}
+
+static void
+do_sync_xor(struct page *dest, struct page **src_list, unsigned int offset,
+	unsigned int src_cnt, size_t len, enum async_tx_flags flags,
+	struct dma_async_tx_descriptor *depend_tx,
+	dma_async_tx_callback cb_fn, void *cb_param)
+{
+	void *_dest;
+	int i;
+
+	pr_debug("%s: len: %zu\n", __FUNCTION__, len);
+
+	/* reuse the 'src_list' array to convert to buffer pointers */
+	for (i = 0; i < src_cnt; i++)
+		src_list[i] = (struct page *)
+			(page_address(src_list[i]) + offset);
+
+	/* set destination address */
+	_dest = page_address(dest) + offset;
+
+	if (flags & ASYNC_TX_XOR_ZERO_DST)
+		memset(_dest, 0, len);
+
+	xor_blocks(src_cnt, len, _dest,
+		(void **) src_list);
+
+	async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
+}
+
+/**
+ * async_xor - attempt to xor a set of blocks with a dma engine.
+ *	xor_blocks always uses the dest as a source so the ASYNC_TX_XOR_ZERO_DST
+ *	flag must be set to not include dest data in the calculation.  The
+ *	assumption with dma engines is that they only use the destination
+ *	buffer as a source when it is explicitly specified in the source list.
+ * @dest: destination page
+ * @src_list: array of source pages (if the dest is also a source it must be
+ *	at index zero).  The contents of this array may be overwritten.
+ * @offset: offset in pages to start transaction
+ * @src_cnt: number of source pages
+ * @len: length in bytes
+ * @flags: ASYNC_TX_XOR_ZERO_DST, ASYNC_TX_XOR_DROP_DST,
+ *	ASYNC_TX_ASSUME_COHERENT, ASYNC_TX_ACK, ASYNC_TX_DEP_ACK
+ * @depend_tx: xor depends on the result of this transaction.
+ * @cb_fn: function to call when the xor completes
+ * @cb_param: parameter to pass to the callback routine
+ */
+struct dma_async_tx_descriptor *
+async_xor(struct page *dest, struct page **src_list, unsigned int offset,
+	int src_cnt, size_t len, enum async_tx_flags flags,
+	struct dma_async_tx_descriptor *depend_tx,
+	dma_async_tx_callback cb_fn, void *cb_param)
+{
+	struct dma_chan *chan = async_tx_find_channel(depend_tx, DMA_XOR);
+	struct dma_device *device = chan ? chan->device : NULL;
+	struct dma_async_tx_descriptor *tx = NULL;
+	dma_async_tx_callback _cb_fn;
+	void *_cb_param;
+	unsigned long local_flags;
+	int xor_src_cnt;
+	int i = 0, src_off = 0, int_en;
+
+	BUG_ON(src_cnt <= 1);
+
+	while (src_cnt) {
+		local_flags = flags;
+		if (device) { /* run the xor asynchronously */
+			xor_src_cnt = min(src_cnt, device->max_xor);
+			/* if we are submitting additional xors
+			 * only set the callback on the last transaction
+			 */
+			if (src_cnt > xor_src_cnt) {
+				local_flags &= ~ASYNC_TX_ACK;
+				_cb_fn = NULL;
+				_cb_param = NULL;
+			} else {
+				_cb_fn = cb_fn;
+				_cb_param = cb_param;
+			}
+
+			int_en = _cb_fn ? 1 : 0;
+
+			tx = device->device_prep_dma_xor(
+				chan, xor_src_cnt, len, int_en);
+
+			if (tx) {
+				do_async_xor(tx, device, chan, dest,
+					&src_list[src_off], offset,
+					xor_src_cnt, len, local_flags,
+					depend_tx, _cb_fn, _cb_param);
+			} else /* fall through */
+				goto xor_sync;
+		} else { /* run the xor synchronously */
+xor_sync:
+			/* in the sync case the dest is an implied source
+			 * (assumes the dest is at the src_off index)
+			 */
+			if (flags & ASYNC_TX_XOR_DROP_DST) {
+				src_cnt--;
+				src_off++;
+			}
+
+			/* process up to 'MAX_XOR_BLOCKS' sources */
+			xor_src_cnt = min(src_cnt, MAX_XOR_BLOCKS);
+
+			/* if we are submitting additional xors
+			 * only set the callback on the last transaction
+			 */
+			if (src_cnt > xor_src_cnt) {
+				local_flags &= ~ASYNC_TX_ACK;
+				_cb_fn = NULL;
+				_cb_param = NULL;
+			} else {
+				_cb_fn = cb_fn;
+				_cb_param = cb_param;
+			}
+
+			/* wait for any prerequisite operations */
+			if (depend_tx) {
+				/* if ack is already set then we cannot be sure
+				 * we are referring to the correct operation
+				 */
+				BUG_ON(depend_tx->ack);
+				if (dma_wait_for_async_tx(depend_tx) ==
+					DMA_ERROR)
+					panic("%s: DMA_ERROR waiting for "
+						"depend_tx\n",
+						__FUNCTION__);
+			}
+
+			do_sync_xor(dest, &src_list[src_off], offset,
+				xor_src_cnt, len, local_flags, depend_tx,
+				_cb_fn, _cb_param);
+		}
+
+		/* the previous tx is hidden from the client,
+		 * so ack it
+		 */
+		if (i && depend_tx)
+			async_tx_ack(depend_tx);
+
+		depend_tx = tx;
+
+		if (src_cnt > xor_src_cnt) {
+			/* drop completed sources */
+			src_cnt -= xor_src_cnt;
+			src_off += xor_src_cnt;
+
+			/* unconditionally preserve the destination */
+			flags &= ~ASYNC_TX_XOR_ZERO_DST;
+
+			/* use the intermediate result as a source, but
+			 * remember it's dropped, because it's implied,
+			 * in the sync case
+			 */
+			src_list[--src_off] = dest;
+			src_cnt++;
+			flags |= ASYNC_TX_XOR_DROP_DST;
+		} else
+			src_cnt = 0;
+		i++;
+	}
+
+	return tx;
+}
+EXPORT_SYMBOL_GPL(async_xor);
+
+static int page_is_zero(struct page *p, unsigned int offset, size_t len)
+{
+	char *a = page_address(p) + offset;
+	return ((*(u32 *) a) == 0 &&
+		memcmp(a, a + 4, len - 4) == 0);
+}
+
+/**
+ * async_xor_zero_sum - attempt an xor parity check with a dma engine.
+ * @dest: destination page used if the xor is performed synchronously
+ * @src_list: array of source pages.  The dest page must be listed as a source
+ *	at index zero.  The contents of this array may be overwritten.
+ * @offset: offset in pages to start transaction
+ * @src_cnt: number of source pages
+ * @len: length in bytes
+ * @result: 0 if sum == 0 else non-zero
+ * @flags: ASYNC_TX_ASSUME_COHERENT, ASYNC_TX_ACK, ASYNC_TX_DEP_ACK
+ * @depend_tx: xor depends on the result of this transaction.
+ * @cb_fn: function to call when the xor completes
+ * @cb_param: parameter to pass to the callback routine
+ */
+struct dma_async_tx_descriptor *
+async_xor_zero_sum(struct page *dest, struct page **src_list,
+	unsigned int offset, int src_cnt, size_t len,
+	u32 *result, enum async_tx_flags flags,
+	struct dma_async_tx_descriptor *depend_tx,
+	dma_async_tx_callback cb_fn, void *cb_param)
+{
+	struct dma_chan *chan = async_tx_find_channel(depend_tx, DMA_ZERO_SUM);
+	struct dma_device *device = chan ? chan->device : NULL;
+	int int_en = cb_fn ? 1 : 0;
+	struct dma_async_tx_descriptor *tx = device ?
+		device->device_prep_dma_zero_sum(chan, src_cnt, len, result,
+			int_en) : NULL;
+	int i;
+
+	BUG_ON(src_cnt <= 1);
+
+	if (tx) {
+		dma_addr_t dma_addr;
+		enum dma_data_direction dir;
+
+		pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
+
+		dir = (flags & ASYNC_TX_ASSUME_COHERENT) ?
+			DMA_NONE : DMA_TO_DEVICE;
+
+		for (i = 0; i < src_cnt; i++) {
+			dma_addr = dma_map_page(device->dev, src_list[i],
+				offset, len, dir);
+			tx->tx_set_src(dma_addr, tx, i);
+		}
+
+		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
+	} else {
+		unsigned long xor_flags = flags;
+
+		pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
+
+		xor_flags |= ASYNC_TX_XOR_DROP_DST;
+		xor_flags &= ~ASYNC_TX_ACK;
+
+		tx = async_xor(dest, src_list, offset, src_cnt, len, xor_flags,
+			depend_tx, NULL, NULL);
+
+		if (tx) {
+			if (dma_wait_for_async_tx(tx) == DMA_ERROR)
+				panic("%s: DMA_ERROR waiting for tx\n",
+					__FUNCTION__);
+			async_tx_ack(tx);
+		}
+
+		*result = page_is_zero(dest, offset, len) ? 0 : 1;
+
+		tx = NULL;
+
+		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
+	}
+
+	return tx;
+}
+EXPORT_SYMBOL_GPL(async_xor_zero_sum);
+
+static int __init async_xor_init(void)
+{
+	return 0;
+}
+
+static void __exit async_xor_exit(void)
+{
+	do { } while (0);
+}
+
+module_init(async_xor_init);
+module_exit(async_xor_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("asynchronous xor/xor-zero-sum api");
+MODULE_LICENSE("GPL");