author     marcel <marcel@FreeBSD.org>  2009-05-18 18:37:18 +0000
committer  marcel <marcel@FreeBSD.org>  2009-05-18 18:37:18 +0000
commit     8b09116a5afc11ad02ed3d58dc2619303aeca12b (patch)
tree       cf3c716dbc38984e19bd613ced4f6b9f8d4ca6b7 /sys/dev/md/md.c
parent     c6c278457529c585668914f3ccc199e9c40165ac (diff)
download   FreeBSD-src-8b09116a5afc11ad02ed3d58dc2619303aeca12b.zip
           FreeBSD-src-8b09116a5afc11ad02ed3d58dc2619303aeca12b.tar.gz
Add cpu_flush_dcache() for use after non-DMA based I/O so that a
possible future I-cache coherency operation can succeed. On ARM,
for example, the L1 cache can be (is) virtually mapped, which
means that any I/O that uses temporary mappings will not see the
I-cache made coherent. On ia64 similar behaviour has been
observed. By flushing the D-cache, execution of binaries backed
by md(4) and/or NFS works reliably.
For Book-E (powerpc), execution over NFS exhibits SIGILL once in
a while as well, though cpu_flush_dcache() hasn't been implemented
yet.
Doing an explicit D-cache flush as part of the non-DMA based I/O
read operation eliminates the need to do it as part of the
I-cache coherency operation itself and as such avoids pessimizing
the DMA-based I/O read operations, for which the D-cache is already
flushed/invalidated. It also allows future optimizations whereby
the bcopy() followed by the D-cache flush can be integrated into a
single operation, which could be implemented using on-chip DMA
engines, bypassing the D-cache altogether.
Diffstat (limited to 'sys/dev/md/md.c')
-rw-r--r--  sys/dev/md/md.c | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/sys/dev/md/md.c b/sys/dev/md/md.c
index 48d48fd..a03b078 100644
--- a/sys/dev/md/md.c
+++ b/sys/dev/md/md.c
@@ -436,10 +436,11 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
 			if (osp == 0)
 				bzero(dst, sc->sectorsize);
 			else if (osp <= 255)
-				for (i = 0; i < sc->sectorsize; i++)
-					dst[i] = osp;
-			else
+				memset(dst, osp, sc->sectorsize);
+			else {
 				bcopy((void *)osp, dst, sc->sectorsize);
+				cpu_flush_dcache(dst, sc->sectorsize);
+			}
 			osp = 0;
 		} else if (bp->bio_cmd == BIO_WRITE) {
 			if (sc->flags & MD_COMPRESS) {
@@ -491,6 +492,7 @@ mdstart_preload(struct md_s *sc, struct bio *bp)
 	case BIO_READ:
 		bcopy(sc->pl_ptr + bp->bio_offset, bp->bio_data,
 		    bp->bio_length);
+		cpu_flush_dcache(bp->bio_data, bp->bio_length);
 		break;
 	case BIO_WRITE:
 		bcopy(bp->bio_data, sc->pl_ptr + bp->bio_offset,
@@ -633,6 +635,7 @@ mdstart_swap(struct md_s *sc, struct bio *bp)
 				break;
 			}
 			bcopy((void *)(sf_buf_kva(sf) + offs), p, len);
+			cpu_flush_dcache(p, len);
 		} else if (bp->bio_cmd == BIO_WRITE) {
 			if (len != PAGE_SIZE && m->valid != VM_PAGE_BITS_ALL)
 				rv = vm_pager_get_pages(sc->object, &m, 1, 0);