author    : royger <royger@FreeBSD.org>  2015-11-09 12:22:44 +0000
committer : royger <royger@FreeBSD.org>  2015-11-09 12:22:44 +0000
commit    : 50d6b5faf455d78f6829aa57523ff8fe0d7cb83e (patch)
tree      : b2613e91de1099a4e50b8e2ea11c733e88808559 /sys/dev/xen
parent    : 8877774b9d4614ea3cd52319e7663653d95db364 (diff)
xen-blkfront: add support for unmapped IO
Using unmapped IO is especially beneficial when running inside a VM,
since it avoids IPIs to other vCPUs in order to invalidate mappings.
This patch adds unmapped IO support to blkfront. The following test
results were obtained when running on a Xen host without HAP
(hardware-assisted paging):
PVHVM
3165.84 real 6354.17 user 4483.32 sys
PVHVM with unmapped IO
2099.46 real 4624.52 user 2967.38 sys
This is because TLB flushes and range invalidations are much more
expensive when running with shadow page tables, so using unmapped IO
provides a significant performance boost.
Sponsored by: Citrix Systems R&D
MFC after: 2 weeks
X-MFC-with: r290610
dev/xen/blkfront/blkfront.c:
- Add and announce support for unmapped IO.
Diffstat (limited to 'sys/dev/xen')
-rw-r--r--  sys/dev/xen/blkfront/blkfront.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/sys/dev/xen/blkfront/blkfront.c b/sys/dev/xen/blkfront/blkfront.c
index fa8d479..0a75d79 100644
--- a/sys/dev/xen/blkfront/blkfront.c
+++ b/sys/dev/xen/blkfront/blkfront.c
@@ -293,8 +293,12 @@ xbd_queue_request(struct xbd_softc *sc, struct xbd_command *cm)
 {
 	int error;
 
-	error = bus_dmamap_load(sc->xbd_io_dmat, cm->cm_map, cm->cm_data,
-	    cm->cm_datalen, xbd_queue_cb, cm, 0);
+	if (cm->cm_bp != NULL)
+		error = bus_dmamap_load_bio(sc->xbd_io_dmat, cm->cm_map,
+		    cm->cm_bp, xbd_queue_cb, cm, 0);
+	else
+		error = bus_dmamap_load(sc->xbd_io_dmat, cm->cm_map,
+		    cm->cm_data, cm->cm_datalen, xbd_queue_cb, cm, 0);
 	if (error == EINPROGRESS) {
 		/*
 		 * Maintain queuing order by freezing the queue. The next
@@ -354,8 +358,6 @@ xbd_bio_command(struct xbd_softc *sc)
 	}
 
 	cm->cm_bp = bp;
-	cm->cm_data = bp->bio_data;
-	cm->cm_datalen = bp->bio_bcount;
 	cm->cm_sector_number = (blkif_sector_t)bp->bio_pblkno;
 
 	switch (bp->bio_cmd) {
@@ -1009,7 +1011,7 @@ xbd_instance_create(struct xbd_softc *sc, blkif_sector_t sectors,
 	sc->xbd_disk->d_mediasize = sectors * sector_size;
 	sc->xbd_disk->d_maxsize = sc->xbd_max_request_size;
-	sc->xbd_disk->d_flags = 0;
+	sc->xbd_disk->d_flags = DISKFLAG_UNMAPPED_BIO;
 	if ((sc->xbd_flags & (XBDF_FLUSH|XBDF_BARRIER)) != 0) {
 		sc->xbd_disk->d_flags |= DISKFLAG_CANFLUSHCACHE;
 		device_printf(sc->xbd_dev,
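For reference, a minimal sketch of the pattern the diff applies, not the
committed driver code: the driver advertises DISKFLAG_UNMAPPED_BIO on its
disk(9) structure and loads the DMA map directly from the struct bio with
bus_dmamap_load_bio(), so the data pages never need a kernel virtual
mapping. The struct and function names (example_softc,
example_announce_unmapped, example_queue_bio) are hypothetical;
DISKFLAG_UNMAPPED_BIO, bus_dmamap_load_bio() and bus_dmamap_load() are the
real interfaces used in the change above.

/*
 * Hypothetical sketch of the unmapped-IO pattern used in the diff above;
 * the structure layout and helper names are illustrative, not blkfront's.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/bio.h>
#include <machine/bus.h>
#include <geom/geom_disk.h>

struct example_softc {
	bus_dma_tag_t	io_dmat;	/* DMA tag used for I/O requests */
	bus_dmamap_t	map;		/* per-request DMA map */
};

/* Advertise unmapped-bio support when setting up the disk(9) instance. */
static void
example_announce_unmapped(struct disk *dp)
{
	/* GEOM may now hand the driver bios whose data has no KVA mapping. */
	dp->d_flags |= DISKFLAG_UNMAPPED_BIO;
}

/*
 * Load the DMA map straight from the bio: bus_dmamap_load_bio() builds the
 * segment list from the bio's pages itself, so the driver never touches
 * bio_data and no temporary kernel mapping (and hence no cross-vCPU TLB
 * shootdown) is required.
 */
static int
example_queue_bio(struct example_softc *sc, struct bio *bp,
    bus_dmamap_callback_t *cb, void *cb_arg)
{
	return (bus_dmamap_load_bio(sc->io_dmat, sc->map, bp, cb, cb_arg, 0));
}

In this sketch, as in the diff, bus_dmamap_load() with a kernel virtual
address would remain the fallback for requests that do not originate from a
bio.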