path: root/block
Commit message | Author | Age | Files | Lines
* block/sheepdog: Don't use qerror_report() | Markus Armbruster | 2014-05-28 | 1 | -13/+13
    qerror_report() is a transitional interface to help with converting existing HMP commands to QMP. It should not be used elsewhere. Replace by error_report().

    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
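    A minimal sketch of what this conversion looks like (the call site is hypothetical; error_report() is QEMU's printf-style reporting helper):

        /* Before: transitional QMP-conversion interface */
        qerror_report(QERR_UNDEFINED_ERROR);

        /* After: plain reporting that is valid in any context */
        error_report("failed to connect to sheepdog server: %s",
                     strerror(errno));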
* block/sheepdog: Fix silent sd_open(), sd_create() failures | Markus Armbruster | 2014-05-28 | 1 | -0/+5
    Open and create methods must set an error when they fail.

    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/sheepdog: Propagate errors to open and create methods | Markus Armbruster | 2014-05-28 | 1 | -28/+12
    Completes the conversion to Error started in commit 015a103^..d5124c0, except for a few bugs fixed in the next commit.

    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
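    The conversion pattern used throughout this series, sketched with hypothetical function names (error_setg() and error_propagate() are the standard QEMU Error APIs; the concrete sheepdog signatures differ):

        static int sd_do_something(BDRVSheepdogState *s, Error **errp)
        {
            Error *local_err = NULL;

            helper_that_can_fail(s, &local_err);
            if (local_err) {
                error_propagate(errp, local_err);  /* hand the error up */
                return -EIO;
            }
            return 0;
        }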
* block/sheepdog: Propagate errors through find_vdi_name() | Markus Armbruster | 2014-05-28 | 1 | -9/+11
    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/sheepdog: Propagate errors through do_sd_create() | Markus Armbruster | 2014-05-28 | 1 | -14/+21
    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/sheepdog: Propagate errors through sd_prealloc() | Markus Armbruster | 2014-05-28 | 1 | -7/+13
    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/sheepdog: Propagate errors through get_sheep_fd() | Markus Armbruster | 2014-05-28 | 1 | -8/+10
    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/sheepdog: Propagate errors through connect_to_sdog() | Markus Armbruster | 2014-05-28 | 1 | -22/+55
    Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/vvfat: Propagate errors through init_directories() | Markus Armbruster | 2014-05-28 | 1 | -7/+9
    Completes the conversion of the open method to Error started in commit 015a103.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/vvfat: Propagate errors through enable_write_target() | Markus Armbruster | 2014-05-28 | 1 | -11/+7
    Continues the conversion of the open method to Error started in commit 015a103.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/ssh: Propagate errors to open and create methods | Markus Armbruster | 2014-05-28 | 1 | -25/+22
    Completes the conversion to Error started in commit 015a103^..d5124c0.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/ssh: Propagate errors through connect_to_ssh() | Markus Armbruster | 2014-05-28 | 1 | -17/+17
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/ssh: Propagate errors through authenticate() | Markus Armbruster | 2014-05-28 | 1 | -9/+14
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/ssh: Propagate errors through check_host_key() | Markus Armbruster | 2014-05-28 | 1 | -19/+49
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/ssh: Drop superfluous libssh2_session_last_errno() calls | Markus Armbruster | 2014-05-28 | 1 | -5/+4
    libssh2_session_last_error() already returns the error code.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
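    A sketch of the simplification (libssh2_session_last_error() returns the same code that a separate libssh2_session_last_errno() call would):

        char *msg;
        int code;

        /* Before: two calls to get the message and the code */
        libssh2_session_last_error(session, &msg, NULL, 0);
        code = libssh2_session_last_errno(session);

        /* After: one call is enough */
        code = libssh2_session_last_error(session, &msg, NULL, 0);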
* block/rbd: Propagate errors to open and create methods | Markus Armbruster | 2014-05-28 | 1 | -32/+39
    Completes the conversion to Error started in commit 015a103^..d5124c0.

    Cc: Josh Durgin <josh.durgin@inktank.com>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Add backing_blocker in BlockDriverState | Fam Zheng | 2014-05-28 | 1 | -1/+1
    This makes use of op_blocker to block all operations, except for the commit target, on each BlockDriverState->backing_hd. The op_blocker asserts in bdrv_swap are removed because, with this change, the target of a block commit has at least the backing blocker of its child, so the assertion no longer holds. Callers should do their own checks.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Use bdrv_set_backing_hd everywhere | Fam Zheng | 2014-05-28 | 2 | -3/+3
    We need to handle the coming backing_blocker properly, so don't open-code the assignment; instead, call bdrv_set_backing_hd to change backing_hd.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* qcow2: Fix memory leak in COW error path | Kevin Wolf | 2014-05-28 | 1 | -1/+2
    This triggers if bs->drv becomes NULL in a concurrent request. This is currently only the case when corruption prevention kicks in (i.e. at most once per image, and after that it produces I/O errors).

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* Merge remote-tracking branch 'remotes/bonzini/scsi-next' into staging | Peter Maydell | 2014-05-22 | 1 | -3/+1
    * remotes/bonzini/scsi-next:
      megasas: remove buildtime strings
      block: iscsi build fix if LIBISCSI_FEATURE_IOVECTOR is not defined
      virtio-scsi: Plug memory leak on virtio_scsi_push_event() error path
      scsi: Document intentional fall through in scsi_req_length()

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * block: iscsi build fix if LIBISCSI_FEATURE_IOVECTOR is not defined | Jeff Cody | 2014-05-20 | 1 | -3/+1
|     Commit b03c380 introduced the function iscsi_allocationmap_is_allocated(); however, it is only used within a code block that is conditionally compiled. This produces a "defined but not used" warning (an error with -Werror) for the function if LIBISCSI_FEATURE_IOVECTOR is not defined. This wraps iscsi_allocationmap_is_allocated() in the same conditional.

|     Signed-off-by: Jeff Cody <jcody@redhat.com>
|     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
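|     A sketch of the fix (the exact signature is illustrative):

|         #if defined(LIBISCSI_FEATURE_IOVECTOR)
|         static bool
|         iscsi_allocationmap_is_allocated(IscsiLun *iscsilun,
|                                          int64_t sector_num, int nb_sectors)
|         {
|             /* ... only referenced from code under the same guard ... */
|         }
|         #endif /* LIBISCSI_FEATURE_IOVECTOR */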
* | Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging | Peter Maydell | 2014-05-20 | 4 | -50/+83
    Block patches

    # gpg: Signature made Mon 19 May 2014 15:21:14 BST using RSA key ID C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"

    * remotes/kevin/tags/for-upstream: (22 commits)
      block: optimize zero writes with bdrv_write_zeroes
      blockdev: add a function to parse enum ids from strings
      util: add qemu_iovec_is_zero
      qcow1: Stricter backing file length check
      qcow1: Validate image size (CVE-2014-0223)
      qcow1: Validate L2 table size (CVE-2014-0222)
      qcow1: Check maximum cluster size
      qcow1: Make padding in the header explicit
      curl: Add usage documentation
      curl: Add sslverify option
      curl: Remove broken parsing of options from url
      curl: Fix build when curl_multi_socket_action isn't available
      qemu-iotests: Fix blkdebug in VM drive in 030
      qemu-iotests: Fix core dump suppression in test 039
      iotests: Add test for the JSON protocol
      block: Allow JSON filenames
      check-qdict: Add test for qdict_join()
      qdict: Add qdict_join()
      block: add test for vhdx image created by Disk2VHD
      block: vhdx - account for identical header sections
      ...

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * | block: optimize zero writes with bdrv_write_zeroes | Peter Lieven | 2014-05-19 | 1 | -0/+1
|     This patch optimizes zero write requests by automatically using bdrv_write_zeroes if it is supported by the format. This significantly speeds up file system initialization and should also speed up the zero-write tests used to measure backend storage performance.

|     I ran the following two tests on my internal SSD with a 50G QCOW2 container and on an attached iSCSI storage.

|     a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX

|        QCOW2        [off]     [on]      [unmap]
|        runtime:     14secs    1.1secs   1.1secs
|        filesize:    937M      18M       18M

|        iSCSI        [off]     [on]      [unmap]
|        runtime:     9.3s      0.9s      0.9s

|     b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct

|        QCOW2        [off]     [on]      [unmap]
|        runtime:     246secs   18secs    18secs
|        filesize:    51G       192K      192K
|        throughput:  203M/s    2.3G/s    2.3G/s

|        iSCSI*       [off]     [on]      [unmap]
|        runtime:     8mins     45secs    33secs
|        throughput:  106M/s    1.2G/s    1.6G/s
|        allocated:   100%      100%      0%

|     * The storage was connected via a 1Gbit interface. It seems to handle writing zeroes via WRITESAME16 internally very fast.

|     Signed-off-by: Peter Lieven <pl@kamp.de>
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
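|     Conceptually the optimization hooks into the write path roughly like this (a sketch; it assumes the qemu_iovec_is_zero() helper added in the same pull request, and the detect_zeroes knob is named here for illustration):

|         if (s->detect_zeroes && qemu_iovec_is_zero(qiov)) {
|             /* route an all-zero payload to the format's efficient path */
|             return bdrv_co_write_zeroes(bs, sector_num, nb_sectors, flags);
|         }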
| * | qcow1: Stricter backing file length check | Kevin Wolf | 2014-05-19 | 1 | -2/+5
|     Like qcow2 since commit 6d33e8e7, error out on invalid lengths instead of silently truncating them to 1023. Also don't rely on bdrv_pread() catching integer overflows that make len negative, but use unsigned variables in the first place.

|     Cc: qemu-stable@nongnu.org
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|     Reviewed-by: Benoit Canet <benoit@irqsave.net>
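|     The shape of the check, as a sketch (field names are illustrative):

|         uint32_t len = header.backing_file_size;   /* unsigned on purpose */

|         if (len > 1023) {
|             ret = -EINVAL;   /* reject instead of silently truncating */
|             goto fail;
|         }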
| * | qcow1: Validate image size (CVE-2014-0223) | Kevin Wolf | 2014-05-19 | 1 | -2/+14
|     A huge image size could cause s->l1_size to overflow. Make sure that images never require an L1 table larger than what fits in s->l1_size. Such an overflow can not only cause unbounded allocations, but also the allocation of a too small L1 table, resulting in out-of-bounds array accesses (both reads and writes).

|     Cc: qemu-stable@nongnu.org
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
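|     A sketch of the validation, under the assumption that the L1 table is an array of uint64_t entries indexed by (size >> shift):

|         shift = s->cluster_bits + s->l2_bits;
|         l1_size = (header.size + (1LL << shift) - 1) >> shift;
|         if (l1_size > INT_MAX / sizeof(uint64_t)) {
|             return -EFBIG;   /* L1 table would not fit in s->l1_size */
|         }
|         s->l1_size = l1_size;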
| * | qcow1: Validate L2 table size (CVE-2014-0222) | Kevin Wolf | 2014-05-19 | 1 | -0/+8
|     Too large L2 table sizes cause unbounded allocations. Images actually created by qemu-img only have 512 byte or 4k L2 tables. To keep things consistent with cluster sizes, allow ranges between 512 bytes and 64k (in fact, down to 1 entry = 8 bytes is technically working, but L2 table sizes smaller than a cluster don't make a lot of sense). This also means that the number of bytes on the virtual disk that are described by the same L2 table is limited to at most 8k * 64k or 2^29, preventively avoiding any integer overflows.

|     Cc: qemu-stable@nongnu.org
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|     Reviewed-by: Benoit Canet <benoit@irqsave.net>
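|     Since each L2 entry is 8 bytes, the 512-byte-to-64k range corresponds to a narrow band of l2_bits values; a sketch of the bounds check:

|         /* 512 B = 2^(6+3) bytes per table, 64 kB = 2^(13+3) bytes */
|         if (header.l2_bits < 9 - 3 || header.l2_bits > 16 - 3) {
|             return -EINVAL;
|         }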
| * | qcow1: Check maximum cluster size | Kevin Wolf | 2014-05-19 | 1 | -2/+8
|     Huge values for header.cluster_bits cause unbounded allocations (e.g. for s->cluster_cache) and crash qemu this way. Less huge values may survive those allocations, but can cause integer overflows later on. The only cluster sizes that qemu can create are 4k (for standalone images) and 512 (for images with backing files), so we can limit it to 64k.

|     Cc: qemu-stable@nongnu.org
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|     Reviewed-by: Benoit Canet <benoit@irqsave.net>
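|     A sketch of the limit (qemu-created images use 512-byte or 4k clusters, so 2^9..2^16 is comfortably permissive):

|         if (header.cluster_bits < 9 || header.cluster_bits > 16) {
|             return -EINVAL;   /* reject absurd cluster sizes up front */
|         }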
| * | qcow1: Make padding in the header explicit | Kevin Wolf | 2014-05-19 | 1 | -1/+2
|     We were relying on all compilers inserting the same padding in the header struct that is used for the on-disk format. Let's not do that. Mark the struct as packed and insert an explicit padding field for compatibility.

|     Cc: qemu-stable@nongnu.org
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|     Reviewed-by: Benoit Canet <benoit@irqsave.net>
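|     The idea, sketched on the qcow1 header (the two uint8_t fields leave a 2-byte hole before crypt_method that compilers used to fill implicitly; QEMU_PACKED is QEMU's packed-attribute macro):

|         typedef struct QCowHeader {
|             uint32_t magic;
|             uint32_t version;
|             uint64_t backing_file_offset;
|             uint32_t backing_file_size;
|             uint32_t mtime;
|             uint64_t size;
|             uint8_t  cluster_bits;
|             uint8_t  l2_bits;
|             uint16_t padding;          /* previously implicit */
|             uint32_t crypt_method;
|             uint64_t l1_table_offset;
|         } QEMU_PACKED QCowHeader;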
| * | curl: Add sslverify option | Matthew Booth | 2014-05-19 | 1 | -0/+12
|     This allows qemu to use images over https with a self-signed certificate. It defaults to verifying the certificate.

|     Signed-off-by: Matthew Booth <mbooth@redhat.com>
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
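|     In libcurl terms the option maps onto peer verification, roughly (the state field names are illustrative):

|         curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
|                          (long) s->sslverify);   /* defaults to 1 */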
| * | curl: Remove broken parsing of options from url | Matthew Booth | 2014-05-19 | 1 | -42/+10
|     The block layer now supports a generic json syntax for passing option parameters explicitly, making parsing of options from the url redundant.

|     Signed-off-by: Matthew Booth <mbooth@redhat.com>
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
| * | curl: Fix build when curl_multi_socket_action isn't available | Matthew Booth | 2014-05-19 | 1 | -0/+15
|     Signed-off-by: Matthew Booth <mbooth@redhat.com>
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
| * | block: vhdx - account for identical header sections | Jeff Cody | 2014-05-19 | 1 | -1/+8
|     The VHDX spec v1.00 declares that "a header is current if it is the only valid header or if it is valid and its SequenceNumber field is greater than the other header's SequenceNumber field. The parser must only use data from the current header. If there is no current header, then the VHDX file is corrupt."

|     However, the Disk2VHD tool from Microsoft creates a VHDX image file that has 2 identical headers, including matching checksums and matching sequence numbers. Likely, as a shortcut, the tool just writes the header twice, for the active and inactive headers, during image creation.

|     Technically, this should be considered a corrupt VHDX file (at least per the 1.00 spec, and that is how we currently treat it). But in order to accommodate images created with Disk2VHD, we can safely create an exception for this case. If we find identical sequence numbers, then we check the VHDXHeader-sized chunks of each 64KB header section (we won't rely just on the crc32c to indicate the headers are the same). If they are identical, then we go ahead and use the first one.

|     Reported-by: Nerijus Baliūnas <nerijus@users.sourceforge.net>
|     Signed-off-by: Jeff Cody <jcody@redhat.com>
|     Signed-off-by: Kevin Wolf <kwolf@redhat.com>
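|     The core of the exception, as a sketch (variable and field names are illustrative):

|         if (header1->sequence_number == header2->sequence_number) {
|             /* Disk2VHD case: accept only bit-for-bit identical headers */
|             if (memcmp(buf1, buf2, sizeof(VHDXHeader)) != 0) {
|                 ret = -EINVAL;   /* genuinely corrupt per the spec */
|                 goto fail;
|             }
|             s->curr_header = 0;   /* use the first header */
|         }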
* | | Merge remote-tracking branch 'remotes/bonzini/scsi-next' into staging | Peter Maydell | 2014-05-19 | 1 | -107/+212
    * remotes/bonzini/scsi-next:
      [PATCH] block/iscsi: bump year in copyright notice
      block/iscsi: allow cluster_size of 4K and greater
      block/iscsi: clarify the meaning of ISCSI_CHECKALLOC_THRES
      block/iscsi: speed up read for unallocated sectors
      block/iscsi: allow fall back to WRITE SAME without UNMAP
      MAINTAINERS: mark megasas as maintained
      megasas: Add MSI support
      megasas: Enable MSI-X support
      megasas: Implement LD_LIST_QUERY
      scsi: Improve error messages more
      scsi-disk: Improve error messager if can't get version number

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * [PATCH] block/iscsi: bump year in copyright notice | Peter Lieven | 2014-05-05 | 1 | -1/+1
|     Signed-off-by: Peter Lieven <pl@kamp.de>
| * block/iscsi: allow cluster_size of 4K and greater | Peter Lieven | 2014-04-29 | 1 | -1/+1
|     Depending on the target, the opt_unmap_gran might be as low as 4K. As we now also use this as a knob to activate the allocationmap feature, lower the barrier. The limit of 4K (and not 512) is chosen to avoid a potentially too big allocationmap.

|     Signed-off-by: Peter Lieven <pl@kamp.de>
|     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * block/iscsi: clarify the meaning of ISCSI_CHECKALLOC_THRES | Peter Lieven | 2014-04-29 | 1 | -2/+10
|     Signed-off-by: Peter Lieven <pl@kamp.de>
|     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * block/iscsi: speed up read for unallocated sectors | Peter Lieven | 2014-04-29 | 1 | -102/+198
|     This patch implements a cache that tracks whether a page on the iscsi target is allocated or not. The cache is implemented in a way that allows for false positives (e.g. pretending a page is allocated when it isn't), but no false negatives.

|     The cached allocation info is then used to speed up the read process for unallocated sectors by issuing a GET_LBA_STATUS request for all sectors that are not yet known to be allocated. If the read request is confirmed to fall into an unallocated range, we directly return zeroes and do not transfer the data over the wire.

|     Tests have shown that a relatively small number of GET_LBA_STATUS requests happen at vServer boot time to fill the allocation cache (all those blocks are not queried again). Not transferring the data of unallocated sectors saves a lot of time, bandwidth and storage I/O load during block jobs or storage migration, and it saves a lot of bandwidth as well for any big sequential read of the whole disk (e.g. block copy or speed tests) if a significant number of blocks is unallocated.

|     Signed-off-by: Peter Lieven <pl@kamp.de>
|     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * block/iscsi: allow fall back to WRITE SAME without UNMAP | Peter Lieven | 2014-04-28 | 1 | -5/+6
|     If the iscsi driver receives a write zeroes request with the BDRV_REQ_MAY_UNMAP flag set, it fails with -ENOTSUP if the iscsi target does not support WRITE SAME with UNMAP. However, BDRV_REQ_MAY_UNMAP is only a hint, and writing zeroes with WRITE SAME will still be better than falling back to writing zeroes with WRITE16.

|     Signed-off-by: Peter Lieven <pl@kamp.de>
|     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | block/raw-posix: Try both FIEMAP and SEEK_HOLE | Max Reitz | 2014-05-09 | 1 | -50/+77
    The current version of raw-posix always uses ioctl(FS_IOC_FIEMAP) if FIEMAP is available; in that case, the lseek-based SEEK_HOLE/SEEK_DATA path is not even compiled in. However, there may be implementations which support the latter but not the former (e.g., NFSv4.2), as well as vice versa. To cover both cases, try FIEMAP first (as this will return -ENOTSUP if unsupported, instead of returning a failsafe value (everything allocated as a single extent)), and if that does not work, fall back to SEEK_HOLE/SEEK_DATA.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
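    A sketch of the dispatch, with hypothetical helper names standing in for the two detection paths:

        ret = try_fiemap(bs, start, &data, &hole);      /* FS_IOC_FIEMAP */
        if (ret == -ENOTSUP) {
            /* e.g. NFSv4.2: no FIEMAP, but SEEK_HOLE/SEEK_DATA work */
            ret = try_seek_hole(bs, start, &data, &hole);
        }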
* | gluster: Correctly propagate errors when volume isn't accessible | Peter Krempa | 2014-05-09 | 1 | -1/+6
    The docs for glfs_init suggest that the function sets errno on every failure. In fact it doesn't. As other functions such as qemu_gluster_open() in the gluster block code report their errors based on this assumption, we need to make sure that errno is set on each failure. This fixes a crash of qemu-img/qemu when a gluster brick isn't accessible from a given host while the server serving the volume description is.

    Thread 1 (Thread 0x7ffff7fba740 (LWP 203880)):
    #0  0x00007ffff77673f8 in glfs_lseek () from /usr/lib64/libgfapi.so.0
    #1  0x0000555555574a68 in qemu_gluster_getlength ()
    #2  0x0000555555565742 in refresh_total_sectors ()
    #3  0x000055555556914f in bdrv_open_common ()
    #4  0x000055555556e8e8 in bdrv_open ()
    #5  0x000055555556f02f in bdrv_open_image ()
    #6  0x000055555556e5f6 in bdrv_open ()
    #7  0x00005555555c5775 in bdrv_new_open ()
    #8  0x00005555555c5b91 in img_info ()
    #9  0x00007ffff62c9c05 in __libc_start_main () from /lib64/libc.so.6
    #10 0x00005555555648ad in _start ()

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
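    The guard presumably looks something like this (a sketch; the exact errno value chosen is an assumption):

        ret = glfs_init(glfs);
        if (ret) {
            /* glfs_init() does not reliably set errno; make sure callers
             * that report via -errno see a sane value */
            if (!errno) {
                errno = EINVAL;
            }
            goto out;
        }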
* | vmdk: Implement .bdrv_get_info() | Fam Zheng | 2014-05-09 | 1 | -0/+21
    This will return cluster_size and needs_compressed_writes to the caller if all the extents have the same value (or there's only one extent); otherwise it returns -ENOTSUP. cluster_size is only reported for sparse formats.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* | vmdk: Implement .bdrv_write_compressed | Fam Zheng | 2014-05-09 | 1 | -0/+14
    Add a wrapper function to support the "compressed" path in qemu-img convert. Only the streamOptimized subformat case is supported for now (num_extents == 1 and extent compression is true).

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* | block/iscsi: bump year in copyright notice | Peter Lieven | 2014-05-09 | 1 | -1/+1
    Signed-off-by: Peter Lieven <pl@kamp.de>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* | block/nfs: Check for NULL server part | Max Reitz | 2014-05-09 | 1 | -0/+4
    After the URL has been parsed make sure the server part is valid in order to avoid a segmentation fault when calling nfs_mount().

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
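    A sketch of the check (the uri field name is an assumption about the URL parser in use):

        if (!uri->server) {
            error_setg(errp, "Invalid URL specified");
            goto fail;
        }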
* | qcow2: Fix alloc_clusters_noref() overflow detection | Max Reitz | 2014-05-09 | 1 | -1/+3
    If the very first allocation has a length of 0, the free_cluster_index is still 0 after the for loop, which means that subtracting one from it will underflow and signal an invalid range of clusters by returning -EFBIG. However, there is no such range, as its length is 0. Fix this by preventing underflows on free_cluster_index during the check.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
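    The fix amounts to guarding the subtraction, sketched as (the exact upper bound is illustrative):

        /* free_cluster_index may still be 0 if the allocation was empty */
        if (s->free_cluster_index > 0 &&
            s->free_cluster_index - 1 > (INT64_MAX >> s->cluster_bits)) {
            return -EFBIG;
        }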
* | curl: Fix hang reading from slow connections | Matthew Booth | 2014-04-30 | 1 | -1/+2
    When receiving a new aio read request, we first look for an existing transaction whose range will cover the read request by the time it completes. However, we weren't checking that the existing transaction was still active. If it had timed out, we were adding the request to a transaction which would never complete and had already been cancelled, resulting in a hang.

    Signed-off-by: Matthew Booth <mbooth@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | curl: Ensure all informationals are checked for completion | Matthew Booth | 2014-04-30 | 1 | -30/+23
    According to the documentation, the correct way to ensure all informationals have been returned by curl_multi_info_read is to loop until it returns NULL.

    Signed-off-by: Matthew Booth <mbooth@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
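    The documented idiom, for reference:

        CURLMsg *msg;
        int msgs_left;

        /* drain the queue; stopping early can miss completed transfers */
        while ((msg = curl_multi_info_read(multi, &msgs_left))) {
            if (msg->msg == CURLMSG_DONE) {
                /* ... complete the corresponding request ... */
            }
        }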
* | curl: Eliminate unnecessary use of curl_multi_socket_all | Matthew Booth | 2014-04-30 | 1 | -10/+22
    curl_multi_socket_all is a deprecated catch-all which checks for activities on all open curl sockets. We have enough information from the event loop to check only the sockets with activity. This change removes use of curl_multi_socket_all in favour of curl_multi_socket_action called with the relevant handle.

    At the same time, it also ensures that the driver only checks for completion of read operations after reading from a socket, rather than both reading and writing.

    Signed-off-by: Matthew Booth <mbooth@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
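    A sketch of the replacement call (fd and ev_bitmask come straight from the event-loop callback):

        int running;

        /* act only on the socket that actually fired */
        curl_multi_socket_action(multi, fd, ev_bitmask, &running);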
* | curl: Remove unnecessary explicit calls to internal event handler | Matthew Booth | 2014-04-30 | 1 | -2/+3
    Remove calls to curl_multi_do where the relevant handles are already registered to the event loop. Ensure that we kick off socket handling with CURL_SOCKET_TIMEOUT after adding a new handle.

    Signed-off-by: Matthew Booth <mbooth@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | curl: Remove erroneous sleep waiting for curl completion | Matthew Booth | 2014-04-30 | 1 | -2/+1
    The driver will not start more than a fixed number of curl sessions. If it needs more, it must wait for the completion of an existing one. The driver was sleeping, which will prevent the main loop from running, and therefore the event it's waiting on. It was also directly calling its internal handler rather than waiting on existing registered handlers to be called from the main loop. This change causes it simply to wait for a period of time whilst allowing the main loop to execute.

    Signed-off-by: Matthew Booth <mbooth@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>