Commit message / Author / Age / Files / Lines
* nbd: Don't mishandle unaligned client requestsEric Blake2019-11-291-6/+4
| | | | | | | | | | | | | | | | | | | | | | | | | The NBD protocol does not (yet) force any alignment constraints on clients. Even though qemu NBD clients always send requests that are aligned to 512 bytes, we must be prepared for non-qemu clients that don't care about alignment (even if it means they are less efficient). Our use of blk_read() and blk_write() was silently operating on the wrong file offsets when the client made an unaligned request, corrupting the client's data (but as the client already has control over the file we are serving, I don't think it is a security hole, per se, just a data corruption bug). Note that in the case of NBD_CMD_READ, an unaligned length could cause us to return up to 511 bytes of uninitialized trailing garbage from blk_try_blockalign() - hopefully nothing sensitive from the heap's prior usage is ever leaked in that manner. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Tested-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1461249750-31928-1-git-send-email-eblake@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
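To illustrate the class of bug: the sector-based accessors silently drop the sub-sector part of a client offset, while the byte-based ones honour it. A minimal sketch, assuming an NBD request with byte-accurate from/len fields (the surrounding server code and field names are illustrative, not the exact patch):

    /* broken (sketch): unaligned bits of request.from/len are silently dropped */
    blk_read(exp->blk, request.from / 512, req->data, request.len / 512);

    /* byte-accurate (sketch): serve exactly the range the client asked for */
    if (blk_pread(exp->blk, request.from, req->data, request.len) < 0) {
        /* ...report an error to the client instead of returning bad data... */
    }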
* Update version for v2.6.0-rc3 releasePeter Maydell2019-11-291-1/+1
| | | | Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg: check for CONFIG_DEBUG_TCG instead of NDEBUGAurelien Jarno2019-11-2910-17/+12
| | | | | | | | | | Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code. Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-2-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg: use tcg_debug_assert instead of assert (fix performance regression)Aurelien Jarno2019-11-2911-93/+93
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The TCG code is quite performance sensitive, but at the same time can also be quite tricky. That is why it has asserts that can be enabled with the --enable-debug-tcg configure option. This used to work the following way: | #include "config.h" | | ... | | #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG) | /* define it to suppress various consistency checks (faster) */ | #define NDEBUG | #endif | | ... | | #include <assert.h> Since commit 757e725b (tcg: Clean up includes) "config.h" has been replaced by "qemu/osdep.h", which itself includes <assert.h>. As a consequence the assertions are always enabled, even when using --disable-debug-tcg, causing a performance regression, especially on targets with many registers. For instance, on qemu-system-ppc the speed difference is about 15%. tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is already used in some places. This patch converts all the calls to assert into calls to tcg_debug_assert. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-1-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
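For context, a minimal sketch of an assertion macro keyed to CONFIG_DEBUG_TCG instead of NDEBUG; the non-debug branch is an assumption about one reasonable way to keep the compiler hint without a runtime check:

    #ifdef CONFIG_DEBUG_TCG
    # define tcg_debug_assert(X) do { assert(X); } while (0)
    #else
    /* no runtime cost in non-debug builds, but the compiler may still
     * assume the condition holds */
    # define tcg_debug_assert(X) \
        do { if (!(X)) { __builtin_unreachable(); } } while (0)
    #endif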
* hw/arm/boot: always clear r0 when booting kernelsSylvain Garrigues2019-11-291-1/+1
| | | | | | | | | | | | | | | | | The 32-bit ARM Linux kernel booting ABI requires that r0 is 0 when calling the kernel image. A bug in commit 10b8ec73e610e01 meant that for boards which use the write_board_setup hook (which means "highbank", "midway", "raspi2" and "xilinx-zynq-a9") we were incorrectly skipping the "clear r0" instruction in the mini-bootloader. Use the right offset in the "add lr, pc, #n" instruction so that we return from the board-setup code to the correct place. Signed-off-by: Sylvain Garrigues <sylvain@sylvaingarrigues.com> [PMM: Expanded commit message] Cc: qemu-stable@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* MAINTAINERS: Avoid using K: for NUMA sectionEduardo Habkost2019-11-291-2/+0
| | | | | | | | | | | | When using K: in MAINTAINERS, false positives make get_maintainer.pl not use git history to find contributors. As those patterns cause lots of false positives, they are doing more harm than good, so remove them. Reported-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Message-id: 1461164130-3847-1-git-send-email-ehabkost@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* iotests: Test case for drive-mirror with unaligned image sizeFam Zheng2019-11-293-0/+68
| | | | | | | | | | | | This is the regression test for the virtual size mismatch issue between target and source images. [ kwolf: Added test_unaligned_with_update ] Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
* iotests: Add iotests.image_sizeFam Zheng2019-11-291-0/+6
| | | | | | | | | This retrieves the virtual size of the image out of qemu-img info. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* mirror: Don't extend the last sub-chunkFam Zheng2019-11-292-26/+37
| | | | | | | | | | | | | | | | | | The last sub-chunk is rounded up to the copy granularity in the target image, resulting in a larger size than the source. Add a function to clip the copied sectors to the end. This undoes the "wrong" changes to tests/qemu-iotests/109.out in e5b43573e28. The remaining two offset changes are okay. [ kwolf: Use DIV_ROUND_UP to calculate nb_chunks now ] Reported-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
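The clipping helper described above could look roughly like this (function and field names are assumptions for illustration); the point is that the final chunk is bounded by the source length rather than rounded up to the copy granularity:

    /* sketch: never let a copy extend past the end of the source device */
    static int mirror_clip_sectors(MirrorBlockJob *s, int64_t sector_num,
                                   int nb_sectors)
    {
        return MIN(nb_sectors,
                   s->bdev_length / BDRV_SECTOR_SIZE - sector_num);
    }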
* block/mirror: Refresh stale bitmap iterator cacheMax Reitz2019-11-291-0/+5
| | | | | | | | | | | | | | If the drive's dirty bitmap is dirtied while the mirror operation is running, the cache of the iterator used by the mirror code may become stale and not contain all dirty bits. This only becomes an issue if we are looking for contiguously dirty chunks on the drive. In that case, we can easily detect the discrepancy and just refresh the iterator if one occurs. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/mirror: Revive dead yielding codeMax Reitz2019-11-291-11/+12
| | | | | | | | | | | | | | | | mirror_iteration() is supposed to wait if the current chunk is subject to a still in-flight mirroring operation. However, it mixed checking this conflict situation with checking the dirty status of a chunk. A simplification for the latter condition (the first chunk encountered is always dirty) led to neglecting the former: We just skip the first chunk and thus never test whether it conflicts with an in-flight operation. To fix this, pull out the code which waits for in-flight operations on the first chunk of the range to be mirrored to settle. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* qemu-ga: do not run qga test when guest agent disabledYang Hongyang2019-11-291-1/+3
| | | | | | | | | | | | | | | | | | | | When configured with --disable-guest-agent, make check fails with: ERROR:tests/test-qga.c:74:fixture_setup: assertion failed (error == NULL): Failed to execute child process "/home/xx/qemu/qemu-ga" (No such file or directory) (g-exec-error-quark, 8) make: *** [check-tests/test-qga] Error 1 This check was commented out by bab47d9a75a. I think that was a mistake, because the commit message of that commit doesn't mention this change. Signed-off-by: Yang Hongyang <hongyang.yang@easystack.cn> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Michael Roth <mdroth@linux.vnet.ibm.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com> Cc: qemu-stable@nongnu.org
* Update language files for QEMU 2.6.0Peter Maydell2019-11-297-127/+127
| | | | | | | | Update translation files (change created via 'make -C po update'). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1461059023-14470-1-git-send-email-peter.maydell@linaro.org Reviewed-by: Stefan Weil <sw@weilnetz.de>
* block/gluster: prevent data loss after i/o errorJeff Cody2019-11-292-1/+60
| | | | | | | | | | | | | | | | | Upon receiving an I/O error after an fsync, by default gluster will dump its cache. However, QEMU will retry the fsync, which is especially useful when encountering errors such as ENOSPC when using the werror=stop option. When using caching with gluster, however, the last written data will be lost upon encountering ENOSPC. Using the write-behind-cache xlator option of 'resync-failed-syncs-after-fsync' should cause gluster to retain the cached data after a failed fsync, so that ENOSPC and other transient errors are recoverable. Unfortunately, we have no way of knowing if the 'resync-failed-syncs-after-fsync' xlator option is supported, so for now close the fd and set the BDS driver to NULL upon fsync error. Signed-off-by: Jeff Cody <jcody@redhat.com>
* block/gluster: code movement of qemu_gluster_close()Jeff Cody2019-11-291-11/+11
| | | | | | | Move qemu_gluster_close() further up in the file, in preparation for the next patch, to avoid a forward declaration. Signed-off-by: Jeff Cody <jcody@redhat.com>
* block/gluster: return correct error valueJeff Cody2019-11-291-1/+1
| | | | | | | | | | | Upon error, gluster will call the aio callback function with a ret value of -1, with errno set to the proper error value. If we set the acb->ret value to the return value in the callback, that results in every error being EPERM (i.e. 1). Instead, set it to the proper error result. Reviewed-by: Niels de Vos <ndevos@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com>
* fw_cfg: Adopt /opt/RFQDN conventionMarkus Armbruster2019-11-292-24/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | FW CFG's primary user is QEMU, which uses it to expose configuration information (in the widest sense) to Firmware. Thus the name FW CFG. FW CFG can also be used by others for their own purposes. QEMU is merely acting as transport then. Names starting with opt/ are reserved for such uses. There is no provision, however, to guide safe sharing among different such users. Fix that, loosely following QMP precedence: names should start with opt/RFQDN/, where RFQDN is a reverse fully qualified domain name you control. Based on a more ambitious patch from Michael Tsirkin. Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Gabriel L. Somlo <somlo@cmu.edu> Cc: Laszlo Ersek <lersek@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Gabriel Somlo <somlo@cmu.edu> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
* cadence_uart: bounds check write offsetMichael S. Tsirkin2019-11-291-0/+3
| | | | | | | | | | | | | | | | | | | | | cadence_uart_init() initializes an I/O memory region of size 0x1000 bytes. However in uart_write(), the 'offset' parameter (offset within region) is divided by 4 and then used to index the array 'r' of size CADENCE_UART_R_MAX which is much smaller: (0x48/4). If 'offset>>=2' exceeds CADENCE_UART_R_MAX, this will cause an out-of-bounds memory write where the offset and the value are controlled by guest. This will corrupt QEMU memory, in most situations this causes the vm to crash. Fix by checking the offset against the array size. Cc: qemu-stable@nongnu.org Reported-by: 李强 <liqiang6-s@360.cn> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Message-id: 20160418100735.GA517@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
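A minimal sketch of the bounds check described above, assuming the register file is indexed by the word offset (struct and array names are illustrative, not the exact patch):

    static void uart_write(void *opaque, hwaddr offset, uint64_t value,
                           unsigned size)
    {
        CadenceUARTState *s = opaque;

        offset >>= 2;
        if (offset >= CADENCE_UART_R_MAX) {
            return;                 /* ignore out-of-range guest writes */
        }
        s->r[offset] = value;       /* ...then the per-register side effects */
    }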
* Revert "ehci: make idt processing more robust"Gerd Hoffmann2019-11-291-3/+2
| | | | | | | | This reverts commit 156a2e4dbffa85997636a7a39ef12da6f1b40254. Breaks FreeBSD. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
* ehci: apply limit to iTD/siTD descriptorsGerd Hoffmann2019-11-291-1/+5
| | | | | | | | | | | | | | | Commit "156a2e4 ehci: make idt processing more robust" tries to avoid a DoS by the guest (create a circular iTD queue and let qemu ehci emulation run in circles forever). Unfortunately this has two problems: first, it misses the case of siTDs, and second, it reportedly breaks FreeBSD. So let's go for a different approach: just count the number of iTDs and siTDs we have seen per frame and apply a limit. That should really catch all cases now. Reported-by: 杜少博 <dushaobo@360.cn> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
* cuda: fix off-by-one error in SET_TIME commandAurelien Jarno2019-11-291-2/+2
| | | | | | | | | | | | | | | | With the new framework the cuda_cmd_set_time command directly receives the data, without the command byte. Therefore the time is stored at in_data[0], not at in_data[1]. This fixes the "hwclock --systohc" command in a guest. Cc: Hervé Poussineau <hpoussin@reactos.org> Cc: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Reviewed-by: Hervé Poussineau <hpoussin@reactos.org> [this fixes a regression introduced by e647317 "cuda: port SET_TIME command to new framework"] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
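Put differently, with the command byte already stripped by the new framework, the big-endian time value starts at index 0. A sketch of the corrected decoding (variable names are illustrative):

    /* sketch: the four time bytes now begin at in_data[0], not in_data[1] */
    uint32_t ti = (in_data[0] << 24) | (in_data[1] << 16)
                | (in_data[2] << 8)  |  in_data[3];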
* target-i386: Set AMD alias bits after filtering CPUID dataEduardo Habkost2019-11-291-8/+8
| | | | | | | | | | | | | | | | | | | | | | | | | QEMU complains about -cpu host on an AMD machine: warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0] (and likewise for bits 1, 3, 4, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16, 17, 23 and 24). KVM_GET_SUPPORTED_CPUID and x86_cpu_get_migratable_flags() don't handle the AMD CPUID alias bits, making x86_cpu_filter_features() print warnings and clear those CPUID bits incorrectly. To avoid hacking x86_cpu_get_migratable_flags() to handle CPUID_EXT2_AMD_ALIASES (just like the existing hack inside kvm_arch_get_supported_cpuid()), simply move the CPUID_EXT2_AMD_ALIASES code in x86_cpu_realizefn() after the x86_cpu_filter_features() call. This will probably make the CPUID_EXT2_AMD_ALIASES hack in kvm_arch_get_supported_cpuid() unnecessary, too. The hack will be removed in a follow-up patch after v2.6.0. Reported-by: Radim Krčmář <rkrcmar@redhat.com> Tested-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* MAINTAINERS: Drop target-i386 from CPU subsystemAndreas Färber2019-11-291-1/+0
| | | | | | | | | | | | X86CPU QOM type is in good hands and actively maintained these days, so drop it from the generic QOM CPU subsystem. Some refactorings and design questions will still intersect, but review and discussions of individual series can still take place while opting out of general X86CPU patch review. Acked-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
* Update OpenBIOS imagesMark Cave-Ayland2019-11-294-0/+0
| | | | | | Update OpenBIOS images to SVN r1395 built from submodule. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
* ppc: Fix migration of the XER registerThomas Huth2019-11-291-1/+1
| | | | | | | | | | | env->xer only holds the lower bits of the XER register nowadays, the SO, OV and CA bits are stored in separate variables (see the function cpu_write_xer() for details). Since the migration code currently only reads the "xer" variable, the upper bits are lost during migration. Fix it by using cpu_read_xer() instead. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
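A sketch of the idea, assuming the save path previously serialised env->xer directly; cpu_read_xer() reassembles SO, OV and CA into the architected register value:

    /* sketch: migrate the full architected XER, not just the lower bits */
    target_ulong xer = cpu_read_xer(env);
    qemu_put_betls(f, &xer);    /* instead of qemu_put_betls(f, &env->xer) */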
* ppc: Fix the bad exception NIP value and the range check in LSWXThomas Huth2019-11-291-2/+3
| | | | | | | | | | | | | | | | | | The range checks in the LSWX instruction are completely insufficient: They do not take the wrap-around case into account, and the check "reg < rx" should be "reg <= rx" instead. Fix it by using the new lsw_reg_in_range() helper function that is already used for LSWI, too. Then there is a second problem: In case the INVAL exception is generated, the NIP value is wrong, it currently points to the instruction before the LSWX instruction. This is because gen_lswx() already decreases the NIP value by 4 (to be prepared for page fault exceptions), and powerpc_excp() later decreases it again by 4 while handling the program exception. So to get this right, we've got to undo the "- 4" from gen_lswx() here before calling helper_raise_exception_err(). Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* ppc: Fix the range check in the LSWI instructionThomas Huth2019-11-292-4/+12
| | | | | | | | | | | | There are two issues: First, the number of registers that are used has to be calculated with "(nb + 3) / 4" (i.e. round always up, not down). Second, the "start <= ra && (start + nr - 32) > ra" condition for the wrap-around case is wrong: It has to be tested with "||" instead of "&&". Since we can reuse this check later for the LSWX instruction, let's place the fixed code into a helper function, too. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
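A sketch of the shared range check the message describes, where nregs is the rounded-up register count ((nb + 3) / 4) and the second clause handles wrap-around past r31; treat the exact form as illustrative:

    static inline bool lsw_reg_in_range(int start, int nregs, int rx)
    {
        return (start + nregs <= 32 && rx >= start && rx < start + nregs) ||
               (start + nregs > 32 && (rx >= start || rx < start + nregs - 32));
    }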
* seccomp: adding sysinfo system call to whitelistMiroslav Rezanina2019-11-291-0/+1
| | | | | | | | | | Newer versions of the nss-softokn libraries (> 3.16.2.3) use the sysinfo call, so QEMU using an rbd image hangs after start when run in sandbox mode. To allow using rbd images in sandbox mode we have to whitelist the sysinfo syscall. Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com> Acked-by: Eduardo Otubo <eduardo.otubo@profitbricks.com>
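QEMU's seccomp whitelist is a table of libseccomp syscall entries, so the change amounts to one more row; the priority value below is an assumption, the real table assigns its own weights:

    { SCMP_SYS(sysinfo), 240 },   /* used by newer nss-softokn via librbd */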
* seccomp: Whitelist cacheflush since 2.2.0 not 2.2.3James Hogan2019-11-291-3/+5
| | | | | | | | | | | | | | The cacheflush system call (found on MIPS and ARM) has been included in the libseccomp header since 2.2.0, so include it back to that version. Previously it was only enabled since 2.2.3 since that is when it was enabled properly for ARM. This will allow seccomp support to be enabled for MIPS back to libseccomp 2.2.0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-By: Andrew Jones <drjones@redhat.com> Acked-by: Eduardo Otubo <eduardo.otubo@profitbricks.com>
* configure: Enable seccomp sandbox for MIPSJames Hogan2019-11-291-0/+3
| | | | | | | | | Enable seccomp on MIPS since libseccomp version 2.2.0 when MIPS support was first added. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Acked-by: Eduardo Otubo <eduardo.otubo@profitbricks.com>
* wxx: Fix broken TCP networking (regression)Stefan Weil2019-11-292-5/+1
| | | | | | | | | | It is broken since commit c619644067f98098dcdbc951e2dda79e97560afa. Reported-by: Michael Fritscher <michael@fritscher.net> Tested-by: Michael Fritscher <michael@fritscher.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Daniel P. Berrange <berrange@redhat.com> Signed-off-by: Stefan Weil <sw@weilnetz.de>
* nbd: Don't kill server on client that doesn't request TLSEric Blake2019-11-291-2/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Upstream NBD documents (as of commit 4feebc95) that servers MAY choose to operate in a conditional mode, where it is up to the client whether to use TLS. For qemu's case, we want to always be in FORCEDTLS mode, because of the risk of man-in-the-middle attacks, and since we never export more than one device; likewise, the qemu client will ALWAYS send NBD_OPT_STARTTLS as its first option. But now that SELECTIVETLS servers exist, it is feasible to encounter a (non-qemu) client that is programmed to talk to such a server, and does not do NBD_OPT_STARTTLS first, but rather wants to probe if it can use a non-encrypted export. The NBD protocol documents that we should let such a client continue trying, on the grounds that maybe the client will get the hint to send NBD_OPT_STARTTLS, rather than immediately dropping the connection. Note that NBD_OPT_EXPORT_NAME is a special case: since it is the only option request that can't have an error return, we have to (continue to) drop the connection on that one; rather, what we are fixing here is that all other replies prior to TLS initiation tell the client NBD_REP_ERR_TLS_REQD, but keep the connection alive. Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 1460671343-18485-1-git-send-email-eblake@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
* nbd: fix assert() on qemu-nbd stopPavel Butsykin2019-11-291-1/+1
| | | | | | | | | | | | | | | | | | | | | | | From time to time qemu-nbd crashes on the following assert: assert(state == TERMINATING); nbd_export_closed nbd_export_put main and the state at the moment of the crash evaluates to TERMINATE. During the client shutdown process, the nbd_client_thread thread sends a SIGTERM signal and the main thread calls the nbd_client_closed callback. If the SIGTERM handler is executed after the state has been changed to TERMINATING, the state is set back to TERMINATE. To solve the issue, we must change the state to TERMINATE only if the state is RUNNING; otherwise we are already shutting down. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Paolo Bonzini <pbonzini@redhat.com> Message-id: 1460629215-11567-1-git-send-email-den@openvz.org Signed-off-by: Max Reitz <mreitz@redhat.com>
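A sketch of the fix as described: only move to TERMINATE while the server is still RUNNING, so a late SIGTERM cannot undo TERMINATING (the handler shape is illustrative; atomic_cmpxchg is QEMU's compare-and-swap helper):

    static void termsig_handler(int signum)
    {
        /* don't clobber TERMINATING once shutdown has already started */
        atomic_cmpxchg(&state, RUNNING, TERMINATE);
        qemu_notify_event();
    }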
* nbd: Don't fail handshake on NBD_OPT_LIST descriptionsEric Blake2019-11-291-2/+21
| | | | | | | | | | | | | | | | | | | | The NBD Protocol states that NBD_REP_SERVER may set 'length > sizeof(namelen) + namelen'; in which case the rest of the packet is a UTF-8 description of the export. While we don't know of any NBD servers that send this description yet, we had better consume the data so we don't choke when we start to talk to such a server. Also, a (buggy/malicious) server that replies with length < sizeof(namelen) would cause us to block waiting for bytes that the server is not sending, and one that replies with super-huge lengths could cause us to temporarily allocate up to 4G memory. Sanity check things before blindly reading incorrectly. Signed-off-by: Eric Blake <eblake@redhat.com> Message-id: 1460077777-31004-1-git-send-email-eblake@redhat.com Reviewed-by: Alex Bligh <alex@alex.org.uk> Signed-off-by: Max Reitz <mreitz@redhat.com>
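A hedged sketch of the sanity check plus draining of the optional description, with variable and helper names chosen for illustration rather than taken from the patch:

    uint32_t namelen;
    if (len < sizeof(namelen) || len > NBD_MAX_BUFFER_SIZE) {
        return -1;                          /* reply length can't be trusted */
    }
    /* ...read namelen and the export name itself, then: */
    uint32_t desclen = len - sizeof(namelen) - namelen;
    while (desclen > 0) {                   /* drain the UTF-8 description */
        char tmp[256];
        uint32_t n = MIN(desclen, sizeof(tmp));
        if (read_sync(ioc, tmp, n) != n) {
            return -1;
        }
        desclen -= n;
    }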
* qemu-iotests: 041: More robust assertion on quorum nodeFam Zheng2019-11-292-8/+18
| | | | | | | | | | Block nodes are now assigned names automatically, therefore the test case is fragile in using fixed indices in result. Introduce a method in iotests.py and do the matching more sensibly. Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 1460518995-1338-1-git-send-email-famz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
* qemu-iotests: place valgrind log file in scratch dirSascha Silbe2019-11-291-1/+1
| | | | | | | | | | | | Do not place the valgrind log file at a predictable path in a world-writable location. Use the common scratch directory (${TEST_DIR}) instead. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Reviewed-by: Bo Tu <tubo@linux.vnet.ibm.com> Message-id: 1460472980-26319-5-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* qemu-iotests: tests: do not set unused tmp variableSascha Silbe2019-11-29117-117/+0
| | | | | | | | | | | | The previous commit removed the last usage of ${tmp} inside the tests themselves; the only remaining users are sourced by check. So we can now drop this variable from the tests. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Reviewed-by: Bo Tu <tubo@linux.vnet.ibm.com> Message-id: 1460472980-26319-4-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* qemu-iotests: common.rc: drop unused _do()Sascha Silbe2019-11-291-46/+0
| | | | | | | | | | | _do() was never used and possibly creates temporary files at predictable, world-writable locations. Get rid of it. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Reviewed-by: Bo Tu <tubo@linux.vnet.ibm.com> Message-id: 1460472980-26319-3-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* qemu-iotests: drop unused _within_tolerance() filterSascha Silbe2019-11-291-101/+0
| | | | | | | | | | | | | | _within_tolerance() isn't used anymore and possibly creates temporary files at predictable, world-writable locations. Get rid of it. If it's needed again in the future it can be revived easily and fixed up to use TEST_DIR and / or safely created temporary files. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Reviewed-by: Bo Tu <tubo@linux.vnet.ibm.com> Message-id: 1460472980-26319-2-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* hostmem-file: plug a small leakMarc-André Lureau2019-11-291-0/+8
| | | | | | | Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <1460566660-19241-1-git-send-email-marcandre.lureau@redhat.com> Reviewed-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
* Fix pflash migrationDr. David Alan Gilbert2019-11-291-2/+16
| | | | | | | | | | | | | | | | | | | | | | | Pflash migration (e.g. q35 + EFI variable storage) fails with the assert: bdrv_co_do_pwritev: Assertion `!(bs->open_flags & 0x0800)' failed. This avoids the problem by delaying the pflash update until after the device loads complete. Tested by: Migrating Q35/EFI vm. Changing efi variable content (with efiboot in the guest) md5sum'ing the variable file before migration and after. This is a fix that Paolo posted in the message 570244B3.4070105@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Acked-by: Laszlo Ersek <lersek@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Don't ignore flags in blk_{,co,aio}_write_zeroes()Kevin Wolf2019-11-291-3/+4
| | | | | | | | | | | | | | | | | Commit 57d6a428 neglected to pass the given flags to blk_aio_prwv(), which broke discard by WRITE SAME for scsi-disk (the UNMAP bit would be ignored). Commit fc1453cd introduced the same bug for blk_write_zeroes(). This is used for 'qemu-img convert' without has_zero_init (e.g. on a block device) and for preallocation=falloc in parallels. Commit 8896e088 is the version for blk_co_write_zeroes(). This function is only used in qemu-io. Reported-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
* block/vpc: update comments to be compliant w/coding guidelinesJeff Cody2019-11-291-34/+34
| | | | | | Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: set errp in vpc_openJeff Cody2019-11-291-0/+9
| | | | | | | | Add more useful error information to failure paths in vpc_open Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: make checks on max table size a bit more laxJeff Cody2019-11-291-4/+0
| | | | | | | | | | | | | | | | | | | | | The check on the max_table_size field not being larger than required is valid, and in accordance with the VHD spec. However, there have been VHD images encountered in the wild that have an out-of-spec max table size that is technically too large. There is no issue in allowing this larger table size, as we also later verify that the computed size (used for the pagetable) is large enough to fit all sectors. In addition, max_table_entries is bounds checked against SIZE_MAX and INT_MAX. Remove the strict check, so that we can accommodate these sorts of images that are benignly out of spec. Reported-by: Stefan Hajnoczi <stefanha@redhat.com> Reported-by: Grant Wu <grantwwu@gmail.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: Use the correct max sector count for VHD imagesJeff Cody2019-11-291-5/+5
| | | | | | | | | | | | | | | | The old VHD_MAX_SECTORS value is incorrect, and is a throwback to the CHS calculations. The VHD specification allows images up to 2040 GiB, which (using 512 byte sectors) corresponds to a maximum number of sectors of 0xff000000, rather than the old value of 0xfe0001ff. Update VHD_MAX_SECTORS to reflect the correct value. Also, update comment references to the actual size limit, and correct one compare so that we can have sizes up to the limit. Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
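As a quick check of the arithmetic above, 0xff000000 sectors of 512 bytes each is exactly 2040 GiB, so the constant becomes (sketch):

    /* 2040 GiB in 512-byte sectors; the old 0xfe0001ff was CHS-derived */
    #define VHD_MAX_SECTORS 0xff000000ULL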
* block/vpc: use current_size field for XenConverter VHD imagesJeff Cody2019-11-291-0/+2
| | | | | | | | | | XenConverter VHD images are another case where current_size differs from the CHS values in the format header. Use current_size as the default, by looking at the creator_app signature field. Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* vpc: use current_size field for XenServer VHD imagesStefan Hajnoczi2019-11-291-1/+3
| | | | | | | | | | | | | | The vpc driver has two methods of determining virtual disk size. The correct one to use depends on the software that generated the image file. Add the XenServer creator_app signature so that image size is correctly detected for those images. Reported-by: Grant Wu <grantwwu@gmail.com> Reported-by: Spencer Baugh <sbaugh@catern.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: set errp in vpc_createJeff Cody2019-11-291-0/+5
| | | | | | | | Add more useful error information to failure paths in vpc_create(). Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Fix blk_aio_write_zeroes()Kevin Wolf2019-11-293-10/+100
| | | | | | | | | | | | | | | | | Commit 57d6a428 broke blk_aio_write_zeroes() because some write functions in the call path don't have an explicit length argument but reuse qiov->size instead. Which is great, except that write_zeroes doesn't have a qiov, which this commit interprets as 0 bytes. Consequently, blk_aio_write_zeroes() didn't effectively do anything. This patch introduces an explicit acb->bytes in BlkAioEmAIOCB and uses that instead of acb->rwco.size. The synchronous version of the function is okay because it does pass a qiov (with the right size and a NULL pointer as its base). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>