Commit message | Author | Age | Files | Lines
* rocker: mark copy-to-cpu pkts as forwarding offloaded  (Scott Feldman, 2015-07-07, 5 files, -3/+15)

For pkts copied to the CPU (to be processed by the guest driver), mark the Rx descriptor with the flag "OFFLOAD_FWD" to indicate the device has already forwarded the pkt. The guest driver will use this indicator to avoid duplicate forwarding in the guest OS.

Examples include bcast/mcast/unknown ucast pkts flooded to bridged ports. We want to avoid both the device and the guest bridge driver flooding these pkts, which would result in duplicate pkts on the wire. Packet sampling, such as sFlow, can also use this technique to mark pkts for the guest OS to record but otherwise drop.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Message-id: 1435746792-41278-5-git-send-email-sfeldma@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
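To illustrate the mechanism, a sketch of where such a flag would be applied on the copy-to-CPU receive path; every name below is a placeholder, not rocker's actual descriptor interface:

    /* Placeholder names throughout; shows only where the "already forwarded"
     * bit would be set before the Rx descriptor is written back. */
    uint16_t flags = base_rx_flags(pkt);
    if (hw_already_forwarded(pkt)) {
        flags |= RX_FLAG_FWD_OFFLOAD;            /* guest driver skips re-forwarding */
    }
    rx_desc_write(desc, pkt->data, pkt->len, flags);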
* rocker: return -1 when dropping packet on ingress  (Scott Feldman, 2015-07-07, 1 file, -1/+1)

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Message-id: 1435746792-41278-4-git-send-email-sfeldma@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* rocker: fix missing break statements  (Scott Feldman, 2015-07-07, 1 file, -0/+2)

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1435746792-41278-3-git-send-email-sfeldma@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* rocker: fix misplaced break statement  (Scott Feldman, 2015-07-07, 1 file, -1/+1)

Premature break in a switch case block. This particular case (group L2 rewrite) will be used for L2 LAG and L3 ECMP support, neither of which are enabled in the guest driver at this time, but are under development.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1435746792-41278-2-git-send-email-sfeldma@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* rocker: don't queue receive pkts when port is disabled  (Scott Feldman, 2015-07-07, 1 file, -8/+10)

Commit 6e99c63 ("net/socket: Drop net_socket_can_send") changed the semantics around .can_receive for sockets to now require the device to flush queued pkts when transitioning to a .can_receive=true state. The rocker device was not flushing the queue on the .can_receive=true transition, so the receiver was stuck.

But it turns out we really don't want any queuing at all on the port when the port is disabled, otherwise when the port transitions to enabled, we'd receive and forward stale pkts that really should have been dropped. So let's remove .can_receive to avoid queuing, and drop the pkt in .receive if the port is disabled.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1435717553-36187-1-git-send-email-sfeldma@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
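A minimal sketch of the resulting pattern, with hypothetical port/ingress names (qemu_get_nic_opaque() is the only real QEMU helper used): without a .can_receive callback the net layer always delivers, and the device itself drops packets for a disabled port by reporting them consumed:

    static ssize_t port_receive(NetClientState *nc, const uint8_t *buf, size_t size)
    {
        FpPort *port = qemu_get_nic_opaque(nc);   /* FpPort is a placeholder type */

        if (!port->enabled) {
            return size;                          /* report it consumed: dropped, never queued */
        }
        return port_ingress(port, buf, size);     /* placeholder for the real ingress path */
    }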
* vmxnet3: Fix incorrect small packet padding  (Brian Kress, 2015-07-07, 1 file, -6/+6)

When running ESXi under qemu there is an issue with the ESXi guest discarding packets that are too short. The guest discards any packets under the normal minimum length for an ethernet packet (60). This results in odd behaviour where other hosts or VMs on other hosts can communicate with the ESXi guest just fine (since there's a physical NIC somewhere doing padding), but VMs on the host and the host itself cannot, because the ARP request packets are too small for the ESXi host to accept.

Someone in the past thought this was worth fixing, and added code to the vmxnet3 qemu emulation so that packets smaller than 60 bytes are padded out to 60. Unfortunately this code is wrong (or at least in the wrong place). It does so BEFORE taking into account the vnet_hdr at the front of the packet added by the tap device. As a result, it might add padding, but it never adds enough. Specifically it adds 10 less (the length of the vnet_hdr) than it needs to.

The following (hopefully "obviously correct") patch simply swaps the order of processing the vnet header and the padding. With this patch an ESXi guest is able to communicate with the host or other local VMs.

Signed-off-by: Brian Kress <kressb@moose.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
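A sketch of the ordering fix with illustrative names rather than vmxnet3's actual functions: step over the vnet header first, then compute the padding on the Ethernet frame alone so short frames reach the 60-byte minimum:

    #include <string.h>

    #define MIN_ETH_FRAME_LEN 60   /* minimum Ethernet frame length, excluding FCS */

    static size_t copy_and_pad_frame(const uint8_t *buf, size_t size,
                                     size_t vnet_hdr_len, uint8_t *out)
    {
        buf  += vnet_hdr_len;                    /* 1. account for the vnet header first */
        size -= vnet_hdr_len;
        memcpy(out, buf, size);
        if (size < MIN_ETH_FRAME_LEN) {          /* 2. ...then pad the frame itself */
            memset(out + size, 0, MIN_ETH_FRAME_LEN - size);
            size = MIN_ETH_FRAME_LEN;
        }
        return size;
    }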
* e1000: flush packets when link comes up  (Stefan Hajnoczi, 2015-07-07, 1 file, -0/+3)

e1000_can_receive() checks the link up status register bit. If the bit is clear, packets will be queued and the peer may disable receive to avoid wasting CPU reading packets that cannot be delivered. The queue must be flushed once the link comes back up again.

This patch fixes broken e1000 receive with Mac OS X Snow Leopard guests and tap networking. Flushing the queue invokes the async send callback, which re-enables tap fd read.

Reported-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 1435223885-12745-1-git-send-email-stefanha@redhat.com
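The flush itself is a one-liner in QEMU's net API; a sketch of the pattern (the e1000 register and field names here are approximations):

    static void e1000_link_up(E1000State *s)
    {
        s->mac_reg[STATUS] |= E1000_STATUS_LU;               /* link-up status bit */
        qemu_flush_queued_packets(qemu_get_queue(s->nic));   /* deliver anything queued
                                                                while the link was down */
    }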
* rocker: fix memory leak  (Gonglei, 2015-07-07, 1 file, -1/+2)

While at it, use g_new0 instead of g_malloc0; see commit 5839e53.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-id: 1435213450-6700-1-git-send-email-arei.gonglei@huawei.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
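For reference, the GLib idiom referred to by commit 5839e53, with an illustrative element type: g_new0(Type, n) is type-checked and guards against multiplication overflow, unlike an open-coded g_malloc0:

    #include <glib.h>

    typedef struct World World;                 /* placeholder element type */

    static World **alloc_worlds(unsigned n)
    {
        /* before: g_malloc0(sizeof(World *) * n);  after: */
        return g_new0(World *, n);
    }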
* Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-smm' into staging  (Peter Maydell, 2015-07-06, 22 files, -344/+517)

This series implements KVM support for SMM, and lets you enable/disable it through the "smm" property of x86 machine types.

# gpg: Signature made Mon Jul 6 17:41:05 2015 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream-smm:
  pc: add SMM property
  ich9: add smm_enabled field and arguments
  pc_piix: rename kvm_enabled to smm_enabled
  target-i386: register a separate KVM address space including SMRAM regions
  kvm-all: kvm_irqchip_create is not expected to fail
  kvm-all: add support for multiple address spaces
  kvm-all: make KVM's memory listener more generic
  kvm-all: move internal types to kvm_int.h
  kvm-all: remove useless typedef
  kvm-all: put kvm_mem_flags to more work
  target-i386: add support for SMBASE MSR and SMIs
  piix4/ich9: do not raise SMI on ACPI enable/disable commands
  linux-headers: Update to 4.2-rc1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * pc: add SMM property  (Paolo Bonzini, 2015-07-06, 7 files, -2/+76)

The property can take values on, off or auto. The default is "off" for KVM and pre-2.4 machines, otherwise "auto" (which makes it available on TCG or on new-enough kernels).

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
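On the command line this surfaces as a machine property, so something like `-machine q35,smm=off` would force it off, while the default "auto" enables SMM only where TCG or a new-enough kernel can provide it; the exact invocation is inferred from the property description above rather than quoted from the patch.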
| * ich9: add smm_enabled field and arguments  (Paolo Bonzini, 2015-07-06, 5 files, -7/+11)

Q35's ACPI device is hard-coding SMM availability to KVM. Place the logic where the board is created instead, so that it will be possible to override it.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * pc_piix: rename kvm_enabled to smm_enabled  (Paolo Bonzini, 2015-07-06, 3 files, -7/+7)

We will enable SMM even if KVM is in use. Rename the field and arguments.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * target-i386: register a separate KVM address space including SMRAM regions  (Paolo Bonzini, 2015-07-06, 1 file, -1/+40)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: kvm_irqchip_create is not expected to fail  (Paolo Bonzini, 2015-07-06, 1 file, -17/+18)

KVM_CREATE_IRQCHIP should never fail, and neither should its userspace wrapper kvm_irqchip_create. The function does not do anything if the irqchip capability is not available, as is the case for PPC. With this patch, kvm_arch_init can allocate memory and it will not be leaked.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: add support for multiple address spaces  (Paolo Bonzini, 2015-07-06, 2 files, -7/+10)

Make kvm_memory_listener_register public, and assign a kernel address space id to each KVMMemoryListener.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: make KVM's memory listener more generic  (Paolo Bonzini, 2015-07-06, 2 files, -62/+94)

No semantic change, but s->slots moves into a new struct KVMMemoryListener. KVM's memory listener becomes a member of struct KVMState, and becomes of type KVMMemoryListener.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: move internal types to kvm_int.h  (Paolo Bonzini, 2015-07-06, 2 files, -17/+31)

i386 code will have to define a different KVMMemoryListener. Create an internal header so that KVMSlot is not exposed outside.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: remove useless typedef  (Paolo Bonzini, 2015-07-06, 1 file, -3/+1)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * kvm-all: put kvm_mem_flags to more work  (Andrew Jones, 2015-07-06, 1 file, -22/+20)

Currently kvm_mem_flags just translates bools to bits; let's make it also determine the bools first. This avoids its parameter list growing each time we add a flag.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
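A sketch of the direction of the change (not the exact QEMU signature or body): pass the MemoryRegion itself, derive the booleans inside the helper, and translate them into KVM slot-flag bits:

    static int kvm_mem_flags_sketch(MemoryRegion *mr)
    {
        bool readonly  = memory_region_is_rom(mr) || memory_region_is_romd(mr);
        bool log_dirty = memory_region_get_dirty_log_mask(mr) != 0;
        int flags = 0;

        if (log_dirty) {
            flags |= KVM_MEM_LOG_DIRTY_PAGES;   /* kernel tracks dirty pages for this slot */
        }
        if (readonly) {
            flags |= KVM_MEM_READONLY;
        }
        return flags;
    }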
| * target-i386: add support for SMBASE MSR and SMIs  (Paolo Bonzini, 2015-07-06, 2 files, -12/+94)

Apart from the MSR, the smi field of struct kvm_vcpu_events has to be translated into the corresponding CPUX86State fields. Also, memory transaction flags depend on SMM state, so pull it from struct kvm_run on every exit from KVM to userspace.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * piix4/ich9: do not raise SMI on ACPI enable/disable commands  (Paolo Bonzini, 2015-07-06, 2 files, -0/+6)

These commands are handled entirely by QEMU. Do not raise an SMI when they happen, because Windows (at least 2008r2) expects these commands to work and (depending on the value of APMC_EN at startup) the firmware might not have installed an SMI handler. When this happens (e.g. the kernel supports SMIs, or you are using TCG, but you have used "-machine smm=off") RIP is moved to 0x38000 where there is no code to execute.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * linux-headers: Update to 4.2-rc1  (Alexey Kardashevskiy, 2015-07-06, 7 files, -199/+121)

This updates linux-headers against master 4.2-rc1 (commit d770e558e21961ad6cfdf0ff7df0eb5d7d4f0754). This is the result of ./scripts/update-linux-headers.sh work.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell, 2015-07-06, 26 files, -73/+299)

* more of Peter Crosthwaite's multiarch preparation patches
* unlocked MMIO support in KVM
* support for compilation with ICC

# gpg: Signature made Mon Jul 6 13:59:20 2015 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  exec: skip MMIO regions correctly in cpu_physical_memory_write_rom_internal
  Stop including qemu-common.h in memory.h
  kvm: Switch to unlocked MMIO
  acpi: mark PMTIMER as unlocked
  kvm: Switch to unlocked PIO
  kvm: First step to push iothread lock out of inner run loop
  memory: let address_space_rw/ld*/st* run outside the BQL
  exec: pull qemu_flush_coalesced_mmio_buffer() into address_space_rw/ld*/st*
  memory: Add global-locking property to memory regions
  main-loop: introduce qemu_mutex_iothread_locked
  main-loop: use qemu_mutex_lock_iothread consistently
  Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES
  cpu-defs: Move out TB_JMP defines
  include/exec: Move tb hash functions out
  include/exec: Move standard exceptions to cpu-all.h
  cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg
  memory_mapping: Rework cpu related includes
  cutils: allow compilation with icc
  qemu-common: add VEC_OR macro

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * exec: skip MMIO regions correctly in cpu_physical_memory_write_rom_internal  (Paolo Bonzini, 2015-07-06, 1 file, -1/+13)

Loading the BIOS in the mac99 machine is interesting, because there is a PROM in the middle of the BIOS region (from 16K to 32K). Before memory region accesses were clamped, when QEMU was asked to load a BIOS from 0xfff00000 to 0xffffffff it would put even those 16K from the BIOS file into the region. This is weird because those 16K were not actually visible between 0xfff04000 and 0xfff07fff. However, it worked.

After clamping was added, this also worked. In this case, the cpu_physical_memory_write_rom_internal function split the write into three parts: the first 16K were copied, the PROM area (second 16K) was ignored, then the rest was copied.

Problems then started with commit 965eb2f (exec: do not clamp accesses to MMIO regions, 2015-06-17). Clamping accesses is not done for MMIO regions because they can overlap wildly, and MMIO registers can be expected to perform full-width accesses based only on their address (with no respect for adjacent registers that could decode to completely different MemoryRegions). However, this lack of clamping also applied to the PROM area! cpu_physical_memory_write_rom_internal thus failed to copy the third range above, i.e. only copied the first 16K of the BIOS.

In effect, address_space_translate is expecting _something else_ to do the clamping for MMIO regions if the incoming length is large. This "something else" is memory_access_size in the case of address_space_rw, so use the same logic in cpu_physical_memory_write_rom_internal.

Reported-by: Alexander Graf <agraf@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Fixes: 965eb2f
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * Stop including qemu-common.h in memory.h  (Peter Maydell, 2015-07-06, 6 files, -5/+22)

Including qemu-common.h from other header files is generally a bad idea, because it means it's very easy to end up with a circular dependency. For instance, if we wanted to include memory.h from qom/cpu.h we'd end up with this loop: memory.h -> qemu-common.h -> cpu.h -> cpu-qom.h -> qom/cpu.h -> memory.h

Remove the include from memory.h. This requires us to fix up a few other files which were inadvertently getting declarations indirectly through memory.h. The biggest change is splitting the fprintf_function typedef out into its own header so other headers can get at it without having to include qemu-common.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1435933104-15216-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
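Approximately what the split-out header carries; the format attribute is spelled out here instead of QEMU's GCC_FMT_ATTR wrapper:

    #include <stdio.h>

    /* printf-style dump/monitor callback type, usable from any header
     * without pulling in qemu-common.h */
    typedef int (*fprintf_function)(FILE *f, const char *fmt, ...)
        __attribute__((format(printf, 2, 3)));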
| * kvm: Switch to unlocked MMIO  (Paolo Bonzini, 2015-07-01, 1 file, -2/+1)

Do not take the BQL before dispatching MMIO requests of KVM VCPUs. Instead, address_space_rw will do it if necessary. This enables completely BQL-free MMIO handling in KVM mode for upcoming devices with fine-grained locking.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-10-git-send-email-pbonzini@redhat.com>
| * acpi: mark PMTIMER as unlocked  (Paolo Bonzini, 2015-07-01, 1 file, -0/+1)

Accessing QEMU_CLOCK_VIRTUAL is thread-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-9-git-send-email-pbonzini@redhat.com>
| * kvm: Switch to unlocked PIO  (Jan Kiszka, 2015-07-01, 1 file, -2/+1)

Do not take the BQL before dispatching PIO requests of KVM VCPUs. Instead, address_space_rw will do it if necessary. This enables completely BQL-free PIO handling in KVM mode for upcoming devices with fine-grained locking.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-8-git-send-email-pbonzini@redhat.com>
| * kvm: First step to push iothread lock out of inner run loop  (Jan Kiszka, 2015-07-01, 5 files, -2/+46)

This opens the path to get rid of the iothread lock on vmexits in KVM mode. On x86, the in-kernel irqchip has to be used because we otherwise need to synchronize APIC and other per-cpu state accesses that could be changed concurrently.

Regarding pre/post-run callbacks, s390x and ARM should be fine without specific locking as the callbacks are empty. MIPS and POWER require locking for the pre-run callback. For the handle_exit callback, it is non-empty in x86, POWER and s390. Some POWER cases could do without the locking, but it is left in place for now.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-7-git-send-email-pbonzini@redhat.com>
| * memory: let address_space_rw/ld*/st* run outside the BQL  (Jan Kiszka, 2015-07-01, 1 file, -9/+57)

The MMIO case is further broken up into two cases: if the caller does not hold the BQL on invocation, the unlocked one takes or avoids the BQL depending on the locking strategy of the target memory region and its coalesced MMIO handling. In this case, the caller should not hold _any_ lock (a friendly suggestion which is disregarded by virtio-scsi-dataplane).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-6-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * exec: pull qemu_flush_coalesced_mmio_buffer() into address_space_rw/ld*/st*  (Paolo Bonzini, 2015-07-01, 2 files, -12/+21)

As memory_region_read/write_accessor will now be run also without the BQL held, we need to move coalesced MMIO flushing earlier in the dispatch process.

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-5-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * memory: Add global-locking property to memory regions  (Jan Kiszka, 2015-07-01, 2 files, -0/+37)

This introduces the memory region property "global_locking". It is true by default. By setting it to false, a device model can request BQL-free dispatching of region accesses to its r/w handlers. The actual BQL break-up will be provided in a separate patch.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Frederic Konrad <fred.konrad@greensocs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1434646046-27150-4-git-send-email-pbonzini@redhat.com>
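Usage would look roughly as below; the setter name memory_region_clear_global_locking is an assumption here, since this listing shows only diffstats:

    /* Opt one region out of BQL-protected dispatch; its read/write callbacks
     * must then do their own locking (as the PMTIMER patch above does). */
    memory_region_init_io(&s->io, OBJECT(s), &my_dev_ops, s, "my-dev", 0x100);
    memory_region_clear_global_locking(&s->io);   /* assumed setter name */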
| * main-loop: introduce qemu_mutex_iothread_locked  (Paolo Bonzini, 2015-07-01, 3 files, -0/+24)

This function will be used to avoid recursive locking of the iothread lock whenever address_space_rw/ld*/st* are called with the BQL held, which is almost always the case.

Tracking whether the iothread is owned is very cheap (just use a TLS variable) but requires some care because now the lock must always be taken with qemu_mutex_lock_iothread(). Previously this wasn't the case. Outside TCG mode this is not a problem. In TCG mode, we need to be careful and avoid the "prod out of compiled code" step if already in a VCPU thread. This is easily done with a check on current_cpu, i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
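A minimal sketch of the TLS-flag approach described above (the real QEMU lock path does more, such as kicking VCPUs out of compiled code):

    static __thread bool iothread_locked;       /* per-thread ownership flag */

    bool qemu_mutex_iothread_locked(void)
    {
        return iothread_locked;                 /* cheap: no mutex query needed */
    }

    void qemu_mutex_lock_iothread(void)
    {
        qemu_mutex_lock(&qemu_global_mutex);
        iothread_locked = true;
    }

    void qemu_mutex_unlock_iothread(void)
    {
        iothread_locked = false;                /* clear before releasing */
        qemu_mutex_unlock(&qemu_global_mutex);
    }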
| * main-loop: use qemu_mutex_lock_iothread consistently  (Paolo Bonzini, 2015-07-01, 1 file, -3/+7)

The next patch will require the BQL to be always taken with qemu_mutex_lock_iothread(), while right now this isn't the case.

Outside TCG mode this is not a problem. In TCG mode, we need to be careful and avoid the "prod out of compiled code" step if already in a VCPU thread. This is easily done with a check on current_cpu, i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES  (马文霜, 2015-07-01, 1 file, -7/+10)

Last month, we experienced several guest crashes (6-8 cores); the qemu logs display the following message:

    qemu-system-x86_64: /build/qemu-2.1.2/kvm-all.c:976: kvm_irqchip_commit_routes: Assertion `ret == 0' failed.

After analysis and verification, we can confirm it's the irq-balance daemon (in the guest) that leads to the assertion failure. Start an 8 core guest with two disks and execute the following script to reproduce the BUG quickly:

irq_affinity.sh
========================================================================
vda_irq_num=25
vdb_irq_num=27
while [ 1 ]
do
    for irq in {1,2,4,8,10,20,40,80}
    do
        echo $irq > /proc/irq/$vda_irq_num/smp_affinity
        echo $irq > /proc/irq/$vdb_irq_num/smp_affinity
        dd if=/dev/vda of=/dev/zero bs=4K count=100 iflag=direct
        dd if=/dev/vdb of=/dev/zero bs=4K count=100 iflag=direct
    done
done
========================================================================

QEMU sets up static irq route entries in kvm_pc_setup_irq_routing(). PIC and IOAPIC share the first 15 GSI numbers, take up 23 GSI numbers, but take up 38 irq route entries. When changing irq smp_affinity in the guest, a dynamic route entry may be set up; the current logic is: if allocating a GSI number succeeds, a new route entry can be added. The available dynamic GSI numbers is 1021 (KVM_MAX_IRQ_ROUTES-23), but the available irq route entries is only 986 (KVM_MAX_IRQ_ROUTES-38), so there are more GSI numbers than route entries. irq-balance's behavior will eventually lead to the total irq route entries exceeding KVM_MAX_IRQ_ROUTES, ioctl(KVM_SET_GSI_ROUTING) fails and kvm_irqchip_commit_routes() triggers the assertion failure.

This patch fixes the BUG.

Signed-off-by: Wenshuang Ma <kevinnma@tencent.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * cpu-defs: Move out TB_JMP defines  (Peter Crosthwaite, 2015-06-26, 2 files, -8/+8)

These are not architecture specific in any way, so move them out of cpu-defs.h. tb-hash.h is an appropriate place: it is a leading user, and the defines are strongly related to TB hashing and caching.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <43ceca65a3fa240efac49aa0bf604ad0442e1710.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * include/exec: Move tb hash functions out  (Peter Crosthwaite, 2015-06-26, 4 files, -20/+45)

This is one of very few things in exec-all with a genuine CPU architecture dependency. Move these hashing helpers to a new header to trim exec-all.h down to a near architecture-agnostic header.

The defs are only used by cpu-exec and translate-all, which are both arch-obj's, so the new tb-hash.h has no core code usage.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <9d048b96f7cfa64a4d9c0b88e0dd2877fac51d41.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * include/exec: Move standard exceptions to cpu-all.h  (Peter Crosthwaite, 2015-06-26, 2 files, -6/+6)

These exception indices are generic and don't have any reliance on the per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core code that does not have access to cpu-defs.h.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <dbebd3062c7cd4332240891a3564e73f374ddfcd.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg  (Peter Crosthwaite, 2015-06-26, 2 files, -1/+2)

The usages of this define are pure TCG and there is no architecture specific variation of the value. Localise it to the TCG engine to remove another architecture agnostic piece from cpu-defs.h.

This follows on from a28177820a868eafda8fab007561cc19f41941f4, where temp_buf was moved out of the CPU_COMMON, obsoleting the need for the super early definition.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <498e8e5325c1a1aff79e5bcfc28cb760ef6b214e.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * memory_mapping: Rework cpu related includes  (Peter Crosthwaite, 2015-06-26, 1 file, -1/+1)

This makes it more consistent with all other core code files, which either just rely on qemu-common.h inclusion or precede cpu.h with qemu-common.h. cpu-all.h should not be included in addition to cpu.h. Remove it.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <1433714349-7262-1-git-send-email-crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * cutils: allow compilation with icc  (Artyom Tarasenko, 2015-06-26, 1 file, -7/+7)

Use the VEC_OR macro for operations on VECTYPE operands.

Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Message-Id: <3f62d7a3a265f7dd99e50d016a0333a99a4a082a.1435062067.git.atar4qemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * qemu-common: add VEC_OR macro  (Artyom Tarasenko, 2015-06-26, 1 file, -0/+3)

Intel C Compiler version 15.0.3.187 Build 20150407 doesn't support the '|' operator for non-floating-point SIMD operands. Define a VEC_OR macro which uses _mm_or_si128, supported both in icc and gcc on the x86 platform.

Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Message-Id: <54c804cdb3b3a93e93ef98f085dc57c4092580b7.1435062067.git.atar4qemu@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
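A sketch of the macro as described; the exact preprocessor guards in QEMU may differ:

    #ifdef __SSE2__
    #include <emmintrin.h>
    #define VEC_OR(v1, v2) (_mm_or_si128(v1, v2))   /* accepted by both gcc and icc */
    #else
    #define VEC_OR(v1, v2) ((v1) | (v2))            /* compilers whose vector types support '|' */
    #endif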
* | Merge remote-tracking branch 'remotes/xtensa/tags/20150706-xtensa' into staging  (Peter Maydell, 2015-07-06, 9 files, -16/+70)

Xtensa fixes:
- add 64-bit floating point registers;
- fix gdb register map construction.

# gpg: Signature made Mon Jul 6 11:27:45 2015 BST using RSA key ID F83FA044
# gpg: Good signature from "Max Filippov <max.filippov@cogentembedded.com>"
# gpg:                 aka "Max Filippov <jcmvbkbc@gmail.com>"

* remotes/xtensa/tags/20150706-xtensa:
  target-xtensa: fix gdb register map construction
  target-xtensa: add 64-bit floating point registers

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * | target-xtensa: fix gdb register map construction  (Max Filippov, 2015-07-06, 7 files, -7/+27)

Due to different gdb overlay organization between windowed/call0 configurations, the core import script doesn't always work correctly. Simplify the script: always copy the complete gdb register map from the overlay and count registers at core registration time. Update existing cores.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
| * | target-xtensa: add 64-bit floating point registers  (Max Filippov, 2015-07-06, 4 files, -9/+43)

The Xtensa ISA has a specification for 64-bit floating point registers and opcodes; see ISA, 4.3.11 "Floating point coprocessor option". Add 64-bit FP registers. Although 64-bit floating point is currently not supported by the xtensa translator, these registers need to be reported to gdb with the proper size, otherwise it wouldn't find other registers.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
* | | Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20150706' into staging  (Peter Maydell, 2015-07-06, 9 files, -15/+63)

target-arm queue:
* TLBI ALLE1IS should operate on all CPUs, not just this one
* Fix interval interrupt of cadence ttc in decrement mode
* Implement YIELD insn to yield in ARM and Thumb translators
* ARM GIC: reset all registers
* arm_mptimer: fix timer shutdown and mode change
* arm_mptimer: respect IT bit state

# gpg: Signature made Mon Jul 6 10:58:27 2015 BST using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"

* remotes/pmaydell/tags/pull-target-arm-20150706:
  arm_mptimer: Respect IT bit state
  arm_mptimer: Fix timer shutdown and mode change
  hw/intc/arm_gic_common.c: Reset all registers
  target-arm: Implement YIELD insn to yield in ARM and Thumb translators
  target-arm: Split DISAS_YIELD from DISAS_WFE
  Fix interval interrupt of cadence ttc when timer is in decrement mode
  target-arm: fix write helper for TLBI ALLE1IS

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * | arm_mptimer: Respect IT bit state  (Dmitry Osipenko, 2015-07-06, 1 file, -1/+1)

The timer should fire the interrupt only if the IT (interrupt enable) bit of the control register is set.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
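A sketch of the guard, using the Cortex-A9 MPCore private timer control layout (bit 0 enable, bit 1 auto-reload, bit 2 IRQ enable); the surrounding structure is illustrative, not the actual arm_mptimer source:

    static void timerblock_tick_sketch(TimerBlock *tb)
    {
        tb->status = 1;                      /* the event flag is set either way... */
        if (tb->control & (1 << 2)) {        /* ...but the IRQ fires only if IT is set */
            qemu_irq_raise(tb->irq);
        }
    }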
| * | arm_mptimer: Fix timer shutdown and mode change  (Dmitry Osipenko, 2015-07-06, 1 file, -2/+9)

The running timer can't be stopped because the timer control code just doesn't handle disabling the timer. Fix it by deleting the timer if the enable bit is cleared.

The timer won't start periodic ticking if a ONE-SHOT -> PERIODIC mode change happens after a one-shot tick was completed. Fix it by re-starting ticking if the timer isn't ticking right now.

To avoid code churn, these two fixes are squashed into one commit.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * | hw/intc/arm_gic_common.c: Reset all registers  (Peter Maydell, 2015-07-06, 1 file, -3/+18)

The arm_gic_common reset function was missing reset code for several of the GIC's state fields:
* bpr[]
* abpr[]
* priority1[]
* priority2[]
* sgi_pending[]
* irq_target[] (SMP configurations only)

These probably went unnoticed because most guests will either never touch them, or will write to them in the process of configuring the GIC before enabling interrupts.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1435602345-32210-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
| * | target-arm: Implement YIELD insn to yield in ARM and Thumb translators  (Peter Maydell, 2015-07-06, 1 file, -0/+7)

Implement the YIELD instruction in the ARM and Thumb translators to actually yield control back to the top level loop rather than being a simple no-op. (We already do this for A64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1435672316-3311-3-git-send-email-peter.maydell@linaro.org