path: root/exec.c
Commit message (Author, Age, Files, Lines)
* cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks (Andreas Färber, 2013-06-28, 1 file, -1/+2)
    Make cpustats monitor command available unconditionally. Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec() arguments to CPUState. Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Change cpu_exit() argument to CPUState (Andreas Färber, 2013-06-28, 1 file, -8/+0)
    It no longer depends on CPUArchState, so move it to qom/cpu.c. Prepares for changing GDBState::c_cpu to CPUState. Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Introduce VMSTATE_CPU() macro for CPUState (Andreas Färber, 2013-06-28, 1 file, -3/+2)
    To be used to embed common CPU state into CPU subclasses. Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
* linux-user: Fix compilation failure (Peter Maydell, 2013-06-27, 1 file, -1/+1)
    Fix compilation failures for linux-user targets following recent migration related commits bd2fa51fcd and 43487c67. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1372362818-4740-1-git-send-email-peter.maydell@linaro.org Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
* rdma: introduce qemu_ram_foreach_block() (Michael R. Hines, 2013-06-27, 1 file, -0/+9)
    This is used during RDMA initialization in order to transmit a description of all the RAM blocks to the peer for later dynamic chunk registration purposes. Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Chegu Vinod <chegu_vinod@hp.com> Tested-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Michael R. Hines <mrhines@us.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
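A minimal sketch of how such an iterator is typically consumed. The RAMBlockIterFunc callback shape (host pointer, ram_addr_t offset, length, opaque) and the "exec/cpu-common.h" header are assumptions about the API of this period and should be verified against the tree; the callback itself is purely illustrative.

```c
#include <stdio.h>
#include <inttypes.h>
#include "exec/cpu-common.h"   /* assumed home of qemu_ram_foreach_block() */

/* Example callback: report each block and accumulate the total RAM size,
 * the kind of walk the RDMA setup code performs when it describes every
 * RAM block to the remote peer. */
static void describe_block(void *host_addr, ram_addr_t offset,
                           ram_addr_t length, void *opaque)
{
    ram_addr_t *total = opaque;

    *total += length;
    printf("block: host %p  ram offset 0x%" PRIx64 "  length 0x%" PRIx64 "\n",
           host_addr, (uint64_t)offset, (uint64_t)length);
}

static ram_addr_t total_ram_size(void)
{
    ram_addr_t total = 0;

    qemu_ram_foreach_block(describe_block, &total);
    return total;
}
```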
* memory: give name to every AddressSpace (Alexey Kardashevskiy, 2013-06-20, 1 file, -4/+2)
    The "info mtree" command in QEMU console prints only "memory" and "I/O" address spaces while there are actually a lot more other AddressSpace structs created by PCI and VIO devices. Those devices do not normally have names and therefore are not present in "info mtree" output. The patch fixes this. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
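A hedged sketch of what this enables for a hypothetical device: pass a name when initializing its AddressSpace so it shows up usefully in "info mtree". The three-argument address_space_init() and the pre-owner memory_region_init() prototypes are assumptions about the 2013 API; the device names are made up.

```c
#include <stdint.h>
#include "exec/memory.h"   /* AddressSpace, MemoryRegion, address_space_init() */

static MemoryRegion mydev_dma_root;   /* hypothetical device DMA root region */
static AddressSpace mydev_dma_as;

static void mydev_init_dma(void)
{
    memory_region_init(&mydev_dma_root, "mydev-dma-root", UINT64_MAX);
    /* the new third argument names the space for "info mtree" */
    address_space_init(&mydev_dma_as, &mydev_dma_root, "mydev-dma");
}
```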
* dma: eliminate DMAContext (Paolo Bonzini, 2013-06-20, 1 file, -3/+0)
    The DMAContext is a simple pointer to an AddressSpace that is now always already available. Make everyone hold the address space directly, and clean up the DMA API to use the AddressSpace directly. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* dma: eliminate old-style IOMMU support (Paolo Bonzini, 2013-06-20, 1 file, -2/+1)
    The translate function in the DMAContext is now always NULL. Remove every reference to it. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: iommu support (Avi Kivity, 2013-06-20, 1 file, -2/+33)
    Add a new memory region type that translates addresses it is given, then forwards them to a target address space. This is similar to an alias, except that the mapping is more flexible than a linear translation and truncation, and also less efficient since the translation happens at runtime. The implementation uses an AddressSpace mapping the target region to avoid hierarchical dispatch all the way to the resolved region; only iommu regions are looked up dynamically. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Avi Kivity <avi.kivity@gmail.com> [Modified to put translation in address_space_translate; assume IOMMUs are not reachable from TCG. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
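To make the new region type concrete, here is a hedged sketch of a trivial identity-mapping IOMMU region. The MemoryRegionIOMMUOps/IOMMUTLBEntry names and the memory_region_init_iommu() argument order follow the API introduced around this commit, but the exact field list and prototypes are assumptions to check against include/exec/memory.h.

```c
#include <stdint.h>
#include "exec/memory.h"

/* Identity IOMMU: forward every access 1:1 into system memory,
 * one 4 KiB translation at a time (field names assumed from the 2013 API). */
static IOMMUTLBEntry identity_translate(MemoryRegion *iommu, hwaddr addr)
{
    IOMMUTLBEntry entry = {
        .target_as       = &address_space_memory,
        .iova            = addr & ~(hwaddr)0xfff,
        .translated_addr = addr & ~(hwaddr)0xfff,  /* 1:1 mapping */
        .addr_mask       = 0xfff,                  /* covers one 4 KiB page */
        .perm            = IOMMU_RW,
    };
    return entry;
}

static const MemoryRegionIOMMUOps identity_iommu_ops = {
    .translate = identity_translate,   /* called at runtime, per access */
};

static MemoryRegion identity_iommu_mr;

static void identity_iommu_init(void)
{
    memory_region_init_iommu(&identity_iommu_mr, &identity_iommu_ops,
                             "identity-iommu", UINT64_MAX);
}
```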
* memory: make section size a 128-bit integer (Paolo Bonzini, 2013-06-20, 1 file, -16/+21)
    So far, the size of all regions passed to listeners could fit in 64 bits, because artificial regions (containers and aliases) are eliminated by the memory core, leaving only device regions which have reasonable sizes. An IOMMU however cannot be eliminated by the memory core, and may have an artificial size, hence we may need 65 bits to represent its size. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: reorganize mem_add to match Int128 version (Paolo Bonzini, 2013-06-20, 1 file, -23/+16)
    When adding support for 2^64-byte sections, we will have to change the structure of mem_add to avoid failures in int128_get64. Reorganize the code now before introducing Int128. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Revert "memory: limit sections in the radix tree to the actual address space size" (Paolo Bonzini, 2013-06-20, 1 file, -12/+1)
    This reverts commit 86a8623692b1b559a419a92eb8b6897c221bca74. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: return MemoryRegion from address_space_translate (Paolo Bonzini, 2013-06-20, 1 file, -75/+75)
    Only address_space_translate_for_iotlb needs to return the section. Every caller of address_space_translate now uses only section->mr, so return it directly. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Implement subpage_read/write via address_space_rw (Jan Kiszka, 2013-06-20, 1 file, -78/+47)
    This will allow adding support for unaligned memory regions: the subpage container region can activate unaligned support unconditionally because the read/write handler will now ensure that accesses are split as required by calling address_space_rw. We can furthermore drop the special handling of RAM subpages; address_space_rw takes care of this already. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
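As an illustration of the forwarding described above, here is a hedged sketch of a subpage read handler built on top of address_space_read(). The subpage_t layout (container region, owning address space, page base) and the header locations are assumptions, not the exact in-tree definitions.

```c
#include <stdint.h>
#include <stdlib.h>
#include "exec/memory.h"
#include "qemu/bswap.h"   /* ldub_p/lduw_p/ldl_p host-endian load helpers (location assumed) */

/* Assumed layout of the sub-page container state. */
typedef struct subpage_t {
    MemoryRegion iomem;   /* container MemoryRegion for this page */
    AddressSpace *as;     /* address space to dispatch into */
    hwaddr base;          /* guest-physical base of the sub-divided page */
} subpage_t;

/* Forward the access to the normal dispatch path; address_space_read()
 * splits unaligned or cross-region accesses as required. */
static uint64_t subpage_read(void *opaque, hwaddr addr, unsigned len)
{
    subpage_t *subpage = opaque;
    uint8_t buf[8];

    address_space_read(subpage->as, addr + subpage->base, buf, len);
    switch (len) {
    case 1:
        return ldub_p(buf);
    case 2:
        return lduw_p(buf);
    case 4:
        return ldl_p(buf);
    default:
        abort();
    }
}
```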
* exec: Resolve subpages in one step except for IOTLB fills (Jan Kiszka, 2013-06-20, 1 file, -13/+36)
    Except for the case of setting the IOTLB entry in TCG mode, we can avoid the subpage dispatching handlers and do the resolution directly in address_space_lookup_region. An IOTLB entry describes a full page, not only the region that the first access to a sub-divided page may return. This patch therefore introduces a special translation function, address_space_translate_for_iotlb, that avoids the subpage resolution. In contrast, callers of the existing address_space_translate service will now always receive the terminal memory region section. This will be important for breaking the BQL and for enabling unaligned memory regions. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Allow unaligned address_space_rw (Jan Kiszka, 2013-06-20, 1 file, -6/+6)
    This will be needed for some corner cases with para-virtual I/O ports. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: move private types to exec.c (Paolo Bonzini, 2013-06-20, 1 file, -0/+16)
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: Introduce address_space_lookup_region (Jan Kiszka, 2013-06-20, 1 file, -1/+7)
    This introduces a wrapper for phys_page_find (before we complicate address_space_translate with IOMMU translation). This function will also encapsulate locking and reference counting when we introduce BQL-free dispatching. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec.c: address_space_translate: handle access to addr 0 of 2^64 sized region (Peter Maydell, 2013-06-20, 1 file, -1/+1)
    The memory API allows a MemoryRegion's size to be 2^64, as a special case (otherwise the size always fits in a 64-bit integer). This meant that attempts to access address zero in a 2^64 sized region would assert in address_space_translate():
        #3 0x00007ffff3e4d192 in __GI___assert_fail (assertion=0x555555a43f32 "!a.hi", file=0x555555a43ef0 "include/qemu/int128.h", line=18, function=0x555555a4439f "int128_get64") at assert.c:103
        #4 0x0000555555877642 in int128_get64 (a=...) at include/qemu/int128.h:18
        #5 0x00005555558782f2 in address_space_translate (as=0x55555668d140, addr=0, xlat=0x7fffafac9918, plen=0x7fffafac9920, is_write=false) at exec.c:221
    Fix this by doing the 'min' operation in 128-bit arithmetic rather than 64-bit arithmetic (we know the result of the 'min' definitely fits in 64 bits because one of the inputs did). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
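The essence of the fix, as a hedged sketch: take the minimum while both operands are still Int128 and only then narrow to 64 bits. The helper name and call shape are illustrative, not the literal diff.

```c
#include "qemu/int128.h"
#include "exec/memory.h"

/* Clamp an access length to what remains of a region, doing the min in
 * 128-bit arithmetic so a 2^64-byte region no longer trips the !a.hi
 * assertion in int128_get64(). The result fits in 64 bits because
 * 'len' already did. */
static hwaddr clamp_len_to_region(MemoryRegion *mr, hwaddr addr, hwaddr len)
{
    Int128 remain = int128_sub(mr->size, int128_make64(addr));

    return int128_get64(int128_min(int128_make64(len), remain));
}
```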
* memory: add return value to address_space_rw/read/write (Paolo Bonzini, 2013-05-29, 1 file, -19/+15)
    Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
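A hedged usage sketch: with this change the rw/read/write helpers report whether the dispatched access failed, so callers can propagate the error instead of ignoring it. The exact convention (a true/non-zero return signalling an error) is inferred from the commit title and should be confirmed against the code; the helper name is made up.

```c
#include <stdint.h>
#include "exec/memory.h"

/* Write a buffer into guest memory and surface I/O dispatch errors. */
static int guest_write_checked(AddressSpace *as, hwaddr gpa,
                               const uint8_t *buf, int len)
{
    if (address_space_write(as, gpa, buf, len)) {
        return -1;   /* assumed: true return means the access hit an error */
    }
    return 0;
}
```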
* memory: propagate errors on I/O dispatch (Paolo Bonzini, 2013-05-29, 1 file, -9/+12)
    Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: just use io_mem_read/io_mem_write for 8-byte I/O accesses (Paolo Bonzini, 2013-05-29, 1 file, -7/+1)
    The memory API is able to split it into two 4-byte accesses. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: correctly handle endian-swapped 64-bit accesses (Paolo Bonzini, 2013-05-29, 1 file, -3/+9)
    Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: add address_space_access_valid (Paolo Bonzini, 2013-05-29, 1 file, -0/+21)
    The old-style IOMMU lets you check whether an access is valid in a given DMAContext. There is no equivalent for AddressSpace in the memory API; implement it with a lookup of the dispatch tree. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
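A hedged sketch of the intended replacement for the old DMAContext validity check: validate the range up front, then perform the transfer. The helper name and error handling are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include "exec/memory.h"

/* Check that the whole destination range is accessible before writing. */
static bool dma_write_checked(AddressSpace *as, hwaddr addr,
                              const uint8_t *buf, int len)
{
    if (!address_space_access_valid(as, addr, len, true /* is_write */)) {
        return false;   /* range hits unassigned or rejecting regions */
    }
    address_space_write(as, addr, buf, len);
    return true;
}
```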
* exec: implement .valid.accepts for subpages (Paolo Bonzini, 2013-05-29, 1 file, -0/+20)
    Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: introduce memory_access_size (Paolo Bonzini, 2013-05-29, 1 file, -10/+17)
    This will be used by address_space_access_valid too. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: introduce memory_access_is_direct (Paolo Bonzini, 2013-05-29, 1 file, -17/+22)
    After the previous patches, this is a common test for all read/write functions. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: expect mr->ops to be initialized for ROM (Paolo Bonzini, 2013-05-29, 1 file, -9/+0)
    There is no need to use the special phys_section_rom section. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: move unassigned_mem_ops to memory.c (Paolo Bonzini, 2013-05-29, 1 file, -12/+0)
    reservation_ops is already doing the same thing. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: add address_space_translate (Paolo Bonzini, 2013-05-29, 1 file, -94/+98)
    Using phys_page_find to translate an AddressSpace to a MemoryRegionSection is unwieldy. It requires the caller to pass the page index rather than the address, and memory_region_section_addr has to be called afterwards. Replace memory_region_section_addr with a function that does all of it: call phys_page_find, compute the offset within the region, and check how big the current mapping is. This way, a large flat region can be written with a single lookup rather than a page at a time. address_space_translate will also provide a single point where IOMMU forwarding is implemented. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
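A hedged sketch of the translate-then-access loop this API enables: each iteration covers as much of the buffer as one mapping allows, instead of one page-table lookup per page. The MemoryRegion-returning form used here matches the later "exec: return MemoryRegion from address_space_translate" commit listed above; treat the signature and the helper name as assumptions.

```c
#include <stdint.h>
#include <string.h>
#include "exec/memory.h"

/* Copy guest RAM out with one translation per contiguous mapping. */
static void read_guest_ram(AddressSpace *as, hwaddr addr,
                           uint8_t *buf, hwaddr len)
{
    while (len > 0) {
        hwaddr xlat, l = len;
        MemoryRegion *mr = address_space_translate(as, addr, &xlat, &l,
                                                   false /* is_write */);

        if (memory_region_is_ram(mr)) {
            /* one memcpy can span many pages of a flat RAM region */
            memcpy(buf, (uint8_t *)memory_region_get_ram_ptr(mr) + xlat, l);
        } else {
            memset(buf, 0xff, l);   /* MMIO would go through I/O dispatch instead */
        }
        addr += l;
        buf  += l;
        len  -= l;
    }
}
```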
* memory: dispatch unassigned accesses based on .valid.accepts (Paolo Bonzini, 2013-05-29, 1 file, -24/+12)
    This provides the basics for detecting accesses to unassigned memory as soon as they happen, and also for a simple implementation of address_space_access_valid. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: do not use error_mem_read (Paolo Bonzini, 2013-05-29, 1 file, -14/+2)
    We will soon reach this case when doing (unaligned) accesses that span partly past the end of memory. We do not want to crash in that case. unassigned_mem_ops and rom_mem_ops are now the same. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: make io_mem_unassigned private (Paolo Bonzini, 2013-05-29, 1 file, -2/+2)
    There is no reason to avoid a recompile before accessing unassigned memory. In the end it will be treated as MMIO anyway. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: drop useless #if (Paolo Bonzini, 2013-05-29, 1 file, -2/+0)
    This code is only compiled for softmmu targets. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: eliminate io_mem_ram (Paolo Bonzini, 2013-05-29, 1 file, -16/+2)
    It is never used; the IOTLB always goes through io_mem_notdirty. In fact, in softmmu_template.h, if it were used, QEMU would crash just below the tests, as soon as io_mem_read/write dispatches to error_mem_read/write. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: clean up phys_page_find (Paolo Bonzini, 2013-05-24, 1 file, -6/+2)
    Remove the goto. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: limit sections in the radix tree to the actual address space size (Avi Kivity, 2013-05-24, 1 file, -1/+12)
    The radix tree is statically sized to fit TARGET_PHYS_ADDR_SPACE_BITS. If a larger memory region is registered, it will overflow. Fix by limiting any section in the radix tree to the supported size. This problem was not observed earlier since artificial regions (containers and aliases) are eliminated by the memory core, leaving only device regions which have reasonable sizes. An IOMMU however cannot be eliminated by the memory core, and may have an artificial size. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Avi Kivity <avi.kivity@gmail.com> [ Fail the build if TARGET_PHYS_ADDR_SPACE_BITS is too large - Paolo ] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: assert that PhysPageEntry's ptr does not overflow (Paolo Bonzini, 2013-05-24, 1 file, -0/+6)
    While sized to 15 bits in PhysPageEntry, the ptr field is ORed into the iotlb entries together with a page-aligned pointer. The ptr field must not overflow into this page-aligned value; assert that it is smaller than the page size. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: eliminate stq_phys_notdirty (Paolo Bonzini, 2013-05-24, 1 file, -27/+0)
    It is not used anywhere. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: eliminate qemu_put_ram_ptr (Paolo Bonzini, 2013-05-24, 1 file, -8/+0)
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: remove obsolete comment (Paolo Bonzini, 2013-05-24, 1 file, -6/+0)
    See how we call memory_region_section_addr two lines below to convert a physical address to a base address in the region. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* osdep: introduce qemu_anon_ram_free to free qemu_anon_ram_alloc-ed memory (Paolo Bonzini, 2013-05-14, 1 file, -6/+2)
    We switched from qemu_memalign to mmap(), but did not change qemu_vfree() to do a munmap() instead of free(). We cannot do that, because qemu_vfree() also frees memory allocated by qemu_{mem,block}align. Introduce a new function that does the munmap(); luckily the size is available in the RAMBlock. Reported-by: Amos Kong <akong@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Amos Kong <akong@redhat.com> Message-id: 1368454796-14989-3-git-send-email-pbonzini@redhat.com Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
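A hedged sketch of the resulting alloc/free pairing: memory obtained with qemu_anon_ram_alloc() is released with qemu_anon_ram_free(), which takes the size because munmap() needs a length where free() did not. The header location, exact prototypes, and the GuestRam wrapper are assumptions.

```c
#include <stddef.h>
#include "qemu/osdep.h"   /* assumed home of qemu_anon_ram_alloc/_free in 2013 */

typedef struct GuestRam {
    void   *host;
    size_t  size;
} GuestRam;

static void guest_ram_alloc(GuestRam *r, size_t size)
{
    r->size = size;
    r->host = qemu_anon_ram_alloc(size);    /* mmap()-backed allocation */
}

static void guest_ram_free(GuestRam *r)
{
    /* unlike free(), munmap() needs the length, hence the size argument */
    qemu_anon_ram_free(r->host, r->size);
    r->host = NULL;
    r->size = 0;
}
```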
* osdep, kvm: rename low-level RAM allocation functions (Paolo Bonzini, 2013-05-14, 1 file, -3/+3)
    This is preparatory to the introduction of a separate freeing API. Reported-by: Amos Kong <akong@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Amos Kong <akong@redhat.com> Message-id: 1368454796-14989-2-git-send-email-pbonzini@redhat.com Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
* cpu: Add qemu_for_each_cpu() (Michael S. Tsirkin, 2013-05-01, 1 file, -0/+10)
    Wrapper to avoid open-coded loops and to make CPUState iteration independent of CPUArchState. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
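A hedged sketch of the wrapper in use: iterate over all CPUs through CPUState only, with no reference to CPUArchState or the per-target CPU list. The callback/opaque signature follows the commit description; cpu->halted is the field moved to CPUState by the "cpu: Move halted and interrupt_request fields to CPUState" commit further down this log.

```c
#include "qom/cpu.h"   /* CPUState, qemu_for_each_cpu() */

/* Callback receives each CPUState plus an opaque pointer. */
static void count_if_halted(CPUState *cpu, void *data)
{
    int *halted = data;

    if (cpu->halted) {
        (*halted)++;
    }
}

static int halted_cpu_count(void)
{
    int halted = 0;

    qemu_for_each_cpu(count_if_halted, &halted);
    return halted;
}
```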
* hw: move headers to include/ (Paolo Bonzini, 2013-04-08, 1 file, -1/+1)
    Many of these should be cleaned up with proper qdev-/QOM-ification. Right now there are many catch-all headers in include/hw/ARCH depending on cpu.h, and this makes it necessary to compile these files per-target. However, fixing this does not belong in these patches. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: assert that RAMBlock size is non-zero (Stefan Hajnoczi, 2013-03-26, 1 file, -0/+2)
    find_ram_offset() does not handle size=0 gracefully. It hands out the same RAMBlock offset multiple times, leading to obscure failures later on. Add an assert to warn early if something is incorrectly allocating a zero size RAMBlock. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* Merge remote-tracking branch 'afaerber/qom-cpu' into staging (Anthony Liguori, 2013-03-14, 1 file, -14/+16)
|\
|     # By Andreas Färber (16) and Igor Mammedov (1)
|     # Via Andreas Färber
|     * afaerber/qom-cpu:
|       target-lm32: Update VMStateDescription to LM32CPU
|       target-arm: Override do_interrupt for ARMv7-M profile
|       cpu: Replace do_interrupt() by CPUClass::do_interrupt method
|       cpu: Pass CPUState to cpu_interrupt()
|       exec: Pass CPUState to cpu_reset_interrupt()
|       cpu: Move halted and interrupt_request fields to CPUState
|       target-cris/helper.c: Update Coding Style
|       target-i386: Update VMStateDescription to X86CPU
|       cpu: Introduce cpu_class_set_vmsd()
|       cpu: Register VMStateDescription through CPUState
|       stubs: Add a vmstate_dummy struct for CONFIG_USER_ONLY
|       vmstate: Make vmstate_register() static inline
|       target-sh4: Move PVR/PRR/CVR into SuperHCPUClass
|       target-sh4: Introduce SuperHCPU subclasses
|       cpus: Replace open-coded CPU loop in qmp_memsave() with qemu_get_cpu()
|       monitor: Use qemu_get_cpu() in monitor_set_cpu()
|       cpu: Fix qemu_get_cpu() to return NULL if CPU not found
| * cpu: Pass CPUState to cpu_interrupt() (Andreas Färber, 2013-03-12, 1 file, -1/+1)
|     Move it to qom/cpu.h to avoid issues with include order. Change pc_acpi_smi_interrupt() opaque to X86CPU. Signed-off-by: Andreas Färber <afaerber@suse.de>
| * exec: Pass CPUState to cpu_reset_interrupt() (Andreas Färber, 2013-03-12, 1 file, -7/+0)
|     Move it to qom/cpu.c to avoid build failures depending on include order of cpu-qom.h and exec/cpu-all.h. Change opaques of various ..._irq_handler() functions to the appropriate CPU type to facilitate using cpu_reset_interrupt(). Fix Coding Style issues while at it (missing braces, indentation). Signed-off-by: Andreas Färber <afaerber@suse.de>
| * cpu: Move halted and interrupt_request fields to CPUState (Andreas Färber, 2013-03-12, 1 file, -7/+9)
|     Both fields are used in VMState, thus need to be moved together. Explicitly zero them on reset since they were located before breakpoints. Pass PowerPCCPU to kvmppc_handle_halt(). Signed-off-by: Andreas Färber <afaerber@suse.de>