path: root/include/exec
Commit message  [Author, Age, Files changed, Lines changed]
* qemu-log: dfilter-ise exec, out_asm, op and opt_op  [Alex Bennée, 2019-11-29, 1 file, -3/+5]
  This ensures the code generation debug code will honour -dfilter if set. For the "exec" tracing I've added a new inline macro for efficiency's sake.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1458052224-9316-8-git-send-email-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
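  A dfilter-aware logging macro of the kind described above could look roughly like the sketch below; the macro and helper names are assumptions based on QEMU's qemu/log.h conventions, not a verbatim copy of the patch.

      /* Log FMT under MASK, but only if ADDR also passes the -dfilter
       * address ranges; cheap enough for the hot "exec" path. */
      #define qemu_log_mask_and_addr(MASK, ADDR, FMT, ...)        \
          do {                                                    \
              if (unlikely(qemu_loglevel_mask(MASK)) &&           \
                  qemu_log_in_addr_range(ADDR)) {                 \
                  qemu_log(FMT, ## __VA_ARGS__);                  \
              }                                                   \
          } while (0)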
* qemu-log: Improve the "exec" TB execution logging  [Peter Maydell, 2019-11-29, 1 file, -0/+3]
  Improve the TB execution logging so that it is easier to identify what is happening from trace logs:
  * move the "Trace" logging of executed TBs into cpu_tb_exec() so that it is emitted if and only if we actually execute a TB, and for consistency with the CPU state logging
  * log when we link two TBs together via tb_add_jump()
  * log when cpu_tb_exec() returns early from a chain of TBs
  The new style logging looks like this:
    Trace 0x7fb7cc822ca0 [ffffffc0000dce00]
    Linking TBs 0x7fb7cc822ca0 [ffffffc0000dce00] index 0 -> 0x7fb7cc823110 [ffffffc0000dce10]
    Trace 0x7fb7cc823110 [ffffffc0000dce10]
    Trace 0x7fb7cc823420 [ffffffc000302688]
    Trace 0x7fb7cc8234a0 [ffffffc000302698]
    Trace 0x7fb7cc823520 [ffffffc0003026a4]
    Trace 0x7fb7cc823560 [ffffffc0000dce44]
    Linking TBs 0x7fb7cc823560 [ffffffc0000dce44] index 1 -> 0x7fb7cc8235d0 [ffffffc0000dce70]
    Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
    Stopped execution of TB chain before 0x7fb7cc8235d0 [ffffffc0000dce70]
    Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
    Trace 0x7fb7cc822fd0 [ffffffc0000dd52c]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  [AJB: reword patch title, Abandoned->Stopped]
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1458052224-9316-6-git-send-email-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Use scripts/clean-includes to drop redundant qemu/typedefs.h  [Markus Armbruster, 2019-11-29, 2 files, -2/+0]
  Re-run scripts/clean-includes to apply the previous commit's corrections and updates. Besides redundant qemu/typedefs.h, this only finds a redundant config-host.h include in ui/egl-helpers.c. No idea how that escaped the previous runs. Some manual whitespace trimming around dropped includes squashed in.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Pass RAMBlock pointer to qemu_ram_free  [Fam Zheng, 2019-11-29, 1 file, -1/+1]
  The only caller now knows exactly which RAMBlock to free, so it's not necessary to do the lookup.
  Reviewed-by: Gonglei <arei.gonglei@huawei.com>
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <1456813104-25902-6-git-send-email-famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: Drop MemoryRegion.ram_addr  [Fam Zheng, 2019-11-29, 1 file, -1/+0]
  All references to mr->ram_addr are replaced by memory_region_get_ram_addr(mr) (except for a few assertions that are replaced with mr->ram_block).
  Reviewed-by: Gonglei <arei.gonglei@huawei.com>
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <1456813104-25902-5-git-send-email-famz@redhat.com>
  Acked-by: Laszlo Ersek <lersek@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: Implement memory_region_get_ram_addr with mr->ram_block  [Fam Zheng, 2019-11-29, 1 file, -7/+1]
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <1456813104-25902-4-git-send-email-famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Return RAMBlock pointer from allocating functions  [Fam Zheng, 2019-11-29, 1 file, -11/+11]
  Previously we returned RAMBlock.offset; now we return the pointer to the whole structure. ram_block_add() now returns void, and errors are passed entirely through errp.
  Reviewed-by: Gonglei <arei.gonglei@huawei.com>
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <1456813104-25902-2-git-send-email-famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: Remove unreachable return statement  [Gonglei, 2019-11-29, 1 file, -2/+0]
  Signed-off-by: Gonglei <arei.gonglei@huawei.com>
  Message-Id: <1455935721-8804-4-git-send-email-arei.gonglei@huawei.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: optimize qemu_get_ram_ptr and qemu_ram_ptr_length  [Gonglei, 2019-11-29, 1 file, -2/+2]
  These two functions spend too much CPU time looking up the RAMBlock by ram address. After this patch, we can pass the RAMBlock pointer to them, so that most of the time they no longer need to find the RAMBlock, which gives better performance in address translation.
  Signed-off-by: Gonglei <arei.gonglei@huawei.com>
  Message-Id: <1455935721-8804-3-git-send-email-arei.gonglei@huawei.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: store RAMBlock pointer into memory region  [Gonglei, 2019-11-29, 1 file, -0/+2]
  Each RAM memory region has a unique corresponding RAMBlock. In the current implementation, the memory region only stores the ram_addr, i.e. the offset into the RAM address space; if we want the RAM block, we have to query the global ram_list by ram_addr, which is very expensive. Now we store the RAMBlock pointer in the memory region structure, so if we know the mr we can easily get the RAMBlock.
  Signed-off-by: Gonglei <arei.gonglei@huawei.com>
  Message-Id: <1456130097-4208-2-git-send-email-arei.gonglei@huawei.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
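  A reduced stand-in sketch of the idea (not the real memory.h declarations): the region caches its backing RAMBlock, so the global list walk disappears.

      typedef struct RAMBlock RAMBlock;        /* opaque stand-in */

      typedef struct MemoryRegion {
          /* ...other fields elided... */
          RAMBlock *ram_block;   /* unique RAMBlock backing a RAM region, or NULL */
      } MemoryRegion;

      /* The lookup becomes a pointer dereference instead of a ram_list walk:
       *   before: block = qemu_get_ram_block(memory_region_get_ram_addr(mr));
       *   after:  block = mr->ram_block;
       */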
* include: Clean up includes  [Peter Maydell, 2019-11-29, 5 files, -9/+0]
  Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. NB: If this commit breaks compilation for your out-of-tree patch series or fork, then you need to make sure you add #include "qemu/osdep.h" to any new .c files that you have.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* move get_current_ram_size to virtio-balloon.c  [Vladimir Sementsov-Ogievskiy, 2019-11-29, 1 file, -1/+0]
  get_current_ram_size() is used only in virtio-balloon.c. This patch moves it into virtio-balloon and makes it static, to allow some balloon-specific tuning.
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* memory: fix usage of find_next_bit and find_next_zero_bit  [Paolo Bonzini, 2019-11-29, 1 file, -19/+36]
  The last two arguments to these functions are the last and first bit to check relative to the base. The code was incorrectly passing the first bit and the number of bits. Fix this in cpu_physical_memory_get_dirty and cpu_physical_memory_all_dirty. This requires a few changes in the iteration; change the code in cpu_physical_memory_set_dirty_range to match.
  Fixes: 5b82b70
  Cc: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Tested-by: Leon Alrae <leon.alrae@imgtec.com>
  Tested-by: Thomas Huth <thuth@redhat.com>
  Message-id: 1455113505-11237-1-git-send-email-pbonzini@redhat.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
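  The argument convention being fixed here is easy to get wrong, so here is a small self-contained illustration. demo_find_next_bit is a stand-in written for this example, not QEMU code, but it follows the same calling convention as the real helper.

      #include <stdio.h>

      /* Scan 'bitmap' starting at bit 'offset', stop at bit 'size'
       * (exclusive); return 'size' if no set bit is found. */
      static unsigned long demo_find_next_bit(const unsigned long *bitmap,
                                              unsigned long size,
                                              unsigned long offset)
      {
          const unsigned long bits = 8 * sizeof(unsigned long);
          for (unsigned long i = offset; i < size; i++) {
              if (bitmap[i / bits] & (1UL << (i % bits))) {
                  return i;
              }
          }
          return size;
      }

      int main(void)
      {
          unsigned long bitmap[2] = { 0, 0 };
          bitmap[0] |= 1UL << 9;              /* pretend page 9 is dirty */

          unsigned long page = 4, num = 8;    /* check pages [4, 12) */
          /* Correct: the second argument is the limit (last bit + 1),
           * the third is the first bit to check. */
          unsigned long hit = demo_find_next_bit(bitmap, page + num, page);
          printf("first dirty page in range: %lu\n", hit);   /* prints 9 */
          return 0;
      }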
* memory: RCU ram_list.dirty_memory[] for safe RAM hotplug  [Stefan Hajnoczi, 2019-11-29, 1 file, -24/+165]
  Although accesses to ram_list.dirty_memory[] use atomics so multiple threads can safely dirty the bitmap, the data structure is not fully thread-safe yet. This patch handles the RAM hotplug case where ram_list.dirty_memory[] is grown. ram_list.dirty_memory[] is changed from a regular bitmap to an RCU array of pointers to fixed-size bitmap blocks. Threads can continue accessing bitmap blocks while the array is being extended. See the comments in the code for an in-depth explanation of struct DirtyMemoryBlocks. I have tested that live migration with virtio-blk dataplane works.
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <1453728801-5398-2-git-send-email-stefanha@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
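  A sketch of the data structure described above; the field names and the block-size constant are assumptions modelled on the ram_addr.h of that era, not copied from it.

      #define DIRTY_MEMORY_BLOCK_SIZE ((ram_addr_t)256 * 1024 * 8)

      typedef struct DirtyMemoryBlocks {
          struct rcu_head rcu;       /* old pointer array reclaimed via call_rcu() */
          unsigned long *blocks[];   /* fixed-size bitmap blocks */
      } DirtyMemoryBlocks;

      /* Readers dereference the array under rcu_read_lock(); on hotplug a
       * longer copy of the pointer array is published (e.g. with
       * atomic_rcu_set()), while the bitmap blocks themselves are shared
       * between the old and new arrays and never move. */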
* memory: add early bail out from cpu_physical_memory_set_dirty_range  [Paolo Bonzini, 2019-11-29, 1 file, -0/+4]
  This condition is true in the common case, so we can cut out the body of the function. In addition, this makes it easier for the compiler to do at least partial inlining, even if it decides that fully inlining the function is unreasonable.
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
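  The shape of the change is roughly as below. This is a sketch under the assumption that an empty dirty-client mask (plus the Xen case) is what gates the early return; the real diff lives in include/exec/ram_addr.h.

      static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
                                                              ram_addr_t length,
                                                              uint8_t mask)
      {
          /* Early bail out: no dirty-tracking client needs these bits set
           * and Xen does not need to be notified, so skip the bitmap work. */
          if (!mask && !xen_enabled()) {
              return;
          }
          /* ...existing bitmap-marking body... */
      }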
* ram: Split host_from_stream_offset() into two helper functions  [zhanghailiang, 2019-11-29, 1 file, -2/+6]
  Split host_from_stream_offset() into two parts: one gets the RAM block, whose idstr may come from the migration stream; the other gets the HVA (host) address from the block and the offset. In addition, the range check is now done in a new helper, offset_in_ramblock().
  Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1452829066-9764-2-git-send-email-zhang.zhanghailiang@huawei.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
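  What such a range-check helper boils down to, as a self-contained sketch with reduced stand-in types (the assumption being that the real helper checks the offset against the RAMBlock's used length):

      #include <stdbool.h>
      #include <stdint.h>

      typedef uint64_t ram_addr_t;                                   /* stand-in */
      typedef struct RAMBlock { ram_addr_t used_length; } RAMBlock;  /* reduced */

      static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
      {
          return offset < b->used_length;
      }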
* log: do not unnecessarily include qom/cpu.h  [Paolo Bonzini, 2019-11-29, 1 file, -0/+60]
  Split the bits that require it out into exec/log.h.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* memory: Add address_space_init_shareable()  [Peter Crosthwaite, 2019-11-29, 1 file, -0/+18]
  This will either create a new AS or return a pointer to an already existing equivalent one, if we have already created an AS for the specified root memory region. The motivation is to reuse address spaces as much as possible. It's going to be quite common that bus masters out in device land have pointers to the same memory region for their mastering, yet each will need to create its own address space. Let the memory API implement sharing for them. Aside from the perf optimisations, this should reduce the amount of redundant output on info mtree as well. The returned value will be malloced, but the malloc will be automatically freed when the AS runs out of refs.
  Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
  [PMM: dropped check for NULL root as unused; added doc-comment; squashed Peter C's reference-counting patch into this one; don't compare name string when deciding if we can share ASes; read as->malloced before the unref of as->root to avoid possible read-after-free if as->root was the owner of as]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
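  Usage would look roughly like the fragment below; the prototype is inferred from the description above, so treat it as an assumption rather than a verbatim copy of memory.h.

      AddressSpace *address_space_init_shareable(MemoryRegion *root,
                                                 const char *name);

      static void wire_up_two_masters(MemoryRegion *sysmem)
      {
          /* Both masters use the same root region, so the second call
           * returns the first AS instead of building a redundant one. */
          AddressSpace *as_a = address_space_init_shareable(sysmem, "master-a");
          AddressSpace *as_b = address_space_init_shareable(sysmem, "master-b");
          assert(as_a == as_b);   /* freed automatically when refs drop to 0 */
      }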
* exec.c: Add cpu_get_address_space()  [Peter Maydell, 2019-11-29, 1 file, -0/+9]
  Add a function to return the AddressSpace for a CPU based on its numerical index. (Callers outside exec.c don't have access to the CPUAddressSpace struct so can't just fish it out of the CPUState struct directly.)
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS  [Peter Maydell, 2019-11-29, 1 file, -1/+1]
  Pass the MemTxAttrs for the memory access to iotlb_to_region(); this allows it to determine the correct AddressSpace to use for the lookup.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* cputlb.c: Use correct address space when looking up MemoryRegionSection  [Peter Maydell, 2019-11-29, 1 file, -2/+2]
  When looking up the MemoryRegionSection for the new TLB entry in tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine the correct address space index for the lookup, and pass it into address_space_translate_for_iotlb().
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec-all.h: Document tlb_set_page_with_attrs, tlb_set_page  [Peter Maydell, 2019-11-29, 1 file, -3/+31]
  Add documentation comments for tlb_set_page_with_attrs() and tlb_set_page().
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec.c: Allow target CPUs to define multiple AddressSpaces  [Peter Maydell, 2019-11-29, 1 file, -0/+4]
  Allow multiple calls to cpu_address_space_init(); each call adds an entry to the cpu->ases array at the specified index. It is up to the target-specific CPU code to actually use these extra address spaces. Since this multiple AddressSpace support won't work with KVM, add an assertion to avoid confusing failures.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
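  For a target with a secure and a non-secure view of memory, registration could look like the sketch below; the argument order of cpu_address_space_init() in this era is an assumption.

      static void my_cpu_register_ases(CPUState *cs,
                                       AddressSpace *as_nonsecure,
                                       AddressSpace *as_secure)
      {
          cpu_address_space_init(cs, as_nonsecure, 0);  /* AS index 0 */
          cpu_address_space_init(cs, as_secure, 1);     /* AS index 1 */
          /* Under KVM only a single AS is supported; the new assertion
           * trips on the second call instead of failing obscurely later. */
      }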
* exec.c: Don't set cpu->as until cpu_address_space_init  [Peter Maydell, 2019-11-29, 1 file, -1/+15]
  Rather than setting cpu->as unconditionally in cpu_exec_init (and then having target-i386 override this later), don't set it until the first call to cpu_address_space_init. This requires us to initialise the address space for both TCG and KVM (KVM doesn't need the AS listener but it does require cpu->as to be set). For target CPUs which don't set up any address spaces (currently everything except i386), add the default address_space_memory in qemu_init_vcpu().
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* ivshmem: Store file descriptor for vhost-user negotiation  [Tetsuya Mukawa, 2019-11-29, 1 file, -0/+1]
  If the virtio-net driver allocates memory in ivshmem shared memory, vhost-net will work correctly, but vhost-user will not, because the fd of the shared memory will not be sent to the vhost-user backend. This patch fixes ivshmem to store the file descriptor of the shared memory; it will be used when vhost-user negotiates with the vhost-user backend.
  Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* memory: try to inline constant-length reads  [Paolo Bonzini, 2019-11-29, 2 files, -18/+57]
  memcpy can take a large amount of time for small reads and writes. Handle the common case of reading s/g descriptors from memory (there is no corresponding "write" case that is as common, because writes often use address_space_st* functions) by inlining the relevant parts of address_space_read into the caller.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: inline a few small accessors  [Paolo Bonzini, 2019-11-29, 1 file, -4/+18]
  These are used in the address_space_* fast paths.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: extract first iteration of address_space_read and address_space_write  [Paolo Bonzini, 2019-11-29, 1 file, -0/+6]
  We want to inline the case where there is only one iteration, because then the compiler can also inline the memcpy. As a start, extract everything after the first address_space_translate call.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: avoid unnecessary object_ref/unref  [Paolo Bonzini, 2019-11-29, 1 file, -0/+1]
  For the common case of DMA into non-hotplugged RAM, it is unnecessary but expensive to do object_ref/unref. Add back an owner field to MemoryRegion, so that these memory regions can skip the reference counting.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: reorder MemoryRegion fields  [Paolo Bonzini, 2019-11-29, 1 file, -10/+14]
  Order fields so that all fields accessed during a RAM read/write fit in the same cache line.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: always call qemu_get_ram_ptr within rcu_read_lock  [Paolo Bonzini, 2019-11-29, 1 file, -2/+7]
  Simplify the code and document the assumption. The only caller that is not within rcu_read_lock is memory_region_get_ram_ptr.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Eliminate qemu_ram_free_from_ptr()  [Eduardo Habkost, 2019-11-29, 1 file, -1/+0]
  Replace qemu_ram_free_from_ptr() with qemu_ram_free(). The only difference between qemu_ram_free_from_ptr() and qemu_ram_free() is that g_free_rcu() is used instead of call_rcu(reclaim_ramblock). We can safely replace it because:
  * RAM blocks allocated by qemu_ram_alloc_from_ptr() always have RAM_PREALLOC set;
  * reclaim_ramblock(block) will do nothing except g_free(block) if RAM_PREALLOC is set at block->flags.
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
  Message-Id: <1446844805-14492-2-git-send-email-ehabkost@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Initial overlay of HQEMU 2.5.2 changes onto underlying 2.5.0 QEMU GIT tree  [tag: 2.5_overlay] [Timothy Pearson, 2019-11-29, 7 files, -20/+56]
* translate-all: ensure host page mask is always extended with 1's  [Paolo Bonzini, 2015-12-02, 1 file, -3/+5]
  Anthony reported that >4GB guests on Xen with 32bit QEMU broke after commit 4ed023c ("Round up RAMBlock sizes to host page sizes", 2015-11-05). In that patch sizes are masked against qemu_host_page_size/mask which are uintptr_t, and thus 32bit on a 32bit QEMU, even though the ram space might be bigger than 4GB on Xen. Since ram_addr_t is not available on user-mode emulation targets, ensure that we get a sign extension when masking away the low bits of the address. Remove the ~10 year old scary comment that the type of these variables is probably wrong, with another equally scary comment. The new comment however does not have "???" in it, which is arguably an improvement. For completeness use the alignment macros in linux-user and bsd-user instead of manually doing an &. linux-user and bsd-user are not affected by the Xen issue, however.
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reported-by: Anthony PERARD <anthony.perard@citrix.com>
  Fixes: 4ed023ce2a39ab5812d33cf4d819def168965a7f
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
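  A self-contained illustration of why the sign extension matters; the numbers are invented for the example, and on a real 32-bit build the narrow type would be uintptr_t rather than an explicit uint32_t.

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint64_t addr = 0x123456789abcULL;             /* ram address > 4GB */
          uint32_t host_page_mask32 = ~(uint32_t)0xfff;  /* 32-bit page mask */

          /* Zero extension: the high 32 bits of the address are lost. */
          uint64_t broken = addr & host_page_mask32;
          /* Sign extension: the mask becomes 0xfffffffffffff000. */
          uint64_t fixed = addr & (uint64_t)(int64_t)(int32_t)host_page_mask32;

          printf("broken: 0x%" PRIx64 "\n", broken);     /* 0x56789000 */
          printf("fixed:  0x%" PRIx64 "\n", fixed);      /* 0x123456789000 */
          return 0;
      }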
* qemu_ram_block_by_name  [Dr. David Alan Gilbert, 2015-11-10, 1 file, -0/+1]
  Add a function to find a RAMBlock by name; use it in two of the places that already open code that loop; we've got another use later in postcopy.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* qemu_ram_block_from_host  [Dr. David Alan Gilbert, 2015-11-10, 2 files, -2/+3]
  Postcopy sends RAMBlock names and offsets over the wire (since it can't rely on the order of ramaddr being the same), and it starts out with HVA fault addresses from the kernel. qemu_ram_block_from_host translates an HVA into a RAMBlock, an offset in the RAMBlock and the global ram_addr_t value. Rewrite qemu_ram_addr_from_host to use qemu_ram_block_from_host. Provide qemu_ram_get_idstr since it's the actual name text sent on the wire.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
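  The lookups described above amount to the following shapes; the prototypes are paraphrased from the commit text, so treat the exact parameter lists as assumptions.

      /* HVA -> (RAMBlock, offset within the block, global ram_addr_t). */
      RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
                                         ram_addr_t *ram_addr,
                                         ram_addr_t *offset);

      /* The block name text that postcopy actually puts on the wire. */
      const char *qemu_ram_get_idstr(RAMBlock *rb);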
* Move page_size_init earlier  [Dr. David Alan Gilbert, 2015-11-10, 1 file, -1/+0]
  The HOST_PAGE_ALIGN macros don't work until the page size variables have been set up; later in postcopy I use those macros in the RAM code, and it can be triggered using -object. Fix this by calling page_size_init() earlier: it's currently called inside the accelerators, so move it up into vl.c.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* cpu-exec: allow temporary disabling icount  [Pavel Dovgalyuk, 2015-11-05, 1 file, -0/+1]
  This patch is required for deterministic replay to generate an exception by trying to execute an instruction without changing icount. It adds a new flag to the TB for disabling icount while translating it.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
  Message-Id: <20150917162359.8676.77011.stgit@PASHA-ISP.def.inno>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* hw/pci: Introduce pci_requester_id()  [Pavel Fedin, 2015-10-19, 1 file, -2/+2]
  For the GICv3 ITS implementation we are going to use requester IDs in the KVM IRQ routing code. This patch introduces a reusable, convenient way to obtain this ID from the device pointer. The new function is now used in the places where the same calculation was already done. MemTxAttrs.stream_id is also renamed to requester_id in order to better reflect the semantics of the field.
  Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Message-Id: <5814bcb03a297f198e796b13ed9c35059c52f89b.1444916432.git.p.fedin@samsung.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
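  The value such a helper returns is just the standard PCI requester ID layout: bus number in the high byte, devfn in the low byte. Below is a self-contained stand-in written for illustration; the real helper takes a PCIDevice pointer instead of raw numbers.

      #include <stdint.h>

      /* bus[15:8] | device[7:3] | function[2:0] */
      static inline uint16_t demo_pci_requester_id(uint8_t bus_num, uint8_t devfn)
      {
          return (uint16_t)((bus_num << 8) | devfn);
      }

      /* e.g. bus 3, slot 2, function 1: devfn = (2 << 3) | 1 = 0x11,
       * so the requester ID is 0x0311. */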
* exec: remove non-TCG stuff from exec-all.h header.  [Paolo Bonzini, 2015-10-12, 2 files, -6/+1]
  The header is included from basically everywhere, thanks to cpu.h. It should be moved to the (TCG only) files that actually need it. As a start, remove non-TCG stuff.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec.c: Collect AddressSpace related fields into a CPUAddressSpace struct  [Peter Maydell, 2015-10-12, 1 file, -1/+1]
  Gather up all the fields currently in CPUState which deal with the CPU's AddressSpace into a separate CPUAddressSpace struct. This paves the way for allowing the CPU to know about more than one AddressSpace. The rearrangement also allows us to make the MemoryListener a directly embedded object in the CPUAddressSpace (it could not be embedded in CPUState because 'struct MemoryListener' isn't defined for the user-only builds). This allows us to resolve the FIXME in tcg_commit() by going directly from the MemoryListener to the CPUAddressSpace. This patch extracts the actual update of the cached dispatch pointer from cpu_reload_memory_map() (which is renamed accordingly to cpu_reloading_memory_map() as it is only responsible for breaking cpu-exec.c's RCU critical section now). This lets us keep the definition of the CPUAddressSpace struct private to exec.c.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-Id: <1443709790-25180-4-git-send-email-peter.maydell@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Merge remote-tracking branch 'remotes/awilliam/tags/vfio-update-20151007.0' into staging  [Peter Maydell, 2015-10-08, 1 file, -0/+13]
  VFIO updates 2015-10-07
  - Change platform device IRQ setup sequence for compatibility with upcoming IRQ forwarding (Eric Auger)
  - Extensions to support vfio-pci devices on spapr-pci-host-bridge (David Gibson) [clang problem patch dropped]
  # gpg: Signature made Wed 07 Oct 2015 16:30:52 BST using RSA key ID 3BB08B22
  # gpg: Good signature from "Alex Williamson <alex.williamson@redhat.com>"
  # gpg:                 aka "Alex Williamson <alex@shazbot.org>"
  # gpg:                 aka "Alex Williamson <alwillia@redhat.com>"
  # gpg:                 aka "Alex Williamson <alex.l.williamson@gmail.com>"
  * remotes/awilliam/tags/vfio-update-20151007.0:
    vfio: Allow hotplug of containers onto existing guest IOMMU mappings
    memory: Allow replay of IOMMU mapping notifications
    vfio: Record host IOMMU's available IO page sizes
    vfio: Check guest IOVA ranges against host IOMMU capabilities
    vfio: Generalize vfio_listener_region_add failure path
    vfio: Remove unneeded union from VFIOContainer
    hw/vfio/platform: do not set resamplefd for edge-sensitive IRQS
    hw/vfio/platform: change interrupt/unmask fields into pointer
    hw/vfio/platform: irqfd setup sequence update
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * memory: Allow replay of IOMMU mapping notifications  [David Gibson, 2015-10-05, 1 file, -0/+13]
  When we have guest visible IOMMUs, we allow notifiers to be registered which will be informed of all changes to IOMMU mappings. This is used by vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings. However, unlike with a memory region listener, an iommu notifier won't be told about any mappings which already exist in the (guest) IOMMU at the time it is registered. This can cause problems if hotplugging a VFIO device onto a guest bus which had existing guest IOMMU mappings, but didn't previously have any VFIO devices (and hence no host IOMMU mappings). This adds a memory_region_iommu_replay() function to handle this case. It replays any existing mappings in an IOMMU memory region to a specified notifier. Because the IOMMU memory region doesn't internally remember the granularity of the guest IOMMU, it has a small hack where the caller must specify a granularity at which to replay mappings. If there are finer mappings in the guest IOMMU these will be reported in the iotlb structures passed to the notifier, which it must handle (probably causing it to flag an error). This isn't new - the VFIO iommu notifier must already handle notifications about guest IOMMU mappings too short for it to represent in the host IOMMU.
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Laurent Vivier <lvivier@redhat.com>
  Acked-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
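  In outline, a replay at a caller-chosen granularity could look like the sketch below. It is written for illustration against the IOMMU MemoryRegion ops of that era; the helper name and the exact loop in memory.c are assumptions.

      static void replay_iommu_mappings(MemoryRegion *iommu_mr, Notifier *n,
                                        hwaddr granularity, bool is_write)
      {
          for (hwaddr addr = 0; addr < memory_region_size(iommu_mr);
               addr += granularity) {
              IOMMUTLBEntry iotlb =
                  iommu_mr->iommu_ops->translate(iommu_mr, addr, is_write);
              if (iotlb.perm != IOMMU_NONE) {
                  n->notify(n, &iotlb);   /* hand the existing mapping to vfio */
              }
              if (addr + granularity < addr) {   /* guard against wraparound */
                  break;
              }
          }
      }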
* | tcg: Adjust CODE_GEN_AVG_BLOCK_SIZE  [Richard Henderson, 2015-10-07, 1 file, -5/+6]
  At present, the "average" guesstimate of TB size is way too small, leading to many unused entries in the pre-allocated TB array. For a guest with 1GB ram, we're currently allocating 256MB for the array. Survey arm, alpha, aarch64, ppc, sparc, i686, x86_64 guests running on x86_64 and ppc64 hosts and select a new average. The size of the array drops to 81MB with no more flushing than before.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg: Check for overflow via highwater mark  [Richard Henderson, 2015-10-07, 1 file, -6/+0]
  We currently pre-compute a worst-case code size for any TB, which works out to be 122kB. Since the average TB size is near 1kB, this wastes quite a lot of storage. Instead, check for overflow in between generating code for each opcode. The overhead of the check isn't measurable and wastage is minimized.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
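  The idea reduces to a cheap pointer comparison between opcodes. A sketch with assumed names (the real fields live in TCGContext and may differ):

      /* code_gen_highwater sits a safe margin (roughly one maximum-size
       * opcode's worth of output) below the true end of code_gen_buffer. */
      static inline bool tcg_code_buffer_nearly_full(const TCGContext *s)
      {
          return s->code_ptr > s->code_gen_highwater;
      }

      /* Checked after emitting each opcode; on overflow the caller flushes
       * the translation cache and retries, instead of pre-reserving a 122kB
       * worst case for every TB up front. */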
* | tcg: Remove gen_intermediate_code_pc  [Richard Henderson, 2015-10-07, 1 file, -1/+0]
  It is no longer used, so tidy up everything reached by it. This includes the gen_opc_* arrays, the search_pc parameter and the inline gen_intermediate_code_internal functions.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg: Save insn data and use it in cpu_restore_state_from_tb  [Richard Henderson, 2015-10-07, 1 file, -0/+1]
  We can now restore state without retranslation.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg: Pass data argument to restore_state_to_opc  [Richard Henderson, 2015-10-07, 1 file, -1/+1]
  The gen_opc_* arrays are already redundant with the data stored in the insn_start arguments. Transition restore_state_to_opc to use data from the latter.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg: Merge cpu_gen_code into tb_gen_code  [Richard Henderson, 2015-10-07, 1 file, -2/+0]
  tb_gen_code is its only caller, so this tidies things a bit.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* include/exec: Move cputlb exec.c defs out  [Peter Crosthwaite, 2015-09-16, 2 files, -16/+18]
  Move the architecture agnostic function prototypes for exec.c out of cputlb.h to exec-all.h. This allows hiding of the arch specific cputlb.h from exec.c which should be getting close to having no architecture specifics. Prepares support for multi-arch, which will have a minimal cpu.h that services exec.c but not cputlb.h.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
  Message-Id: <b4fe754c58c860315e35d44430c26b1c967ce2c9.1441614289.git.crosthwaite.peter@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>