path: root/migration/ram.c
* migration: fix incorrect memory_global_dirty_log_start outside BQL
  Paolo Bonzini | 2019-11-29 | 1 file, -0/+4

  This can cause various segmentation faults or aborts in qemu-iotests test 091.

  Fixes: 5b82b703b69acc67b78b98a5efc897a3912719eb
  Cc: Dave Gilbert <dgilbert@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* memory: RCU ram_list.dirty_memory[] for safe RAM hotplug
  Stefan Hajnoczi | 2019-11-29 | 1 file, -4/+0

  Although accesses to ram_list.dirty_memory[] use atomics so multiple threads can safely dirty the bitmap, the data structure is not fully thread-safe yet.

  This patch handles the RAM hotplug case where ram_list.dirty_memory[] is grown. ram_list.dirty_memory[] is changed from a regular bitmap to an RCU array of pointers to fixed-size bitmap blocks. Threads can continue accessing bitmap blocks while the array is being extended. See the comments in the code for an in-depth explanation of struct DirtyMemoryBlocks.

  I have tested that live migration with virtio-blk dataplane works.

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <1453728801-5398-2-git-send-email-stefanha@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
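  The struct DirtyMemoryBlocks mentioned above can be pictured with a short sketch (simplified from the commit description; the exact layout in the tree may differ):

      /* Dirty memory is tracked in fixed-size bitmap blocks; the array of
       * block pointers is published via RCU so readers never observe a
       * half-resized bitmap. */
      #define DIRTY_MEMORY_BLOCK_SIZE ((ram_addr_t)256 * 1024 * 8)

      typedef struct {
          struct rcu_head rcu;      /* reclaimed once all readers finish */
          unsigned long *blocks[];  /* flexible array of bitmap blocks */
      } DirtyMemoryBlocks;

  To grow it, the writer allocates a larger pointer array, copies over the old block pointers, appends freshly allocated blocks, and atomically publishes the new array; readers inside an RCU critical section keep using the old one.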
* migration/ram: Fix some helper functions' parameters to use PageSearchStatus
  zhanghailiang | 2019-11-29 | 1 file, -14/+19

  Some helper functions take 'RAMBlock *block' and 'ram_addr_t *offset' parameters. We can pass 'PageSearchStatus *pss' directly instead. This reduces the number of parameters for these helpers and makes it easier to add new parameters later.

  Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1452829066-9764-5-git-send-email-zhang.zhanghailiang@huawei.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
* ram: Split host_from_stream_offset() into two helper functions
  zhanghailiang | 2019-11-29 | 1 file, -15/+25

  Split host_from_stream_offset() into two parts: one gets the RAM block (whose idstr may come from the migration stream), and the other gets the hva (host address) from the block and the offset. Besides, the range check moves into a new helper, offset_in_ramblock().

  Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1452829066-9764-2-git-send-email-zhang.zhanghailiang@huawei.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
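  The new range check is essentially a one-liner; a sketch of what offset_in_ramblock() verifies, based on the description above:

      /* An offset is only valid while it falls inside the block's used length */
      static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
      {
          return offset < b->used_length;
      }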
* migration: Clean up includes
  Peter Maydell | 2019-11-29 | 1 file, -1/+1

  Clean up includes so that osdep.h is included first and headers which it implies are not included manually.

  This commit was created with scripts/clean-includes.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1453832250-766-2-git-send-email-peter.maydell@linaro.org
* fpu: Replace uint8 typedef with uint8_t
  Peter Maydell | 2019-11-29 | 1 file, -1/+1

  Replace the uint8 softfloat-specific typedef with uint8_t. This change was made with

      find include hw fpu target-* -name '*.[ch]' | \
          xargs sed -i -e 's/\buint8\b/uint8_t/g'

  together with manual removal of the typedef definition and manual fixing of more erroneous uses found via test compilation. It turns out that the only code using this type is an accidental use where uint8_t was intended anyway...

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Acked-by: Leon Alrae <leon.alrae@imgtec.com>
  Acked-by: James Hogan <james.hogan@imgtec.com>
  Message-id: 1452603315-27030-7-git-send-email-peter.maydell@linaro.org
* error: Strip trailing '\n' from error string arguments (again)
  Markus Armbruster | 2019-11-29 | 1 file, -1/+1

  Commits 6daf194d, be62a2eb and 312fd5f got rid of a bunch, but they keep coming back. Tracked down with the Coccinelle semantic patch from commit 312fd5f.

  Cc: Fam Zheng <famz@redhat.com>
  Cc: Peter Crosthwaite <crosthwaitepeter@gmail.com>
  Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
  Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
  Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
  Cc: Jason J. Herne <jjherne@linux.vnet.ibm.com>
  Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
  Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Cc: Changchun Ouyang <changchun.ouyang@intel.com>
  Cc: zhanghailiang <zhang.zhanghailiang@huawei.com>
  Cc: Pavel Fedin <p.fedin@samsung.com>
  Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
  Acked-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
  Acked-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <1450452927-8346-17-git-send-email-armbru@redhat.com>
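  The class of mistake being stripped looks like this (an illustrative example, not a hunk from the actual patch):

      /* Before: the trailing '\n' gets duplicated when the error is printed */
      error_setg(errp, "device '%s' not found\n", name);

      /* After: error string arguments must not end in a newline */
      error_setg(errp, "device '%s' not found", name);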
* multithread decompression: Avoid one copy
  Dr. David Alan Gilbert | 2019-11-29 | 1 file, -8/+3

  qemu_get_buffer does a copy; we can avoid the memcpy and then remove the extra buffer.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Liang Li <liang.z.li@intel.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1450266458-3178-7-git-send-email-dgilbert@redhat.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
* Use qemu_get_buffer_in_place for xbzrle data
  Dr. David Alan Gilbert | 2019-11-29 | 1 file, -2/+4

  Avoid a data copy (if we're lucky) in the xbzrle code.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1450266458-3178-6-git-send-email-dgilbert@redhat.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
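  A sketch of the zero-copy read pattern (variable names are hypothetical; note the buffer-lifetime caveat):

      uint8_t *xbzrle_buf;

      /* May return a pointer straight into the QEMUFile's internal buffer
       * rather than copying; the data is only valid until the next read. */
      size_t got = qemu_get_buffer_in_place(f, &xbzrle_buf, xh_len);
      if (got != xh_len) {
          return -1;  /* short read */
      }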
* Migration: Emit event at start of pass
  Dr. David Alan Gilbert | 2019-11-29 | 1 file, -0/+4

  Emit an event each time we sync the dirty bitmap on the source; this helps libvirt use postcopy by giving it a kick when it might be a good idea to start the postcopy.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1450266458-3178-5-git-send-email-dgilbert@redhat.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
* Fix xbzrle vs last_sent_block update
  Dr. David Alan Gilbert | 2015-12-11 | 1 file, -1/+10

  My fix (84e7b80a) replaced the last_sent_block update that I'd removed earlier; however, it was too aggressive in the xbzrle case. save_xbzrle_page might return '0' to mean that the page didn't need sending since it was the same as the last sent version; in this case we can't update 'last_sent_block', since we didn't actually send it.

  Symptom: 'Illegal RAM offset 1018000' as we try to send a page to the wrong RAMBlock; potentially that could be a data corruption if you were really unlucky.

  Fixes: 84e7b80a05c0c44b90533c6cd2f1db5c932ccf77
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Message-id: 1449765106-6528-1-git-send-email-dgilbert@redhat.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
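  The shape of the fix, roughly (a hedged sketch; the exact save_xbzrle_page signature may differ from this):

      int pages = save_xbzrle_page(f, &p, current_addr, block,
                                   &offset, last_stage, bytes_transferred);
      if (pages > 0) {
          /* 0 means the page matched the last version sent and was
           * skipped, so nothing went on the wire for this block. */
          last_sent_block = block;
      }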
* Set last_sent_block
  Dr. David Alan Gilbert | 2015-11-19 | 1 file, -0/+1

  In commit a82d593b61054b3dea43 I accidentally removed the setting of last_sent_block; put it back.

  Symptoms: multithreaded compression only uses one thread, and migration is a bit less efficient since it won't use 'cont' flags.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Fixes: a82d593b61054b3dea43
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Postcopy: Fix TP!=HP zero case
  Dr. David Alan Gilbert | 2015-11-12 | 1 file, -1/+1

  Where the target page size is different from the host page size we special-case it, but I messed up on the zero-case check.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: print ram_addr_t as RAM_ADDR_FMT not %zx
  Juan Quintela | 2015-11-12 | 1 file, -2/+3

  Not all the world is 64 bits (yet).

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
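  An example of the pattern (RAM_ADDR_FMT expands to the conversion matching however wide ram_addr_t is on a given build):

      /* Not portable: ram_addr_t is not necessarily a size_t */
      error_report("Illegal RAM offset %zx", offset);

      /* Portable across 32- and 64-bit builds */
      error_report("Illegal RAM offset " RAM_ADDR_FMT, offset);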
* Host page!=target page: Cleanup bitmaps
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+172

  Prior to the start of postcopy, ensure that everything that will be transferred later is a whole host-page in size. This is accomplished by discarding partially transferred host pages and marking any that are partially dirty as fully dirty.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Don't sync dirty bitmaps in postcopy
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -2/+5

  Once we're in postcopy the source processors are stopped and memory shouldn't change any more, so there's no need to look at the dirty map.

  There are two notes to this:
  1) If we did resync and a page had changed, the page would get sent again, which the destination wouldn't allow (since it might have also modified the page).
  2) Before disabling this I'd seen very rare cases where a page had been marked dirty although the memory contents were apparently identical.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* postcopy: Check order of received target pages
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+11

  Ensure that target pages received within a host page are in order. This shouldn't trigger, but in the cases where the sender goes wrong and sends stuff out of order it produces a corruption that's really nasty to debug.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
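  The check amounts to something like this hypothetical sketch (the real code tracks its position within the current host page; names here are illustrative):

      /* Target pages within one host page must arrive back-to-back,
       * each exactly TARGET_PAGE_SIZE after the previous one. */
      if (expected_host && host != expected_host) {
          error_report("Received out-of-order target page %p (expected %p)",
                       host, expected_host);
          return -EINVAL;
      }
      expected_host = host + TARGET_PAGE_SIZE;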
* Postcopy: Use helpers to map pages during migration
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -1/+129

  In postcopy, the destination guest is running at the same time as it's receiving pages; as we receive new pages we must put them into the guest's address space atomically, to avoid a running CPU accessing a partially written page.

  Use the helpers in postcopy-ram.c to map these pages. qemu_get_buffer_in_place is used to avoid a copy out of qemu_file in the case that postcopy is going to do a copy anyway.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Page request: Consume pages off the post-copy queue
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -31/+218

  When transmitting RAM pages, consume pages that have been queued by MIG_RPCOMM_REQPAGE commands and send them ahead of normal page scanning.

  Note:
  a) After a queued page, the linear walk carries on from after the unqueued page; there is a reasonable chance that the destination was about to ask for other nearby pages anyway.
  b) We have to be careful of any assumptions that the page-walking code makes; in particular it takes some shortcuts on its first linear walk that break as soon as we do a queued page.
  c) We have to be careful not to break up host-page-sized chunks, since that makes it harder to place the pages on the destination.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Page request: Process incoming page request
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+85

  On receiving MIG_RPCOMM_REQ_PAGES look up the address and queue the page.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* postcopy: Incoming initialisation
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+11

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration_completion: Take current state
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -1/+180

  Soon we'll be in either ACTIVE or POSTCOPY_ACTIVE when we complete migration, and we need to know which we expect to be in to change state safely.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Postcopy: Maintain unsentmap
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -6/+45

  Maintain an 'unsentmap' of pages that have yet to be sent. This is used in the following patches to discard some set of the pages already sent as we enter postcopy mode.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Add qemu_savevm_state_complete_postcopy
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+1

  Add qemu_savevm_state_complete_postcopy to complement qemu_savevm_state_complete_precopy, together with a new save_live_complete_postcopy method on devices.

  The save_live_complete_precopy method is called on all devices during a precopy migration, and all non-postcopy devices during a postcopy migration at the transition.

  The save_live_complete_postcopy method is called at the end of postcopy for all postcopiable devices.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Modify save_live_pending for postcopy
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -2/+6

  Modify save_live_pending to return separate postcopiable and non-postcopiable counts.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Rename save_live_complete to save_live_complete_precopy
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -1/+1

  In postcopy we're going to need to perform the complete phase for postcopiable devices at a different point, so start out by renaming all of the 'complete's to make the difference obvious.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* ram_load: Factor out host_from_stream_offset call and check
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -26/+15

  The main RAM load loop has a call to host_from_stream_offset for each page type that actually loads data, with the same test each time; factor it out before the switch.

  The 'host = NULL' is to silence a bogus gcc warning about an uninitialised variable in the RAM_SAVE_COMPRESS_PAGE case; gcc doesn't seem to realise that 'host' is always initialised by the if at the top in the cases the switch takes.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
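  After the refactor the load loop looks roughly like this condensed sketch (flag names as used by ram.c of that era; details may differ from the actual hunk):

      void *host = NULL;  /* NULL only to silence the bogus gcc warning */

      if (flags & (RAM_SAVE_FLAG_COMPRESS | RAM_SAVE_FLAG_PAGE |
                   RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
          host = host_from_stream_offset(f, addr, flags);
          if (!host) {
              error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
              ret = -EINVAL;
              break;
          }
      }

      switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
          /* each page-carrying case now uses 'host' directly */
      }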
* ram_debug_dump_bitmap: Dump a migration bitmap as text
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -0/+39

  Useful for debugging the migration bitmap and other bitmaps of the same format (including the sentmap in postcopy). The bitmap is printed to stderr. Lines that are all the expected value are excluded, so the output can be quite compact for many bitmaps.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* qemu_ram_block_by_name
  Dr. David Alan Gilbert | 2015-11-10 | 1 file, -20/+15

  Add a function to find a RAMBlock by name; use it in two of the places that already open-code that loop; we've got another use later in postcopy.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
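  The helper is essentially the loop it replaces; a sketch close to what such a function looks like:

      RAMBlock *qemu_ram_block_by_name(const char *name)
      {
          RAMBlock *block;

          /* Walk the RCU-protected block list for a matching idstr */
          QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
              if (!strcmp(name, block->idstr)) {
                  return block;
              }
          }
          return NULL;
      }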
* migration: code clean up
  Liang Li | 2015-11-04 | 1 file, -7/+2

  Just clean up code, no behavior change.

  Signed-off-by: Liang Li <liang.z.li@intel.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: rename cancel to cleanup in SaveVMHandles
  Liang Li | 2015-11-04 | 1 file, -1/+1

  'cleanup' seems more appropriate than 'cancel'.

  Signed-off-by: Liang Li <liang.z.li@intel.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: defer migration_end & blk_mig_cleanup
  Liang Li | 2015-11-04 | 1 file, -1/+0

  Because of KVM patch 3ea3b7fa9af067982f34b, which introduces a mechanism for lazily collapsing small sptes into large sptes, migration_end() is now a time-consuming operation: it calls memory_global_dirty_log_stop(), which triggers the dropping of small sptes and takes about dozens of milliseconds. So calling migration_end() before all the vmstate data has been transferred to the destination prolongs VM downtime. This operation should be deferred until after all the data has been transferred to the destination. blk_mig_cleanup() can be deferred too.

  For a VM with 8G RAM, this patch can reduce the VM downtime by about 30 ms.

  Signed-off-by: Liang Li <liang.z.li@intel.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: fix deadlock
  Denis V. Lunev | 2015-10-15 | 1 file, -17/+27

  Release the qemu global mutex before calling synchronize_rcu(). synchronize_rcu() waits for all readers to finish their critical sections. There is at least one critical section in which we try to get the QGM (the critical section is in address_space_rw(), where prepare_mmio_access() tries to acquire the QGM).

  Both functions (migration_end() and migration_bitmap_extend()) are called from the main thread, which is holding the QGM. Thus there is a race condition that ends up with deadlock:

      main thread                  working thread
      Lock QGM                           |
        |                   Call KVM_EXIT_IO handler
        |                                |
        |              Open rcu reader's critical section
      Migration cleanup bh               |
        |                                |
      synchronize_rcu() is               |
      waiting for readers                |
        |               prepare_mmio_access() is waiting for QGM
         \                              /
                     deadlock

  The patch changes bitmap freeing from a direct g_free() after synchronize_rcu() to freeing inside call_rcu().

  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Reported-by: Igor Redko <redkoi@virtuozzo.com>
  Tested-by: Igor Redko <redkoi@virtuozzo.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  CC: Anna Melekhova <annam@virtuozzo.com>
  CC: Juan Quintela <quintela@redhat.com>
  CC: Amit Shah <amit.shah@redhat.com>
  CC: Paolo Bonzini <pbonzini@redhat.com>
  CC: Wen Congyang <wency@cn.fujitsu.com>
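  The mechanics of the fix, sketched along the lines of the patch description: wrap the bitmap in an rcu_head so reclamation happens asynchronously via call_rcu() rather than blocking in synchronize_rcu() while the QGM is held.

      static struct BitmapRcu {
          struct rcu_head rcu;
          unsigned long *bmap;
      } *migration_bitmap_rcu;

      static void migration_bitmap_free(struct BitmapRcu *bmap)
      {
          g_free(bmap->bmap);
          g_free(bmap);
      }

      /* Instead of synchronize_rcu(); g_free(bitmap); under the QGM: */
      call_rcu(bitmap, migration_bitmap_free, rcu);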
* migration: Dynamic cpu throttling for auto-converge
  Jason J. Herne | 2015-09-30 | 1 file, -59/+30

  Remove traditional auto-converge static 30ms throttling code and replace it with a dynamic throttling algorithm.

  Additionally, be more aggressive when deciding when to start throttling. Previously we waited until four unproductive memory passes. Now we begin throttling after only two unproductive memory passes. Four seemed quite arbitrary and only waiting for two passes allows us to complete the migration faster.

  Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
  Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
* ram_find_and_save_block: Split out the finding
  Dr. David Alan Gilbert | 2015-09-29 | 1 file, -25/+59

  Split out the finding of the dirty page and all the wrap detection into a separate function since it was getting a bit hairy.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Message-Id: <1443018431-11170-3-git-send-email-dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  [Fix comment -- Amit]
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
* Move dirty page search state into separate structure
  Dr. David Alan Gilbert | 2015-09-29 | 1 file, -20/+35

  Pull the search state for one iteration of the dirty page search into a structure.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Message-Id: <1443018431-11170-2-git-send-email-dgilbert@redhat.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
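  The structure bundles walk state that was previously passed around loose; a sketch of roughly what it holds:

      /* State of one iteration of the dirty-page search */
      struct PageSearchStatus {
          RAMBlock   *block;          /* block currently being searched */
          ram_addr_t  offset;         /* offset to search from within it */
          bool        complete_round; /* whether we have wrapped around */
      };
      typedef struct PageSearchStatus PageSearchStatus;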
* migration/ram.c: Use RAMBlock rather than MemoryRegion
  Dr. David Alan Gilbert | 2015-09-29 | 1 file, -15/+11

  RAM migration mainly works on RAMBlocks but in a few places uses data from MemoryRegions to access the same information that's already held in RAMBlocks; clean it up just to avoid the MemoryRegion use.

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Message-Id: <1439463094-5394-2-git-send-email-dgilbert@redhat.com>
  Acked-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Amit Shah <amit.shah@redhat.com>
* migration: reduce the count of strlen calls
  Liang Li | 2015-07-15 | 1 file, -5/+5

  'strlen' is called three times in 'save_page_header'; that's inefficient.

  Signed-off-by: Liang Li <liang.z.li@intel.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
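  The fix is the usual hoisting pattern, computing the length once and reusing it (a sketch of what save_page_header does after the change):

      size_t len = strlen(block->idstr);  /* one strlen instead of three */

      qemu_put_byte(f, len);
      qemu_put_buffer(f, (uint8_t *)block->idstr, len);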
* migration: fix RCU deadlock
  Paolo Bonzini | 2015-07-09 | 1 file, -1/+2

  migration_end calls synchronize_rcu() within a critical section. That causes a deadlock; move the call after rcu_read_unlock().

  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
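  The bug and fix in miniature (synchronize_rcu() can never complete while the calling thread is itself inside an RCU read-side critical section):

      rcu_read_lock();
      /* ... final walk over RCU-protected migration state ... */
      rcu_read_unlock();

      /* Moved here: waiting for readers inside the critical section above
       * would mean waiting for ourselves, i.e. a deadlock. */
      synchronize_rcu();
      g_free(migration_bitmap);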
* migration: extend migration_bitmap
  Li Zhijian | 2015-07-07 | 1 file, -0/+28

  Previously, if we hotplug a device (e.g. device_add e1000) while migration is in progress on the source side, qemu adds a new ram block but migration_bitmap is not extended. In this case, migration_bitmap will overflow and cause qemu to abort unexpectedly.

  Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
  Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
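  A simplified sketch of the extension (the real patch also serializes against the bitmap protection added in the companion commit below; bitmap_new/bitmap_copy/bitmap_set are QEMU's bitmap helpers):

      static void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
      {
          unsigned long *old_bitmap = migration_bitmap;
          unsigned long *bitmap = bitmap_new(new);

          bitmap_copy(bitmap, old_bitmap, old);
          bitmap_set(bitmap, old, new - old); /* hotplugged RAM starts dirty */
          migration_bitmap = bitmap;
          g_free(old_bitmap);
      }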
* migration: protect migration_bitmap
  Li Zhijian | 2015-07-07 | 1 file, -6/+17

  Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
  Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* Rework ram_control_load_hook to hook during block load
  Dr. David Alan Gilbert | 2015-07-07 | 1 file, -1/+3

  We need the names of RAMBlocks as they're loaded for RDMA, so reuse a slightly modified ram_control_load_hook:
  a) Pass a 'data' parameter to use for the name in the block-reg case
  b) Only some hook types now require the presence of a hook function

  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* arch_init: Clean up the duplicate 'len' variable definition in ram_load()
  zhanghailiang | 2015-06-12 | 1 file, -1/+0

  Two places define a 'len' variable. That's fine for compiling, but it makes the code harder to read. Remove the local one defined inside the 'while' loop.

  Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: reduce include files
  Juan Quintela | 2015-06-12 | 1 file, -16/+2

  To make the changes easier, I kept almost all include files when copying. This patch now removes the unnecessary ones.

  This compiles on Linux x64 with all architectures configured, and cross-compiles for Windows 32 and 64 bits.

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* migration: Add myself to the copyright list of both files
  Juan Quintela | 2015-06-12 | 1 file, -0/+4

  If anyone feels like adding himself to the list, just send me a patch.

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* migration: move ram stuff to migration/ram
  Juan Quintela | 2015-06-12 | 1 file, -0/+1639

  For historic reasons, RAM migration has lived in arch_init.c. Split it into migration/ram.c, the same as happened with block.c. There is only code movement, no changes at all.

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>