author     Paolo Bonzini <pbonzini@redhat.com>    2015-06-18 18:47:19 +0200
committer  Paolo Bonzini <pbonzini@redhat.com>    2015-07-01 15:45:50 +0200
commit     afbe70535ff1a8a7a32910cc15ebecc0ba92e7da (patch)
tree       1e12a91857ba9ce038e7c1c05fd901add373b36b /stubs
parent     2e7f7a3c86f884a77296a137b7c730a4d580c5c9 (diff)
download   hqemu-afbe70535ff1a8a7a32910cc15ebecc0ba92e7da.zip
           hqemu-afbe70535ff1a8a7a32910cc15ebecc0ba92e7da.tar.gz
main-loop: introduce qemu_mutex_iothread_locked
This function will be used to avoid recursive locking of the iothread
lock whenever address_space_rw/ld*/st* are called with the BQL held,
which is almost always the case.

Tracking whether the iothread is owned is very cheap (just use a TLS
variable) but requires some care because now the lock must always be
taken with qemu_mutex_lock_iothread().  Previously this wasn't the
case.  Outside TCG mode this is not a problem.  In TCG mode, we need
to be careful and avoid the "prod out of compiled code" step if
already in a VCPU thread.  This is easily done with a check on
current_cpu, i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
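As a rough sketch of the mechanism the message describes (not QEMU's actual
cpus.c code), a thread-local flag set after acquiring the global lock and
cleared before releasing it is enough to make "do I already hold the BQL?" a
cheap per-thread query.  The pthread mutex and the variable names below are
stand-ins chosen for this illustration:

    /* Illustrative only: a TLS flag tracks ownership of a global lock. */
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the BQL */
    static __thread bool iothread_locked;                            /* per-thread ownership flag */

    bool qemu_mutex_iothread_locked(void)
    {
        return iothread_locked;    /* cheap: just a TLS read */
    }

    void qemu_mutex_lock_iothread(void)
    {
        pthread_mutex_lock(&global_mutex);
        iothread_locked = true;    /* set only once the lock is really held */
    }

    void qemu_mutex_unlock_iothread(void)
    {
        iothread_locked = false;   /* clear before releasing */
        pthread_mutex_unlock(&global_mutex);
    }

The flag is only ever read and written by the thread that owns it, so no
extra synchronization is needed beyond the lock itself.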
Diffstat (limited to 'stubs')
-rw-r--r--  stubs/iothread-lock.c  5
1 file changed, 5 insertions, 0 deletions
diff --git a/stubs/iothread-lock.c b/stubs/iothread-lock.c
index 5d8aca1..dda6f6b 100644
--- a/stubs/iothread-lock.c
+++ b/stubs/iothread-lock.c
@@ -1,6 +1,11 @@
#include "qemu-common.h"
#include "qemu/main-loop.h"
+bool qemu_mutex_iothread_locked(void)
+{
+ return true;
+}
+
void qemu_mutex_lock_iothread(void)
{
}
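For context, this is roughly how a caller could use the new predicate to
take the lock only when it does not already own it.  The wrapper function
below is illustrative, not QEMU API; with the stub above, the predicate
always reports the lock as held, so such callers never try to acquire it in
builds that link the stubs:

    /* Hypothetical caller: lock only if not already holding the BQL. */
    static void do_mmio_access_locked(void)
    {
        bool release = false;

        if (!qemu_mutex_iothread_locked()) {
            qemu_mutex_lock_iothread();
            release = true;
        }

        /* ... perform the device access under the lock ... */

        if (release) {
            qemu_mutex_unlock_iothread();
        }
    }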