path: root/include/qemu/atomic.h
Commit message | Author | Age | Files | Lines
* include/qemu/atomic.h: default to __atomic functions | Alex Bennée | 2019-11-29 | 1 | -61/+131
The __atomic primitives have been available since GCC 4.7 and provide a richer interface for describing memory ordering requirements. As a bonus, by using the primitives instead of hand-rolled functions we can use tools such as ThreadSanitizer, which needs well-defined APIs for its analysis.

If the __ATOMIC defines are available, we use the __atomic primitives exclusively for all our atomic accesses. Otherwise we fall back to the mixture of __sync builtins and hand-rolled barriers.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1453976119-24372-4-git-send-email-alex.bennee@linaro.org>
[Use __ATOMIC_SEQ_CST for atomic_mb_read/atomic_mb_set on !POWER. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
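A minimal sketch of seq-cst accessors built on the __atomic primitives the commit describes (helper names are illustrative, not QEMU's actual macros):

    #include <stdint.h>

    /* Sequentially consistent load and store using the GCC/clang
     * __atomic builtins; __ATOMIC_SEQ_CST is the strongest ordering. */
    static inline uint32_t load_seqcst(uint32_t *ptr)
    {
        return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
    }

    static inline void store_seqcst(uint32_t *ptr, uint32_t val)
    {
        __atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);
    }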
* Initial overlay of HQEMU 2.5.2 changes onto underlying 2.5.0 QEMU GIT tree [2.5_overlay] | Timothy Pearson | 2019-11-29 | 1 | -5/+5
* atomics: add explicit compiler fence in __atomic memory barriers | Paolo Bonzini | 2015-06-05 | 1 | -3/+9
__atomic_thread_fence does not include a compiler barrier; in the C++11 memory model, fences take effect in combination with other atomic operations. GCC implements this by making __atomic_load and __atomic_store access memory as if the pointer were volatile, and leaves no trace whatsoever of acquire and release fences in the compiler's intermediate representation.

In QEMU, we want memory barriers to act on all memory, but at the same time we would like to use __atomic_thread_fence for portability reasons. Add compiler barriers manually around the __atomic_thread_fence.

Message-Id: <1433334080-14912-1-git-send-email-pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
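A sketch of the resulting pattern, with an explicit compiler barrier on each side of the fence (macro names are illustrative):

    /* An empty asm with a "memory" clobber forbids the compiler from
     * moving memory accesses across it, yet emits no instruction. */
    #define barrier()  __asm__ __volatile__("" ::: "memory")

    /* Full memory barrier: the surrounding compiler barriers make the
     * fence constrain all memory accesses, not just atomic ones. */
    #define smp_mb()   ({ barrier(); \
                          __atomic_thread_fence(__ATOMIC_SEQ_CST); \
                          barrier(); })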
* rcu: add rcu library | Paolo Bonzini | 2015-02-02 | 1 | -0/+61
This includes a (mangled) copy of the liburcu code. The main changes are:

1) removing dependencies on many other header files in liburcu;
2) removing for simplicity the tentative busy waiting in synchronize_rcu, which has limited performance effects;
3) replacing futexes in synchronize_rcu with QemuEvents for Win32 portability.

The API is the same as liburcu, so it should be possible in the future to require liburcu on POSIX systems, for example, and use our copy only on Windows.

Among the various versions available I chose urcu-mb, which is the least invasive implementation even though it does not have the fastest rcu_read_{lock,unlock} implementation. The urcu flavor can be changed later, after benchmarking.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
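Since the API matches liburcu, reader-side usage would follow the usual pattern; a sketch under that assumption (the shared pointer, its consumer, and the liburcu-style rcu_dereference() read are illustrative, not taken from this commit):

    struct config *cfg;

    rcu_read_lock();
    cfg = rcu_dereference(global_config);   /* liburcu-style read of a shared pointer */
    if (cfg) {
        apply(cfg);   /* safe: cfg cannot be reclaimed before rcu_read_unlock() */
    }
    rcu_read_unlock();

    /* A writer publishes a new version, then calls synchronize_rcu()
     * and frees the old version only after all readers have drained. */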
* atomic: fix position of volatile qualifier | Paolo Bonzini | 2014-12-23 | 1 | -2/+2
What needs to be volatile is not the pointer, but the pointed-to value!

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
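For reference, the distinction the fix is about in C declarator syntax:

    volatile int *p;   /* pointer to volatile int: every *p access must hit memory */
    int *volatile q;   /* volatile pointer to plain int: only q itself is volatile */

With the qualifier in the first position, it is the pointed-to value that the compiler may not cache or optimize away, which is what an atomic accessor needs.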
* atomic.h: Fix build with clang | Peter Maydell | 2013-11-21 | 1 | -3/+3
clang defines __ATOMIC_SEQ_CST but its implementation of the __atomic_exchange() builtin differs from that of gcc. Move the __clang__ branch of the ifdef ladder to the top and fix its implementation (there is no such builtin as __sync_exchange), so we can compile with clang again.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1382435921-18438-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@amazon.com>
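On current GCC and clang the _n form of the builtin, which takes and returns the value directly rather than through pointers, behaves the same on both compilers; a sketch of that spelling (not the exact code of this fix):

    int lock_word;

    /* Atomically store 1 in lock_word and return the previous value. */
    int old = __atomic_exchange_n(&lock_word, 1, __ATOMIC_SEQ_CST);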
* add a header file for atomic operations | Paolo Bonzini | 2013-07-04 | 1 | -32/+166
We're already using them in several places, but __sync builtins are just too ugly to type, and do not provide seqcst load/store operations.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
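A sketch of the gap the message points at: __sync offers read-modify-write builtins, but a plain sequentially consistent load has no __sync spelling and needs a wrapper such as the atomic_mb_read mentioned in the __atomic commit above (usage shown here is illustrative):

    int counter;

    __sync_fetch_and_add(&counter, 1);   /* RMW: exists as a __sync builtin */
    int v = atomic_mb_read(&counter);    /* seq-cst load: only via the header's wrapper */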
* block-migration: add lock | Paolo Bonzini | 2013-03-11 | 1 | -0/+1
Some state is shared between the block migration code and its AIO callbacks. Once block migration runs outside the iothread, the block migration code and the AIO callbacks will be able to run concurrently. Protect the critical sections with a separate lock. Do the same for completed_sectors, which can be used from the monitor.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
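A hedged sketch of the pattern with QEMU's QemuMutex API (the function and the state touched inside the critical section are illustrative):

    #include "qemu/thread.h"

    static QemuMutex blk_mig_lock;   /* initialized once with qemu_mutex_init() */

    static void blk_mig_account(int64_t sectors)
    {
        qemu_mutex_lock(&blk_mig_lock);
        /* update state shared with the AIO callbacks and the monitor,
         * e.g. the completed_sectors counter */
        qemu_mutex_unlock(&blk_mig_lock);
    }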
* misc: move include files to include/qemu/ | Paolo Bonzini | 2012-12-19 | 1 | -0/+67
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>