author     kib <kib@FreeBSD.org>  2015-07-30 15:47:53 +0000
committer  kib <kib@FreeBSD.org>  2015-07-30 15:47:53 +0000
commit     88c35d6516fe8bf221803cefd19bc569dd41d56a (patch)
tree       c3f3c6150c26daac2925ea8cd6be6237d465f049 /sys/amd64/include/atomic.h
parent     f4c52859b87baba7030ff64113142879d192f5f6 (diff)
Improve comments.
Submitted by: bde
MFC after: 2 weeks
Diffstat (limited to 'sys/amd64/include/atomic.h')
-rw-r--r--  sys/amd64/include/atomic.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/sys/amd64/include/atomic.h b/sys/amd64/include/atomic.h
index 30f594c..33d79b2 100644
--- a/sys/amd64/include/atomic.h
+++ b/sys/amd64/include/atomic.h
@@ -272,10 +272,10 @@ atomic_testandset_long(volatile u_long *p, u_int v)
  * addresses, so we need a Store/Load barrier for sequentially
  * consistent fences in SMP kernels.  We use "lock addl $0,mem" for a
  * Store/Load barrier, as recommended by the AMD Software Optimization
- * Guide, and not mfence.  In the kernel, we use a private per-cpu
- * cache line as the target for the locked addition, to avoid
- * introducing false data dependencies.  In user space, we use a word
- * in the stack's red zone (-8(%rsp)).
+ * Guide, and not mfence.  To avoid false data dependencies, we use a
+ * special address for "mem".  In the kernel, we use a private per-cpu
+ * cache line.  In user space, we use a word in the stack's red zone
+ * (-8(%rsp)).
  *
  * For UP kernels, however, the memory of the single processor is
  * always consistent, so we only need to stop the compiler from