
.. _local_ops:

=================================================
Semantics and Behavior of Local Atomic Operations
=================================================

:Author: Mathieu Desnoyers


This document explains the purpose of the local atomic operations, how
to implement them for any given architecture and shows how they can be used
properly. It also stresses the precautions that must be taken when reading
those local variables across CPUs when the order of memory writes matters.

.. note::

    Note that ``local_t`` based operations are not recommended for general
    kernel use. Please use the ``this_cpu`` operations instead unless there is
    really a special purpose. Most uses of ``local_t`` in the kernel have been
    replaced by ``this_cpu`` operations. ``this_cpu`` operations combine the
    relocation with the ``local_t`` like semantics in a single instruction and
    yield more compact and faster executing code.
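
    For instance, a per-CPU event count kept in a plain ``long`` can be
    updated with a single ``this_cpu`` operation, while the ``local_t`` form
    needs an explicit pointer to the local instance (the ``events`` and
    ``events_l`` variables below are purely illustrative)::

        static DEFINE_PER_CPU(long, events);
        static DEFINE_PER_CPU(local_t, events_l) = LOCAL_INIT(0);

        this_cpu_inc(events);                  /* preferred; a single instruction on x86 */
        local_inc(this_cpu_ptr(&events_l));    /* equivalent local_t form */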


Purpose of local atomic operations
==================================

Local atomic operations are meant to provide fast and highly reentrant per CPU
counters. They minimize the performance cost of standard atomic operations by
removing the LOCK prefix and memory barriers normally required to synchronize
across CPUs.

Fast per CPU atomic counters are interesting in many cases: they do not
require disabling interrupts to protect against interrupt handlers and they
permit coherent counters in NMI handlers. They are especially useful for
tracing purposes and for various performance monitoring counters.

Local atomic operations only guarantee variable modification atomicity wrt the
CPU which owns the data. Therefore, care must be taken to make sure that only one
CPU writes to the ``local_t`` data. This is done by using per cpu data and
making sure that we modify it from within a preemption safe context. It is
however permitted to read ``local_t`` data from any CPU: it will then appear to
be written out of order wrt other memory writes by the owner CPU.


Implementation for a given architecture
=======================================

This can be done by slightly modifying the standard atomic operations: only
their UP variant must be kept. It typically means removing the LOCK prefix (on
i386 and x86_64) and any SMP synchronization barrier. If the architecture does
not behave differently between SMP and UP, including
``asm-generic/local.h`` in your architecture's ``local.h`` is sufficient.

The ``local_t`` type is defined as an opaque ``signed long`` by embedding an
``atomic_long_t`` inside a structure. This is done so that a cast from this type
to a ``long`` fails. The definition looks like::

    typedef struct { atomic_long_t a; } local_t;
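
When the generic fallback is used, each local operation simply maps onto the
corresponding ``atomic_long_t`` operation on the embedded member. A simplified
sketch of that mapping (not the complete set of operations provided)::

    #define LOCAL_INIT(i)    { ATOMIC_LONG_INIT(i) }

    #define local_read(l)    atomic_long_read(&(l)->a)
    #define local_set(l, i)  atomic_long_set(&(l)->a, (i))
    #define local_inc(l)     atomic_long_inc(&(l)->a)
    #define local_dec(l)     atomic_long_dec(&(l)->a)
    #define local_add(i, l)  atomic_long_add((i), &(l)->a)
    #define local_sub(i, l)  atomic_long_sub((i), &(l)->a)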


Rules to follow when using local atomic operations
==================================================

* Variables touched by local ops must be per cpu variables.
* *Only* the CPU owner of these variables must write to them.
* This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
  to update its ``local_t`` variables.
* Preemption (or interrupts) must be disabled when using local ops in
  process context to make sure the process won't be migrated to a
  different CPU between getting the per-cpu variable and doing the
  actual local op.
* When using local ops in interrupt context, no special care needs to be
  taken on a mainline kernel, since they will run on the local CPU with
  preemption already disabled. I suggest, however, explicitly disabling
  preemption anyway to make sure it will still work correctly on -rt
  kernels (see the sketch after this list).
* Reading the local cpu variable will provide the current copy of the
  variable.
* Reads of these variables can be done from any CPU, because updates to
  aligned ``long`` variables are always atomic. Since no memory
  synchronization is done by the writer CPU, an outdated copy of the
  variable can be read when reading some *other* CPU's variables.
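
As a sketch of these rules in practice, the owner CPU may update its counter
from interrupt context without any extra protection (the ``pkt_count`` variable
and the ``my_device_interrupt()`` handler below are purely illustrative, not an
existing API)::

    #include <linux/interrupt.h>
    #include <linux/percpu.h>
    #include <asm/local.h>

    static DEFINE_PER_CPU(local_t, pkt_count) = LOCAL_INIT(0);

    static irqreturn_t my_device_interrupt(int irq, void *dev_id)
    {
            /* Interrupt context: we already run on the owner CPU with
             * preemption disabled, so the local op needs no extra care. */
            local_inc(this_cpu_ptr(&pkt_count));
            return IRQ_HANDLED;
    }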


How to use local atomic operations
==================================

::

    #include <linux/percpu.h>
    #include <asm/local.h>

    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);


Counting
========

Counting is done on all the bits of a signed long.

In preemptible context, use ``get_cpu_var()`` and ``put_cpu_var()`` around the
local atomic operations: this makes sure that preemption is disabled around the
write access to the per cpu variable. For instance::

    local_inc(&get_cpu_var(counters));
    put_cpu_var(counters);

If you are already in a preemption-safe context, you can use
``this_cpu_ptr()`` instead::

    local_inc(this_cpu_ptr(&counters));
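
The other arithmetic operations (``local_add()``, ``local_sub()``, ...) follow
the same pattern. For example, adding an arbitrary amount from a preemptible
context (``nbytes`` is simply an illustrative value computed by the caller)::

    local_add(nbytes, &get_cpu_var(counters));
    put_cpu_var(counters);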



Reading the counters
====================

Those local counters can be read from foreign CPUs to sum the count. Note that
the data seen by ``local_read()`` across CPUs must be considered to be out of
order relative to other memory writes happening on the CPU that owns the data::

    long sum = 0;
    for_each_online_cpu(cpu)
            sum += local_read(&per_cpu(counters, cpu));

If you want to use a remote ``local_read()`` to synchronize access to a resource
between CPUs, explicit ``smp_wmb()`` and ``smp_rmb()`` memory barriers must be used
respectively on the writer and the reader CPUs. This would be the case, for
instance, if you use the ``local_t`` variable as a counter of bytes written in a
buffer: there should be an ``smp_wmb()`` between the buffer write and the counter
increment, and an ``smp_rmb()`` between the counter read and the buffer read.
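
A minimal sketch of that byte-counter scheme (the ``buf``, ``src``, ``dst`` and
``bytes_written`` names are purely illustrative, not an existing API)::

    /* Writer side, running on the CPU that owns buf and bytes_written */
    memcpy(buf + local_read(&bytes_written), src, len);
    smp_wmb();      /* make the buffer data visible before the new count */
    local_add(len, &bytes_written);

    /* Reader side, possibly running on another CPU */
    avail = local_read(&bytes_written);
    smp_rmb();      /* read the count before reading the buffer data */
    memcpy(dst, buf, avail);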


Here is a sample module which implements a basic per cpu counter using
``local.h``::

    /* test-local.c
     *
     * Sample module for local.h usage.
     */


    #include <asm/local.h>
    #include <linux/module.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>
    #include <linux/timer.h>

    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

    static struct timer_list test_timer;

    /* IPI called on each CPU. */
    static void test_each(void *info)
    {
            /* Increment the counter from a non preemptible context */
            printk("Increment on cpu %d\n", smp_processor_id());
            local_inc(this_cpu_ptr(&counters));

            /* This is what incrementing the variable would look like within a
             * preemptible context (it disables preemption) :
             *
             * local_inc(&get_cpu_var(counters));
             * put_cpu_var(counters);
             */
    }

    static void do_test_timer(struct timer_list *unused)
    {
            int cpu;

            /* Increment the counters */
            on_each_cpu(test_each, NULL, 1);
            /* Read all the counters */
            printk("Counters read from CPU %d\n", smp_processor_id());
            for_each_online_cpu(cpu) {
                    printk("Read : CPU %d, count %ld\n", cpu,
                            local_read(&per_cpu(counters, cpu)));
            }
            /* Re-arm the timer to run again later */
            mod_timer(&test_timer, jiffies + 1000);
    }

    static int __init test_init(void)
    {
            /* initialize the timer that will increment and read the counters */
            timer_setup(&test_timer, do_test_timer, 0);
            mod_timer(&test_timer, jiffies + 1);

            return 0;
    }

    static void __exit test_exit(void)
    {
            timer_delete_sync(&test_timer);
    }

    module_init(test_init);
    module_exit(test_exit);

    MODULE_LICENSE("GPL");
    MODULE_AUTHOR("Mathieu Desnoyers");
    MODULE_DESCRIPTION("Local Atomic Ops");
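
The sample can be built as an out-of-tree module using the standard external
module workflow (this assumes the build tree for the running kernel is
installed): a one-line kbuild file next to ``test-local.c``::

    obj-m := test-local.o

then build against the running kernel and load it::

    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
    insmod test-local.ko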