author    Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2018-09-19 07:41:46 +0200
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2018-09-19 07:41:46 +0200
commit    f21f7fa263ac005713f0a7a43179c5aea0fabe85 (patch)
tree      e260aa43863177bee64c14fe9d012a8f4022a730 /kernel
parent    eba2d6b34a32bdc3585c5810633ec38f9472380c (diff)
parent    83f365554e47997ec68dc4eca3f5dce525cd15c3 (diff)
download  op-kernel-dev-f21f7fa263ac005713f0a7a43179c5aea0fabe85.zip
          op-kernel-dev-f21f7fa263ac005713f0a7a43179c5aea0fabe85.tar.gz
Merge tag 'trace-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Steven writes: "Vaibhav Nagarnaik found that modifying the ring buffer size could cause a huge latency in the system because it does a while loop to free pages without releasing the CPU (on non-preempt kernels). In a case where there are hundreds of thousands of pages to free, it could actually cause a system stall. A properly placed cond_resched() solves this issue."
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/trace/ring_buffer.c  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1d92d4a..65bd461 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1546,6 +1546,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	tmp_iter_page = first_page;
 	do {
+		cond_resched();
+
 		to_remove_page = tmp_iter_page;
 		rb_inc_page(cpu_buffer, &tmp_iter_page);
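
For context, the two added lines sit at the top of the loop in rb_remove_pages() that walks the buffer pages and frees them one at a time. Below is a rough sketch of how the patched loop reads; tmp_iter_page, to_remove_page, rb_inc_page() and cond_resched() come from the hunk above, while the loop exit and the free_buffer_page()/nr_removed bookkeeping are recalled from the surrounding function and abbreviated here, so treat this as an illustration rather than the verbatim upstream code.

	/*
	 * Sketch (not verbatim): the page-freeing loop in rb_remove_pages()
	 * with the fix applied and the per-page accounting elided.
	 */
	tmp_iter_page = first_page;

	do {
		/*
		 * Give the scheduler a chance to run between iterations so
		 * that freeing hundreds of thousands of pages cannot stall
		 * a non-preempt kernel.
		 */
		cond_resched();

		to_remove_page = tmp_iter_page;
		rb_inc_page(cpu_buffer, &tmp_iter_page);

		/* ... entry/byte accounting for the removed page elided ... */

		free_buffer_page(to_remove_page);
		nr_removed--;

	} while (to_remove_page != last_page);

Placing cond_resched() at the top of the body bounds how long the CPU is held to a single iteration, which is why the stall disappears even when a resize frees a very large number of pages.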