Commit message | Author | Age | Files | Lines
* ioatdma: Adding PCI IDs for Intel Atom S1200 product family ioatdma devicesDave Jiang2013-04-152-0/+12
| | | | | | | | | | These should be good for the IOAT DMA devices on the Intel Atom S1269, S1279, and S1289 platforms. We are also adding IOAT v3.3 definition for the new DMA engine. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* ioatdma: Adding Haswell devid for ioatdmaDave Jiang2013-04-153-6/+55
| | | | | | | | | | Add Haswell PCI device IDs for ioatdma and simplify the detection of certain Xeon CPUs that have alignment bugs, so that future modifications can be made in a single place. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmaengine: OMAP: Register SDMA controller with Device Tree DMA driverJon Hunter2013-04-152-2/+40
| | | | | | | | | | | | | If the device-tree blob is present during boot, then register the SDMA controller with the device-tree DMA driver so that we can use device-tree to look-up DMA client information. Signed-off-by: Jon Hunter <jon-hunter@ti.com> Reviewed-by: Felipe Balbi <balbi@ti.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dw_dmac: remove unnecessary ENODEV checkAndy Shevchenko2013-04-151-1/+1
| | | | | | | | | If CONFIG_OF is not set the of_node of the device will always be NULL. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmaengine: dw_dmac: simplify master selectionArnd Bergmann2013-04-152-48/+33
| | | | | | | | | | | | | | | | | | | The patch to add the common DMA binding added a dummy dw_dma_slave structure into the dw_dma_chan structure in order to configure the masters correctly. It turns out that this can be simplified if we pick the DMA masters in the dwc_alloc_chan_resources function instead and save them in the dw_dma_chan structure directly. This could be simplified further once all users that today use dw_dma_slave for configuration get converted to device tree based setup instead. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dw_dmac: rename DT related methods to reflect their belongingAndy Shevchenko2013-04-151-11/+13
| | | | | | | | | | | | | | | | | | Since we will have not only DT cases in the future, let's rename the DT-related methods to reflect where they belong. The rename was done as follows: struct dw_dma_filter_args -> struct dw_dma_of_filter_args, dw_dma_generic_filter() -> dw_dma_of_filter(), dw_dma_xlate() -> dw_dma_of_xlate(), dw_dma_id_table -> dw_dma_of_id_table. There is no functional change. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dw_dmac: fix style of the commentsAndy Shevchenko2013-04-151-15/+15
| | | | | | | | | | Let's use a capital letter as the first one in the comments. There are no functional changes. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dma: Make the 'mask' parameter of __dma_request_channel constLars-Peter Clausen2013-04-152-8/+12
| | | | | | | | The 'mask' parameter is not modified in __dma_request_channel and really shouldn't be. Make this explicit by making the parameter const. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
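A sketch of the resulting prototype, based on the description above (see include/linux/dmaengine.h for the authoritative declaration):

  /* The capability mask is only read, never written, so it can be
   * const-qualified. */
  struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask,
                                         dma_filter_fn fn, void *fn_param);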
* dmaengine:sirf:take clock and enable it while probingBarry Song2013-04-151-0/+11
| | | | | | | | | The clock of the DMA engine used to be enabled by hardcoded setup; this patch takes the clock via the standard clock API and enables it in probe. Signed-off-by: Barry Song <Baohua.Song@csr.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dw_dmac: don't wait for FIFO_EMPTY endlessly in dwc_chan_pauseAndy Shevchenko2013-04-151-2/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | When we pause the channel after the transfer has completed we might get stuck in dwc_chan_pause() because the FIFO_EMPTY flag will never be asserted. To avoid the endless loop we introduce a timeout here (*). The proper solution is to somehow get the residue left in the FIFO and avoid the busy loop when the transfer is done, but that task is not simple or fast. Unfortunately we can't use cpu_relax() together with a jiffies check, because we have interrupts disabled by spin_lock_irqsave() and there is a big chance that no interrupt will come to update jiffies. (*) The worst case is AHB write * FIFO size / hclk = 5.12 us, where AHB write = 2 cycles, hclk = 100 MHz, burst size = 1 byte, FIFO size = 256 bytes. The proposed 40 us timeout may seem large, but we only enter this state when the transfer has already completed. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
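A rough sketch of the bounded-wait pattern described above; the register and flag names follow dw_dmac conventions but are reproduced from memory, so treat this as an illustration rather than the exact upstream hunk. Spending the 40 us budget in 2 us polls matches the worst case quoted in the message (2 AHB cycles * 256 bytes / 100 MHz = 5.12 us).

  unsigned int count = 20;        /* 20 * 2 us = 40 us budget */

  /* Interrupts are off (spin_lock_irqsave), so jiffies may not advance;
   * poll a bounded number of times instead of waiting forever. */
  while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--)
          udelay(2);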
* dma: imx-dma: Remove redundant NULL check before kfreeSyam Sidhardhan2013-04-151-2/+1
| | | | | | | kfree on NULL pointer is a no-op. Signed-off-by: Syam Sidhardhan <s.syam@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
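The kind of simplification this enables, shown on a hypothetical pointer (not the exact imx-dma hunk):

  /* Before: redundant NULL check */
  if (desc)
          kfree(desc);

  /* After: kfree(NULL) is defined to do nothing */
  kfree(desc);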
* dma: ipu: ipu_idmac: Fix section mismatchFabio Estevam2013-04-151-1/+1
| | | | | | | | | | | | | | | | Since commit 84c1e63c12 (dma: Remove erroneous __exit and __exit_p() references) the following section mismatch happens: WARNING: drivers/built-in.o(.text+0x20f94): Section mismatch in reference from the function ipu_remove() to the function .exit.text:ipu_idmac_exit() The function ipu_remove() references a function in an exit section. Often the function ipu_idmac_exit() has valid usage outside the exit section and the fix is to remove the __exit annotation of ipu_idmac_exit. Remove the '__exit' annotation from ipu_idmac_exit in order to fix it. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Acked-by: Maxin B. John <maxin.john@enea.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dma: tegra: assume CONFIG_OFStephen Warren2013-04-151-15/+7
| | | | | | | | Tegra only supports, and always enables, device tree. Remove all ifdefs and runtime checks for DT support from the driver. Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dma: pl330: Convert to devm_ioremap_resource()Sachin Kamat2013-04-151-3/+4
| | | | | | | | | Use the newly introduced devm_ioremap_resource() instead of devm_request_and_ioremap() which provides more consistent error handling. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Reviewed-by: Thierry Reding <thierry.reding@avionic-design.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
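The general shape of such a conversion in a probe path, sketched here as a pattern rather than the pl330 diff itself; devm_ioremap_resource() validates the resource, requests the region, maps it, and returns an ERR_PTR with a consistent error message on failure.

  struct resource *res;
  void __iomem *base;

  res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  base = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(base))
          return PTR_ERR(base);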
* dmatest: append verify result to resultsAndy Shevchenko2013-04-152-56/+132
| | | | | | | | | | The result of the comparison between buffers is now stored in a dedicated structure. Note that the verify result is now accessible only via the 'results' file in debugfs. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: gather test results in the linked listAndy Shevchenko2013-04-152-30/+223
| | | | | | | | | | | | | | | | | | | | | | The patch provides storage for the test results in a linked list. The gathered data can be used after the test is done. The new file 'results' shows the gathered data of the test in progress. The collected messages are printed to the kernel log as well. Example of output: % cat /sys/kernel/debug/dmatest/results dma0chan0-copy0: #1: No errors with src_off=0x7bf dst_off=0x8ad len=0x3fea (0) The message format is unified across the different types of errors. The number in parentheses represents additional information, e.g. an error code, error counter, or status. Note that the buffer comparison is done in the old way, i.e. the data is not collected, just printed out. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: define MAX_ERROR_COUNT constantAndy Shevchenko2013-04-151-3/+6
| | | | | | | | It limits the number of error messages printed out when a buffer mismatch occurs. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: return actual state in 'run' fileAndy Shevchenko2013-04-152-2/+33
| | | | | | | | | | | | | | | | The following command should return the actual state of the test. % cat /sys/kernel/debug/dmatest/run To wait for the test to finish, the user may poll the state in a loop: % while [ $(cat /sys/kernel/debug/dmatest/run) = "Y" ] > do > echo -n "." > sleep 1 > done > echo Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: run test via debugfsAndy Shevchenko2013-04-152-2/+303
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Instead of doing modprobe dmatest ... modprobe -r dmatest we allow the user to run tests interactively. dmatest can be built as a module or into the kernel. Let's consider both cases. 1. When dmatest is built as a module... After mounting debugfs and loading the module, the /sys/kernel/debug/dmatest folder with nodes will be created. The nodes are the same as the module parameters, with the addition of the 'run' node that controls the run and stop phases of the test. Note that in this case the test will not run automatically on load. Example of usage: % echo dma0chan0 > /sys/kernel/debug/dmatest/channel % echo 2000 > /sys/kernel/debug/dmatest/timeout % echo 1 > /sys/kernel/debug/dmatest/iterations % echo 1 > /sys/kernel/debug/dmatest/run After a while you will start to get messages about the current status or errors, like in the original code. Note that running a new test will stop any test in progress. 2. When built into the kernel... The module parameters supplied on the kernel command line will be used for the first test performed. After the user gets control, the test can be interrupted or re-run with the same or different parameters. For details see the section "1. When dmatest is built as a module..." above. In both cases the module parameters are used as initial values for the test case. You can always check them at run time by running % grep -H . /sys/module/dmatest/parameters/* Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: split test parameters to separate structureAndy Shevchenko2013-04-151-47/+62
| | | | | | | | Better to keep test parameters separate from internal variables. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: move dmatest_channels and nr_channels to dmatest_infoAndy Shevchenko2013-04-151-13/+16
| | | | | | | | | We don't need them to be global, and later we would like to protect access to them as well. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: create dmatest_info to keep test parametersAndy Shevchenko2013-04-151-47/+113
| | | | | | | | | | | | The proposed change removes the usage of the module parameters as global variables. In the future this will help run different test cases sequentially. The patch introduces the run_threaded_test() and stop_threaded_test() functions that can later be used outside of the dmatest_init/dmatest_exit scope. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: allocate memory for pq_coefs from heapAndy Shevchenko2013-04-151-2/+9
| | | | | | | | This will help in the future to hide global variable usage. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dmatest: cancel thread immediately when asked forAndy Shevchenko2013-04-151-1/+2
| | | | | | | | | | | | | | If the user hits a timeout-like issue and wants to cancel the thread immediately, the current call to wait_event_freezable_timeout() prevents this until the timeout expires, so the user experiences unnecessary delays. Adding a kthread_should_stop() check inside wait_event_freezable_timeout() solves that. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
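A sketch of the resulting wait condition; the names approximate dmatest's internals and are meant as illustration only.

  /* Wake up when the DMA callback fires or when the thread is asked to
   * stop, instead of sleeping out the full timeout. */
  wait_event_freezable_timeout(done_wait,
                               done.done || kthread_should_stop(),
                               msecs_to_jiffies(timeout));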
* ioatdma: allow all channels to have irq coalescing supportDave Jiang2013-04-151-9/+3
| | | | | | | | | | Looks like only the RAID channels are allowed to have irq coalescing support in the existing code. Fix that. The ioat3 cleanup code can handle memcpy ops anyway. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* ioatdma: make debug output more readableDave Jiang2013-04-152-2/+3
| | | | | | | | | Make the OP field hex instead of an integer to make it more readable. Also add a dump of the NEXT field. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dma: Remove erroneous __exit and __exit_p() referencesMaxin B. John2013-04-156-14/+14
| | | | | | | | | | | Remove the __exit annotations and __exit_p() references present in the DMA drivers' module remove hooks. Part of the __devexit and __devexit_p() purge. Signed-off-by: Maxin B. John <maxin.john@enea.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* dma: timb_dma: Fix compiler warningMaxin B. John2013-04-151-1/+1
| | | | | | | | Fix this compiler warning: warning: 'td_remove' defined but not used [-Wunused-function] Signed-off-by: Maxin B. John <maxin.john@enea.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* pch_dma: Use GFP_ATOMIC because called from interrupt contextTomoya MORINAGA2013-04-151-1/+1
| | | | | | | | | | pdc_desc_get() is called from pd_prep_slave_sg(), and that function is called from interrupt context (e.g. the UART driver pch_uart.c). In fact, I saw a kernel error message. So GFP_ATOMIC must be used, not GFP_NOIO. Signed-off-by: Tomoya MORINAGA <tomoya.rohm@gmail.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
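Conceptually the change is a one-line flag swap in the descriptor allocation path; the helper name below follows the driver but is quoted from memory as an illustration.

  /* pdc_desc_get() can be reached from IRQ context via prep_slave_sg(),
   * so the allocation must not sleep. */
  desc = pdc_alloc_desc(&pd_chan->chan, GFP_ATOMIC);      /* was GFP_NOIO */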
* DMA: PL330: allow submitting 2 requests at a timeJassi Brar2013-04-151-2/+1
| | | | | | | | | | Fix the logic to allow mc programming of the second transfer after the first has been done, by removing the immediate return upon success and iterating until we detect QFull or the DMAC dying. Reported-by: Alvaro Moran <dirac3000@gmail.com> Tested-by: Alvaro Moran <dirac3000@gmail.com> Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* Linux 3.9-rc7v3.9-rc7Linus Torvalds2013-04-141-1/+1
|
* Merge branch 'x86-urgent-for-linus' of ↵Linus Torvalds2013-04-148-21/+33
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Ingo Molnar: "Misc fixes" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mm: Flush lazy MMU when DEBUG_PAGEALLOC is set x86/mm/cpa/selftest: Fix false positive in CPA self test x86/mm/cpa: Convert noop to functional fix x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metal x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates
| * x86/mm: Flush lazy MMU when DEBUG_PAGEALLOC is setBoris Ostrovsky2013-04-121-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | When CONFIG_DEBUG_PAGEALLOC is set page table updates made by kernel_map_pages() are not made visible (via TLB flush) immediately if lazy MMU is on. In environments that support lazy MMU (e.g. Xen) this may lead to fatal page faults, for example, when zap_pte_range() needs to allocate pages in __tlb_remove_page() -> tlb_next_batch(). Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: konrad.wilk@oracle.com Link: http://lkml.kernel.org/r/1365703192-2089-1-git-send-email-boris.ostrovsky@oracle.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
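A sketch of the idea behind the fix (not the exact upstream hunk): flush any batched lazy-MMU updates once kernel_map_pages() has updated the direct-mapping PTEs, so the change is visible before the pages are touched.

  void kernel_map_pages(struct page *page, int numpages, int enable)
  {
          /* ... existing code (un)mapping the pages in the direct map ... */

          /* Under a lazy-MMU environment such as Xen the PTE updates above
           * may still be queued in the hypervisor; flush them now. */
          arch_flush_lazy_mmu_mode();
  }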
| * x86/mm/cpa/selftest: Fix false positive in CPA self testAndrea Arcangeli2013-04-121-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | If the pmd is not present, _PAGE_PSE will not be set anymore. Fix the false positive. Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Stefan Bader <stefan.bader@canonical.com> Cc: Andy Whitcroft <apw@canonical.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Borislav Petkov <bp@alien8.de> Link: http://lkml.kernel.org/r/1365687369-30802-1-git-send-email-aarcange@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * x86/mm/cpa: Convert noop to functional fixAndrea Arcangeli2013-04-111-5/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit: a8aed3e0752b ("x86/mm/pageattr: Prevent PSE and GLOABL leftovers to confuse pmd/pte_present and pmd_huge") introduced a valid fix, but one location that didn't trigger the bug that led to finding those (small) problems wasn't updated to use the right variable. The wrong variable was also initialized for no good reason; that may have been the source of the confusion. Remove the no-op initialization accordingly. Commit a8aed3e0752b also erroneously removed one canon_pgprot pass meant to clear pmd bitflags not supported in hardware by older CPUs; that automatically gets corrected by this patch too, by applying it to the right variable in the new location. Reported-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Andy Whitcroft <apw@canonical.com> Cc: Mel Gorman <mgorman@suse.de> Link: http://lkml.kernel.org/r/1365600505-19314-1-git-send-email-aarcange@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metalBoris Ostrovsky2013-04-105-13/+21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Invoking arch_flush_lazy_mmu_mode() results in calls to preempt_enable()/disable() which may have performance impact. Since lazy MMU is not used on bare metal we can patch away arch_flush_lazy_mmu_mode() so that it is never called in such environment. [ hpa: the previous patch "Fix vmalloc_fault oops during lazy MMU updates" may cause a minor performance regression on bare metal. This patch resolves that performance regression. It is somewhat unclear to me if this is a good -stable candidate. ] Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: http://lkml.kernel.org/r/1364045796-10720-2-git-send-email-konrad.wilk@oracle.com Tested-by: Josh Boyer <jwboyer@redhat.com> Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: <stable@vger.kernel.org> SEE NOTE ABOVE
| * x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updatesSamu Kallio2013-04-101-2/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In paravirtualized x86_64 kernels, vmalloc_fault may cause an oops when lazy MMU updates are enabled, because set_pgd effects are being deferred. One instance of this problem is during process mm cleanup with memory cgroups enabled. The chain of events is as follows: - zap_pte_range enables lazy MMU updates - zap_pte_range eventually calls mem_cgroup_charge_statistics, which accesses the vmalloc'd mem_cgroup per-cpu stat area - vmalloc_fault is triggered which tries to sync the corresponding PGD entry with set_pgd, but the update is deferred - vmalloc_fault oopses due to a mismatch in the PUD entries The OOPs usually looks as so: ------------[ cut here ]------------ kernel BUG at arch/x86/mm/fault.c:396! invalid opcode: 0000 [#1] SMP .. snip .. CPU 1 Pid: 10866, comm: httpd Not tainted 3.6.10-4.fc18.x86_64 #1 RIP: e030:[<ffffffff816271bf>] [<ffffffff816271bf>] vmalloc_fault+0x11f/0x208 .. snip .. Call Trace: [<ffffffff81627759>] do_page_fault+0x399/0x4b0 [<ffffffff81004f4c>] ? xen_mc_extend_args+0xec/0x110 [<ffffffff81624065>] page_fault+0x25/0x30 [<ffffffff81184d03>] ? mem_cgroup_charge_statistics.isra.13+0x13/0x50 [<ffffffff81186f78>] __mem_cgroup_uncharge_common+0xd8/0x350 [<ffffffff8118aac7>] mem_cgroup_uncharge_page+0x57/0x60 [<ffffffff8115fbc0>] page_remove_rmap+0xe0/0x150 [<ffffffff8115311a>] ? vm_normal_page+0x1a/0x80 [<ffffffff81153e61>] unmap_single_vma+0x531/0x870 [<ffffffff81154962>] unmap_vmas+0x52/0xa0 [<ffffffff81007442>] ? pte_mfn_to_pfn+0x72/0x100 [<ffffffff8115c8f8>] exit_mmap+0x98/0x170 [<ffffffff810050d9>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e [<ffffffff81059ce3>] mmput+0x83/0xf0 [<ffffffff810624c4>] exit_mm+0x104/0x130 [<ffffffff8106264a>] do_exit+0x15a/0x8c0 [<ffffffff810630ff>] do_group_exit+0x3f/0xa0 [<ffffffff81063177>] sys_exit_group+0x17/0x20 [<ffffffff8162bae9>] system_call_fastpath+0x16/0x1b Calling arch_flush_lazy_mmu_mode immediately after set_pgd makes the changes visible to the consistency checks. Cc: <stable@vger.kernel.org> RedHat-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=914737 Tested-by: Josh Boyer <jwboyer@redhat.com> Reported-and-Tested-by: Krishna Raman <kraman@redhat.com> Signed-off-by: Samu Kallio <samu.kallio@aberdeencloud.com> Link: http://lkml.kernel.org/r/1364045796-10720-1-git-send-email-konrad.wilk@oracle.com Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
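The core of the fix as described above, sketched for the x86_64 vmalloc_fault() path with the surrounding checks omitted:

  if (pgd_none(*pgd)) {
          set_pgd(pgd, *pgd_ref);
          /* set_pgd() may be deferred under lazy MMU; flush so the later
           * PUD consistency check sees the synced entry. */
          arch_flush_lazy_mmu_mode();
  }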
* | Merge branch 'sched-urgent-for-linus' of ↵Linus Torvalds2013-04-143-4/+32
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull scheduler fixes from Ingo Molnar: "Misc fixlets" * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/cputime: Fix accounting on multi-threaded processes sched/debug: Fix sd->*_idx limit range avoiding overflow sched_clock: Prevent 64bit inatomicity on 32bit systems sched: Convert BUG_ON()s in try_to_wake_up_local() to WARN_ON_ONCE()s
| * | sched/cputime: Fix accounting on multi-threaded processesStanislaw Gruszka2013-04-081-1/+1
| | | | | | | | | | | | | | | | | | | | | Recent commit 6fac4829 ("cputime: Use accessors to read task cputime stats") introduced a bug where we account the cputime of the first thread many times over, instead of the cputimes of all the different threads. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20130404085740.GA2495@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | sched/debug: Fix sd->*_idx limit range avoiding overflowlibin2013-04-081-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 201c373e8e ("sched/debug: Limit sd->*_idx range on sysctl") was an incomplete bug fix. This patch fixes sd->*_idx limit range to [0 ~ CPU_LOAD_IDX_MAX-1] avoiding array overflow caused by setting sd->*_idx to CPU_LOAD_IDX_MAX on sysctl. Signed-off-by: Libin <huawei.libin@huawei.com> Cc: <jiang.liu@huawei.com> Cc: <guohanjun@huawei.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/51626610.2040607@huawei.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | sched_clock: Prevent 64bit inatomicity on 32bit systemsThomas Gleixner2013-04-081-0/+26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The sched_clock_remote() implementation has the following inatomicity problem on 32bit systems when accessing the remote scd->clock, which is a 64bit value. CPU0 CPU1 sched_clock_local() sched_clock_remote(CPU0) ... remote_clock = scd[CPU0]->clock read_low32bit(scd[CPU0]->clock) cmpxchg64(scd->clock,...) read_high32bit(scd[CPU0]->clock) While the update of scd->clock is using an atomic64 mechanism, the readout on the remote cpu is not, which can cause completely bogus readouts. It is a quite rare problem, because it requires the update to hit the narrow race window between the low/high readout and the update must go across the 32bit boundary. The resulting misbehaviour is, that CPU1 will see the sched_clock on CPU1 ~4 seconds ahead of it's own and update CPU1s sched_clock value to this bogus timestamp. This stays that way due to the clamping implementation for about 4 seconds until the synchronization with CLOCK_MONOTONIC undoes the problem. The issue is hard to observe, because it might only result in a less accurate SCHED_OTHER timeslicing behaviour. To create observable damage on realtime scheduling classes, it is necessary that the bogus update of CPU1 sched_clock happens in the context of an realtime thread, which then gets charged 4 seconds of RT runtime, which results in the RT throttler mechanism to trigger and prevent scheduling of RT tasks for a little less than 4 seconds. So this is quite unlikely as well. The issue was quite hard to decode as the reproduction time is between 2 days and 3 weeks and intrusive tracing makes it less likely, but the following trace recorded with trace_clock=global, which uses sched_clock_local(), gave the final hint: <idle>-0 0d..30 400269.477150: hrtimer_cancel: hrtimer=0xf7061e80 <idle>-0 0d..30 400269.477151: hrtimer_start: hrtimer=0xf7061e80 ... irq/20-S-587 1d..32 400273.772118: sched_wakeup: comm= ... target_cpu=0 <idle>-0 0dN.30 400273.772118: hrtimer_cancel: hrtimer=0xf7061e80 What happens is that CPU0 goes idle and invokes sched_clock_idle_sleep_event() which invokes sched_clock_local() and CPU1 runs a remote wakeup for CPU0 at the same time, which invokes sched_remote_clock(). The time jump gets propagated to CPU0 via sched_remote_clock() and stays stale on both cores for ~4 seconds. There are only two other possibilities, which could cause a stale sched clock: 1) ktime_get() which reads out CLOCK_MONOTONIC returns a sporadic wrong value. 2) sched_clock() which reads the TSC returns a sporadic wrong value. #1 can be excluded because sched_clock would continue to increase for one jiffy and then go stale. #2 can be excluded because it would not make the clock jump forward. It would just result in a stale sched_clock for one jiffy. After quite some brain twisting and finding the same pattern on other traces, sched_clock_remote() remained the only place which could cause such a problem and as explained above it's indeed racy on 32bit systems. 
So while on 64bit systems the readout is atomic, we need to verify the remote readout on 32bit machines. We need to protect the local->clock readout in sched_clock_remote() on 32bit as well because an NMI could hit between the low and the high readout, call sched_clock_local() and modify local->clock. Thanks to Siegfried Wulsch for bearing with my debug requests and going through the tedious tasks of running a bunch of reproducer systems to generate the debug information which let me decode the issue. Reported-by: Siegfried Wulsch <Siegfried.Wulsch@rovema.de> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1304051544160.21884@ionos Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org
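A sketch of how the torn read of the remote 64-bit clock can be avoided on 32-bit, using cmpxchg64() as an atomic read; variable names are illustrative.

  #if BITS_PER_LONG != 64
          /* cmpxchg64() with old == new == 0 leaves the value unchanged but
           * returns a consistent 64-bit snapshot, so the read cannot tear. */
          remote_clock = cmpxchg64(&scd->clock, 0, 0);
  #else
          remote_clock = scd->clock;      /* 64-bit loads are atomic here */
  #endif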
| * | sched: Convert BUG_ON()s in try_to_wake_up_local() to WARN_ON_ONCE()sTejun Heo2013-03-211-2/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | try_to_wake_up_local() should only be invoked to wake up another task in the same runqueue and BUG_ON()s are used to enforce the rule. Missing try_to_wake_up_local() can stall workqueue execution but such stalls are likely to be finite either by another work item being queued or the one blocked getting unblocked. There's no reason to trigger BUG while holding rq lock crashing the whole system. Convert BUG_ON()s in try_to_wake_up_local() to WARN_ON_ONCE()s. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20130318192234.GD3042@htj.dyndns.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
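The conversion pattern in try_to_wake_up_local(), sketched from the description above rather than copied from the diff:

  /* Before: broken invariants take the whole machine down. */
  BUG_ON(rq != this_rq());
  BUG_ON(p == current);

  /* After: warn once and bail out; a stalled wakeup is recoverable. */
  if (WARN_ON_ONCE(rq != this_rq()) ||
      WARN_ON_ONCE(p == current))
          return;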
* | | Merge branch 'perf-urgent-for-linus' of ↵Linus Torvalds2013-04-146-11/+28
|\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf fixes from Ingo Molnar: "Misc fixlets" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf: Fix error return code ftrace: Fix strncpy() use, use strlcpy() instead of strncpy() perf: Fix strncpy() use, use strlcpy() instead of strncpy() perf: Fix strncpy() use, always make sure it's NUL terminated perf: Fix ring_buffer perf_output_space() boundary calculation perf/x86: Fix uninitialized pt_regs in intel_pmu_drain_bts_buffer()
| * | | perf: Fix error return codeWei Yongjun2013-04-121-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fix to return -ENOMEM in the allocation error case instead of 0 (if pmu_bus_running == 1), as done elsewhere in this function. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Cc: a.p.zijlstra@chello.nl Cc: paulus@samba.org Cc: acme@ghostprotocols.net Link: http://lkml.kernel.org/r/CAPgLHd8j_fWcgqe%3DKLWjpBj%2B%3Do0Pw6Z-SEq%3DNTPU08c2w1tngQ@mail.gmail.com [ Tweaked the error code setting placement and the changelog. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | ftrace: Fix strncpy() use, use strlcpy() instead of strncpy()Chen Gang2013-04-081-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For NUL terminated string we always need to set '\0' at the end. Signed-off-by: Chen Gang <gang.chen@asianux.com> Cc: rostedt@goodmis.org Cc: Frederic Weisbecker <fweisbec@gmail.com> Link: http://lkml.kernel.org/r/516243B7.9020405@asianux.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | perf: Fix strncpy() use, use strlcpy() instead of strncpy()Chen Gang2013-04-081-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For NUL terminated string we always need to set '\0' at the end. Signed-off-by: Chen Gang <gang.chen@asianux.com> Cc: rostedt@goodmis.org Cc: Frederic Weisbecker <fweisbec@gmail.com> Link: http://lkml.kernel.org/r/51624254.30301@asianux.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | perf: Fix strncpy() use, always make sure it's NUL terminatedChen Gang2013-04-081-1/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For NUL terminated string, always make sure that there's '\0' at the end. In our case we need a return value, so still use strncpy() and fix up the tail explicitly. (strlcpy() returns the size, not the pointer) Signed-off-by: Chen Gang <gang.chen@asianux.com> Cc: a.p.zijlstra@chello.nl <a.p.zijlstra@chello.nl> Cc: paulus@samba.org <paulus@samba.org> Cc: acme@ghostprotocols.net <acme@ghostprotocols.net> Link: http://lkml.kernel.org/r/51623E0B.7070101@asianux.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
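The two patterns used by the strncpy()/strlcpy() fixes above, shown on a hypothetical buffer (buf, src and dst are placeholders):

  char buf[TASK_COMM_LEN];

  /* Preferred: strlcpy() always NUL-terminates the destination. */
  strlcpy(buf, src, sizeof(buf));

  /* When strncpy()'s return value is needed, terminate explicitly,
   * since strncpy() does not guarantee a trailing NUL. */
  dst = strncpy(buf, src, sizeof(buf));
  buf[sizeof(buf) - 1] = '\0';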
| * | | perf: Fix ring_buffer perf_output_space() boundary calculationStephane Eranian2013-03-212-5/+19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch fixes a flaw in perf_output_space(). In case the size of the space needed is bigger than the actual buffer size, there may be situations where the function would return true (i.e., there is space) when it should not. head > offset due to rounding of the masking logic. The problem can be tested by activating BTS on Intel processors. A BTS record can be as big as 16 pages. The following command fails: $ perf record -m 4 -c 1 -e branches:u my_test_program You will get a buffer corruption with this. Perf report won't be able to parse the perf.data. The fix is to first check that the requested space is smaller than the buffer size. If so, then the masking logic will work fine. If not, then there is no chance the record can be saved and it will be gracefully handled by upper code layers. [ In v2, we also make the logic for the writable more explicit by renaming it to rb->overwrite because it tells whether or not the buffer can overwrite its tail (suggested by PeterZ). ] Signed-off-by: Stephane Eranian <eranian@google.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: peterz@infradead.org Cc: jolsa@redhat.com Cc: fweisbec@gmail.com Link: http://lkml.kernel.org/r/20130318133327.GA3056@quad Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | perf/x86: Fix uninitialized pt_regs in intel_pmu_drain_bts_buffer()Stephane Eranian2013-03-211-1/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch fixes an uninitialized pt_regs struct in drain BTS function. The pt_regs struct is propagated all the way to the code_get_segment() function from perf_instruction_pointer() and may get garbage. We cannot simply inherit the actual pt_regs from the interrupt because BTS must be flushed on context-switch or when the associated event is disabled. And there we do not have a pt_regs handy. Setting pt_regs to all zeroes may not be the best option but it is not clear what else to do given where the drain_bts_buffer() is called from. In V2, we move the memset() later in the code to avoid doing it when we end up returning early without doing the actual BTS processing. Also dropped the reg.val initialization because it is redundant with the memset() as suggested by PeterZ. Signed-off-by: Stephane Eranian <eranian@google.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: peterz@infradead.org Cc: sqazi@google.com Cc: ak@linux.intel.com Cc: jolsa@redhat.com Link: http://lkml.kernel.org/r/20130319151038.GA25439@quad Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | | | Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linuxLinus Torvalds2013-04-142-3/+9
|\ \ \ \ | | | | | | | | | | | | | | | | | | | | | Pull drm fixes from Dave Airlie: "One fix for a hotplug locking regression, and one fix for an oops if you unplug the monitor at an inopportune moment on the udl device." * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: drm/fb-helper: Fix locking in drm_fb_helper_hotplug_event udl: handle EDID failure properly.