| Commit message | Author | Age | Files | Lines |
Pull ARM fixes from Russell King:
"A number of ARM updates for -rc, covering mostly ARM specific code,
but with one change to modpost.c to allow Thumb section mismatches to
be detected.
ARM changes include reporting when an attempt is made to boot an LPAE
kernel on hardware which does not support LPAE, rather than just being
silent about it.
A number of other minor fixes are included too"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: 7992/1: boot: compressed: ignore bswapsdi2.S
ARM: 7991/1: sa1100: fix compile problem on Collie
ARM: fix noMMU kallsyms symbol filtering
ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU
ARM: 7964/1: Detect section mismatches in thumb relocations
ARM: 7963/1: mm: report both sections from PMD
On 2-level page table systems, the PMD has 2 section entries. Report
these, otherwise ARM_PTDUMP will miss reporting permission changes on
odd section boundaries.
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
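A rough sketch of the idea; report_entry() below stands in for the dumper's internal note_page() bookkeeping and is not a real kernel function:

    #include <asm/pgtable.h>

    static void walk_pmd_sketch(pmd_t *pmd, unsigned long addr,
                                void (*report_entry)(unsigned long, u64))
    {
        /* first 1MB section */
        report_entry(addr, pmd_val(pmd[0]));
        /* on 2-level configurations one PMD covers two sections, so
         * also report the odd 1MB section from the second entry */
        if (SECTION_SIZE < PMD_SIZE)
            report_entry(addr + SECTION_SIZE, pmd_val(pmd[1]));
    }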
git://git.linaro.org/people/mszyprowski/linux-dma-mapping
Pull DMA-mapping fixes from Marek Szyprowski:
"This contains fixes for incorrect atomic test in dma-mapping subsystem
for ARM and x86 architecture"
* 'fixes-for-v3.14' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
x86: dma-mapping: fix GFP_ATOMIC macro usage
ARM: dma-mapping: fix GFP_ATOMIC macro usage
GFP_ATOMIC is not a single gfp flag, but a macro which expands to other
flags and lacks the __GFP_WAIT flag. To check whether the caller wanted to
perform an atomic allocation, the code must test for the absence of the
__GFP_WAIT flag. This patch fixes an issue introduced in v3.6-rc5.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: stable@vger.kernel.org
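A minimal sketch of the corrected test, assuming GFP_ATOMIC's usual definition in kernels of this era; the helper name is illustrative and not taken from the patch:

    #include <linux/gfp.h>

    static bool is_atomic_allocation(gfp_t flags)
    {
        /* Wrong: GFP_ATOMIC is not an individually testable bit. */
        /* return (flags & GFP_ATOMIC) != 0; */

        /* Right: an atomic caller must not sleep, so __GFP_WAIT is clear. */
        return !(flags & __GFP_WAIT);
    }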
During __v{6,7}_setup, we invalidate the TLBs since we are about to
enable the MMU on return to head.S. Unfortunately, without a subsequent
dsb instruction, the invalidation is not guaranteed to have completed by
the time we write to the sctlr, potentially exposing us to junk/stale
translations cached in the TLB.
This patch reworks the init functions so that the dsb used to ensure
completion of cache/predictor maintenance is also used to ensure
completion of the TLB invalidation.
Cc: <stable@vger.kernel.org>
Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
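A rough illustration of the required ordering; the real change is in the assembly __v6_setup/__v7_setup paths, and this C inline-asm rendering is only a sketch:

    static inline void v7_setup_tlb_flush_sketch(void)
    {
        asm volatile("mcr p15, 0, %0, c8, c7, 0" : : "r" (0)); /* TLBIALL */
        asm volatile("dsb" : : : "memory"); /* invalidation must complete... */
        /* ...before the subsequent write to the SCTLR enables the MMU */
    }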
The stage-2 memory attributes are distinct from the Hyp memory
attributes and the Stage-1 memory attributes. We were using the stage-1
memory attributes for stage-2 mappings causing device mappings to be
mapped as normal memory. Add the S2 equivalent defines for memory
attributes and fix the comments explaining the defines while at it.
Add a prot_pte_s2 field to the mem_type struct and fill out the field
for device mappings accordingly.
Cc: <stable@vger.kernel.org> [3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
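A sketch of the data-structure side of the change; the surrounding members are as they existed in arch/arm/mm/mm.h at the time, and only prot_pte_s2 is new:

    struct mem_type {
        pteval_t prot_pte;      /* stage-1 PTE attributes              */
        pteval_t prot_pte_s2;   /* stage-2 PTE attributes (this patch) */
        pmdval_t prot_l1;
        pmdval_t prot_sect;
        unsigned int domain;
    };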
Commit 65939301acdb ("arm: set initrd_start/initrd_end for fdt scan")
caused the FDT-provided initrd_start and initrd_end to override the
phys_initrd_start and phys_initrd_size set by the initrd= kernel
parameter. With this patch, initrd_start and initrd_end are overridden
when phys_initrd_start and phys_initrd_size have been set by the
kernel initrd= parameter.
Fixes: 65939301acdb (arm: set initrd_start/initrd_end for fdt scan)
Signed-off-by: Ben Peddell <klightspeed@killerwolves.net>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM updates from Russell King:
"In this set, we have:
- Refactoring of some of the old StrongARM-1100 GPIO code to make
things simpler by Dmitry Eremin-Solenikov
- Read-only and non-executable support for modules on ARM from Laura
Abbott
- Removal of unnecessary set_drvdata() calls in AMBA code
- Some non-executable support for kernel lowmem mappings at the 1MB
section granularity, and dumping of kernel page tables via debugfs
- Some improvements for the timer/clock code on Footbridge platforms,
and cleanup some of the LED code there
- Fix fls/ffs() signatures to match x86 to prevent build warnings,
particularly where these are used with min/max() macros
- Avoid using the bootmem allocator on ARM (patches from Santosh
Shilimkar)
- Various asid/unaligned access updates from Will Deacon"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (51 commits)
ARM: SMP implementations are not supposed to return from smp_ops.cpu_die()
ARM: ignore memory below PHYS_OFFSET
Fix select-induced Kconfig warning for ZBOOT_ROM
ARM: fix ffs/fls implementations to match x86
ARM: 7935/1: sa1100: collie: add gpio-keys configuration
ARM: 7932/1: bcm: Add DEBUG_LL console support
ARM: 7929/1: Remove duplicate SCHED_HRTICK config option
ARM: 7928/1: kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS for CPUv6+ && MMU
ARM: 7927/1: dcache: select DCACHE_WORD_ACCESS for big-endian CPUs
ARM: 7926/1: mm: flesh out and fix the comments in the ASID allocator
ARM: 7925/1: mm: keep track of last ASID allocation to improve bitmap searching
ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE
ARM: PCI: add legacy IDE IRQ implementation
ARM: footbridge: cleanup LEDs code
ARM: pgd allocation: retry on failure
ARM: footbridge: add one-shot mode for DC21285 timer
ARM: footbridge: add sched_clock implementation
ARM: 7922/1: l2x0: add Marvell Tauros3 support
ARM: 7877/1: use built-in byte swap function
ARM: 7921/1: mcpm: remove redundant dsb instructions prior to sev
...
git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into devel-stable
Now that the dma_mask series is merged and max*pfn has a consistent meaning
on ARM, as on the other architectures, thanks to RMK's mega series, switch
the ARM code to NO_BOOTMEM. With the NO_BOOTMEM change, we now use the
memblock allocator to reserve space for the crash kernel, giving us one
less dependency on the nobootmem allocator wrapper.
Tested with both flat memory and sparse (faked) memory models with highmem
enabled.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
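A minimal sketch of the kind of conversion involved, with assumed variable names rather than the literal patch:

    #include <linux/memblock.h>

    static void __init reserve_crashkernel_sketch(phys_addr_t crash_base,
                                                  phys_addr_t crash_size)
    {
        /* Previously done through the bootmem wrapper, e.g.
         * reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
         * with NO_BOOTMEM the region is reserved directly in memblock. */
        memblock_reserve(crash_base, crash_size);
    }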
If allowed by a call to memblock_allow_resize(), the memblock core will
try to allocate additional memory and rearrange its internal data when
more than INIT_MEMBLOCK_REGIONS (128) memory regions of any type have
been allocated. If this happens before lowmem is mapped (which is now
done by map_lowmem()), the system will hang, because the memblock core
will try to operate on virtual addresses which aren't mapped yet.
In the ARM code, memblock resizing is allowed (memblock_allow_resize())
from arm_memblock_init(), which is called before map_lowmem(), so this
can lead to the error described above.
Hence, allow memblock resizing later during init, from bootmem_init(),
when all the appropriate mappings are ready.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
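A sketch of the ordering described above; the function bodies are nearly empty here, whereas the real functions do much more:

    #include <linux/memblock.h>

    static void __init arm_memblock_init_sketch(void)
    {
        /* no memblock_allow_resize() here any more: lowmem not yet mapped */
        memblock_dump_all();
    }

    static void __init bootmem_init_sketch(void)
    {
        /* safe: map_lowmem() has already run, so memblock may reallocate
         * and move its region arrays using mapped (virtual) addresses */
        memblock_allow_resize();
    }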
With commit 26ba47b1 ("ARM: 7805/1: mm: change max*pfn to include the
physical offset of memory"), max_pfn already contains PHYS_PFN_OFFSET,
so it shouldn't be taken into account again.
While at it, use the set_max_mapnr() helper.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
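Roughly the change this describes, as a sketch; the "before" line is reconstructed from the description above, not quoted from the tree:

    #include <linux/mm.h>

    static void __init arm_set_max_mapnr_sketch(void)
    {
        /* before: max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;
         * which double-counted PHYS_PFN_OFFSET, already part of max_pfn */
        set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
    }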
for-next
The ASID allocator has to deal with some pretty horrible behaviours by
the CPU, so expand on some of the comments in there so I remember why
we can never allocate ASID zero to a userspace task.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since we only clear entries in the ASID bitmap on a rollover event, the
bitmap tends to consist of a block of consecutive set bits followed by
a block of consecutive clear bits. The exception to this rule is for
ASIDs which have been carried over from a previous generation, but
these are bound by the number of CPUs.
This patch optimises our bitmap searching strategy, so that we search
from the last successful allocation, rather than search from index 1
each time we allocate a new ASID.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
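A sketch of the search strategy, with assumed variable names (asid_map, num_asids and cur_idx are illustrative, not necessarily the names used in the patch):

    #include <linux/bitops.h>

    static u64 cur_idx = 1;

    static unsigned long find_free_asid(unsigned long *asid_map,
                                        unsigned long num_asids)
    {
        unsigned long asid;

        /* search from the last successful allocation... */
        asid = find_next_zero_bit(asid_map, num_asids, cur_idx);
        if (asid == num_asids)
            /* ...and wrap around once before giving up */
            asid = find_next_zero_bit(asid_map, num_asids, 1);
        if (asid != num_asids)
            cur_idx = asid;
        return asid;    /* num_asids here means: generation exhausted */
    }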
With the new ASID allocation algorithm, active ASIDs at the time of a
rollover event will be marked as reserved, so active mm_structs can
continue to operate with the same ASID as before. This in turn means
that we don't need to worry about allocating a new ASID to an mm that
is currently active (installed in TTBR0).
Since updating the pgd and ASID is atomic on LPAE systems (by virtue of
the two being fields in the same hardware register), we can dispose of
the reserved TTBR0 and rely on whatever tables we currently have live.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make pgd allocation retry on failure; we really need this to succeed
otherwise fork() can trigger OOMs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds support for the Marvell Tauros3 cache controller which
is compatible with pl310 cache controller but broadcasts L1 cache
operations to L2 cache. While updating the binding documentation,
clean up the list of possible compatibles. Also reorder the driver
compatibles to allow non-ARM derivatives to claim compatibility with the
ARM cache controller compatibles.
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Set-associative caches on all v7 implementations map the index bits
to physical addresses LSBs and tag bits to MSBs. As the last level
of cache on current and upcoming ARM systems grows in size,
this means that under normal DRAM controller configurations, the
current v7 cache flush routine using set/way operations triggers a
DRAM memory controller precharge/activate for every cache line
writeback since the cache routine cleans lines by first fixing the
index and then looping through ways (index bits are mapped to lower
physical addresses on all v7 cache implementations; this means that,
with last level cache sizes in the order of MBytes, lines belonging
to the same set but different ways map to different DRAM pages).
Given the random content of cache tags, swapping the order of the index
and way loops does not prevent DRAM page precharge and activate cycles,
but it does, on average, improve the chances that either multiple lines
hit the same page or multiple lines belong to different DRAM banks,
improving throughput significantly.
This patch swaps the inner loops in the v7 cache flushing routine so that
the clean operations are carried out first on all sets belonging to a
given way (looping through sets) before decrementing the way.
Benchmarks showed that swapping the order in which sets and ways are
decremented in the v7 cache flushing routine, which uses set/way
operations, significantly reduces the time required to flush the caches,
owing to improved writeback throughput to the DRAM controller.
Benchmark results vary and depend heavily on the content of the last
level cache tag RAM when the cache is cleaned and invalidated, ranging
from 2x throughput when all tag RAM entries contain dirty lines mapping
to sequential pages of RAM, to 1x (i.e. no improvement) when all tag RAM
accesses trigger a DRAM precharge/activate cycle, as the current code
implies on most DRAM controller configurations.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
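An illustrative C rendering of the new loop order; the real routine is assembly, and dccisw_sketch() below merely stands in for the clean/invalidate-by-set/way cp15 operation:

    /* stand-in for DCCISW: clean and invalidate one line by set/way */
    static void dccisw_sketch(unsigned int set, unsigned int way,
                              unsigned int level)
    {
    }

    static void v7_flush_level_sketch(unsigned int sets, unsigned int ways,
                                      unsigned int level)
    {
        int way, set;

        /* fix the way, walk all sets, then move to the next way, so
         * successive writebacks tend to stay within one DRAM page */
        for (way = ways - 1; way >= 0; way--)
            for (set = sets - 1; set >= 0; set--)
                dccisw_sketch(set, way, level);
    }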
The CMA region was being marked executable:
0xdc04e000-0xdc050000 8K RW x MEM/CACHED/WBRA
0xdc060000-0xdc100000 640K RW x MEM/CACHED/WBRA
0xdc4f5000-0xdc500000 44K RW x MEM/CACHED/WBRA
0xdcce9000-0xe0000000 52316K RW x MEM/CACHED/WBRA
This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but
there are also a few other places in dma-mapping which should be
corrected to use the right constant. Fix all these places:
0xdc04e000-0xdc050000 8K RW NX MEM/CACHED/WBRA
0xdc060000-0xdc100000 640K RW NX MEM/CACHED/WBRA
0xdc280000-0xdc300000 512K RW NX MEM/CACHED/WBRA
0xdc6fc000-0xe0000000 58384K RW NX MEM/CACHED/WBRA
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Other architectures define various set_memory functions to allow
attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.)
Currently, these functions are missing on ARM. Define these in an
appropriate manner for ARM.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
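A sketch of the API shape, assuming the prototypes mirror those already used on other architectures; see the ARM header added by the patch for the authoritative declarations:

    int set_memory_ro(unsigned long addr, int numpages);
    int set_memory_rw(unsigned long addr, int numpages);
    int set_memory_x(unsigned long addr, int numpages);
    int set_memory_nx(unsigned long addr, int numpages);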
Add basic NX support for kernel lowmem mappings. We mark any section
which does not overlap kernel text as non-executable, preventing it
from being used to write code and then execute directly from there.
This does not change the alignment of the sections, so the kernel
image doesn't grow significantly via this change, so we can do this
without needing a config option.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Document the permissions which the various MT_MEMORY* mapping types
will provide.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch allows the kernel page tables to be dumped via a debugfs file,
allowing kernel developers to check the layout of the kernel page tables
and verify the various permission and type settings.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Switch to memblock interfaces for the early memory allocator instead of
the bootmem allocator. No functional change in behaviour from the bootmem
users' point of view.
Architectures already converted to NO_BOOTMEM now use the memblock
interfaces directly instead of bootmem wrappers built on top of memblock.
For the architectures which still use bootmem, these new APIs just fall
back to the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
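A small sketch of an early allocation through the interface this series introduces, assuming the memblock_virt_alloc() name described above:

    #include <linux/bootmem.h>

    static void * __init early_table_alloc_sketch(unsigned long size)
    {
        /* served directly by memblock on NO_BOOTMEM architectures,
         * falls back to bootmem on architectures not yet converted */
        return memblock_virt_alloc(size, PAGE_SIZE);
    }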
Commit 4b59e6c47309 ("mm, show_mem: suppress page counts in
non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to
suppress PFN walks on large memory machines. Commit c78e93630d15 ("mm:
do not walk all of system memory during show_mem") avoided a PFN walk in
the generic show_mem helper which removes the requirement for
SHOW_MEM_FILTER_PAGE_COUNT in that case.
This patch removes PFN walkers from the arch-specific implementations
that report on a per-node or per-zone granularity. ARM and unicore32
still do a PFN walk as they report memory usage on each bank which is a
much finer granularity where the debugging information may still be of
use. As the remaining arches doing PFN walks have relatively small
amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.
[akpm@linux-foundation.org: fix parisc]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit 787b0d5c1ca7ff24feb6f92e4c7f4410ee7d81a8 since
it is no longer required after 7909/1 was applied, and it causes
build regressions when ARM_PATCH_PHYS_VIRT is disabled and DMA_ZONE
is enabled.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When given a compound high page, __flush_dcache_page will only flush
the first page of the compound page repeatedly rather than the entire
set of constituent pages.
This error was introduced by:
0b19f93 ARM: mm: Add support for flushing HugeTLB pages.
This patch corrects the logic such that all constituent pages are now
flushed.
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
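A sketch of the corrected loop; flush_one() stands in for the per-page flush done by __flush_dcache_page() and is not a real kernel helper:

    #include <linux/mm.h>

    static void flush_compound_page_sketch(struct page *page,
                                           void (*flush_one)(struct page *))
    {
        unsigned long i, nr = 1UL << compound_order(page);

        /* flush every constituent page, not the head page nr times over */
        for (i = 0; i < nr; i++)
            flush_one(page + i);
    }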
The current code uses PHYS_OFFSET to calculate arm_dma_limit, which leads
to a wrong value in cases where PHYS_OFFSET is updated at runtime.
Fix this by using __pv_phys_offset instead of PHYS_OFFSET.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Peter reports that OMAP audio broke with the recent fix for these
checks, caused by OMAP audio using a 64-bit DMA mask. We should
allow 64-bit DMA masks even with 32-bit dma_addr_t if we can be sure
the amount of RAM we have won't allow the 32-bit dma_addr_t to
overflow. Unfortunately, the checks to detect overflow were not
correct.
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit f6f91b0d9fd9 (ARM: allow kuser helpers to be removed from the
vector page) required two pages for the vectors code. Although the
code setting up the initial page tables was updated, the code which
allocates page tables for new processes wasn't, neither was the code
which tears down the mappings. Fix this.
Fixes: f6f91b0d9fd9 ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
Some buses have negative offsets, which causes the DMA mask checks to
falsely fail. Fix this by using the actual amount of memory fitted in
the system.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM fixes from Russell King:
"Some small fixes for this merge window, most of them quite self
explanatory - the biggest thing here is a fix for the ARMv7 LPAE
suspend/resume support"
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7894/1: kconfig: select GENERIC_CLOCKEVENTS if HAVE_ARM_ARCH_TIMER
ARM: 7893/1: bitops: only emit .arch_extension mp if CONFIG_SMP
ARM: 7892/1: Fix warning for V7M builds
ARM: 7888/1: seccomp: not compatible with ARM OABI
ARM: 7886/1: make OABI default to off
ARM: 7885/1: Save/Restore 64-bit TTBR registers on LPAE suspend/resume
ARM: 7884/1: mm: Fix ECC mem policy printk
ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE
ARM: 7882/1: mm: fix __phys_to_virt to work with 64 bit phys_addr_t in BE case
ARM: 7881/1: __fixup_smp read of SCU config should do byteswap in BE case
ARM: Fix nommu.c build warning
LPAE enabled kernels use the 64-bit version of TTBR0 and TTBR1
registers. If we're running an LPAE kernel, fill the upper half
of TTBR0 with 0 because we're setting it to the idmap here (the
idmap is guaranteed to be < 4Gb) and fully restore TTBR1 instead
of just restoring the lower 32 bits. Failure to do so can cause
failures on resume from suspend when these registers are only
half restored.
Signed-off-by: Mahesh Sivasubramanian <msivasub@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
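Illustrative only, since the real save/restore lives in the assembly suspend path: with LPAE the TTBRs are 64-bit, so they must be moved with mcrr/mrrc rather than mcr/mrc, otherwise only the low 32 bits survive suspend/resume.

    #include <linux/types.h>

    static inline void lpae_restore_ttbr1_sketch(u64 ttbr1)
    {
        /* write both halves of the 64-bit TTBR1 */
        asm volatile("mcrr p15, 1, %Q0, %R0, c2" : : "r" (ttbr1));
    }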
The ECC policy can be applied to the whole system when this bit is
implemented by the SoC vendor (IMP, bit 9, in the L1 page table entry
format). When this bit is not implemented by the SoC vendor, it doesn't
mean that the system has no other way to do ECC.
This patch ensures the message is shown only when ECC is requested via
the ecc=on command line option and the kernel runs on an appropriate
ARM core.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 0-day kernel build robot found this new warning:
arch/arm/mm/nommu.c:303:17: warning: 'struct proc_info_list' declared inside parameter list [enabled by default]
arch/arm/mm/nommu.c:303:17: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]
Fix it by including the appropriate header.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We're going to introduce split page table lock for PMD level. Let's
rename existing split ptlock for PTE level to avoid confusion.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Alex Thorlton <athorlton@sgi.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Eric W . Biederman" <ebiederm@xmission.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Robin Holt <robinmholt@gmail.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull ARM updates from Russell King:
"Included in this series are:
1. BE8 (modern big endian) changes for ARM from Ben Dooks
2. big.Little support from Nicolas Pitre and Dave Martin
3. support for LPAE systems with all system memory above 4GB
4. Perf updates from Will Deacon
5. Additional prefetching and other performance improvements from Will.
6. Neon-optimised AES implementation from Ard.
7. A number of smaller fixes scattered around the place.
There is a rather horrid merge conflict in tools/perf - I was never
notified of the conflict because it originally occurred between Will's
tree and other stuff. Consequently I have a resolution which Will
forwarded me, which I'll forward on immediately after sending this
mail.
The other notable thing is I'm expecting some build breakage in the
crypto stuff on ARM only with Ard's AES patches. These were merged
into a stable git branch which others had already pulled, so there's
little I can do about this. The problem is caused because these
patches have a dependency on some code in the crypto git tree - I
tried requesting a branch I can pull to resolve these, and all I got
each time from the crypto people was "we'll revert our patches then"
which would only make things worse since I still don't have the
dependent patches. I've no idea what's going on there or how to
resolve that, and since I can't split these patches from the rest of
this pull request, I'm rather stuck with pushing this as-is or
reverting Ard's patches.
Since it should "come out in the wash" I've left them in - the only
build problems they seem to cause at the moment are with randconfigs,
and since it's a new feature anyway. However, if by -rc1 the
dependencies aren't in, I think it'd be best to revert Ard's patches"
I resolved the perf conflict roughly as per the patch sent by Russell,
but there may be some differences. Any errors are likely mine. Let's
see how the crypto issues work out..
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
ARM: 7871/1: amba: Extend number of IRQS
ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
ARM: 7878/1: nommu: Implement dummy early_paging_init()
ARM: 7876/1: clear Thumb-2 IT state on exception handling
ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
ARM: footbridge: fix build warnings for netwinder
ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
ARM: fix misplaced arch_virt_to_idmap()
ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
ARM: 7862/1: pcpu: replace __get_cpu_var_uses
ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
...
The exception handling code fails to clear the IT state, potentially
leading to incorrect execution of the fixup if the size of the IT
block is more than one.
Let fixup_exception do the IT sanitizing if a fixup has been found,
and restore CPSR from the stack when returning from a data abort.
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
DMA mapping permissions were being derived from pgprot_kernel directly
without using PAGE_KERNEL. This causes them to be marked with executable
permission, which is not what we want. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Conflicts:
arch/arm/include/asm/atomic.h
arch/arm/include/asm/hardirq.h
arch/arm/kernel/smp.c
No-MMU configurations currently fail to build because they are missing
the early_paging_init() symbol.
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
git://git.baserock.org/delta/linux into devel-stable
Conflicts:
arch/arm/kernel/head.S
This series has been well tested and it would be great to get this
merged now.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If we are in BE8 mode, we must deal with the instruction stream being
in LE order when data is being loaded in BE order. Change to using
<asm/opcodes.h> to provide the necessary conversion functions to swap
the instruction bytes before they are processed.
This stops the following warning messages from the kernel on a fault:
Unhandled fault: alignment exception (0x001) at 0xbfa09567
Alignment trap: not handling instruction 030091e8 at [<80333e8c>]
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
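A sketch of the conversion step, assuming an ARM-state instruction fetched by the fault handler; the surrounding handler context is omitted:

    #include <asm/opcodes.h>

    static u32 load_faulting_arm_instr_sketch(const u32 *pc)
    {
        /* instructions are stored little-endian even on BE8, so swap
         * them into CPU order before decoding */
        return __mem_to_opcode_arm(*pc);
    }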
Add an ARM_BE8() helper to wrap any code that should only be compiled
when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places
that do this to use it.
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
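A sketch of what such a helper looks like, using the config symbol named above; the wrapped code only survives preprocessing on BE8 builds:

    #ifdef CONFIG_ARM_ENDIAN_BE8
    #define ARM_BE8(code...)    code
    #else
    #define ARM_BE8(code...)
    #endif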
The Kconfig for arch/arm/mach-ixp4xx has a local definition of
ARCH_SUPPORTS_BIG_ENDIAN which could be used elsewhere. This means that
if IXP4xx is selected and this symbol is also selected elsewhere, then a
warning is produced. Clean this up by having the symbol selected by the
main ARCH_IXP4XX definition and adding a common definition in
arch/arm/mm/Kconfig:
warning: (ARCH_xxx) selects ARCH_SUPPORTS_BIG_ENDIAN which has unmet direct dependencies (ARCH_IXP4XX)
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
This patch adds a step in the init sequence to recreate the kernel
code/data page table mappings prior to full paging initialization. This
is necessary on LPAE systems that run out of a physical address space
outside the 4G limit. On these systems, this implementation provides a
machine descriptor hook that allows the PHYS_OFFSET to be overridden in
a machine-specific fashion.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>