author     Marek Szyprowski <m.szyprowski@samsung.com>  2015-04-23 12:46:16 +0100
committer  Will Deacon <will.deacon@arm.com>            2015-04-27 11:39:50 +0100
commit     6829e274a623187c24f7cfc0e3d35f25d087fcc5
tree       a6ca873b29878ee7bc8c328af8a64bae0f49a37e
parent     6544e67bfb1bf55bcf3c0f6b37631917e9acfb74
arm64: dma-mapping: always clear allocated buffers
Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha,
ARM (32-bit), MIPS, PowerPC, x86/x86_64 and probably other architectures.
It turns out that some drivers rely on this 'feature'. An allocated buffer
might also be exposed to userspace via a dma_mmap() call, so clearing it
is desirable from a security point of view, to avoid leaking random memory
to userspace. This patch unifies dma_alloc_coherent() behaviour on the
ARM64 architecture with the other implementations by unconditionally
zeroing the allocated buffer.
Cc: <stable@vger.kernel.org> # v3.14+
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/mm/dma-mapping.c | 6 +-
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index ef7d112..e0f14ee 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -67,8 +67,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags)
 		*ret_page = phys_to_page(phys);
 		ptr = (void *)val;
-		if (flags & __GFP_ZERO)
-			memset(ptr, 0, size);
+		memset(ptr, 0, size);
 	}
 
 	return ptr;
@@ -113,8 +112,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 		addr = page_address(page);
-		if (flags & __GFP_ZERO)
-			memset(addr, 0, size);
+		memset(addr, 0, size);
 		return addr;
 	} else {
 		return swiotlb_alloc_coherent(dev, size, dma_handle, flags);