author		Haavard Skinnemoen <haavard.skinnemoen@atmel.com>	2008-07-27 13:54:08 +0200
committer	Haavard Skinnemoen <haavard.skinnemoen@atmel.com>	2008-07-27 13:54:08 +0200
commit		eda3d8f5604860aae1bb9996bb5efc4213778369 (patch)
tree		9d3887d2665bcc5f5abf200758794545c7b2c69b /Documentation/unaligned-memory-access.txt
parent		87a9f704658a40940e740b1d73d861667e9164d3 (diff)
parent		8be1a6d6c77ab4532e4476fdb8177030ef48b52c (diff)
Merge commit 'upstream/master'
Diffstat (limited to 'Documentation/unaligned-memory-access.txt')
-rw-r--r--	Documentation/unaligned-memory-access.txt	32
1 file changed, 29 insertions, 3 deletions
diff --git a/Documentation/unaligned-memory-access.txt b/Documentation/unaligned-memory-access.txt
index b0472ac..f866c72 100644
--- a/Documentation/unaligned-memory-access.txt
+++ b/Documentation/unaligned-memory-access.txt
@@ -218,9 +218,35 @@ If use of such macros is not convenient, another option is to use memcpy(),
 where the source or destination (or both) are of type u8* or unsigned char*.
 Due to the byte-wise nature of this operation, unaligned accesses are avoided.
 
+
+Alignment vs. Networking
+========================
+
+On architectures that require aligned loads, networking requires that the IP
+header is aligned on a four-byte boundary to optimise the IP stack. For
+regular ethernet hardware, the constant NET_IP_ALIGN is used. On most
+architectures this constant has the value 2 because the normal ethernet
+header is 14 bytes long, so in order to get proper alignment one needs to
+DMA to an address which can be expressed as 4*n + 2. One notable exception
+here is powerpc which defines NET_IP_ALIGN to 0 because DMA to unaligned
+addresses can be very expensive and dwarf the cost of unaligned loads.
+
+For some ethernet hardware that cannot DMA to unaligned addresses like
+4*n+2 or non-ethernet hardware, this can be a problem, and it is then
+required to copy the incoming frame into an aligned buffer. Because this is
+unnecessary on architectures that can do unaligned accesses, the code can be
+made dependent on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS like so:
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	skb = original skb
+#else
+	skb = copy skb
+#endif
+
 --
-Author: Daniel Drake <dsd@gentoo.org>
+Authors: Daniel Drake <dsd@gentoo.org>,
+	Johannes Berg <johannes@sipsolutions.net>
 With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
-Johannes Berg, Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock,
-Uli Kunitz, Vadim Lobanov
+Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
+Vadim Lobanov
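
For reference, the "skb = original skb / skb = copy skb" pseudocode in the
patch corresponds roughly to the receive-path pattern sketched below. This is
an illustrative sketch only, not code from the patch or from any particular
driver: example_realign_rx() and its calling context are hypothetical, while
NET_IP_ALIGN, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, netdev_alloc_skb(),
skb_reserve(), skb_copy_to_linear_data(), skb_put() and dev_kfree_skb() are
existing kernel interfaces.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Hypothetical rx helper for hardware that can only DMA to 4-byte-aligned
 * addresses, so the IP header of the received frame is misaligned by 2 bytes.
 */
static struct sk_buff *example_realign_rx(struct net_device *dev,
					   struct sk_buff *rx_skb)
{
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	/* Unaligned loads are cheap here: hand the frame up as-is. */
	return rx_skb;
#else
	/*
	 * Copy into a new skb whose data pointer is offset by NET_IP_ALIGN
	 * (usually 2), so the 14-byte ethernet header leaves the IP header
	 * on a 4-byte boundary for the stack.
	 */
	struct sk_buff *skb = netdev_alloc_skb(dev, rx_skb->len + NET_IP_ALIGN);

	if (!skb)
		return rx_skb;	/* fall back to the unaligned frame */

	skb_reserve(skb, NET_IP_ALIGN);
	skb_copy_to_linear_data(skb, rx_skb->data, rx_skb->len);
	skb_put(skb, rx_skb->len);
	dev_kfree_skb(rx_skb);
	return skb;
#endif
}

A real driver would usually avoid the copy altogether by reserving
NET_IP_ALIGN bytes when it allocates its receive buffers (where the hardware
allows DMA to 4*n + 2), and would recycle the original buffer rather than
freeing it; the sketch only mirrors the conditional structure shown in the
patch.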