author    | Eric Dumazet <dada1@cosmosbay.com>                   | 2008-02-06 01:37:56 -0800
committer | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2008-02-06 10:41:09 -0800
commit    | 1bf47346d75790ebd2563d909d48046961c7ffd5 (patch)
tree      | 0f478764beb8dc4e0c71c5f3d6a657535579fe3a /include
parent    | 6b2fb3c65844452bb9e8b449d50863d1b36c5dc0 (diff)
kernel/sys.c: get rid of expensive divides in groups_sort()
groups_sort() can take quite a long time if a user loads a large gid table.
This is because GROUP_AT(group_info, some_integer) uses an integer divide,
so having to do tens of thousands of divides during one syscall can lead to
very high latencies (NGROUPS_MAX = 65536).
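For context, an editorial sketch of the GROUP_AT() lookup as it appeared in
<linux/sched.h> of that era (reproduced from memory for illustration, not part
of this patch): the gid table is split into page-sized blocks, so every access
performs one divide and one modulo by NGROUPS_PER_BLOCK.

/* Illustrative sketch of the era's lookup macro: each access indexes
 * a block array by (i / NGROUPS_PER_BLOCK) and an entry within the
 * block by (i % NGROUPS_PER_BLOCK). */
#define GROUP_AT(gi, i) \
	((gi)->blocks[(i) / NGROUPS_PER_BLOCK][(i) % NGROUPS_PER_BLOCK])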
In the past (25 Mar 2006), an analogous problem was found in groups_search()
(commit d74beb9f33a5f16d2965f11b275e401f225c949d), and at that time I changed
some variables to unsigned int.
I believe that a more generic fix is to make sure NGROUPS_PER_BLOCK is
unsigned.
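Why the unsigned cast helps (editorial note, not part of the original message):
NGROUPS_PER_BLOCK is a power of two, and division or modulo by an unsigned
power-of-two constant compiles down to a shift or a mask, whereas the signed
form needs extra rounding-toward-zero fix-up instructions. A minimal user-space
sketch of the effect, assuming a 4096-byte page and a 4-byte gid_t (names here
are illustrative, not kernel code); the difference is visible with gcc -O2 -S.

/* Minimal sketch, assuming PAGE_SIZE == 4096 and sizeof(gid_t) == 4. */
#include <stddef.h>

typedef unsigned int gid_t_demo;	/* stand-in for the kernel's gid_t */

#define PAGE_SIZE_DEMO     4096
#define PER_BLOCK_SIGNED   ((int)(PAGE_SIZE_DEMO / sizeof(gid_t_demo)))          /* 1024  */
#define PER_BLOCK_UNSIGNED ((unsigned int)(PAGE_SIZE_DEMO / sizeof(gid_t_demo))) /* 1024u */

/* Signed divisor: the compiler must round toward zero for a possibly
 * negative numerator, so it emits fix-up instructions around the shift. */
int index_signed(int i)
{
	return i / PER_BLOCK_SIGNED + i % PER_BLOCK_SIGNED;
}

/* Unsigned divisor: i is converted to unsigned, so i / 1024u becomes
 * i >> 10 and i % 1024u becomes i & 1023u; no divide is emitted. */
unsigned int index_unsigned(int i)
{
	return i / PER_BLOCK_UNSIGNED + i % PER_BLOCK_UNSIGNED;
}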
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/sched.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9c13be3..7c8ca05 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -810,7 +810,7 @@ static inline int above_background_load(void)
 
 struct io_context;			/* See blkdev.h */
 #define NGROUPS_SMALL		32
-#define NGROUPS_PER_BLOCK	((int)(PAGE_SIZE / sizeof(gid_t)))
+#define NGROUPS_PER_BLOCK	((unsigned int)(PAGE_SIZE / sizeof(gid_t)))
 struct group_info {
 	int ngroups;
 	atomic_t usage;