path: root/fs/file.c
* vfs: avoid large kmalloc()s for the fdtable (Andrew Morton, 2011-04-28, 1 file changed, -7/+11)

  Azurit reports large increases in system time after 2.6.36 when running Apache. It was bisected down to a892e2d7dcdfa6c76e6 ("vfs: use kmalloc() to allocate fdmem if possible"). That patch caused the vfs to use kmalloc() for very large allocations and this is causing excessive work (and presumably excessive reclaim) within the page allocator. Fix it by falling back to vmalloc() earlier - when the allocation attempt would have been considered "costly" by reclaim.

  Reported-by: azurIt <azurit@pobox.sk>
  Tested-by: azurIt <azurit@pobox.sk>
  Acked-by: Changli Gao <xiaosuo@gmail.com>
  Cc: Americo Wang <xiyou.wangcong@gmail.com>
  Cc: Jiri Slaby <jslaby@suse.cz>
  Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
  Cc: Mel Gorman <mel@csn.ul.ie>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
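  A minimal sketch of the fix described above, assuming the policy lives in the existing alloc_fdmem() helper (mentioned in the entry below) and that the "costly" threshold is expressed via PAGE_ALLOC_COSTLY_ORDER:

      #include <linux/mm.h>
      #include <linux/slab.h>
      #include <linux/vmalloc.h>

      static void *alloc_fdmem(size_t size)
      {
              /*
               * Only ask kmalloc() for sizes the page allocator can satisfy
               * cheaply; past the "costly" order, go straight to vmalloc()
               * instead of pushing reclaim to find large contiguous blocks.
               */
              if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
                      void *data = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
                      if (data != NULL)
                              return data;
              }
              return vmalloc(size);
      }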
* vfs: use kmalloc() to allocate fdmem if possible (Changli Gao, 2010-08-11, 1 file changed, -34/+23)

  Use kmalloc() to allocate fdmem if possible. vmalloc() is used as a fallback solution for fdmem allocation. A new helper function __free_fdtable() is introduced to reduce the lines of code. A potential bug, vfree() of memory allocated by kmalloc(), is fixed.

  [akpm@linux-foundation.org: use __GFP_NOWARN, uninline alloc_fdmem() and free_fdmem()]
  Signed-off-by: Changli Gao <xiaosuo@gmail.com>
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Cc: Jiri Slaby <jslaby@suse.cz>
  Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
  Cc: Alexey Dobriyan <adobriyan@gmail.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Avi Kivity <avi@redhat.com>
  Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
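  A sketch of the helper pair this message describes, assuming is_vmalloc_addr() is used to pick the matching free routine (which is also what closes the vfree()-of-kmalloc'ed-memory bug):

      static void free_fdmem(void *ptr)
      {
              /* release with whichever allocator produced the memory */
              is_vmalloc_addr(ptr) ? vfree(ptr) : kfree(ptr);
      }

      static void __free_fdtable(struct fdtable *fdt)
      {
              free_fdmem(fdt->fd);
              free_fdmem(fdt->open_fds);   /* anchors the fdset memory too */
              kfree(fdt);
      }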
* fs: remove all rcu head initializations, except on_stack initializations (Paul E. McKenney, 2010-06-14, 1 file changed, -3/+0)

  Remove all rcu head inits. We don't care about the RCU head state before passing it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can keep track of objects on stack.

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
  Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Cc: Andries Brouwer <aeb@cwi.nl>
* fs: use rlimit helpers (Jiri Slaby, 2010-03-06, 1 file changed, -1/+1)

  Make sure the compiler won't do weird things with limits, e.g. fetching them twice may return 2 different values after writable limits are implemented. I.e. either use the rlimit helpers added in commit 3e10e716abf3 ("resource: add helpers for fetching rlimits") or ACCESS_ONCE if not applicable.

  Signed-off-by: Jiri Slaby <jslaby@suse.cz>
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
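  For illustration, the kind of before/after conversion this covers (a fragment sketch; the point is that the helper performs a single, well-defined read of the limit):

      /* before: the limit may effectively be fetched more than once */
      if (nr >= current->signal->rlim[RLIMIT_NOFILE].rlim_cur)
              return -EMFILE;

      /* after: one read through the helper from commit 3e10e716abf3 */
      if (nr >= rlimit(RLIMIT_NOFILE))
              return -EMFILE;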
* vfs: Apply lockdep-based checking to rcu_dereference() uses (Paul E. McKenney, 2010-02-25, 1 file changed, -1/+1)

  Add lockdep-ified RCU primitives to alloc_fd(), files_fdtable() and fcheck_files().

  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: laijs@cn.fujitsu.com
  Cc: dipankar@in.ibm.com
  Cc: mathieu.desnoyers@polymtl.ca
  Cc: josh@joshtriplett.org
  Cc: dvhltc@us.ibm.com
  Cc: niv@us.ibm.com
  Cc: peterz@infradead.org
  Cc: rostedt@goodmis.org
  Cc: Valdis.Kletnieks@vt.edu
  Cc: dhowells@redhat.com
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  LKML-Reference: <1266887105-1528-8-git-send-email-paulmck@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
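  The applied pattern looks roughly like this for files_fdtable() (a sketch; the exact lockdep condition is an assumption):

      #define files_fdtable(files)                                          \
              rcu_dereference_check((files)->fdt,                           \
                                    lockdep_is_held(&(files)->file_lock) || \
                                    atomic_read(&(files)->count) == 1)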
* headers: remove sched.h from interrupt.h (Alexey Dobriyan, 2009-10-11, 1 file changed, -0/+1)

  Now that m68k's task_thread_info() doesn't refer to current, it's possible to remove sched.h from interrupt.h and not break m68k! Many thanks to Heiko Carstens for allowing this.

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
* [PATCH] merge locate_fd() and get_unused_fd() (Al Viro, 2008-08-01, 1 file changed, -0/+61)

  New primitive: alloc_fd(start, flags). get_unused_fd() and get_unused_fd_flags() become wrappers on top of it.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
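  With the new primitive, the old entry points reduce to thin wrappers along these lines (a sketch):

      int get_unused_fd(void)
      {
              return alloc_fd(0, 0);
      }

      int get_unused_fd_flags(int flags)
      {
              return alloc_fd(0, flags);
      }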
* [PATCH] fix RLIM_NOFILE handling (Al Viro, 2008-07-26, 1 file changed, -0/+9)

  * dup2() should return -EBADF on exceeded sysctl_nr_open
  * dup() should *not* return -EINVAL even if you have rlimit set to 0; it should get -EMFILE instead.

  Check for orig_start exceeding rlimit taken to sys_fcntl(). Failing expand_files() in dup{2,3}() now gets -EMFILE remapped to -EBADF. Consequently, remaining checks for rlimit are taken to expand_files().

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] avoid multiplication overflows and signedness issues for max_fds (Al Viro, 2008-05-16, 1 file changed, -0/+4)

  Limit sysctl_nr_open - we don't want ->max_fds to exceed INT_MAX and we don't want the size calculation for ->fd[] to overflow.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
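  A hypothetical sketch of what such a cap can look like (the real constant and its exact form may differ):

      /*
       * Hypothetical upper bound for sysctl_nr_open: keeps ->max_fds within
       * INT_MAX, keeps nr * sizeof(struct file *) from overflowing in the
       * ->fd[] size calculation, and stays a multiple of BITS_PER_LONG for
       * the fdset bitmaps.
       */
      static const unsigned int nr_open_cap =
              (INT_MAX / sizeof(struct file *)) & ~(BITS_PER_LONG - 1);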
* [PATCH] dup_fd() part 4 - race fix (Al Viro, 2008-05-16, 1 file changed, -2/+8)

  Parent _can_ be a clone task, contrary to the comment. Moreover, more files could be opened while we allocate a copy, in which case we end up copying only part into the new descriptor table. Since what we get _is_ affected by all changes in the old range, we can get rather weird effects - e.g. dup2(0, 1024); close(0); in parallel with fork() resulting in a child that sees the effect of close(), but not that of dup2() done just before that close(). What we need is to recalculate the open count after having reacquired ->file_lock, and if the external fdtable we'd just allocated is too small for it, free the sucker and redo the allocation.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
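  The recheck-and-retry loop described above, in sketch form (helper names loosely follow fs/file.c; the free helper and the error handling are simplified assumptions):

      spin_lock(&oldf->file_lock);
      old_fdt = files_fdtable(oldf);
      open_files = count_open_files(old_fdt);

      /*
       * The parent may have grown its table while we dropped ->file_lock to
       * allocate; if our copy is now too small, throw it away and redo the
       * allocation at the new size before copying anything.
       */
      while (unlikely(open_files > new_fdt->max_fds)) {
              spin_unlock(&oldf->file_lock);
              __free_fdtable(new_fdt);          /* assumed helper; table not yet published */
              new_fdt = alloc_fdtable(open_files - 1);
              if (!new_fdt)
                      goto out_of_memory;       /* simplified error path */
              spin_lock(&oldf->file_lock);
              old_fdt = files_fdtable(oldf);
              open_files = count_open_files(old_fdt);
      }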
* [PATCH] dup_fd() - part 3 (Al Viro, 2008-05-16, 1 file changed, -28/+15)

  Merge alloc_files() into dup_fd(), leave setting newf->fdt until the end.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] dup_fd() part 2 (Al Viro, 2008-05-16, 1 file changed, -8/+16)

  Use alloc_fdtable() instead of expand_files(), get rid of the pointless grabbing of newf->file_lock, and kill the magic in copy_fdtable() that used to be there only to skip copying when called from dup_fd().

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] dup_fd() fixes, part 1 (Al Viro, 2008-05-16, 1 file changed, -0/+130)

  Move the sucker to fs/file.c in preparation for the rest.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] take init_files to fs/file.c (Al Viro, 2008-05-16, 1 file changed, -0/+13)

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] fix sysctl_nr_open bugs (Al Viro, 2008-05-01, 1 file changed, -2/+20)

  * if luser with root sets it to something that is not a multiple of BITS_PER_LONG, the system is screwed.
  * if it gets decreased at the wrong time, we can get expand_files() returning success and _not_ increasing the size of the table as asked.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
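  A sketch of the two guards this implies (treat the exact expressions as illustrative):

      /* expand_files(): never report success without actually covering nr */
      if (nr >= sysctl_nr_open)
              return -EMFILE;

      /* alloc_fdtable(): when clamping to sysctl_nr_open, keep the size a
       * multiple of BITS_PER_LONG so the fdset bitmaps stay sane */
      if (nr > sysctl_nr_open)
              nr = ((sysctl_nr_open - 1) | (BITS_PER_LONG - 1)) + 1;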
* [PATCH] split linux/file.h (Al Viro, 2008-05-01, 1 file changed, -0/+1)

  Initial splitoff of the low-level stuff; taken to fdtable.h.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* get rid of NR_OPEN and introduce a sysctl_nr_open (Eric Dumazet, 2008-02-06, 1 file changed, -3/+5)

  NR_OPEN (historically set to 1024*1024) actually forbids processes to open more than 1024*1024 handles. Unfortunately some production servers hit the not so 'ridiculously high value' of 1024*1024 file descriptors per process. Changing NR_OPEN is not considered safe because of potential vmalloc space exhaustion. This patch introduces a new sysctl (/proc/sys/fs/nr_open) which defaults to 1024*1024, so that admins can decide to change this limit if their workload needs it.

  [akpm@linux-foundation.org: export it for sparc64]
  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
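  A sketch of how such a knob is typically wired up (placement and handler are assumptions; the real entry lives in the kernel's fs sysctl table):

      int sysctl_nr_open __read_mostly = 1024 * 1024;   /* same default as the old NR_OPEN */

      static struct ctl_table fs_table[] = {
              {
                      .procname     = "nr_open",
                      .data         = &sysctl_nr_open,
                      .maxlen       = sizeof(int),
                      .mode         = 0644,
                      .proc_handler = proc_dointvec,
              },
              { }
      };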
* [PATCH] fdtable: Provide free_fdtable() wrapper (Vadim Lobanov, 2006-12-22, 1 file changed, -1/+1)

  Christoph Hellwig has expressed concerns that the recent fdtable changes expose the details of the RCU methodology used to release no-longer-used fdtable structures to the rest of the kernel. The trivial patch below addresses these concerns by introducing the appropriate free_fdtable() calls, which simply wrap the release RCU usage. Since free_fdtable() is a one-liner, it makes sense to promote it to an inline helper.

  Signed-off-by: Vadim Lobanov <vlobanov@speakeasy.net>
  Cc: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
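  The wrapper amounts to a one-line inline along these lines (a sketch):

      static inline void free_fdtable(struct fdtable *fdt)
      {
              call_rcu(&fdt->rcu, free_fdtable_rcu);
      }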
* [PATCH] fdtable: Implement new pagesize-based fdtable allocator (Vadim Lobanov, 2006-12-10, 1 file changed, -136/+72)

  This patch provides an improved fdtable allocation scheme, useful for expanding fdtable file descriptor entries. The main focus is on the fdarray, as its memory usage grows 128 times faster than that of an fdset. The allocation algorithm sizes the fdarray in such a way that its memory usage increases in easy page-sized chunks. The overall algorithm expands the allowed size in powers of two, in order to amortize the cost of invoking vmalloc() for larger allocation sizes. Namely, the following sizes for the fdarray are considered, and the smallest that accommodates the requested fd count is chosen:

    pagesize / 4
    pagesize / 2
    pagesize      <- memory allocator switch point
    pagesize * 2
    pagesize * 4
    ...etc...

  Unlike the current implementation, this allocation scheme does not require a loop to compute the optimal fdarray size, and can be done in efficient straightline code. Furthermore, since the fdarray overflows the pagesize boundary long before any of the fdsets do, it makes sense to optimize run-time by allocating both fdsets in a single swoop. Even together, they will still be, by far, smaller than the fdarray. The fdtable->open_fds is now used as the anchor for the fdset memory allocation.

  Signed-off-by: Vadim Lobanov <vlobanov@speakeasy.net>
  Cc: Christoph Hellwig <hch@lst.de>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
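  The sizing rule, sketched (the helper name is made up for the sketch; the real alloc_fdtable() additionally clamps against sysctl_nr_open and switches between kmalloc() and vmalloc() at the page-size point):

      static unsigned int fdarray_bytes(unsigned int nr)
      {
              unsigned int size = nr * sizeof(struct file *);

              /* ladder: PAGE_SIZE/4, PAGE_SIZE/2, PAGE_SIZE, 2*PAGE_SIZE, ... */
              if (size <= PAGE_SIZE / 4)
                      return PAGE_SIZE / 4;
              if (size <= PAGE_SIZE / 2)
                      return PAGE_SIZE / 2;
              return roundup_pow_of_two(size);   /* page-sized steps and beyond */
      }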
* [PATCH] fdtable: Remove the free_files field (Vadim Lobanov, 2006-12-10, 1 file changed, -19/+8)

  An fdtable can either be embedded inside a files_struct or standalone (after being expanded). When an fdtable is being discarded after all RCU references to it have expired, we must either free it directly, in the standalone case, or free the files_struct it is contained within, in the embedded case. Currently the free_files field controls this behavior, but we can get rid of it entirely, as all the necessary information is already recorded. We can distinguish embedded and standalone fdtables using max_fds, and if it is embedded we can divine the relevant files_struct using container_of().

  Signed-off-by: Vadim Lobanov <vlobanov@speakeasy.net>
  Cc: Christoph Hellwig <hch@lst.de>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
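  The discard path then looks roughly like this (free_fdarr_and_fdsets() is a made-up stand-in for the actual array/fdset freeing):

      static void free_fdtable_rcu(struct rcu_head *rcu)
      {
              struct fdtable *fdt = container_of(rcu, struct fdtable, rcu);

              if (fdt->max_fds <= NR_OPEN_DEFAULT) {
                      /* embedded: free the files_struct the fdtable lives in */
                      kmem_cache_free(files_cachep,
                                      container_of(fdt, struct files_struct, fdtab));
              } else {
                      /* standalone: release the arrays, then the table itself */
                      free_fdarr_and_fdsets(fdt);   /* hypothetical helper */
                      kfree(fdt);
              }
      }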
* [PATCH] fdtable: Make fdarray and fdsets equal in size (Vadim Lobanov, 2006-12-10, 1 file changed, -34/+22)

  Currently, each fdtable supports three dynamically-sized arrays of data: the fdarray and two fdsets. The code allows the number of fds supported by the fdarray (fdtable->max_fds) to differ from the number of fds supported by each of the fdsets (fdtable->max_fdset). In practice, it is wasteful for these two sizes to differ: whenever we hit a limit on the smaller-capacity structure, we will reallocate the entire fdtable and all the dynamic arrays within it, so any delta in the memory used by the larger-capacity structure will never be touched at all. Rather than hogging this excess, we shouldn't even allocate it in the first place, and keep the capacities of the fdarray and the fdsets equal. This patch removes fdtable->max_fdset. As an added bonus, most of the supporting code becomes simpler.

  Signed-off-by: Vadim Lobanov <vlobanov@speakeasy.net>
  Cc: Christoph Hellwig <hch@lst.de>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] file: kill unnecessary timer in fdtable_defer (Tejun Heo, 2006-12-07, 1 file changed, -27/+2)

  free_fdtable_rc() schedules a timer to reschedule fddef->wq if schedule_work() on it returns 0. However, schedule_work() guarantees that the target work is executed at least once after the scheduling regardless of its return value. A 0 return simply means that the work was already pending and thus no further action was required. Another problem is that it used the constant '5' as the @expires argument to mod_timer(). Kill the unnecessary fddef->timer.

  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Cc: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* WorkStruct: Pass the work_struct pointer instead of context data (David Howells, 2006-11-22, 1 file changed, -2/+4)

  Pass the work_struct pointer to the work function rather than context data. The work function can use container_of() to work out the data.

  For the cases where the container of the work_struct may go away the moment the pending bit is cleared, it is made possible to defer the release of the structure by deferring the clearing of the pending bit. To make this work, an extra flag is introduced into the management side of the work_struct. This governs auto-release of the structure upon execution. Ordinarily, the work queue executor would release the work_struct for further scheduling or deallocation by clearing the pending bit prior to jumping to the work function. This means that, unless the driver makes some guarantee itself that the work_struct won't go away, the work function may not access anything else in the work_struct or its container lest they be deallocated. This is a problem if the auxiliary data is taken away (as done by the last patch). However, if the pending bit is *not* cleared before jumping to the work function, then the work function *may* access the work_struct and its container with no problems. But then the work function must itself release the work_struct by calling work_release(). In most cases, automatic release is fine, so this is the default. Special initiators exist for the non-auto-release case (ending in _NAR).

  Signed-off-by: David Howells <dhowells@redhat.com>
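  The converted pattern as it applies to fs/file.c, roughly (a sketch; field layout and the init helper are abridged assumptions):

      struct fdtable_defer {
              spinlock_t lock;
              struct work_struct wq;
              struct fdtable *next;
      };

      static void free_fdtable_work(struct work_struct *work)
      {
              /* recover the container instead of receiving a void *data argument */
              struct fdtable_defer *f = container_of(work, struct fdtable_defer, wq);
              /* ... walk and free f->next ... */
      }

      static void fdtable_defer_init(struct fdtable_defer *fddef)
      {
              spin_lock_init(&fddef->lock);
              /* the initializer now takes only the function, no context pointer */
              INIT_WORK(&fddef->wq, free_fdtable_work);
              fddef->next = NULL;
      }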
* [PATCH] expand_fdtable(): remove pointless unlock+lock (Andrew Morton, 2006-09-29, 1 file changed, -2/+0)

  This unlock/lock on a super-unlikely path isn't worth the kernel text.

  Cc: Vadim Lobanov <vlobanov@speakeasy.net>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Clean up expand_fdtable() and expand_files() (Vadim Lobanov, 2006-09-29, 1 file changed, -41/+35)

  Perform a code cleanup against the expand_fdtable() and expand_files() functions inside fs/file.c. It aims to make the flow of code within these functions simpler and easier to understand, via added comments and modest refactoring.

  Signed-off-by: Vadim Lobanov <vlobanov@speakeasy.net>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] alloc_fdtable() cleanup (Andrew Morton, 2006-09-27, 1 file changed, -4/+2)

  free_fdset(NULL, ...) is legal.

  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] alloc_fdtable() expansion fix (Andrew Morton, 2006-07-12, 1 file changed, -1/+1)

  We're supposed to go to the next power of two if nfds==nr. Of `nr', not of `nfds'.

  Spotted by Rene Scharfe <rene.scharfe@lsrfire.ath.cx>

  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] fix fdset leakage (Kirill Korotaev, 2006-07-12, 1 file changed, -1/+3)

  When found, it is obvious: the nfds calculated when allocating fdsets is rewritten by the calculation of the fdtable size, and when we are unlucky, we try to free fdsets of the wrong size. Found due to OpenVZ resource management (User Beancounters).

  Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
  Signed-off-by: Kirill Korotaev <dev@openvz.org>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] fix weird logic in alloc_fdtable() (Andrew Morton, 2006-07-10, 1 file changed, -7/+3)

  There's a fairly obvious infinite loop in there. Also, use roundup_pow_of_two() rather than open-coding stuff.

  Cc: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] for_each_possible_cpu: fixes for generic part (KAMEZAWA Hiroyuki, 2006-03-28, 1 file changed, -1/+1)

  Replaces for_each_cpu with for_each_possible_cpu().

  Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Shrinks sizeof(files_struct) and better layout (Eric Dumazet, 2006-03-23, 1 file changed, -20/+14)

  1) Reduce the size of (struct fdtable) to exactly 64 bytes on 32-bit platforms, lowering kmalloc() allocated space by 50%.

  2) Reduce the size of (files_struct), using a special 32-bit (or 64-bit) embedded_fd_set instead of a 1024-bit fd_set for the close_on_exec_init and open_fds_init fields. This saves some RAM (248 bytes per task) as most tasks don't open more than 32 files. The D-cache footprint for such tasks is also reduced to the minimum.

  3) Reduce the size of the allocated fdset. Currently two full pages are allocated, that is 32768 bits on x86 for example, and way too much. The minimum is now L1_CACHE_BYTES.

  UP and SMP should benefit from this patch, because most tasks will touch only one cache line when they open()/close() stdin/stdout/stderr (0/1/2), (next_fd, close_on_exec_init, open_fds_init, fd_array[0 .. 2] being in the same cache line).

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
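  The embedded set described in point 2, roughly as it ends up in the headers (a sketch):

      struct embedded_fd_set {
              unsigned long fds_bits[1];   /* covers fds 0 .. BITS_PER_LONG-1 */
      };

      struct files_struct {
              atomic_t                count;
              struct fdtable          *fdt;
              struct fdtable          fdtab;
              spinlock_t              file_lock ____cacheline_aligned_in_smp;
              int                     next_fd;
              struct embedded_fd_set  close_on_exec_init;
              struct embedded_fd_set  open_fds_init;
              struct file             *fd_array[NR_OPEN_DEFAULT];
      };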
* [PATCH] percpu data: only iterate over possible CPUs (Eric Dumazet, 2006-02-05, 1 file changed, -2/+1)

  percpu_data blindly allocates bootmem memory to store NR_CPUS instances of cpudata, instead of allocating memory only for possible cpus. As a preparation for changing that, we need to convert various 0 -> NR_CPUS loops to use for_each_cpu(). (The above only applies to users of asm-generic/percpu.h. powerpc has gone it alone and is presently only allocating memory for present CPUs, so it's currently corrupting memory.)

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Cc: Jens Axboe <axboe@suse.de>
  Cc: Anton Blanchard <anton@samba.org>
  Acked-by: William Irwin <wli@holomorphy.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Fix the fdtable freeing in the case of vmalloced fdset/arrays (Dipankar Sarma, 2005-09-14, 1 file changed, -7/+3)

  Noted by David Miller: "The bug is that free_fd_array() takes a "num" argument, but when calling it from __free_fdtable() we're instead passing in the size in bytes (ie. "num * sizeof(struct file *)")."

  Yes it is a bug. I think I messed it up while merging newer changes with an older version where I was using size in bytes to optimize.

  Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
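  The fix amounts to passing a count where a count is expected (illustrative fragment):

      /* buggy: size in bytes handed to a parameter that wants an element count */
      free_fd_array(fdt->fd, fdt->max_fds * sizeof(struct file *));

      /* fixed: free_fd_array() takes the number of entries */
      free_fd_array(fdt->fd, fdt->max_fds);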
* [PATCH] files: files struct with RCU (Dipankar Sarma, 2005-09-09, 1 file changed, -131/+258)

  Patch to eliminate the struct files_struct.file_lock spinlock on the reader side and use RCU refcounting (the rcuref_xxx API) for the f_count refcounter. The updates to the fdtable are done by allocating a new fdtable structure and setting files->fdt to point to the new structure. The fdtable structure is protected by RCU, thereby allowing lock-free lookup. For fd arrays/sets that are vmalloced, we use keventd to free them since RCU callbacks can't sleep. A global list of fdtables to be freed is not scalable, so we use a per-cpu list. If keventd is already handling the current cpu's work, we use a timer to defer queueing of that work.

  Since the last publication, this patch has been re-written to avoid using explicit memory barriers and use the rcu_assign_pointer(), rcu_dereference() primitives instead. This required that the fd information is kept in a separate structure (fdtable) and updated atomically.

  Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
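  The resulting access pattern, in sketch form (function names carry a _sketch suffix to mark them as illustrations; the deferred free is shown as a plain call_rcu(), whereas the real code routes vmalloc'ed tables through keventd as noted above):

      static struct file *fcheck_sketch(struct files_struct *files, unsigned int fd)
      {
              struct fdtable *fdt;
              struct file *file = NULL;

              /* reader side: no files->file_lock, just an RCU read-side section */
              rcu_read_lock();
              fdt = rcu_dereference(files->fdt);
              if (fd < fdt->max_fds)
                      file = fdt->fd[fd];
              rcu_read_unlock();
              return file;
      }

      static void publish_fdtable_sketch(struct files_struct *files,
                                         struct fdtable *new_fdt,
                                         struct fdtable *old_fdt)
      {
              /* updater side: publish atomically, free the old table after a grace period */
              spin_lock(&files->file_lock);
              rcu_assign_pointer(files->fdt, new_fdt);
              spin_unlock(&files->file_lock);
              call_rcu(&old_fdt->rcu, free_fdtable_rcu);
      }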
* [PATCH] files: break up files struct (Dipankar Sarma, 2005-09-09, 1 file changed, -17/+25)

  In order for the RCU to work, the file table array, sets and their sizes must be updated atomically. Instead of ensuring this through too many memory barriers, we put the arrays and their sizes in a separate structure. This patch takes the first step of putting the file table elements in a separate structure, fdtable, that is embedded within files_struct. It also changes all the users to refer to the file table using the files_fdtable() macro. Subsequent application of RCU becomes easier after this.

  Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
  Signed-off-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
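  The shape of the split, roughly (field list abridged; the accessor shown here is the pre-RCU form, which the patch above later turns into an rcu_dereference()):

      struct fdtable {
              unsigned int max_fds;
              unsigned int max_fdset;
              int next_fd;
              struct file **fd;        /* current fd array */
              fd_set *close_on_exec;
              fd_set *open_fds;
      };

      /* every user now goes through the accessor rather than poking
       * files_struct fields directly */
      #define files_fdtable(files)  ((files)->fdt)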
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16, 1 file changed, -0/+254)

  Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

  Let it rip!