path: root/ipc/sem.c
Commit message | Author | Date | Files | Lines
* ipc/sem.c: fix race with concurrent semtimedop() timeouts and IPC_RMID | Manfred Spraul | 2011-07-25 | 1 | -2/+11
  If a semaphore array is removed and in parallel a sleeping task is woken up (signal or timeout, does not matter), then the woken-up task does not wait until wake_up_sem_queue_do() is completed. This will cause crashes, because wake_up_sem_queue_do() will read from a stale pointer.

  The fix is simple: regardless of anything, always call get_queue_result(). This function waits until wake_up_sem_queue_do() has finished its task.

  Addresses https://bugzilla.kernel.org/show_bug.cgi?id=27142

  Reported-by: Yuriy Yevtukhov <yuriy@ucoz.com>
  Reported-by: Harald Laabs <kernel@dasr.de>
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
  Cc: <stable@kernel.org> [2.6.35+]
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
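  The woken-up task has to spin until the waker has published the final status; a minimal sketch of that wait (close to, but not literally, the ipc/sem.c helper):

      /* Wait until wake_up_sem_queue_do() has written the final status. */
      static int get_queue_result(struct sem_queue *q)
      {
              int error = q->status;

              while (unlikely(error == IN_WAKEUP)) {
                      cpu_relax();    /* the waker is still updating q->status */
                      error = q->status;
              }
              return error;
      }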
* ipc,rcu: Convert call_rcu(free_un) to kfree_rcu() | Lai Jiangshan | 2011-07-20 | 1 | -8/+2
  The rcu callback free_un() just calls kfree(), so we use kfree_rcu() instead of call_rcu(free_un).

  Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Manfred Spraul <manfred@colorfullife.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
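  The conversion follows the usual kfree_rcu() pattern; a sketch, assuming the rcu_head member of struct sem_undo is named "rcu":

      /* Before: a callback whose only job is to kfree() the object. */
      static void free_un(struct rcu_head *head)
      {
              struct sem_undo *un = container_of(head, struct sem_undo, rcu);
              kfree(un);
      }
      /* ... */
      call_rcu(&un->rcu, free_un);

      /* After: let the RCU core kfree() it once a grace period has elapsed. */
      kfree_rcu(un, rcu);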
* Fix common misspellings | Lucas De Marchi | 2011-03-31 | 1 | -1/+1
  Fixes generated by 'codespell' and manually reviewed.

  Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
* userns: user namespaces: convert several capable() calls | Serge E. Hallyn | 2011-03-23 | 1 | -4/+6
  CAP_IPC_OWNER and CAP_IPC_LOCK can be checked against current_user_ns(), because the resource comes from current's own ipc namespace. setuid/setgid are to uids in own namespace, so again checks can be against current_user_ns().

  Changelog:
  Jan 11: Use task_ns_capable() in place of sched_capable().
  Jan 11: Use nsown_capable() as suggested by Bastian Blank.
  Jan 11: Clarify (hopefully) some logic in futex and sched.c
  Feb 15: use ns_capable for ipc, not nsown_capable
  Feb 23: let copy_ipcs handle setting ipc_ns->user_ns
  Feb 23: pass ns down rather than taking it from current

  [akpm@linux-foundation.org: coding-style fixes]
  Signed-off-by: Serge E. Hallyn <serge.hallyn@canonical.com>
  Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
  Acked-by: Daniel Lezcano <daniel.lezcano@free.fr>
  Acked-by: David Howells <dhowells@redhat.com>
  Cc: James Morris <jmorris@namei.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sys_semctl: fix kernel stack leakage | Dan Rosenberg | 2010-10-01 | 1 | -0/+2
  The semctl syscall has several code paths that lead to the leakage of uninitialized kernel stack memory (namely the IPC_INFO, SEM_INFO, IPC_STAT, and SEM_STAT commands) during the use of the older, obsolete version of the semid_ds struct.

  The copy_semid_to_user() function declares a semid_ds struct on the stack and copies it back to the user without initializing or zeroing the "sem_base", "sem_pending", "sem_pending_last", and "undo" pointers, allowing the leakage of 16 bytes of kernel stack memory.

  The code is still reachable on 32-bit systems: when calling semctl(), newer glibcs automatically OR the IPC command with the IPC_64 flag, but invoking the syscall directly allows users to use the older version of the struct.

  Signed-off-by: Dan Rosenberg <dan.j.rosenberg@gmail.com>
  Cc: Manfred Spraul <manfred@colorfullife.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
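  The standard mitigation for this class of leak is to zero the on-stack struct before filling in the fields that are copied out. A sketch of the pattern (the helper name and surrounding code are illustrative, not the actual patch):

      /* Copy the legacy struct semid_ds back to user space. */
      static int copy_old_semid_to_user(void __user *buf, struct semid64_ds *in)
      {
              struct semid_ds out;

              /*
               * Zero the whole struct so padding and the unused kernel
               * pointer fields never leak stack contents to user space.
               */
              memset(&out, 0, sizeof(out));

              ipc64_perm_to_ipc_perm(&in->sem_perm, &out.sem_perm);
              out.sem_otime = in->sem_otime;
              out.sem_ctime = in->sem_ctime;
              out.sem_nsems = in->sem_nsems;

              return copy_to_user(buf, &out, sizeof(out)) ? -EFAULT : 0;
      }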
* ipc/sem.c: bugfix for semop() not reporting successful operation | Manfred Spraul | 2010-07-20 | 1 | -7/+39
  The last change to improve the scalability moved the actual wake-up out of the section that is protected by spin_lock(sma->sem_perm.lock). This means that IN_WAKEUP can be in queue.status even when the spinlock is acquired by the current task. Thus the same loop that is performed when queue.status is read without the spinlock acquired must be performed when the spinlock is acquired.

  Thanks to kamezawa.hiroyu@jp.fujitsu.com for noticing the lack of the memory barrier.

  Addresses https://bugzilla.kernel.org/show_bug.cgi?id=16255

  [akpm@linux-foundation.org: clean up kerneldoc, checkpatch warning and whitespace]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Reported-by: Luca Tettamanti <kronos.it@gmail.com>
  Tested-by: Luca Tettamanti <kronos.it@gmail.com>
  Reported-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
  Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: use ERR_CAST | Julia Lawall | 2010-05-27 | 1 | -1/+1
  Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes the purpose of the operation clearer; the latter otherwise looks like a no-op.

  The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):

      // <smpl>
      @@
      type T;
      T x;
      identifier f;
      @@

      T f (...) {
      <+...
      - ERR_PTR(PTR_ERR(x))
      + x
      ...+>
      }

      @@
      expression x;
      @@

      - ERR_PTR(PTR_ERR(x))
      + ERR_CAST(x)
      // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Cc: Manfred Spraul <manfred@colorfullife.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: update description of the implementation | Manfred Spraul | 2010-05-27 | 1 | -50/+53
  ipc/sem.c begins with a 15-year-old description about bugs in the initial implementation in Linux-1.0. The patch replaces that with a top-level description of the current code.

  A TODO could be derived from this text: the Open Group man page for semop() does not mandate FIFO, thus there is no need for a semaphore-array-wide list of pending operations. If
  - this list is removed,
  - the per-semaphore-array spinlock is removed (possible if there is no list to protect),
  - sem_otime is moved into the semaphores and calculated on demand during semctl(),
  then the array would be read-mostly, which would significantly improve scaling for applications that use semaphore arrays with lots of entries. The price would be expensive semctl() calls:

      for (i = 0; i < sma->sem_nsems; i++)
              spin_lock(sma->sem_lock);
      <do stuff>
      for (i = 0; i < sma->sem_nsems; i++)
              spin_unlock(sma->sem_lock);

  I'm not sure if the complexity is worth the effort, thus here is the documentation of the current behavior first.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Chris Mason <chris.mason@oracle.com>
  Cc: Zach Brown <zach.brown@oracle.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: move wake_up_process out of the spinlock section | Manfred Spraul | 2010-05-27 | 1 | -32/+91
  The wake-up part of semtimedop() consists of two steps:
  - the right tasks must be identified;
  - they must be woken up.

  Right now, both steps run while the array spinlock is held. This patch reorders the code and moves the actual wake_up_process() behind the point where the spinlock is dropped.

  The code also moves setting sem->sem_otime to one place: it does not make sense to set the last modify time multiple times.

  [akpm@linux-foundation.org: repair kerneldoc]
  [akpm@linux-foundation.org: fix uninitialised retval]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Chris Mason <chris.mason@oracle.com>
  Cc: Zach Brown <zach.brown@oracle.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
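  The underlying pattern is "collect under the lock, wake up after dropping it"; a generic sketch with assumed helper names (not the literal patch):

      LIST_HEAD(wake_list);                   /* tasks identified for wake-up */

      spin_lock(&sma->sem_perm.lock);
      /*
       * Step 1: find the operations that can now complete and move their
       * queue entries onto a private list.
       */
      collect_completed_ops(sma, &wake_list); /* assumed helper */
      spin_unlock(&sma->sem_perm.lock);

      /* Step 2: the wake_up_process() calls happen outside the spinlock. */
      wake_up_collected(&wake_list);          /* assumed helper */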
* ipc/sem.c: optimize update_queue() for bulk wakeup calls | Manfred Spraul | 2010-05-27 | 1 | -13/+97
  The following series of patches tries to fix the spinlock contention reported by Chris Mason; his benchmark exposes problems in the current code:

  - In the worst case, the algorithm used by update_queue() is O(N^2). Bulk wake-up calls can enter this worst case. The patch series fixes that. Note that the benchmark app doesn't expose the problem; it just should be fixed: real-world apps might do the wake-ups in another order than perfect FIFO.

  - The part of the code that runs within the semaphore array spinlock is significantly larger than necessary. The patch series fixes that. This change is responsible for the main improvement.

  - The cacheline with the spinlock is also used for a variable that is read in the hot path (sem_base) and for a variable that is unnecessarily written to multiple times (sem_otime). The last step of the series cacheline-aligns the spinlock.

  This patch: The SysV semaphore code allows multiple operations on all semaphores in the array to be performed as one atomic operation. After a modification, update_queue() checks which of the waiting tasks can complete. The algorithm that is used to identify the tasks is O(N^2) in the worst case.

  For some cases, it is simple to avoid the O(N^2). The patch adds detection logic for some cases, especially for the case of an array where all sleeping tasks are single-sembuf operations and a multi-sembuf operation is used to wake up multiple tasks. A big database application uses that approach.

  The patch fixes wakeup due to semctl(,,SETALL,); the initial version of the patch broke that.

  [akpm@linux-foundation.org: make do_smart_update() static]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Chris Mason <chris.mason@oracle.com>
  Cc: Zach Brown <zach.brown@oracle.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc: remove unreachable code in sem.c | Amerigo Wang | 2009-12-16 | 1 | -2/+1
  This line is unreachable, remove it.

  [akpm@linux-foundation.org: remove unneeded initialisation of `err']
  Signed-off-by: WANG Cong <amwang@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: optimize single sops when semval is zero | Manfred Spraul | 2009-12-16 | 1 | -0/+11
  If multiple simple decrements on the same semaphore are pending, then the current code scans all decrement operations, even if the semaphore value is already 0.

  The patch optimizes that: if the semaphore value is 0, then there is no need to scan the q->alter entries.

  Note that this is a common case: it happens if 100 decrements by one are pending and an increment by one now raises the semaphore value from 0 to 1. Without this patch, all 100 entries are scanned. With the patch, only one entry is scanned, then woken up. Then the new rule triggers and the scanning is aborted, without looking at the remaining 99 tasks.

  With this patch, single-sop increments/decrements by 1 are now O(1) (same as with Nick's patch).

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
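  The optimization amounts to one early-exit test while scanning a semaphore's pending decrements; a sketch (the helper name is hypothetical and only the decrement-only case is shown):

      /*
       * Scan pending decrements queued on semaphore semnum and stop as soon
       * as its value is 0 again: none of the remaining q->alter (decrement)
       * entries can succeed anymore.
       */
      static void scan_pending_decrements(struct sem_array *sma, int semnum,
                                          struct list_head *pending)
      {
              struct sem_queue *q, *tmp;

              list_for_each_entry_safe(q, tmp, pending, list) {
                      /* ... try q, wake its task if the operation completed ... */
                      if (sma->sem_base[semnum].semval == 0)
                              break;  /* remaining decrements cannot proceed */
              }
      }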
* ipc/sem.c: optimize single semop operations | Manfred Spraul | 2009-12-16 | 1 | -11/+40
  SysV sem has the concept of semaphore arrays that consist of multiple semaphores. Atomic operations that affect multiple semaphores are supported.

  The patch optimizes single-semaphore operation calls that affect only one semaphore: it's not necessary to scan all pending operations; it is sufficient to scan the per-semaphore list.

  The idea is from Nick Piggin's version of an ipc sem improvement; the implementation is different: the code tries to keep as much common code as possible. As a result, the patch is simpler, but optimizes fewer cases.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: add a per-semaphore pending list | Manfred Spraul | 2009-12-16 | 1 | -5/+32
  Based on Nick's findings: SysV sem has the concept of semaphore arrays that consist of multiple semaphores. Atomic operations that affect multiple semaphores are supported.

  The patch is the first step for optimizing simple, single-semaphore operations: in addition to the global list of all pending operations, a second, per-semaphore list with the simple operations is added.

  Note: this patch does not make sense by itself; the new list is used nowhere.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
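  In data-structure terms the step is just a second list head per semaphore; a sketch of the shape (field comments assumed):

      struct sem {
              int              semval;       /* current value */
              int              sempid;       /* pid of last operation */
              struct list_head sem_pending;  /* pending single-sembuf operations;
                                              * complex multi-sembuf operations
                                              * stay on the array-wide list */
      };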
* ipc/sem.c: optimize if semops fail | Manfred Spraul | 2009-12-16 | 1 | -2/+2
  Reduce the amount of scanning of the list of pending semaphore operations: if try_atomic_semop() failed, then no changes were applied, thus there is no need to restart.

  Additionally, this patch corrects an incorrect comment: it's possible to wait for arbitrary semaphore values (do a dec by <x>, wait-for-zero, inc by <x> in one atomic operation).

  Both changes are from Nick Piggin; the patch is the result of a different split of the individual changes.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: sem preempt improve | Nick Piggin | 2009-12-16 | 1 | -15/+23
  The strange SysV semaphore wakeup scheme has a kind of busy-wait lock involved, which could deadlock if preemption is enabled during the "lock". It is an implementation detail (due to a spinlock being held) that this is actually the case.

  However, if "spinlocks" are made preemptible, or if the sem lock is changed to a sleeping lock for example, then the wakeup would become buggy. So this might be a bugfix for -rt kernels.

  Imagine the waker being preempted by the wakee and never clearing IN_WAKEUP -- if the wakee has higher RT priority, then there is a priority inversion deadlock. Even if there is not a priority inversion to cause a deadlock, there is still time wasted spinning.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: sem use list operations | Nick Piggin | 2009-12-16 | 1 | -44/+31
  Replace the handcoded list operations in update_queue() with the standard list_for_each_entry macros.

  list_for_each_entry_safe() must be used, because list entries can disappear immediately upon the wakeup event.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
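  The shape of the conversion, sketched (helper names assumed, not the full update_queue()):

      struct sem_queue *q, *tmp;

      /*
       * The _safe variant is required: as soon as a task is woken it may
       * remove and free its sem_queue, so the next pointer must be cached
       * before q is handed over.
       */
      list_for_each_entry_safe(q, tmp, &sma->sem_pending, list) {
              if (operation_can_complete(q)) {        /* assumed helper */
                      list_del(&q->list);
                      wake_up_queued_task(q);         /* q not touched afterwards */
              }
      }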
* ipc/sem.c: sem optimise undo list search | Nick Piggin | 2009-12-16 | 1 | -6/+20
  Around a month ago, there was some discussion about an improvement of the SysV sem algorithm: most (at least: some important) users only use simple semaphore operations, therefore it's worthwhile to optimize this use case.

  This patch: Move the last looked-up sem_undo struct to the head of the task's undo list. Attempt to move common entries to the front of the list so search time is reduced. This reduces lookup_undo on oprofile of a problematic SAP workload by 30% (see patch 4 for a description of the SAP workload).

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
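  The heuristic is plain move-to-front on the task's undo list; a sketch (the linear-search helper and field names are assumed):

      static struct sem_undo *lookup_undo(struct sem_undo_list *ulp, int semid)
      {
              struct sem_undo *un = __lookup_undo(ulp, semid);  /* linear search */

              if (un) {
                      /*
                       * Move the hit to the front so the next lookup for the
                       * same semaphore array terminates after one step.
                       */
                      list_del_rcu(&un->list_proc);
                      list_add_rcu(&un->list_proc, &ulp->list_proc);
              }
              return un;
      }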
* ipc ns: fix memory leak (idr) | Serge E. Hallyn | 2009-12-16 | 1 | -0/+1
  We have apparently had a memory leak since 7ca7e564e049d8b350ec9d958ff25eaa24226352 "ipc: store ipcs into IDRs" in 2007. The idrs, of which three exist for each ipc namespace, are never freed. This patch simply frees them when the ipcns is freed.

  I don't believe any idr_remove() are done from rcu (and could therefore be delayed until after this idr_destroy()), so the patch should be safe. Some quick testing showed no harm, and the memory leak fixed.

  Caught by kmemleak.

  Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
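  The fix boils down to one idr_destroy() per idr when the namespace goes away; a sketch (the helper name and the exact ipc_ids field layout are assumed):

      /* Called when the last reference to the ipc namespace is dropped. */
      static void free_ipc_idrs(struct ipc_namespace *ns)
      {
              idr_destroy(&ns->ids[IPC_SEM_IDS].ipcs_idr);
              idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr);
              idr_destroy(&ns->ids[IPC_SHM_IDS].ipcs_idr);
      }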
* rculist: use list_entry_rcu in places where it's appropriate | Jiri Pirko | 2009-04-15 | 1 | -2/+2
  Use the previously introduced list_entry_rcu instead of an open-coded list_entry + rcu_dereference combination.

  Signed-off-by: Jiri Pirko <jpirko@redhat.com>
  Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: dipankar@in.ibm.com
  LKML-Reference: <20090414181715.GA3634@psychotron.englab.brq.redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
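  The conversion pattern, sketched for the semaphore undo list:

      /* Before: open-coded combination. */
      un = list_entry(rcu_dereference(ulp->list_proc.next),
                      struct sem_undo, list_proc);

      /* After: list_entry_rcu() does the rcu_dereference() internally. */
      un = list_entry_rcu(ulp->list_proc.next, struct sem_undo, list_proc);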
* [CVE-2009-0029] System call wrappers part 25 | Heiko Carstens | 2009-01-14 | 1 | -4/+5
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [CVE-2009-0029] System call wrapper special cases | Heiko Carstens | 2009-01-14 | 1 | -1/+8
  System calls with an unsigned long long argument can't be converted with the standard wrappers, since that would include a cast to long, which in turn means that we would lose the upper 32 bits on 32-bit architectures. Also, semctl can't use the standard wrapper, since it has a 'union' parameter.

  So we handle them as special cases and add some extra wrappers instead.

  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* ipc: do not goto to the next line | Denis V. Lunev | 2009-01-06 | 1 | -1/+0
  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Reviewed-by: WANG Cong <wangcong@zeuux.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: update my address | Alan Cox | 2009-01-05 | 1 | -1/+1
  Signed-off-by: Alan Cox <alan@redhat.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: make free_un() static | Adrian Bunk | 2008-10-16 | 1 | -1/+1
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: rewrite undo list locking | Manfred Spraul | 2008-07-25 | 1 | -54/+93
  The attached patch:
  - reverses the locking order of ulp->lock and sem_lock: previously, it was first ulp->lock, then inside sem_lock; now it's the other way around.
  - converts the undo structure to rcu.

  Benefits:
  - With the old locking order, IPC_RMID could not kfree the undo structures. The stale entries remained in the linked lists and were released later.
  - The patch fixes a race in semtimedop(): if both IPC_RMID and a semget() that recreates exactly the same id happen between find_alloc_undo() and sem_lock, then semtimedop() would access already kfree'd memory.

  [akpm@linux-foundation.org: coding-style fixes]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Reviewed-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: convert sem_array.sem_pending to struct list_head | Manfred Spraul | 2008-07-25 | 1 | -57/+35
  sem_array.sem_pending is a doubly linked list; the attached patch converts it to struct list_head.

  [akpm@linux-foundation.org: coding-style fixes]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Reviewed-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: remove unused entries from struct sem_queue | Manfred Spraul | 2008-07-25 | 1 | -2/+0
  sem_queue.sma and sem_queue.id were never used; the attached patch removes them.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Reviewed-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc/sem.c: convert undo structures to struct list_head | Manfred Spraul | 2008-07-25 | 1 | -74/+89
  The undo structures contain two linked lists; the attached patch replaces them with generic struct list_head lists.

  [akpm@linux-foundation.org: coding-style fixes]
  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc: sysvsem: implement sys_unshare(CLONE_SYSVSEM) | Manfred Spraul | 2008-04-29 | 1 | -0/+1
  sys_unshare(CLONE_NEWIPC) doesn't handle the undo lists properly; this can cause a kernel memory corruption. CLONE_NEWIPC must detach from the existing undo lists.

  Fix, part 1: add support for sys_unshare(CLONE_SYSVSEM).

  The original reason not to support it was the potential (inevitable?) confusion due to the fact that sys_unshare(CLONE_SYSVSEM) has the inverse meaning of clone(CLONE_SYSVSEM). Our two most reasonable options then appear to be (1) fully support CLONE_SYSVSEM, or (2) continue to refuse explicit CLONE_SYSVSEM, but always do it anyway on unshare(CLONE_SYSVSEM). This patch does (1).

  Changelog:
  Apr 16: SEH: switch to Manfred's alternative patch, which removes the unshare_semundo() function that always refused CLONE_SYSVSEM.

  Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
  Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
  Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: Pavel Emelyanov <xemul@openvz.org>
  Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
  Cc: Pierre Peiffer <peifferp@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: consolidate all xxxctl_down() functions | Pierre Peiffer | 2008-04-29 | 1 | -38/+4
  semctl_down(), msgctl_down() and shmctl_down() are used to handle the same set of commands for each kind of IPC. They all start by doing the same job (they retrieve the ipc and do some permission checks) before handling the commands on their own.

  This patch proposes to consolidate this by moving these same pieces of code into one common function called ipcctl_pre_down(). It simplifies these xxxctl_down() functions a little and improves maintainability.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: introduce ipc_update_perm() | Pierre Peiffer | 2008-04-29 | 1 | -4/+1
  The IPC_SET command performs the same permission setting for all IPCs. This patch introduces a common ipc_update_perm() function to update these permissions and makes use of it for all IPCs.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: get rid of the use of the *_setbuf structure | Pierre Peiffer | 2008-04-29 | 1 | -26/+14
  All IPCs make use of an intermediate *_setbuf structure to handle the IPC_SET command. This is not really needed and, moreover, it complicates the code a little bit.

  This patch gets rid of its use and uses the semid64_ds/msgid64_ds/shmid64_ds structure directly. In addition to removing one structure declaration, it also simplifies and improves a little bit the common 64-bit path.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC/semaphores: remove one unused parameter from semctl_down() | Pierre Peiffer | 2008-04-29 | 1 | -3/+3
  semctl_down() takes one unused parameter: semnum. This patch proposes to get rid of it.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC/semaphores: move the rwmutex handling inside semctl_down | Pierre Peiffer | 2008-04-29 | 1 | -11/+13
  semctl_down() is called with the rwmutex (the one which protects the list of ipcs) taken in write mode. This patch moves the taking of this rwmutex in write mode inside semctl_down().

  This has the advantages of reducing a little bit the window during which this rwmutex is taken, of clarifying sys_semctl(), and finally of having a behaviour coherent with [shm|msg]ctl_down().

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC/semaphores: code factorisation | Pierre Peiffer | 2008-04-29 | 1 | -30/+31
  Trivial patch which adds some small locking functions and makes use of them to factorize some parts of the code and to make it cleaner.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: use ipc_buildid() directly from ipc_addid() | Pierre Peiffer | 2008-04-29 | 1 | -2/+0
  Continuing to consolidate the IPC code a little, each id can be built directly in ipc_addid() instead of having it built by each caller of ipc_addid().

  I also remove shm_addid() in order to have, as much as possible, the same code for shm/sem/msg.

  [akpm@linux-foundation.org: coding-style fixes]
  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: consolidate sem_exit_ns(), msg_exit_ns() and shm_exit_ns() | Pierre Peiffer | 2008-02-08 | 1 | -22/+5
  sem_exit_ns(), msg_exit_ns() and shm_exit_ns() are all called when an ipc_namespace is released, to free all ipcs of each type. But in fact they do the same thing: they loop over all ipcs and free them individually by calling a specific routine.

  This patch proposes to consolidate this by introducing a common function, free_ipcs(), that does the job. The specific routine to call on each individual ipc is passed as a parameter. For this, these ipc-specific 'free' routines are reworked to take a generic 'struct ipc_perm' as parameter.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Cedric Le Goater <clg@fr.ibm.com>
  Cc: Pavel Emelyanov <xemul@openvz.org>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: make struct ipc_ids static in ipc_namespace | Pierre Peiffer | 2008-02-08 | 1 | -22/+4
  Each ipc_namespace contains a table of 3 pointers to struct ipc_ids (3 for msg, sem and shm; the structure used to store all ipcs). These 'struct ipc_ids' are dynamically allocated for each ipc_namespace, as is the ipc_namespace itself (for the init namespace, they are initialized with pointers to static variables instead).

  It is so for historical reasons: in fact, before the use of idr to store the ipcs, the ipcs were stored in tables of variable length, depending on the maximum number of ipcs allowed. Now these 'struct ipc_ids' have a fixed size. As they are allocated in any case for each new ipc_namespace, there is no gain of memory in having them allocated separately from the struct ipc_namespace.

  This patch proposes to make this table static in the struct ipc_namespace. Thus, we can allocate everything at once and get rid of all the code needed to allocate and free these ipc_ids separately.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Acked-by: Cedric Le Goater <clg@fr.ibm.com>
  Cc: Pavel Emelyanov <xemul@openvz.org>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC/semaphores: consolidate SEM_STAT and IPC_STAT commands | Pierre Peiffer | 2008-02-08 | 1 | -22/+16
  These commands (SEM_STAT and IPC_STAT) do much the same thing; only the meaning of the id given as input and the return value differ. However, for the semaphores they are handled in two different places (two different functions).

  This patch consolidates this, for clarification, by handling both commands in the same place in semctl_nolock(). It also removes one unused parameter of this function.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* namespaces: move the IPC namespace under IPC_NS option | Pavel Emelyanov | 2008-02-08 | 1 | -0/+3
  Currently the IPC namespace management code is spread over the ipc/*.c files. I moved this code into the ipc/namespace.c file, which is compiled out when needed.

  The linux/ipc_namespace.h file is used to store the prototypes of the functions in namespace.c and the stubs for the NAMESPACES=n case. This is done so because the stub for copy_ipc_namespace requires knowledge of the CLONE_NEWIPC flag, which is in sched.h. But the linux/ipc.h file itself is included into many .c files via the sys.h->sem.h sequence, so adding sched.h into it would make all these .c files depend on sched.h, which is not that good. On the other hand, the knowledge about the namespace stuff is required in only 4 .c files.

  Besides, this patch compiles out some auxiliary functions from the ipc/sem.c, msg.c and shm.c files. It turned out that moving these functions into namespace.c is not that easy, because they use many other calls and macros from the original file. Moving them would make this patch complicated. On the other hand, all these functions can be consolidated, so I will send a separate patch doing this a bit later.

  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Cc: Cedric Le Goater <clg@fr.ibm.com>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: Herbert Poetzl <herbert@13thfloor.at>
  Cc: Kirill Korotaev <dev@sw.ru>
  Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: fix error check in all new xxx_lock() and xxx_exit_ns() functions | Pierre Peiffer | 2008-02-06 | 1 | -3/+14
  In the new implementation of the [sem|shm|msg]_lock[_check]() routines, we use the return value of ipc_lock() in container_of() without any check. But ipc_lock() may return an errcode. The use of this errcode in container_of() may alter it, and we don't want that. And in xxx_exit_ns, the pointer returned by idr_find is of type 'struct kern_ipc_perm'...

  Today, the code works as is because the member used in these container_of() calls is the first member of its container (offset == 0), so the errcode isn't changed. But in the general case we can't count on this assumption, and this may later lead to a real bug if we don't correct it.

  Again, the proposed solution is simple and correct. But, as pointed out by Nadia, with this solution the same check will be done several times (in all sub-callers...), which is not very optimal.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: fix error case when idr-cache is empty in ipcget() | Pierre Peiffer | 2007-10-19 | 1 | -2/+2
  With the use of idr to store the ipcs, the case where the idr cache is empty when idr_get_new() is called (this may happen even if we call idr_pre_get() before) is not well handled: it lets semget()/shmget()/msgget() return ENOSPC when this cache is empty, which (1) does not reflect the facts and (2) does not conform to the man pages. This patch fixes this by retrying the whole allocation process in this case.

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* IPC: cleanup some code and wrong comments about semundo list management | Pierre Peiffer | 2007-10-19 | 1 | -40/+6
  Some comments about sem_undo_list seem wrong. Take the comment above unlock_semundo: "... If task2 now exits before task1 releases the lock (by calling unlock_semundo()), then task1 will never call spin_unlock() ...". This is just wrong; I see no reason why task1 would not call spin_unlock. The rest of this comment is also wrong, unless I am missing something (of course).

  Finally, the (un)lock_semundo functions are useless, so remove them for simplification (this avoids a useless if statement).

  Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
  Cc: Nadia Derbey <Nadia.Derbey@bull.net>
  Acked-by: Serge Hallyn <serue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc: remove unneeded parameters | Nadia Derbey | 2007-10-19 | 1 | -6/+4
  Remove the unneeded parameters from the ipc_checkid() and ipc_buildid() interfaces.

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* fix idr_find() locking | Nadia Derbey | 2007-10-19 | 1 | -14/+30
  This is a patch that fixes the way idr_find() used to be called in ipc_lock(): in all the paths that don't imply an update of the ipcs idr, it was called without the idr tree being locked.

  The changes are:
  - In ipc_ids, the mutex has been changed into a reader/writer semaphore.
  - ipc_lock() now takes the mutex as a reader during the idr_find().
  - A new routine, ipc_lock_down(), has been defined: it doesn't take the mutex, assuming that it is being held by the caller. This is the routine that is now called in all the update paths.

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Acked-by: Jarek Poplawski <jarkao2@o2.pl>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
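  The resulting pattern for a plain (non-updating) lookup, sketched (the rw_mutex and ipcs_idr names are assumed; locking and validation of the found object are omitted):

      struct kern_ipc_perm *out;

      down_read(&ids->rw_mutex);      /* readers only guard the idr tree */
      out = idr_find(&ids->ipcs_idr, lid);
      /* ... lock and re-check *out ... */
      up_read(&ids->rw_mutex);

      /*
       * Update paths (semget, IPC_RMID, ...) already hold rw_mutex for
       * writing and call the _down() variant, which skips the read lock.
       */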
* ipc: fix wrong comments | Nadia Derbey | 2007-10-19 | 1 | -0/+14
  This patch fixes the wrong / obsolete comments in the ipc code. It also adds a missing lock around ipc_get_maxid() in shm_get_stat().

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Storing ipcs into IDRs | Nadia Derbey | 2007-10-19 | 1 | -6/+17
  This patch converts casts of struct kern_ipc_perm to
  - struct msg_queue
  - struct sem_array
  - struct shmid_kernel
  into the equivalent container_of() macro.

  It improves code maintenance because the code need not change if kern_ipc_perm is no longer at the beginning of the containing struct.

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
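  The conversion pattern, sketched for the semaphore case:

      /* Before: relies on sem_perm being the first member of sem_array. */
      struct sem_array *sma = (struct sem_array *)ipcp;

      /* After: correct no matter where sem_perm sits inside the struct. */
      struct sem_array *sma = container_of(ipcp, struct sem_array, sem_perm);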
* ipc: integrate ipc_checkid() into ipc_lock() | Nadia Derbey | 2007-10-19 | 1 | -37/+33
  This patch introduces a new ipc_lock_check() routine interface: each time ipc_checkid() is called, this is done after calling ipc_lock(); ipc_checkid() is now called from inside ipc_lock_check().

  [akpm@linux-foundation.org: build fix]
  [akpm@linux-foundation.org: fix RCU locking]
  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ipc: unify the syscalls code | Nadia Derbey | 2007-10-19 | 1 | -47/+29
  This patch introduces a change into the sys_msgget(), sys_semget() and sys_shmget() routines: they now share common code, which is better for maintainability.

  Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>