path: root/sys/netncp/ncp_sock.c
Commit history for sys/netncp/ncp_sock.c, most recent first. Each entry shows the commit message, followed by the committer, date, number of files changed, and lines removed/added.
* Mechanically substitute flags from historic mbuf allocator with
  malloc(9) flags within sys.  (glebius, 2012-12-05; 1 file changed, -1/+1)
  Exceptions:
  - sys/contrib not touched
  - sys/mbuf.h edited manually
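  As a concrete illustration, a minimal before/after sketch of that mechanical
  substitution at a typical mbuf allocation site (the call site shown is
  hypothetical, not the exact line changed in ncp_sock.c):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/malloc.h>
      #include <sys/mbuf.h>

      static struct mbuf *
      example_alloc(void)
      {
              /* Before: m_gethdr(M_DONTWAIT, MT_DATA) -- historic mbuf flag. */
              /* After: the equivalent malloc(9)-style flag.                  */
              return (m_gethdr(M_NOWAIT, MT_DATA));
      }
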
* Switch to our preferred 2-clause BSD license.  (joel, 2010-04-07; 1 file changed, -6/+0)
  Approved by: bp
* Retire the MALLOC and FREE macros.  They are an abomination unto style(9).  (des, 2008-10-23; 1 file changed, -2/+2)
  MFC after: 3 months
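  For context, a sketch of what retiring those macros looks like at a call
  site (the malloc type and structure here are made up for illustration):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>
      #include <sys/malloc.h>

      static MALLOC_DEFINE(M_EXAMPLE, "example", "example allocations");

      struct example { int value; };

      static void
      example_cycle(void)
      {
              struct example *p;

              /*
               * Before: MALLOC(p, struct example *, sizeof(*p), M_EXAMPLE, M_WAITOK);
               *         FREE(p, M_EXAMPLE);
               * After: plain malloc(9)/free(9) calls.
               */
              p = malloc(sizeof(*p), M_EXAMPLE, M_WAITOK | M_ZERO);
              p->value = 1;
              free(p, M_EXAMPLE);
      }
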
* Replaced the misleading uses of a historical artefact M_TRYWAIT with M_WAIT.  (ru, 2008-03-25; 1 file changed, -1/+1)
  Removed dead code that assumed that M_TRYWAIT can return NULL; it's not
  true since the advent of MBUMA.
  Reviewed by: arch
  There are ongoing disputes as to whether we want to switch to directly
  using UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
* Refactor select to reduce contention and hide internal implementation
  details from consumers.  (jeff, 2007-12-16; 1 file changed, -105/+0)
  - Track individual selectors on a per-descriptor basis such that there are
    no longer collisions and, after sleeping for events, only those
    descriptors which triggered events must be rescanned.
  - Protect the selinfo (per descriptor) structure with a mtx pool mutex.
    mtx pool mutexes were chosen to preserve API compatibility with existing
    code which does nothing but bzero() to set up selinfo structures.
  - Use a per-thread wait channel rather than a global wait channel.
  - Hide select implementation details in a seltd structure which is opaque
    to the rest of the kernel.
  - Provide a 'selsocket' interface for those kernel consumers who wish to
    select on a socket when they have no fd, so they no longer have to be
    aware of select implementation details (sketched below).
  Tested by: kris
  Reviewed on: arch
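  A hedged sketch of the consumer-facing selsocket() interface from the point
  of view of code like netncp, assuming the prototype
  int selsocket(struct socket *, int, struct timeval *, struct thread *):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/time.h>
      #include <sys/poll.h>
      #include <sys/proc.h>
      #include <sys/socket.h>
      #include <sys/socketvar.h>

      /*
       * Wait up to *tv for the socket to become readable without having a
       * file descriptor for it; returns 0 on readiness, EWOULDBLOCK on
       * timeout, or another errno.
       */
      static int
      wait_for_input(struct socket *so, struct timeval *tv, struct thread *td)
      {
              return (selsocket(so, POLLIN | POLLRDNORM, tv, td));
      }
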
* Commit 14/14 of sched_lock decomposition.  (jeff, 2007-06-05; 1 file changed, -11/+11)
  - Use thread_lock() rather than sched_lock for per-thread scheduling
    synchronization.
  - Use the per-process spinlock rather than the sched_lock for per-process
    scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
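  A minimal sketch of the conversion pattern this describes (the flag
  manipulation is illustrative only; the real change in ncp_sock.c covers the
  locking around its sleep and signal handling):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/proc.h>

      static void
      example_per_thread_sync(struct thread *td)
      {
              /* Before: mtx_lock_spin(&sched_lock); ... mtx_unlock_spin(&sched_lock); */
              thread_lock(td);
              td->td_flags &= ~TDF_TIMEOUT;   /* illustrative per-thread state */
              thread_unlock(td);
      }
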
* Use pause() rather than tsleep() on stack variables and function pointers.  (jhb, 2007-02-27; 1 file changed, -2/+1)
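  The point, sketched: tsleep(9) on the address of a stack variable is a
  sleep that nothing can ever wake early, so pause(9) expresses the same
  timed delay directly (the wmesg string and delay here are illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kernel.h>

      static void
      short_delay(void)
      {
              /*
               * Before:
               *      int dummy;
               *      tsleep(&dummy, PWAIT, "ncpdly", hz);
               * After:
               */
              pause("ncpdly", hz);
      }
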
* - Fix ncp_poll() to not panic if the socket doesn't have any pending data.  (jhb, 2006-08-03; 1 file changed, -7/+27)
    We have to adjust curthread's state enough so that it appears to be in a
    poll(2) or select(2) call so that selrecord() will work, and then tear
    down that state after calling sopoll().
  - Fix some minor nits in nearby ncp_sock_rselect() and in the identical
    nbssn_rselect() function in the netsmb code:
    - Don't call nb_poll()/ncp_poll() now that ncp_poll() already fakes up
      poll(2) state, since the rselect() functions already do that.  Just
      invoke sopoll() directly.
    - To make things slightly more intuitive, store the results of sopoll()
      in a new 'revents' variable rather than 'error', since that's what
      sopoll() actually returns.
    - If the requested timeout time has been exceeded by the time we get
      ready to block, then return EWOULDBLOCK rather than 0 to signal a
      timeout, as this is what the calling code expects.
  Tested by: Eric Christeson <eric.j.christeson AT gmail> (1)
  MFC after: 1 week
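  A condensed sketch of the rselect-style loop those nits describe: sopoll()
  returns an events mask (hence 'revents' rather than 'error'), and an already
  expired timeout yields EWOULDBLOCK.  The real functions block via
  selrecord()-based wakeups; the fixed-interval pause() here is a
  simplification:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/errno.h>
      #include <sys/kernel.h>
      #include <sys/proc.h>
      #include <sys/socket.h>
      #include <sys/socketvar.h>

      static int
      example_rselect(struct socket *so, int events, int timo, struct thread *td)
      {
              int revents, step;

              step = MAX(hz / 10, 1);                 /* illustrative poll interval */
              while (timo > 0) {
                      revents = sopoll(so, events, td->td_ucred, td);
                      if (revents != 0)
                              return (0);             /* an event is pending */
                      pause("ncpsel", step);
                      timo -= step;
              }
              return (EWOULDBLOCK);                   /* timed out before an event */
      }
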
* soreceive_generic(), and sopoll_generic(). Add new functions sosend(),  (rwatson, 2006-07-24; 1 file changed, -9/+8)
  soreceive(), and sopoll(), which are wrappers for pru_sosend,
  pru_soreceive, and pru_sopoll, and are now used universally by socket
  consumers rather than either directly invoking the old so*() functions or
  directly invoking the protocol switch method (about an even split prior to
  this commit).
  This completes an architectural change that was begun in 1996 to permit
  protocols to provide substitute implementations, as now used by UDP.
  Consumers now uniformly invoke sosend(), soreceive(), and sopoll() to
  perform these operations on sockets -- in particular, distributed file
  systems and socket system calls.
  Architectural head nod: sam, gnn, wollman
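  To illustrate the consumer side of that change, a sketch of sending an
  already-built mbuf chain through the sosend() wrapper instead of reaching
  into the protocol switch (the helper name and arguments are illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>
      #include <sys/proc.h>
      #include <sys/socket.h>
      #include <sys/socketvar.h>

      static int
      send_request(struct socket *so, struct mbuf *top, struct thread *td)
      {
              /*
               * Before (direct protocol switch call):
               *      (*so->so_proto->pr_usrreqs->pru_sosend)(so, NULL, NULL,
               *          top, NULL, 0, td);
               * After: the uniform wrapper; sosend() consumes 'top'.
               */
              return (sosend(so, NULL, NULL, top, NULL, 0, td));
      }
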
* /* -> /*- for license, minor formatting changes  (imp, 2005-01-07; 1 file changed, -1/+1)
* Use __FBSDID().  (obrien, 2003-06-11; 1 file changed, -2/+3)
* Catch up with KSE changes.  (fjoe, 2003-02-26; 1 file changed, -65/+79)
  Reviewed by: tjr
* Back out M_* changes, per decision of the TRB.  (imp, 2003-02-19; 1 file changed, -1/+1)
  Approved by: trb
* Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.  (alfred, 2003-01-21; 1 file changed, -1/+1)
  Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
* Back out my last commit of locking down a socket; it conflicts with hsu's work.  (tanimura, 2002-05-31; 1 file changed, -4/+1)
  Requested by: hsu
* CURSIG() is not a macro, so rename it cursig().  (julian, 2002-05-29; 1 file changed, -1/+1)
  Obtained from: KSE tree
* Lock down a socket, milestone 1.  (tanimura, 2002-05-20; 1 file changed, -1/+4)
  o Add a mutex (sb_mtx) to struct sockbuf.  This protects the data in a
    socket buffer.  The mutex in the receive buffer also protects the data
    in struct socket.
  o Determine the lock strategy for each member in struct socket.
  o Lock down the following members:
    - so_count
    - so_options
    - so_linger
    - so_state
  o Remove *_locked() socket APIs.  Make the following socket APIs touching
    the members above now require a locked socket:
    - sodisconnect()
    - soisconnected()
    - soisconnecting()
    - soisdisconnected()
    - soisdisconnecting()
    - sofree()
    - soref()
    - sorele()
    - sorwakeup()
    - sotryfree()
    - sowakeup()
    - sowwakeup()
  Reviewed by: alfred
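  A sketch of the caller-side discipline this milestone establishes around
  socket reference counting, using the SOCK_LOCK()/SOCK_UNLOCK() macro
  spellings from later socketvar.h (which wrap the receive buffer mutex) as
  an assumption about the intended usage:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/socket.h>
      #include <sys/socketvar.h>

      static void
      hold_socket(struct socket *so)
      {
              SOCK_LOCK(so);
              soref(so);              /* so_count is now lock-protected */
              SOCK_UNLOCK(so);
      }

      static void
      drop_socket(struct socket *so)
      {
              SOCK_LOCK(so);
              sorele(so);             /* consumes the lock; may free 'so' */
      }
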
* Include sys/lock.h and sys/mutex.h so that this compiles.  (jhb, 2001-05-15; 1 file changed, -0/+2)
* Back out scanning file descriptors while holding a process lock.  (tanimura, 2001-05-15; 1 file changed, -2/+16)
  selrecord() requires the allproc sx in pfind(), resulting in a lock order
  reversal between allproc and a process lock.
* - Convert msleep(9) in select(2) and poll(2) to cv_*wait*(9).  (tanimura, 2001-05-14; 1 file changed, -9/+13)
  - Since polling should not involve sleeping, keep holding a process lock
    while scanning file descriptors.
  - Hold a reference to every file descriptor prior to entering the polling
    loop, in order to avoid a lock order reversal between lockmgr and p_mtx
    upon calling fdrop() in fo_poll().
  (NOTE: this work has not been done for netncp and netsmb yet because a
  socket itself has no reference counts.)
  Reviewed by: jhb
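  A sketch of the hold-then-poll pattern from the second bullet, assuming the
  caller has already resolved a struct file pointer (an illustrative helper,
  not the actual poll(2) implementation):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/file.h>
      #include <sys/proc.h>

      static int
      poll_one_file(struct file *fp, int events, struct thread *td)
      {
              int revents;

              fhold(fp);                      /* pin the file across fo_poll() */
              revents = fo_poll(fp, events, td->td_ucred, td);
              fdrop(fp, td);                  /* may be the final reference */
              return (revents);
      }
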
* Major update of NCP requester:  (bp, 2001-03-10; 1 file changed, -29/+29)
  Use mchain API to work with mbuf chains.
  Do not depend on INET and IPX options.
  Allocate the ncp_rq structure dynamically to prevent possible stack
  overflows.
  Let ncp_request() dispose of the control structure if the request failed.
  Move all NCP wrappers to the ncp_ncp.c file and all NCP request processing
  functions to the ncp_rq.c file.
  Improve reconnection logic.
  Misc style fixes.
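  For reference, a hedged sketch of building a small request with the mchain
  API (prototypes as in sys/mchain.h; the two fields written here are made up
  and are not an actual NCP packet layout):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>
      #include <sys/mchain.h>

      static int
      build_example_request(uint32_t sequence, struct mbuf **mp)
      {
              struct mbchain mb;
              int error;

              error = mb_init(&mb);
              if (error)
                      return (error);
              error = mb_put_uint32le(&mb, sequence); /* hypothetical field */
              if (error == 0)
                      error = mb_put_uint8(&mb, 0);   /* hypothetical flags */
              if (error) {
                      mb_done(&mb);
                      return (error);
              }
              *mp = mb_detach(&mb);   /* caller now owns the chain */
              mb_done(&mb);
              return (0);
      }
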
* * Rename M_WAIT mbuf subsystem flag to M_TRYWAIT.  (bmilekic, 2000-12-21; 1 file changed, -1/+1)
    This is because calls with M_WAIT (now M_TRYWAIT) may not wait forever
    when nothing is available for allocation, and may end up returning NULL.
    Hopefully we now communicate more of the right thing to developers and
    make it very clear that it's necessary to check whether calls with
    M_(TRY)WAIT also resulted in a failed allocation.  M_TRYWAIT basically
    means "try harder, block if necessary, but don't necessarily wait
    forever."  The time spent blocking is tunable with the
    kern.ipc.mbuf_wait sysctl.  M_WAIT is now deprecated but still defined
    for the next little while.
  * Fix a typo in a comment in mbuf.h.
  * Fix some code that was actually passing the mbuf subsystem's M_WAIT to
    malloc().  Made it pass M_WAITOK instead.  If we were ever to redefine
    the value of the M_WAIT flag, this could have become a big problem.
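  A sketch of the defensive pattern that paragraph asks for: even a "waiting"
  mbuf allocation gets its result checked (the modern spelling M_WAITOK is
  shown; the helper and its empty keep-alive packet are purely illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/errno.h>
      #include <sys/malloc.h>
      #include <sys/mbuf.h>
      #include <sys/proc.h>
      #include <sys/socket.h>
      #include <sys/socketvar.h>

      static int
      send_keepalive(struct socket *so, struct thread *td)
      {
              struct mbuf *m;

              m = m_gethdr(M_WAITOK, MT_DATA);        /* historically M_WAIT/M_TRYWAIT */
              if (m == NULL)
                      return (ENOBUFS);       /* a waiting allocation could still fail */
              m->m_pkthdr.len = m->m_len = 0;
              return (sosend(so, NULL, NULL, m, NULL, 0, td));
      }
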
* Remove unnecessary includes.  (bp, 1999-10-12; 1 file changed, -3/+0)
* Import kernel part of ncplib: netncp and nwfs  (bp, 1999-10-02; 1 file changed, -0/+442)
  Reviewed by: msmith, peter
  Obtained from: ncplib