path: root/sys/kern/uipc_mbuf.c
* remove now invalid check from m_sanity (kmacy, 2007-04-14; 1 file, -10/+5)
  panic on m_sanity check failure with INVARIANTS
* Unbreak writes of 0 bytes. (andre, 2007-01-22; 1 file, -2/+5)
  Zero byte writes happen when only ancillary control data but no
  payload data is passed. Change m_uiotombuf() to return at least one
  empty mbuf if the requested length was zero. Add comment to
  sosend_dgram() and sosend_generic().
  Diagnosed by: jhb
  Regression test by: rwatson
  Pointy hat to: andre
* The prepend function did not handle non-pkthdr mbufs correctly. (rrs, 2006-12-21; 1 file, -2/+7)
  It always called MH_ALIGN for small lengths being prepended (less
  than MHLEN). This meant that if you did a prepend on a non-M_PKTHDR
  mbuf the system would panic with the KASSERT in MH_ALIGN. Instead we
  are now aware of this and do an MH_ALIGN or M_ALIGN as appropriate.
  Reviewed by: andre
  Approved by: gnn
* Rename m_getm() to m_getm2() and rewrite it to allocate up to page
  sized mbuf clusters. (andre, 2006-11-02; 1 file, -86/+89)
  Add a flags parameter to accept M_PKTHDR and M_EOR mbuf chain flags.
  Provide a compatibility macro for m_getm() calling m_getm2() with
  M_PKTHDR set.
  Rewrite m_uiotombuf() to use m_getm2() for mbuf allocation and do
  the uiomove() in a tight loop over the mbuf chain. Add a flags
  parameter to accept mbuf flags to be passed to m_getm2(). Adjust all
  callers for the extra parameter.
  Sponsored by: TCP/IP Optimization Fundraise 2005
  MFC after: 3 months
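  A minimal usage sketch of the rewritten allocator, under the
  signature described above (the wrapper name is illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Allocate a pkthdr-led chain with room for 'len' bytes;
       * m_getm2() backs the bulk of the space with page-sized
       * mbuf clusters. Returns NULL on allocation failure. */
      static struct mbuf *
      alloc_chain(int len)
      {
              return (m_getm2(NULL, len, M_DONTWAIT, MT_DATA, M_PKTHDR));
      }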
* Complete break-out of sys/sys/mac.h into
  sys/security/mac/mac_framework.h (rwatson, 2006-10-22; 1 file, -1/+2)
  begun with a repo-copy of mac.h to mac_framework.h. sys/mac.h now
  contains the userspace and user<->kernel API and definitions, with
  all in-kernel interfaces moved to mac_framework.h, which is now
  included across most of the kernel instead.
  This change is the first step in a larger cleanup and sweep of MAC
  Framework interfaces in the kernel, and will not be MFC'd.
  Obtained from: TrustedBSD Project
  Sponsored by: SPARTA
* atomic_fetchadd_int is used by mb_free_ext(), but it returns the
  previous value that the add affected. (rrs, 2006-09-21; 1 file, -1/+1)
  In this case we are adding -1, after which we compare the result to
  '0'... to see if we free the mbuf... we should be comparing it to
  '1'... Note that this only matters under contention, since the first
  part of the comparison checks whether the count is '1'. So this bug
  would only crop up if two CPUs are trying to free the same mbuf
  refcount at the same time. This will happen in SCTP, but I doubt it
  can happen in TCP or UDP.
  PR: N/A
  Submitted by: rrs
  Reviewed by: gnn, sam
  Approved by: gnn, sam
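  A sketch of the corrected release path, modeled on the description
  above; the helper is illustrative, not the exact mb_free_ext() code:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <machine/atomic.h>

      /* Drop one reference; the caller may free the storage only if
       * it held the last one. atomic_fetchadd_int() returns the value
       * the counter had BEFORE the add, so the last owner sees 1,
       * not 0. */
      static int
      ext_release(volatile u_int *ref_cnt)
      {
              if (*ref_cnt == 1 ||            /* uncontended fast path */
                  atomic_fetchadd_int(ref_cnt, -1) == 1)
                      return (1);             /* we were the last holder */
              return (0);
      }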
* Move some functions and definitions from uipc_socket2.c to
  uipc_socket.c: (rwatson, 2006-06-10; 1 file, -1/+0)
  - Move sonewconn(), which creates new sockets for incoming
    connections on listen sockets, so that all socket allocation code
    is together in uipc_socket.c.
  - Move 'maxsockets' and associated sysctls to uipc_socket.c with the
    socket allocation code.
  - Move the kern.ipc sysctl node to uipc_socket.c, add a
    SYSCTL_DECL() for it to sysctl.h and remove lots of scattered
    implementations in various IPC modules.
  - Sort sodealloc() after soalloc() in uipc_socket.c for dependency
    order reasons. Staticize soalloc() and sodealloc() as they are now
    required only in uipc_socket.c, and are internal to the socket
    implementation.
  After this change, socket allocation and deallocation is entirely
  centralized in one file, and uipc_socket2.c consists entirely of
  socket buffer manipulation and default protocol switch functions.
  MFC after: 1 month
* promote fast ipsec's m_clone routine for public use; it is renamed
  m_unshare and the caller can now control how mbufs are allocated
  (sam, 2006-03-15; 1 file, -0/+153)
  Reviewed by: andre, luigi, mlaier
  MFC after: 1 week
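  A hedged usage sketch: before modifying payload in place (as fast
  ipsec does for encryption), un-share the chain's external storage
  (wrapper name is illustrative; m_unshare takes the chain and an
  allocation wait flag):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Make the chain safe for in-place modification; mbufs may be
       * replaced, so the caller must use the returned pointer. */
      static struct mbuf *
      make_writable(struct mbuf *m)
      {
              return (m_unshare(m, M_DONTWAIT));  /* NULL on failure */
      }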
* spell pdata correctly; we now will only dump maxlen bytes of each
  mbuf in the chain, instead of the entire mbuf... (jmg, 2006-03-14; 1 file, -1/+1)
  This should probably be reworked so that it prints at most maxlen
  bytes for the entire chain...
* For consistency's sake, use >= MINCLSIZE rather than > MINCLSIZE to
  determine whether or not to allocate a full mbuf cluster rather than
  just a plain mbuf when adding on additional mbufs in m_getm().
  (jhb, 2006-03-07; 1 file, -1/+1)
  In practice, there wasn't any resulting memory trashing since
  m_getm() doesn't ever allocate an mbuf with a packet header, and
  MINCLSIZE is the available payload in an mbuf with a header rather
  than the available payload in a plain mbuf.
  Discussed with: andre (lightly)
* The sysctls kern.ipc.[max_linkhdr|max_protohdr|max_hdr|max_datalen]
  can't be changed from userland. (andre, 2006-02-18; 1 file, -7/+8)
  Make them read-only and provide descriptions. kern.ipc.max_datalen
  must never be less than one byte. Enforce this with a panic in
  net_init_domain().
  Sponsored by: TCP/IP Optimization Fundraise 2005
  MFC after: 3 days
* Replace the 4k fixed-size jumbo mbuf clusters with PAGE_SIZE sized
  jumbo mbuf clusters. (andre, 2006-02-17; 1 file, -2/+2)
  To make the variable size clear they are named MJUMPAGESIZE. Having
  jumbo clusters with the native PAGE_SIZE is more useful than a fixed
  4k size according to the device driver writers using this API. The
  9k and 16k jumbo mbuf clusters remain unchanged.
  Requested by: glebius, gallatin
  Sponsored by: TCP/IP Optimization Fundraise 2005
  MFC after: 3 days
* When using m_dup(9) to copy more than MHLEN bytes of data, don't
  create an mbuf chain that starts with a cluster containing just
  MHLEN bytes. (emaste, 2005-12-14; 1 file, -1/+2)
  This happened because m_dup called m_get or m_getcl depending on the
  amount of data to copy, but then always set the size available in
  the first mbuf to MHLEN.
  Submitted by: Matt Koivisto <mkoivisto at sandvine dot com>
  Approved by: jmg
  Silence from: rwatson (mentor)
* Add an API for jumbo mbuf cluster allocation and also provide 4k
  clusters in addition to 9k and 16k ones. (andre, 2005-12-08; 1 file, -0/+3)

      struct mbuf *m_getjcl(int how, short type, int flags, int size)
      void        *m_cljget(struct mbuf *m, int how, int size)

  m_getjcl() returns an mbuf with a cluster of the specified size
  attached, like m_getcl() does for 2k clusters. m_cljget() is
  different from m_clget() as it can allocate clusters without
  attaching them to an mbuf. In that case the return value is the
  pointer to the cluster of the requested size. If an mbuf was
  specified, it gets the cluster attached to it and the return value
  can be safely ignored. For size both take MCLBYTES, MJUM4BYTES,
  MJUM9BYTES, MJUM16BYTES.
  Reviewed by: glebius
  Tested by: glebius
  Sponsored by: TCP/IP Optimization Fundraise 2005
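  A short sketch of both allocation styles using the prototypes above
  (wrapper names are illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Get a pkthdr mbuf backed by a 9k jumbo cluster in one call. */
      static struct mbuf *
      rx_big_buf(void)
      {
              return (m_getjcl(M_DONTWAIT, MT_DATA, M_PKTHDR, MJUM9BYTES));
      }

      /* Or take a bare cluster now and attach it to an mbuf later,
       * the pattern jumbo-frame drivers use to fill the buffer before
       * wrapping it. */
      static void *
      rx_bare_cluster(void)
      {
              return (m_cljget(NULL, M_DONTWAIT, MJUM9BYTES));
      }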
* Free only those mbuf+clusters back to the packet zone that were
  allocated from there. (andre, 2005-11-05; 1 file, -1/+3)
  All others get broken up and free'd individually to the mbuf and
  cluster zones. The packet zone is a secondary zone to the mbuf zone.
  There is currently a limitation in UMA which prevents decreasing the
  packet zone stock when the mbuf and cluster zones are drained and
  all their members are part of packets. When this is fixed this
  change may be reverted.
* Fix a logic error introduced with mandatory mbuf cluster
  refcounting and freeing of mbufs+clusters back to the packet zone.
  (andre, 2005-11-04; 1 file, -2/+4)
* Mandatory mbuf cluster reference counting and groundwork for UMA
  based jumbo 9k and jumbo 16k cluster support. (andre, 2005-11-02; 1 file, -91/+97)
  All mbufs with external storage attached are mandatorily reference
  counted. For clusters and jumbo clusters UMA provides the refcnt
  storage directly; it does not have to be separately allocated. Any
  other type of external storage gets its own refcnt allocated from an
  UMA mbuf refcnt zone instead of normal kernel malloc.
  The refcount API MEXT_ADD_REF() and MEXT_REM_REF() is no longer
  publicly accessible. The proper m_* functions have to be used.
  mb_ctor_clust() and mb_dtor_clust() both handle normal 2k as well as
  9k and 16k clusters.
  Clusters and jumbo clusters may be obtained without attaching them
  immediately to an mbuf. This is for high performance cluster
  allocation in network drivers where mbufs are attached after the
  cluster has been filled.
  Tested by: rwatson
  Sponsored by: TCP/IP Optimizations Fundraise 2005
* Changes and cleanups to m_sanity(): (andre, 2005-08-30; 1 file, -18/+17)
  o for() instead of while() looping over the mbuf chain
  o parens around all flag checks
  o more verbose function and purpose description
  o some more style changes
  Based on feedback from: sam
* Unbreak m_demote() and put back the 'all' flag. (andre, 2005-08-30; 1 file, -1/+3)
  Without it we cannot correctly test for m_nextpkt in an mbuf chain.
* o Remove the 'all' flag from m_demote(). Users can simply call it
    with m_demote(m->m_next) if they wish to start at the second mbuf
    in the chain. (andre, 2005-08-30; 1 file, -5/+3)
  o Test m_type with == instead of &.
  o Check m_nextpkt against NULL instead of implicit 0.
  Based on feedback from: sam
* Add m_copymdata(struct mbuf *m, struct mbuf *n, int off, int len,
  int prep, int how). (andre, 2005-08-29; 1 file, -0/+145)
  Copies the data portion of mbuf (chain) n starting from offset off
  for length len to mbuf (chain) m. Depending on prep the copied data
  will be appended or prepended. The function ensures that the mbuf
  (chain) m will be fully writeable by making real (not refcnt) copies
  of mbuf clusters. For the prepending the function returns a pointer
  to the new start of mbuf chain m and leaves as much leading space as
  possible in the new first mbuf.
  Reviewed by: glebius
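  A small sketch assuming, per the description above, that a non-zero
  prep means prepend and that the (possibly new) chain head is
  returned; the wrapper name is illustrative:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Prepend the first 'len' bytes of chain n onto chain m; the
       * copied region is guaranteed writable. */
      static struct mbuf *
      prepend_copy(struct mbuf *m, struct mbuf *n, int len)
      {
              return (m_copymdata(m, n, 0, len, 1, M_DONTWAIT));
      }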
* Add m_sanity(struct mbuf *m, int sanitize) to do some heavy sanity
  checking on mbufs and mbuf chains. (andre, 2005-08-29; 1 file, -0/+95)
  Set sanitize to 1 to garble illegal things and have them blow up
  later when used/accessed. m_sanity()'s main purpose is for
  KASSERT()s and debugging of non-kosher mbuf manipulation (of which
  we have a number).
  Reviewed by: glebius
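  A sketch of the intended KASSERT() usage, assuming m_sanity()
  returns non-zero for a well-formed chain:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* INVARIANTS-only check at a chain handoff point; compiles
       * away in non-debug kernels. */
      static void
      check_chain(struct mbuf *m)
      {
              KASSERT(m_sanity(m, 0),
                  ("%s: mbuf chain failed sanity check", __func__));
      }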
* Add m_demote(struct mbuf *m, int all) to clean up mbuf (chain) from
  any tags and packet headers. (andre, 2005-08-29; 1 file, -0/+24)
  If "all" is set then the first mbuf in the chain will be cleaned
  too. This function is used before an mbuf that arrived as a packet
  with m->flags & M_PKTHDR is appended to an mbuf chain using
  m->m_next (not m->m_nextpkt).
  Reviewed by: glebius
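  A sketch of the splice described above (the helper name is
  illustrative; m_cat() does the concatenation):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Link a received packet 'n' onto data chain 'm' via m_next:
       * strip n's pkthdr and tags first, then concatenate. */
      static void
      append_packet(struct mbuf *m, struct mbuf *n)
      {
              m_demote(n, 1);   /* all = 1: clean the first mbuf too */
              m_cat(m, n);
      }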
* add m_align, a function to align any type of mbuf (i.e. it is a
  superset of M_ALIGN and MH_ALIGN) (sam, 2005-07-30; 1 file, -0/+19)
  Reviewed by: several
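  A sketch of how m_align() removes the M_ALIGN/MH_ALIGN distinction;
  the wrapper is illustrative, and like the macros it supersedes,
  m_align() is meant for freshly allocated, empty mbufs:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* One call works for plain, pkthdr and cluster mbufs alike. */
      static struct mbuf *
      get_aligned(int len)
      {
              struct mbuf *m;

              m = m_gethdr(M_DONTWAIT, MT_DATA);
              if (m != NULL)
                      m_align(m, len);  /* position m_data for a len-byte write */
              return (m);
      }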
* Change m_uiotombuf so it will accept offset at which data should be
  copied to the mbuf. (emax, 2005-05-04; 1 file, -2/+5)
  Offset cannot exceed MHLEN bytes. This is currently used to fix the
  Ethernet header alignment problem on alpha and sparc64. Also change
  all users of m_uiotombuf to pass the proper offset.
  Reviewed by: jmg, sam
  Tested by: Sten Spans "sten AT blinkenlights DOT nl"
  MFC after: 1 week
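  A sketch using the offset for Ethernet alignment, assuming the
  four-argument signature this change introduced (a flags argument
  was added later by the m_getm2() rewrite above):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>
      #include <sys/uio.h>

      /* Leave 2 bytes of leading space so the IP header following the
       * 14-byte Ethernet header lands 4-byte aligned on
       * strict-alignment machines. */
      static struct mbuf *
      copyin_frame(struct uio *uio, int len)
      {
              return (m_uiotombuf(uio, M_DONTWAIT, len, 2 /* ETHER_ALIGN */));
      }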
* add m_copyup function.. (jmg, 2005-03-17; 1 file, -0/+48)
  This can be used to help make our ip stack less alignment
  restrictive, and help performance on some ethernet cards which
  currently copy the entire packet a couple bytes to get the packet
  aligned properly...
  Wordsmithing by: dwhite
  Obtained from: NetBSD (code only)
  I'll clean it up later: rwatson
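  A usage sketch assuming the NetBSD-derived signature
  m_copyup(m, len, dstoff): like m_pullup(), but it copies into a
  fresh mbuf with dstoff bytes of leading space, so a misaligned
  header can be made both contiguous and aligned:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>
      #include <netinet/in.h>
      #include <netinet/ip.h>

      /* Ensure the IP header is contiguous, with 2 bytes of leading
       * space so it sits 4-byte aligned. */
      static struct mbuf *
      align_ip_header(struct mbuf *m)
      {
              if (m->m_len < sizeof(struct ip))
                      m = m_copyup(m, sizeof(struct ip), 2);
              return (m);     /* NULL if allocation failed */
      }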
* allow the destination of m_move_pkthdr to have external storage
  (e.g. a cluster) (sam, 2005-03-08; 1 file, -3/+3)
  Glanced at by: rwatson, silby
* The m_ext reference counts are potentially shared and modified
  asynchronously by different threads. (alc, 2005-03-06; 1 file, -6/+4)
  Thus, declare as volatile the reference count that is accessed
  through m_ext's pointer, ref_cnt. Revert the previous change,
  revision 1.144, that casts as volatile a single dereference of
  ref_cnt.
  Reviewed by: bmilekic, dwhite
  Problem reported by: kris
  MFC after: 3 days
* Insert volatile cast to discourage gcc from optimizing the read
  outside of the while loop. (dwhite, 2005-03-03; 1 file, -1/+4)
  Suggested by: alc
  MFC after: 1 day
* change m_adj to reclaim unused mbufs instead of zero'ing m_len when
  trim'ing space off the back of a chain (sam, 2005-02-24; 1 file, -2/+4)
  This is an indirect solution to a potential null ptr deref.
  Noticed by: Coverity Prevent analysis tool (null ptr deref)
  Reviewed by: dg, rwatson
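  For reference, a sketch of m_adj()'s two trimming directions (the
  framing lengths are illustrative); after this change the emptied
  tail mbufs are freed rather than left in the chain with m_len == 0:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Positive lengths trim from the front, negative from the back. */
      static void
      strip_ether_framing(struct mbuf *m)
      {
              m_adj(m, 14);   /* drop the Ethernet header */
              m_adj(m, -4);   /* drop the trailing CRC */
      }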
* remove dead code (sam, 2005-02-23; 1 file, -4/+0)
  Noticed by: Coverity Prevent analysis tool
  Reviewed by: silby
* Optimize the way reference counting is performed with mbufs.
  (bmilekic, 2005-02-10; 1 file, -21/+37)
  We do not need to perform an extra memory fetch in the Packet
  (Mbuf+Cluster) constructor to initialize the reference counter
  anymore. The reference counts are located in a separate memory
  region (in the slab header, because this zone is UMA_ZONE_REFCNT),
  so the memory fetch very often resulted in a cache miss.
  Additionally, and perhaps more significantly, optimize the free
  mbuf+cluster (packet) case, which is very common, to no longer
  require an atomic operation on free (to verify the reference
  counter) if the reference on the cluster has never been increased
  (also very common). This reduces an atomic on mbuf free on average.
  Original patch submitted by: Gerrit Nagelhout <gnagelhout@sandvine.com>
* Make a bunch of malloc types static. (phk, 2005-02-10; 1 file, -1/+1)
  Found by: src/tools/tools/kernxref
* /* -> /*- for copyright notices, minor format tweaks as necessary (imp, 2005-01-06; 1 file, -1/+1)
* fix m_append for case where additional mbufs are required (sam, 2004-12-15; 1 file, -2/+2)
* add m_append utility function to be used in forthcoming changes (sam, 2004-12-08; 1 file, -0/+46)
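  A usage sketch, assuming the conventional m_append() contract of
  returning 1 on success and 0 on allocation failure (wrapper name is
  illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      /* Copy 'len' bytes onto the end of the chain, extending it with
       * fresh mbufs when the last one is full. */
      static int
      append_trailer(struct mbuf *m, const char *buf, int len)
      {
              return (m_append(m, len, buf));
      }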
* improve the mbuf m_print function.. (jmg, 2004-09-28; 1 file, -5/+20)
  Only pull the length from the pkthdr if there is one, detect mbuf
  loops and stop, add an extra arg so you can print only the first x
  bytes of the data per mbuf (print all if the arg is -1), and print
  flags using %b (bitmask)...
  No code in the tree appears to use m_print, and it's just a matter
  of adding -1 as an additional arg to m_print to restore the original
  behavior..
  MFC after: 4 days
* Back out just a portion of Alfred's last commit. Remove the
  MBUF_CHECK (WITNESS) for code paths that always call
  uma_zalloc_arg() shortly after where the check was, because
  uma_zalloc_arg() already does a similar check.
  (bmilekic, 2004-07-21; 1 file, -2/+0)
  No objections from Alfred. Thanks Alfred.
* Make sure we don't call mbuf allocation functions with mutexes held.
  (alfred, 2004-07-21; 1 file, -0/+8)
  Discussed with: rwatson
* Gah! Plug an mbuf leak I introduced in the last commit.
  (bmilekic, 2004-06-11; 1 file, -2/+3)
  I don the pointy-hat.
  Problem reported by: Peter Holm <pho@>
* Plug a race where upon free this scenario could occur:
  (bmilekic, 2004-06-10; 1 file, -15/+30)

      (time grows downward)
      thread 1      thread 2
      ------------|------------
      dec ref_cnt |
                  | dec ref_cnt <-- ref_cnt now zero
      cmpset      |
      free all    |
      return      |
                  |
      alloc again,|
      reuse prev  |
      ref_cnt     |
                  | cmpset, read
                  | already freed
                  | ref_cnt
      ------------|------------

  This should fix that by performing only a single atomic test-and-set
  that will serve to decrement the ref_cnt, only if it hasn't changed
  since the earlier read, otherwise it'll loop and re-read. This
  forces ordering of decrements so that truly the thread which did the
  LAST decrement is the one that frees. This is how
  atomic-instruction-based refcnting should probably be handled.
  Submitted by: Julian Elischer
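  A sketch of the decrement loop described above (illustrative; the
  real code lives in mb_free_ext()):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <machine/atomic.h>

      /* Re-read and retry until the compare-and-set succeeds, so
       * exactly one thread observes the transition to zero and frees. */
      static int
      ref_release(volatile u_int *ref_cnt)
      {
              u_int old;

              do {
                      old = *ref_cnt;
              } while (atomic_cmpset_int(ref_cnt, old, old - 1) == 0);
              return (old == 1);  /* true for the thread that must free */
      }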
* Fix a panic happening when m_getm() is called with len < MCLBYTES.
  (mux, 2004-06-09; 1 file, -1/+1)
  Reported by: ale
  Tested by: ale
  Reviewed by: bosko
* Bring in mbuma to replace mballoc. (bmilekic, 2004-05-31; 1 file, -38/+197)
  mbuma is an Mbuf & Cluster allocator built on top of a number of
  extensions to the UMA framework, all included herein.

  Extensions to UMA worth noting:
  - Better layering between slab <-> zone caches; introduce Keg
    structure which splits off slab cache away from the zone structure
    and allows multiple zones to be stacked on top of a single Keg
    (single type of slab cache); perhaps we should look into defining
    a subset API on top of the Keg for special use by malloc(9), for
    example.
  - UMA_ZONE_REFCNT zones can now be added, and reference counters
    automagically allocated for them within the end of the associated
    slab structures. uma_find_refcnt() does a kextract to fetch the
    slab struct reference from the underlying page, and lookup the
    corresponding refcnt.

  mbuma things worth noting:
  - integrates mbuf & cluster allocations with extended UMA and
    provides caches for commonly-allocated items; defines several
    zones (two primary, one secondary) and two kegs.
  - change up certain code paths that always used to do:
    m_get() + m_clget() to instead just use m_getcl() and try to take
    advantage of the newly defined secondary Packet zone.
  - netstat(1) and systat(1) quickly hacked up to do basic stat
    reporting but additional stats work needs to be done once some
    other details within UMA have been taken care of and it becomes
    clearer how stats will work within the modified framework.

  From the user perspective, one implication is that the NMBCLUSTERS
  compile-time option is no longer used. The maximum number of
  clusters is still capped off according to maxusers, but it can be
  made unlimited by setting the kern.ipc.nmbclusters boot-time tunable
  to zero. Work should be done to write an appropriate sysctl handler
  allowing dynamic tuning of kern.ipc.nmbclusters at runtime.

  Additional things worth noting/known issues (READ):
  - One report of 'ips' (ServeRAID) driver acting really slow in
    conjunction with mbuma. Need more data. Latest report is that ips
    is equally sucking with and without mbuma.
  - Giant leak in NFS code sometimes occurs, can't reproduce but
    currently analyzing; brueffer is able to reproduce but THIS IS NOT
    an mbuma-specific problem and currently occurs even WITHOUT mbuma.
  - Issues in network locking: there is at least one code path in the
    rip code where one or more locks are acquired and we end up in
    m_prepend() with M_WAITOK, which causes WITNESS to whine from
    within UMA. Current temporary solution: force all UMA allocations
    to be M_NOWAIT from within UMA for now to avoid deadlocks unless
    WITNESS is defined and we can determine with certainty that we're
    not holding any locks when we're M_WAITOK.
  - I've seen at least one weird socketbuffer
    empty-but-mbuf-still-attached panic. I don't believe this to be
    related to mbuma but please keep your eyes open, turn on
    debugging, and capture crash dumps.

  This change removes more code than it adds.

  A paper is available detailing the change and considering various
  performance issues; it was presented at BSDCan2004:
  http://www.unixdaemons.com/~bmilekic/netbuf_bmilekic.pdf
  Please read the paper for Future Work and implementation details, as
  well as credits.

  Testing and Debugging: rwatson, brueffer,
    Ketrien I. Saihr-Kesenchedra, ...
  Reviewed by: Lots of people (for different parts)
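  The m_get() + m_clget() to m_getcl() conversion mentioned above,
  sketched as before/after (the M_EXT check-and-free pattern is the
  conventional one; details vary by call site):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/mbuf.h>

      static struct mbuf *
      alloc_packet_old(void)
      {
              struct mbuf *m;

              /* Old: two trips through the allocator. */
              MGETHDR(m, M_DONTWAIT, MT_DATA);
              if (m == NULL)
                      return (NULL);
              MCLGET(m, M_DONTWAIT);
              if ((m->m_flags & M_EXT) == 0) {
                      m_free(m);
                      return (NULL);
              }
              return (m);
      }

      static struct mbuf *
      alloc_packet_new(void)
      {
              /* New: one allocation from the secondary Packet zone. */
              return (m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR));
      }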
* constify the last argument of m_copyback. (luigi, 2004-04-18; 1 file, -1/+1)
* Remove advertising clause from University of California Regent's
  license, per letter dated July 22, 1999. (imp, 2004-04-05; 1 file, -4/+0)
  Approved by: core
* Style fixes: don't indent variable names. (silby, 2004-02-05; 1 file, -6/+6)
  Submitted by: bde
* Style fixes (silby, 2004-02-04; 1 file, -6/+0)
  Submitted by: bde
* Rewrite sendfile's header support so that headers are now sent in
  the first packet along with data, instead of in their own packet.
  (silby, 2004-02-01; 1 file, -0/+56)
  When serving files of size (packetsize - headersize) or smaller,
  this will result in one less packet crossing the network. Quick
  testing with thttpd and http_load has shown a noticeable performance
  improvement in this case (350 vs 330 fetches per second.)
  Included in this commit are two support routines, iov_to_uio, and
  m_uiotombuf; these routines are used by sendfile to construct the
  header mbuf chain that will be linked to the rest of the data in the
  socket buffer.
* Fix another 0 / NULL mixup. (silby, 2003-12-25; 1 file, -1/+1)
* Catch a few places where NULL (pointer) was used where 0 (integer)
  was expected. (peter, 2003-12-23; 1 file, -1/+1)