path: root/drivers/net/ethernet/intel
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next (David S. Miller, 2012-08-22, 8 files, -8/+187)
  Jeff Kirsher says:
  ====================
  This series contains updates to ethtool.h, e1000, e1000e, and igb to implement MDI/MDIx control.
  ====================
  Signed-off-by: David S. Miller <davem@davemloft.net>
  * igb: update to allow reading/setting MDI state (Jesse Brandeburg, 2012-08-21, 2 files, -0/+46)
    This is the implementation for igb to allow forcing MDI state via ethtool, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled.
    Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
    CC: Carolyn Wyborny <carolyn.wyborny@intel.com>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * e1000e: implement MDI/MDI-X control (Jesse Brandeburg, 2012-08-21, 1 file, -2/+39)
    Some users report issues with link failing when connected to certain switches. This gives the user the ability to control the MDI state from the driver, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. This relates to the corresponding ethtool application patch and to bugzilla.kernel.org bug 11998.
    Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
    CC: bruce.w.allan@intel.com
    CC: n.poppelier@xs4all.nl
    CC: bastien@durel.org
    CC: jsveiga@it.eng.br
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * e1000: configure and read MDI settings (Jesse Brandeburg, 2012-08-21, 2 files, -0/+43)
    This is the implementation in e1000 to allow ethtool to force the MDI state, allowing users to work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-neg is enabled. Using this requires a matching version of the ethtool application that supports the functionality.
    Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
    CC: Tushar Dave <tushar.n.dave@intel.com>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
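    The three MDI patches above pair with an ethtool interface change that exposes the control through the eth_tp_mdix_ctrl field of struct ethtool_cmd (and, in matching ethtool releases, an "mdix" sub-option of "ethtool -s"). The fragment below is an illustrative userspace sketch, not part of these patches; it assumes a linux/ethtool.h that already defines eth_tp_mdix_ctrl and the ETH_TP_MDI_* values, and uses "eth0" only as an example device name.
      /* Illustrative only: request MDI/MDI-X auto-crossover through the
       * SIOCETHTOOL ioctl.  Assumes linux/ethtool.h defines the
       * eth_tp_mdix_ctrl field and ETH_TP_MDI_AUTO added alongside this
       * driver series.
       */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <sys/socket.h>
      #include <net/if.h>
      #include <linux/ethtool.h>
      #include <linux/sockios.h>

      int main(void)
      {
              struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
              struct ifreq ifr;
              int fd = socket(AF_INET, SOCK_DGRAM, 0);

              if (fd < 0)
                      return 1;

              memset(&ifr, 0, sizeof(ifr));
              strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* example device */
              ifr.ifr_data = (char *)&ecmd;

              if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {        /* read current settings */
                      perror("ETHTOOL_GSET");
                      return 1;
              }

              ecmd.cmd = ETHTOOL_SSET;
              ecmd.eth_tp_mdix_ctrl = ETH_TP_MDI_AUTO;       /* or ETH_TP_MDI / ETH_TP_MDI_X */
              if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)          /* write settings back */
                      perror("ETHTOOL_SSET");

              close(fd);
              return 0;
      }
    Note that, per the commit messages above, these drivers only honour a forced MDI setting while auto-negotiation is enabled.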
  * igb: implement 580 MDI setting support (Jesse Brandeburg, 2012-08-21, 2 files, -4/+30)
    In order for igb to support MDI setting via ethtool, this code is needed to allow setting the MDI state in software. This relates to the corresponding ethtool patch.
    Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * e1000e: implement 82577/579 MDI setting support (Bruce W Allan, 2012-08-21, 1 file, -2/+29)
    In order for e1000e to support MDI setting via ethtool, this code is needed to allow setting the MDI state in software. This relates to the corresponding ethtool patch and fixes bugzilla.kernel.org bug 11998.
    Signed-off-by: Bruce W Allan <bruce.w.allan@intel.com>
    Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net (David S. Miller, 2012-08-22, 7 files, -49/+71)
  * ixgbe: add missing braces (Emil Tantilov, 2012-08-10, 1 file, -1/+2)
    This patch adds missing braces around the 10gig link check to include the check for KR support.
    Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
    Reported-by: Sascha Wildner <saw@online.de>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
  * igb: Fix register defines for all non-82575 hardware (Alexander Duyck, 2012-08-09, 1 file, -2/+6)
    It looks like the register defines for DCA were never updated after going from 82575 to 82576. This change addresses that by updating the defines.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * e1000e: fix panic while dumping packets on Tx hang with IOMMU (Emil Tantilov, 2012-08-09, 1 file, -10/+26)
    This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch avoids phys_to_virt() by using skb->data and the address of the pages allocated for Rx.
    Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * igb: fix panic while dumping packets on Tx hang with IOMMU (Emil Tantilov, 2012-08-09, 1 file, -10/+9)
    This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch avoids phys_to_virt() by using skb->data and the address of the pages allocated for Rx.
    Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * igb: add delay to allow igb loopback test to succeed on 8086:10c9 (Stefan Assmann, 2012-08-07, 1 file, -0/+3)
    Some 8086:10c9 NICs have a problem completing the ethtool loopback test. The result looks like this:
      ethtool -t eth1
      The test result is FAIL
      The test extra info:
      Register test  (offline)         0
      Eeprom test    (offline)         0
      Interrupt test (offline)         0
      Loopback test  (offline)        13
      Link test   (on/offline)         0
    A bisect clearly points to commit a95a07445ee97a2fef65befafbadcc30ca1bd145; however, that seems to only trigger the bug. While adding some printk output the problem disappeared, so this might be a timing issue. After some trial and error I discovered that adding a small delay just before igb_write_phy_reg() in igb_integrated_phy_loopback() allows the loopback test to succeed. I was unable to figure out the root cause so far, but I expect it to be somewhere in the following execution path:
      igb_integrated_phy_loopback
        -> igb_write_phy_reg_igp
        -> igb_write_phy_reg_mdic
        -> igb_acquire_phy_82575
        -> igb_acquire_swfw_sync_82575
    The problem could only be observed on 8086:10c9 NICs so far, and not all of them show the behaviour. I did not restrict the workaround to this type of NIC as it should do no harm to other igb NICs. With the patch below the loopback test succeeded 500 times in a row using a NIC that would otherwise fail.
    Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * e1000e: 82571 Tx Data Corruption during Tx hang recovery (Tushar Dave, 2012-08-07, 1 file, -2/+4)
    A bus trace shows that while executing e1000e_down, TCTL is cleared except for the PSP bit. This occurs while in the middle of fetching a TSO packet, since the Tx packet buffer is full at that point. Before the device is reset, the e1000_watchdog_task starts to run from the middle (it was apparently pre-empted earlier, although that is not in the trace) and sets TCTL.EN. At that point, 82571 transmits the corrupted packet, apparently because TCTL.MULR was cleared in the middle of fetching a packet, which is forbidden. The driver should just clear TCTL.EN in e1000_reset_hw_82571 instead of clearing the entire register, so as not to change any settings in the middle of fetching a packet.
    Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
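    As a rough sketch of the approach described above (not the literal patch), clearing only the enable bit leaves the rest of TCTL, including MULR, untouched; er32/ew32/e1e_flush are the usual e1000e register accessors and E1000_TCTL_EN is the transmit-enable bit:
      /* Sketch only: disable transmit by clearing just TCTL.EN so the
       * other TCTL settings (e.g. MULR) are preserved while a packet
       * fetch may still be in progress.
       */
      u32 tctl = er32(TCTL);

      tctl &= ~E1000_TCTL_EN;
      ew32(TCTL, tctl);
      e1e_flush();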
  * e1000e: NIC goes up and immediately goes down (Tushar Dave, 2012-08-07, 1 file, -3/+1)
    Found that commit d478eb44 was a bad commit. If the link partner is transmitting a codeword (even if a NULL codeword), then the RXCW.C bit will be set, so the check for RXCW.CW is unnecessary.
    Ref: RH BZ 840642
    Reported-by: Fabio Futigami <ffutigam@redhat.com>
    Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
    CC: Marcelo Ricardo Leitner <mleitner@redhat.com>
    CC: stable <stable@vger.kernel.org> [2.6.38+]
    Tested-by: Aaron Brown <aaron.f.brown@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * igb: don't break user visible strings over multiple lines in igb_ethtool.c (Jesper Juhl, 2012-08-04, 1 file, -12/+11)
    Even when they go beyond 80 characters, user visible strings should be on one line to make them easy to grep for.
    Signed-off-by: Jesper Juhl <jj@chaosbits.net>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
  * igb: correct hardware type (i210/i211) check in igb_loopback_test() (Jesper Juhl, 2012-08-04, 1 file, -1/+1)
    In the original code
      if ((adapter->hw.mac.type == e1000_i210) ||
          (adapter->hw.mac.type == e1000_i210)) {
    the second check of 'adapter->hw.mac.type' is pointless since it tests for the exact same value as the first.
    Signed-off-by: Jesper Juhl <jj@chaosbits.net>
    Acked-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
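    Per the patch subject, the fix presumably makes the second comparison test for the i211 MAC type; a sketch of the corrected check (inferred from the subject, not the verbatim diff):
      if ((adapter->hw.mac.type == e1000_i210) ||
          (adapter->hw.mac.type == e1000_i211)) {
              /* i210/i211-specific loopback handling */
      }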
  * igb: Fix for failure to init on some 82576 devices. (Carolyn Wyborny, 2012-08-04, 1 file, -8/+8)
    Move the invalid NVM size check to before the size is assigned by mac_type for 82575 and later parts in the get_invariants function. This fixes a problem found on some 82576 devices where the part will not initialize because the nvm_read function pointer ends up getting assigned to the incorrect function.
    Reported-by: Stefan Assmann <sassmann@redhat.com>
    Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
    Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
    Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Rewrite code related to configuring IFCS bit in Tx descriptor (Alexander Duyck, 2012-08-16, 2 files, -6/+11)
  This change updates the code related to configuring the transmit frame checksum. Specifically, I have updated the code so that we can only skip inserting the checksum in the case that we are not performing some other offload that will modify the frame data.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Roll RSC code into non-EOP code (Alexander Duyck, 2012-08-16, 1 file, -32/+19)
  This change moves the RSC code into the non-EOP descriptor handling function. The main motivation behind this change is to help reduce the overhead in the non-RSC case. Previously the non-RSC path code would always be checking for append count even if RSC had been disabled. Now this code is completely skipped in a single conditional check instead of having to make two separate checks.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Make allocating skb and placing data in it a separate function (Alexander Duyck, 2012-08-16, 1 file, -77/+89)
  This patch creates a function named ixgbe_fetch_rx_buffer. The sole purpose of this function is to retrieve a single buffer off of the ring and to place it in an skb. The advantage to doing this is that it helps improve readability, since I can decrease the indentation needed for the code in this section.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Copybreak sooner to avoid get_page/put_page and offset change overhead (Alexander Duyck, 2012-08-16, 1 file, -15/+17)
  This change makes it so that if only the first 256 bytes of a buffer are used we just copy the data out and leave the offset and page count unchanged. There are multiple advantages to this. First it allows us to reuse the page much more in the case of pages larger than 4K. It also allows us to avoid some expensive atomic operations in the form of get_page/put_page. In perf I have seen CPU utilization for put_page drop from 3.5% to 1.8% as a result of this patch when doing small packet routing, and packet rates increased by about 3%.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
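  The copy-break idea described above, shown as a generic sketch rather than the driver's actual code; RX_COPYBREAK, size, offset and truesize are placeholder names, with 256 matching the threshold mentioned in the commit message:
    #define RX_COPYBREAK 256

    if (size <= RX_COPYBREAK) {
            /* Small frame: copy into the skb linear area; the page's
             * offset and reference count stay untouched, so the
             * half page can be reused for the next receive.
             */
            memcpy(__skb_put(skb, size), page_address(page) + offset, size);
    } else {
            /* Large frame: attach the page fragment; offset flipping
             * and get_page/put_page accounting happen on this path.
             */
            skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
                            page, offset, size, truesize);
    }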
* ixgbe: Make pull tail function separate from rest of cleanup_headers (Alexander Duyck, 2012-08-16, 1 file, -37/+57)
  This change creates a separate function for functionality similar to pskb_pull_tail. The main motivation for moving it to a separate function is so that later I can just skip this function in the case where we have already copied the buffer into skb->head.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Have the CPU take ownership of the buffers sooner (Alexander Duyck, 2012-08-16, 1 file, -14/+38)
  This patch makes it so that we will always have ownership of the buffers by the time we get to ixgbe_add_rx_frag. This is necessary as I am planning to add a copy-break to ixgbe_add_rx_frag, and in order for that to function correctly we need the CPU to have ownership of the buffer.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Only use double buffering if page size is less than 8K (Alexander Duyck, 2012-08-16, 2 files, -19/+45)
  This change makes it so that we do not use double buffering if the page size is larger than 4K. Instead we will simply walk through the page using up to 3K per receive, and if we receive less than that we only move the offset by that amount. We will free the page when there is no longer any space left that we can use, instead of checking the page count to see if we can cycle back to the start. The main motivation behind this is to avoid the unnecessary truesize cost of using a half page when most packets are 2K or smaller. With this new approach the largest possible truesize for a page fragment will be 3K when PAGE_SIZE is larger than 4K.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: combine ixgbe_add_rx_frag and ixgbe_can_reuse_page (Alexander Duyck, 2012-08-16, 1 file, -39/+34)
  This patch combines ixgbe_add_rx_frag and ixgbe_can_reuse_page into a single function. The main motivation behind this is to make better use of the values so that we don't have to load them from memory and into registers twice.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* ixgbe: Remove code that was initializing Rx page offset (Alexander Duyck, 2012-08-16, 1 file, -26/+1)
  This change reverts an earlier patch that introduced ixgbe_init_rx_page_offset. The idea behind the function was to provide some variation in the starting offset for the page in order to reduce hot-spots in the cache. However it doesn't appear to provide any significant benefit in the testing I have done. It has however been a source of several bugs, and it blocks us from being able to use 2K fragments on larger page sizes. So the decision I made was to remove it.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
* netvm: propagate page->pfmemalloc from skb_alloc_page to skb (Mel Gorman, 2012-07-31, 3 files, -4/+3)
  The skb->pfmemalloc flag gets set to true iff the PFMEMALLOC reserves were used during the slab allocation of data in __alloc_skb. If page splitting is used, it is possible that pages will be allocated from the PFMEMALLOC reserve without propagating this information to the skb. This patch propagates page->pfmemalloc from pages allocated for fragments to the skb. It works by reintroducing and expanding the skb_alloc_page() API to take an skb. If the page was allocated from pfmemalloc reserves, the flag is automatically copied. If the driver allocates the page before the skb, it should call skb_propagate_pfmemalloc() after the skb is allocated to ensure the flag is copied properly. Failure to do so is not critical; the resulting driver may perform slower if it is used for swap-over-NBD or swap-over-NFS, but it should not result in failure.
  [davem@davemloft.net: API rename and consistency]
  Signed-off-by: Mel Gorman <mgorman@suse.de>
  Acked-by: David S. Miller <davem@davemloft.net>
  Cc: Neil Brown <neilb@suse.de>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Mike Christie <michaelc@cs.wisc.edu>
  Cc: Eric B Munson <emunson@mgebm.net>
  Cc: Eric Dumazet <eric.dumazet@gmail.com>
  Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
  Cc: Mel Gorman <mgorman@suse.de>
  Cc: Christoph Lameter <cl@linux.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
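  For a driver that allocates its Rx page before the skb exists, the usage pattern described above looks roughly like the sketch below; the function and variable names are illustrative, and plain netdev_alloc_skb() is used since the page-allocation helper names changed across kernel versions:
    /* Sketch: copy the page's pfmemalloc state to the skb once the
     * skb has been allocated, so frames built from emergency-reserve
     * pages are still treated as pfmemalloc traffic.
     */
    static struct sk_buff *rx_build_skb(struct net_device *netdev,
                                        struct page *page,
                                        unsigned int hdr_len)
    {
            struct sk_buff *skb = netdev_alloc_skb(netdev, hdr_len);

            if (skb)
                    skb_propagate_pfmemalloc(page, skb);
            return skb;
    }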
* ixgbe: fix panic while dumping packets on Tx hang with IOMMU (Emil Tantilov, 2012-07-26, 1 file, -5/+6)
  This patch resolves a "BUG: unable to handle kernel paging request at ..." oops while dumping packet data. The issue occurs with IOMMU enabled due to the address provided by phys_to_virt(). This patch makes use of skb->data on Tx and the virtual address of the pages allocated for Rx.
  Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ixgbe: Fix build with PCI_IOV enabled. (David S. Miller, 2012-07-22, 1 file, -1/+1)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* e1000e: advertise transmit time stamping (Richard Cochran, 2012-07-22, 1 file, -0/+1)
  This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only.
  Signed-off-by: Richard Cochran <richardcochran@gmail.com>
  Cc: Willem de Bruijn <willemb@google.com>
  Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Cc: e1000-devel@lists.sourceforge.net
  Signed-off-by: David S. Miller <davem@davemloft.net>
* e1000: advertise transmit time stamping (Richard Cochran, 2012-07-22, 1 file, -0/+1)
  This driver now offers software transmit time stamping, so it should advertise that fact via ethtool. Compile tested only.
  Signed-off-by: Richard Cochran <richardcochran@gmail.com>
  Cc: Willem de Bruijn <willemb@google.com>
  Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Cc: e1000-devel@lists.sourceforge.net
  Signed-off-by: David S. Miller <davem@davemloft.net>
* igb: reset PHY in the link_up process to recover PHY setting after power down. (Akeem G. Abodunrin, 2012-07-21, 1 file, -1/+2)
  There was a previous patch to resolve an issue with the 82576 losing its PHY setting after PHY power down. However, that previous implementation triggered a speed mismatch and occasional link loss. Now, this patch resolves both the initial PHY setting and the speed mismatch issues.
  Signed-off-by: Akeem G. Abodunrin <akeem.g.abodunrin@intel.com>
  Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy interrupts (Alexander Duyck, 2012-07-21, 2 files, -7/+17)
  This change makes it so that we can use 1TC DCB in the case of MSI and legacy interrupts. The advantage to this is that it allows us to fully support FCoE w/ DCB instead of having to drop to link flow control only when using these interrupt modes.
  Cc: John Fastabend <john.r.fastabend@intel.com>
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Ross Brattain <ross.b.brattain@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add support for new 82599 device (Don Skidmore, 2012-07-21, 2 files, -0/+2)
  This patch adds support for a new 82599 device that supports WoL.
  Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: remove extra unused queues in DCB + FCoE case (John Fastabend, 2012-07-21, 1 file, -5/+8)
  With DCB and FCoE configured, extra queues may be allocated and never used. After this patch we calculate the max correctly.
  Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Tested-by: Ross Brattain <ross.b.brattain@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: fix RAR entry counting for generic and fdb_add() (John Fastabend, 2012-07-21, 1 file, -8/+19)
  Do RAR entry accounting correctly so that errors are reported and promiscuous mode is set correctly when the number of entries exceeds the hardware limits. This can happen with many macvlan devices attached to the PF or by adding many fdb entries in SR-IOV modes. This also includes a small refactor of fdb_add() to avoid having so many nested if/else statements after adding a check for the number of RAR entries. The maximum number of entries for the PF is currently 16; we allow 15 additional entries to account for the defined MAC.
  Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Tested-by: Ross Brattain <ross.b.brattain@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based on UP (Alexander Duyck, 2012-07-21, 1 file, -4/+8)
  This change makes it so the function ixgbe_dcb_get_tc_from_up will use num_tcs.pg_tcs to determine the starting value for determining a traffic class based on a user priority. The main motivation for this change is to address possible bad configurations in which more TCs worth of data are populated than there are actual TCs. By limiting this value we can at least make certain we are not providing a map with values that are out of range. As a result, any user priorities that are set up in the configuration with a traffic class mapping higher than what the hardware supports will be reported as being on TC 0.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Greg Rose <gregory.v.rose@intel.com>
  Tested-by: Stephen Ko <stephen.s.ko@intel.com>
  Tested-by: Ross Brattain <ross.b.brattain@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Reduce Rx header size to what is actually used (Alexander Duyck, 2012-07-21, 2 files, -9/+10)
  The recent changes to netdev_alloc_skb make it so that the size of the buffer now has a more direct input on the truesize. So in order to make best use of the piece of a page we are allocated, I am reducing IXGBE_RX_HDR_SIZE to 256 so that our truesize will be reduced by 256 bytes as well. This should result in performance improvements since the number of uses per page should increase from 4 to 6 in the case of a 4K page. In addition we should see socket performance improvements due to the truesize dropping to less than 1K for buffers less than 256 bytes.
  Cc: Eric Dumazet <edumazet@google.com>
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbevf: Fix namespace issue with ixgbe_write_eitr (Greg Rose, 2012-07-21, 2 files, -25/+19)
  Make the function static to clean up the namespace.
  Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Tested-by: Sibai Li <sibai.li@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Fix handling of FDIR_HASH flag (Alexander Duyck, 2012-07-21, 2 files, -24/+46)
  This change makes it so that we can use the atr_sample_rate to determine if we are capable of supporting ATR. The advantage to this approach is that it allows us to now determine the setting of the IXGBE_FLAG_FDIR_HASH_CAPABLE based on the queueing scheme, instead of the queueing scheme being based on the flag. Using this approach there are essentially five conditions that must be checked prior to trying to enable ATR:
    1. Is SR-IOV disabled?
    2. Is the number of TCs <= 1?
    3. Is the RSS queueing limit greater than 1?
    4. Is atr_sample_rate set?
    5. Is Flow Director perfect filtering disabled?
  If any of these conditions is not met, ATR filtering should be disabled. Note that in the case of conditions 1 through 4 being met we will set things up for ATR queueing; however, if test 5 fails we will still leave the queues allocated for use by perfect filters. The reason for this is to allow us to switch back and forth between ntuple and ATR without needing to reallocate the descriptor rings.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
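  A sketch of the five-way decision listed above, written as a hypothetical helper; the flag and field names follow common ixgbe conventions but may not match the driver exactly:
    static bool ixgbe_atr_possible(struct ixgbe_adapter *adapter)
    {
            /* ATR only makes sense when none of the five conditions
             * from the commit message rule it out.
             */
            return !(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) &&      /* 1 */
                   netdev_get_num_tc(adapter->netdev) <= 1 &&            /* 2 */
                   adapter->ring_feature[RING_F_RSS].limit > 1 &&        /* 3 */
                   adapter->atr_sample_rate &&                           /* 4 */
                   !(adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE);  /* 5 */
    }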
* ixgbevf: Add support for PCI error handling (Alexander Duyck, 2012-07-21, 1 file, -0/+80)
  This change adds support for handling IO errors and slot resets.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbevf: Add lock around mailbox ops to prevent simultaneous access (Alexander Duyck, 2012-07-21, 2 files, -2/+41)
  This change adds a spinlock around the mailbox accesses to prevent simultaneous access to the mailboxes.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
  Tested-by: Sibai Li <sibai.li@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Change how we check for pre-existing and assigned VFs (Alexander Duyck, 2012-07-21, 4 files, -94/+45)
  This patch does two things. First, it drops the unnecessary work of searching for enabled VFs when we first bring up the adapter and instead just uses pci_num_vf to determine how many VFs are enabled on the adapter. Second, it drops the use of vfdev from the vf_data_storage structure. Instead we just search the entire system for a VF that has us as its PF, and then if that VF is assigned we indicate that the VFs are assigned. This allows us to still check for assigned VFs even if the vfinfo allocation has failed, or vfinfo has been freed.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Greg Rose <gregory.v.rose@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Tested-by: Sibai Li <sibai.li@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov (Alexander Duyck, 2012-07-21, 3 files, -32/+28)
  This is meant to fix a bug in which we were not checking for pre-existing VFs if we were not setting the max_vfs value at driver load. What happens now is that we always call ixgbe_enable_sriov, and this checks for pre-existing VFs or requested VFs prior to deciding on no SR-IOV.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
  Tested-by: Sibai Li <sibai.li@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next (David S. Miller, 2012-07-20, 10 files, -248/+344)
  Jeff Kirsher says:
  ====================
  This series contains updates to ixgbe.
  ...
  Alexander Duyck (9):
    ixgbe: Use VMDq offset to indicate the default pool
    ixgbe: Fix memory leak when SR-IOV VFs are direct assigned
    ixgbe: Drop references to deprecated pci_ DMA api and instead use dma_ API
    ixgbe: Cleanup configuration of FCoE registers
    ixgbe: Merge all FCoE percpu values into a single structure
    ixgbe: Make FCoE allocation and configuration closer to how rings work
    ixgbe: Correctly set SAN MAC RAR pool to default pool of PF
    ixgbe: Only enable anti-spoof on VF pools
    ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of ENABLED flag
  ====================
  Signed-off-by: David S. Miller <davem@davemloft.net>
  * ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of ENABLED flag (Alexander Duyck, 2012-07-19, 2 files, -8/+13)
    Instead of only setting the FCoE segmentation offload and CRC offload flags if we enable FCoE, we could just set them always, since there are no modifications needed to the hardware or adapter FCoE structure in order to use these features. The advantage to this is that if FCoE enablement fails, for example because SR-IOV was enabled on 82599, we will still have use of the FCoE segmentation offload and Tx/Rx CRC offloads, which should still help to improve the FCoE performance.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Tested-by: Ross Brattain <ross.b.brattain@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * ixgbe: Only enable anti-spoof on VF pools (Alexander Duyck, 2012-07-19, 1 file, -9/+11)
    The current logic is enabling anti-spoof on all pools and then clearing anti-spoof on just the first PF pool. The correct approach is to only set anti-spoof on the VF pools and to leave all of the PF pools unchecked. This allows for items such as FCoE to use adjacent pools within the PF for transmit and receive queues without the traffic being blocked by this security feature.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Tested-by: Sibai Li <sibai.li@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * ixgbe: Correctly set SAN MAC RAR pool to default pool of PF (Alexander Duyck, 2012-07-19, 6 files, -3/+46)
    This change corrects an issue in which an FCoE enabled adapter was always setting the FCoE SAN MAC MPSAR register to 0x1. This results in the first VF being assigned the SAN MAC address in the case of SR-IOV, and as such is incorrect. To resolve this I am adding a new function that will update the SAN MAC pool address after reset.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Tested-by: Ross Brattain <ross.b.brattain@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * ixgbe: Make FCoE allocation and configuration closer to how rings work (Alexander Duyck, 2012-07-19, 4 files, -122/+154)
    This patch changes the behavior of the FCoE configuration so that it is much closer to how the main body of the ixgbe driver works for ring allocation. The first piece is the ixgbe_fcoe_ddp_enable/disable calls. These allocate the percpu values and, if successful, set the fcoe_ddp_xid value indicating that we can support DDP. The next piece is the ixgbe_setup/free_ddp_resources calls. These are called on open/close and will allocate and free the DMA pools. Finally, ixgbe_configure_fcoe is now just register configuration. It can go through and enable the registers for the FCoE redirection offload and FIP configuration without any interference from the DDP pool allocation. The net result of all this is twofold. First, it adds a certain amount of exception handling, so for example if ixgbe_setup_fcoe_resources fails we will actually generate an error in open and refuse to bring up the interface. Second, it provides a much more graceful failure case than the previous model, which would skip setting up the registers for FCoE on failure to allocate DDP resources, leaving no Rx functionality enabled instead of just disabling DDP.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Tested-by: Ross Brattain <ross.b.brattain@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  * ixgbe: Merge all FCoE percpu values into a single structure (Alexander Duyck, 2012-07-19, 3 files, -86/+86)
    This change merges the 2 statistics values for noddp and noddp_ext_buff and the dma_pool into a single structure that can be allocated per CPU. The advantages to this are several fold. First we only need to do one alloc_percpu call now instead of 3, so that means less overhead for handling memory allocation failures. Secondly, in the case of ixgbe_fcoe_ddp_setup we only need to call get_cpu once, which makes things a bit cleaner since we can drop a put_cpu() from the exception path.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
    Tested-by: Ross Brattain <ross.b.brattain@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
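    The shape of the consolidation described above, as an illustrative sketch; the structure and helper names are made up for the example, while alloc_percpu()/per_cpu_ptr()/get_cpu()/put_cpu() are the real kernel APIs involved:
      #include <linux/percpu.h>
      #include <linux/dmapool.h>

      /* One per-CPU structure instead of three separate per-CPU values:
       * a single alloc_percpu() call and a single failure path.
       */
      struct fcoe_ddp_pool {
              struct dma_pool *pool;
              u64 noddp;
              u64 noddp_ext_buff;
      };

      static struct fcoe_ddp_pool __percpu *fcoe_ddp_pools_alloc(void)
      {
              return alloc_percpu(struct fcoe_ddp_pool);  /* NULL on failure */
      }

      /* Hot path: a single get_cpu()/put_cpu() pair covers all fields. */
      static void fcoe_count_noddp(struct fcoe_ddp_pool __percpu *pools)
      {
              struct fcoe_ddp_pool *p = per_cpu_ptr(pools, get_cpu());

              p->noddp++;
              put_cpu();
      }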