author	Linus Torvalds <torvalds@linux-foundation.org>	2014-10-08 21:40:54 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-10-08 21:40:54 -0400
commit	35a9ad8af0bb0fa3525e6d0d20e32551d226f38e (patch)
tree	15b4b33206818886d9cff371fd2163e073b70568 /drivers/net/ethernet
parent	d5935b07da53f74726e2a65dd4281d0f2c70e5d4 (diff)
parent	64b1f00a0830e1c53874067273a096b228d83d36 (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "Most notable changes in here:

   1) By far the biggest accomplishment, thanks to a large range of
      contributors, is the addition of multi-send for transmit.  This is
      the result of discussions back in Chicago, and the hard work of
      several individuals.

      Now, when the ->ndo_start_xmit() method of a driver sees
      skb->xmit_more as true, it can choose to defer the doorbell that
      tells the hardware to start processing the new TX queue entries.

      skb->xmit_more means that the generic networking is guaranteed to
      call the driver immediately with another SKB to send.

      There is logic added to the qdisc layer to dequeue multiple
      packets at a time, and the handling of mis-predicted offloads in
      software is now done with no locks held.

      Finally, pktgen is extended to have a "burst" parameter that can
      be used to test a multi-send implementation.

      Several drivers have xmit_more support: i40e, igb, ixgbe, mlx4,
      virtio_net.  Adding support is almost trivial, so expect more
      drivers to support this optimization soon.

      I want to thank, in no particular or implied order, Jesper
      Dangaard Brouer, Eric Dumazet, Alexander Duyck, Tom Herbert, Jamal
      Hadi Salim, John Fastabend, Florian Westphal, Daniel Borkmann,
      David Tat, Hannes Frederic Sowa, and Rusty Russell.

   2) PTP and timestamping support in bnx2x, from Michal Kalderon.

   3) Allow adjusting the rx_copybreak threshold for a driver via
      ethtool, and add rx_copybreak support to the enic driver.  From
      Govindarajulu Varadarajan.

   4) Significant enhancements to the generic PHY layer and the bcm7xxx
      driver in particular (EEE support, auto power down, etc.) from
      Florian Fainelli.

   5) Allow raw buffers to be used for flow dissection, allowing drivers
      to determine the optimal "linear pull" size for devices that DMA
      into pools of pages.  The objective is to get exactly the
      necessary amount of headers into the linear SKB area pre-pulled,
      but no more.  The new interface drivers use is eth_get_headlen().
      From WANG Cong, with driver conversions (several had their own
      by-hand duplicated implementations) by Alexander Duyck and Eric
      Dumazet.

   6) Support checksumming more smoothly and efficiently for
      encapsulations, and add a "foo over UDP" facility.  From Tom
      Herbert.

   7) Add Broadcom SF2 switch driver to the DSA layer, from Florian
      Fainelli.

   8) eBPF can now load programs via a system call and has an extensive
      testsuite.  From Alexei Starovoitov and Daniel Borkmann.

   9) Major overhaul of the packet scheduler to use RCU in several major
      areas such as the classifiers and rate estimators.  From John
      Fastabend.

  10) Add driver for the Intel FM10000 Ethernet Switch, from Alexander
      Duyck.

  11) Rearrange TCP_SKB_CB() to reduce cache line misses, from Eric
      Dumazet.

  12) Add Datacenter TCP congestion control algorithm support, from
      Florian Westphal.

  13) Reorganize sk_buff so that __copy_skb_header() is significantly
      faster.  From Eric Dumazet"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1558 commits)
  netlabel: directly return netlbl_unlabel_genl_init()
  net: add netdev_txq_bql_{enqueue, complete}_prefetchw() helpers
  net: description of dma_cookie cause make xmldocs warning
  cxgb4: clean up a type issue
  cxgb4: potential shift wrapping bug
  i40e: skb->xmit_more support
  net: fs_enet: Add NAPI TX
  net: fs_enet: Remove non NAPI RX
  r8169: add support for RTL8168EP
  net_sched: copy exts->type in tcf_exts_change()
  wimax: convert printk to pr_foo()
  af_unix: remove 0 assignment on static
  ipv6: Do not warn for informational ICMP messages, regardless of type.
  Update Intel Ethernet Driver maintainers list
  bridge: Save frag_max_size between PRE_ROUTING and POST_ROUTING
  tipc: fix bug in multicast congestion handling
  net: better IFF_XMIT_DST_RELEASE support
  net/mlx4_en: remove NETDEV_TX_BUSY
  3c59x: fix bad split of cpu_to_le32(pci_map_single())
  net: bcmgenet: fix Tx ring priority programming
  ...
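
For readers unfamiliar with the multi-send mechanism in item 1 above, the
following is a minimal, hypothetical sketch of how a driver's
->ndo_start_xmit() can honour skb->xmit_more.  The foo_* names, the
single-queue assumption and the doorbell register are invented for
illustration; skb->xmit_more, netdev_get_tx_queue() and netif_xmit_stopped()
are the real interfaces the changelog refers to, and the i40e, igb, ixgbe,
mlx4 and virtio_net changes in this merge apply the same idea to their own
ring code.

#include <linux/netdevice.h>
#include <linux/io.h>

/* Hypothetical single-queue driver: foo_priv, foo_post_descriptors(),
 * tx_tail and doorbell are illustrative only.
 */
static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	foo_post_descriptors(priv, skb);	/* map skb, fill TX descriptors */

	/* Defer the doorbell while the stack promises another skb right
	 * away, but always kick the hardware once the queue has been
	 * stopped, so deferred descriptors never sit on the ring unseen.
	 */
	if (!skb->xmit_more || netif_xmit_stopped(netdev_get_tx_queue(dev, 0)))
		writel(priv->tx_tail, priv->doorbell);

	return NETDEV_TX_OK;
}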
Diffstat (limited to 'drivers/net/ethernet')
-rw-r--r--drivers/net/ethernet/3com/3c509.c6
-rw-r--r--drivers/net/ethernet/3com/3c515.c25
-rw-r--r--drivers/net/ethernet/3com/3c59x.c29
-rw-r--r--drivers/net/ethernet/Kconfig2
-rw-r--r--drivers/net/ethernet/Makefile2
-rw-r--r--drivers/net/ethernet/adi/bfin_mac.c3
-rw-r--r--drivers/net/ethernet/agere/Kconfig31
-rw-r--r--drivers/net/ethernet/agere/Makefile5
-rw-r--r--drivers/net/ethernet/agere/et131x.c4121
-rw-r--r--drivers/net/ethernet/agere/et131x.h1433
-rw-r--r--drivers/net/ethernet/allwinner/sun4i-emac.c2
-rw-r--r--drivers/net/ethernet/altera/altera_tse_main.c66
-rw-r--r--drivers/net/ethernet/amd/au1000_eth.c6
-rw-r--r--drivers/net/ethernet/amd/nmclan_cs.c2
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-common.h11
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-dcb.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-desc.c6
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-dev.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-drv.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c2
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-main.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-mdio.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-ptp.c1
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe.h2
-rw-r--r--drivers/net/ethernet/arc/Kconfig18
-rw-r--r--drivers/net/ethernet/arc/Makefile4
-rw-r--r--drivers/net/ethernet/arc/emac.h8
-rw-r--r--drivers/net/ethernet/arc/emac_arc.c95
-rw-r--r--drivers/net/ethernet/arc/emac_main.c129
-rw-r--r--drivers/net/ethernet/arc/emac_mdio.c7
-rw-r--r--drivers/net/ethernet/arc/emac_rockchip.c229
-rw-r--r--drivers/net/ethernet/broadcom/Kconfig1
-rw-r--r--drivers/net/ethernet/broadcom/b44.c2
-rw-r--r--drivers/net/ethernet/broadcom/bcmsysport.c31
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x.h93
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c140
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h19
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c5
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h14
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c44
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h222
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h257
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c1048
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h178
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c169
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h85
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c48
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h3
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c10
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c9
-rw-r--r--drivers/net/ethernet/broadcom/cnic.c6
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.c88
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.h6
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmmii.c91
-rw-r--r--drivers/net/ethernet/brocade/bna/bna_enet.c9
-rw-r--r--drivers/net/ethernet/brocade/bna/bna_tx_rx.c6
-rw-r--r--drivers/net/ethernet/brocade/bna/bnad.c2
-rw-r--r--drivers/net/ethernet/cadence/at91_ether.c1
-rw-r--r--drivers/net/ethernet/cadence/macb.c6
-rw-r--r--drivers/net/ethernet/calxeda/xgmac.c1
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4.h5
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c32
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/sge.c218
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_hw.c27
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_hw.h9
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_regs.h20
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c107
-rw-r--r--drivers/net/ethernet/cisco/enic/enic.h1
-rw-r--r--drivers/net/ethernet/cisco/enic/enic_ethtool.c39
-rw-r--r--drivers/net/ethernet/cisco/enic/enic_main.c50
-rw-r--r--drivers/net/ethernet/cisco/enic/vnic_dev.c3
-rw-r--r--drivers/net/ethernet/davicom/dm9000.c3
-rw-r--r--drivers/net/ethernet/dec/tulip/dmfe.c152
-rw-r--r--drivers/net/ethernet/ec_bhf.c101
-rw-r--r--drivers/net/ethernet/emulex/benet/be.h30
-rw-r--r--drivers/net/ethernet/emulex/benet/be_cmds.c182
-rw-r--r--drivers/net/ethernet/emulex/benet/be_cmds.h48
-rw-r--r--drivers/net/ethernet/emulex/benet/be_ethtool.c173
-rw-r--r--drivers/net/ethernet/emulex/benet/be_hw.h12
-rw-r--r--drivers/net/ethernet/emulex/benet/be_main.c368
-rw-r--r--drivers/net/ethernet/emulex/benet/be_roce.c1
-rw-r--r--drivers/net/ethernet/ethoc.c2
-rw-r--r--drivers/net/ethernet/freescale/fec.h204
-rw-r--r--drivers/net/ethernet/freescale/fec_main.c1218
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c211
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/fs_enet.h9
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-fcc.c29
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-fec.c29
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-scc.c29
-rw-r--r--drivers/net/ethernet/hp/hp100.c4
-rw-r--r--drivers/net/ethernet/intel/Kconfig19
-rw-r--r--drivers/net/ethernet/intel/Makefile1
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000.h19
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_ethtool.c187
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_hw.c78
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_hw.h2
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_main.c498
-rw-r--r--drivers/net/ethernet/intel/fm10k/Makefile33
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k.h530
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_common.c534
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_common.h65
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c174
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c259
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c1071
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_iov.c536
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_main.c1979
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_mbx.c2125
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_mbx.h307
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_netdev.c1435
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pci.c2166
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pf.c1880
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pf.h135
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_ptp.c463
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_tlv.c863
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_tlv.h186
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_type.h770
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_vf.c578
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_vf.h78
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e.h9
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.c8
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_common.c10
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_debugfs.c3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ethtool.c70
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_main.c259
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_prototype.h6
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.c182
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.h1
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c50
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h2
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_adminq.c9
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_common.c8
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_prototype.h6
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_txrx.c8
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_txrx.h1
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40evf_main.c2
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_82575.c31
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_82575.h4
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_hw.h5
-rw-r--r--drivers/net/ethernet/intel/igb/igb.h1
-rw-r--r--drivers/net/ethernet/intel/igb/igb_ethtool.c24
-rw-r--r--drivers/net/ethernet/intel/igb/igb_main.c220
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe.h117
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c7
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c160
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_main.c316
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c41
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c14
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_type.h7
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ethtool.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ixgbevf.h1
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c4
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/vf.c15
-rw-r--r--drivers/net/ethernet/lantiq_etop.c1
-rw-r--r--drivers/net/ethernet/marvell/Kconfig2
-rw-r--r--drivers/net/ethernet/marvell/pxa168_eth.c219
-rw-r--r--drivers/net/ethernet/marvell/skge.c6
-rw-r--r--drivers/net/ethernet/marvell/sky2.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/cmd.c14
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_ethtool.c45
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_main.c17
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_netdev.c1
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_port.c17
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_rx.c23
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_tx.c400
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/eq.c30
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/fw.c47
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/fw.h2
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/main.c482
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/mlx4.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/mlx4_en.h101
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cmd.c77
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eq.c14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fw.c81
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/main.c230
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/qp.c60
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/uar.c4
-rw-r--r--drivers/net/ethernet/moxa/moxart_ether.c1
-rw-r--r--drivers/net/ethernet/neterion/vxge/vxge-main.c2
-rw-r--r--drivers/net/ethernet/netx-eth.c2
-rw-r--r--drivers/net/ethernet/nuvoton/w90p910_ether.c1
-rw-r--r--drivers/net/ethernet/nvidia/forcedeth.c2
-rw-r--r--drivers/net/ethernet/nxp/lpc_eth.c3
-rw-r--r--drivers/net/ethernet/packetengines/yellowfin.c4
-rw-r--r--drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c5
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic.h8
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c218
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h2
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c156
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c2
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c6
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c2
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c10
-rw-r--r--drivers/net/ethernet/qlogic/qlge/qlge_main.c4
-rw-r--r--drivers/net/ethernet/qualcomm/Kconfig30
-rw-r--r--drivers/net/ethernet/qualcomm/Makefile6
-rw-r--r--drivers/net/ethernet/qualcomm/qca_7k.c149
-rw-r--r--drivers/net/ethernet/qualcomm/qca_7k.h72
-rw-r--r--drivers/net/ethernet/qualcomm/qca_debug.c311
-rw-r--r--drivers/net/ethernet/qualcomm/qca_debug.h34
-rw-r--r--drivers/net/ethernet/qualcomm/qca_framing.c156
-rw-r--r--drivers/net/ethernet/qualcomm/qca_framing.h134
-rw-r--r--drivers/net/ethernet/qualcomm/qca_spi.c991
-rw-r--r--drivers/net/ethernet/qualcomm/qca_spi.h114
-rw-r--r--drivers/net/ethernet/realtek/r8169.c1357
-rw-r--r--drivers/net/ethernet/sfc/tx.c2
-rw-r--r--drivers/net/ethernet/smsc/smc911x.c3
-rw-r--r--drivers/net/ethernet/smsc/smc91x.c3
-rw-r--r--drivers/net/ethernet/smsc/smsc911x.c1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Kconfig10
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Makefile1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c67
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c62
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac.h3
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c18
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_main.c4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c3
-rw-r--r--drivers/net/ethernet/sun/cassini.c2
-rw-r--r--drivers/net/ethernet/sun/niu.c4
-rw-r--r--drivers/net/ethernet/sun/sungem.c34
-rw-r--r--drivers/net/ethernet/sun/sunvnet.c433
-rw-r--r--drivers/net/ethernet/sun/sunvnet.h20
-rw-r--r--drivers/net/ethernet/ti/Kconfig2
-rw-r--r--drivers/net/ethernet/ti/cpmac.c1
-rw-r--r--drivers/net/ethernet/ti/cpsw-phy-sel.c1
-rw-r--r--drivers/net/ethernet/ti/cpsw.c105
-rw-r--r--drivers/net/ethernet/ti/cpsw.h1
-rw-r--r--drivers/net/ethernet/ti/davinci_emac.c1
-rw-r--r--drivers/net/ethernet/ti/davinci_mdio.c1
-rw-r--r--drivers/net/ethernet/tile/tilepro.c8
-rw-r--r--drivers/net/ethernet/toshiba/spider_net.c42
-rw-r--r--drivers/net/ethernet/wiznet/w5100.c1
-rw-r--r--drivers/net/ethernet/wiznet/w5300.c1
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac_main.c1
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet_main.c1
236 files changed, 33027 insertions(+), 4208 deletions(-)
diff --git a/drivers/net/ethernet/3com/3c509.c b/drivers/net/ethernet/3com/3c509.c
index a968654..4547a1b 100644
--- a/drivers/net/ethernet/3com/3c509.c
+++ b/drivers/net/ethernet/3com/3c509.c
@@ -695,9 +695,9 @@ el3_tx_timeout (struct net_device *dev)
int ioaddr = dev->base_addr;
/* Transmitter timeout, serious problems. */
- pr_warning("%s: transmit timed out, Tx_status %2.2x status %4.4x Tx FIFO room %d.\n",
- dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
- inw(ioaddr + TX_FREE));
+ pr_warn("%s: transmit timed out, Tx_status %2.2x status %4.4x Tx FIFO room %d\n",
+ dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
+ inw(ioaddr + TX_FREE));
dev->stats.tx_errors++;
dev->trans_start = jiffies; /* prevent tx timeout */
/* Issue TX_RESET and TX_START commands. */
diff --git a/drivers/net/ethernet/3com/3c515.c b/drivers/net/ethernet/3com/3c515.c
index 94c656f..942fb0d 100644
--- a/drivers/net/ethernet/3com/3c515.c
+++ b/drivers/net/ethernet/3com/3c515.c
@@ -515,7 +515,7 @@ static struct net_device *corkscrew_scan(int unit)
if (pnp_device_attach(idev) < 0)
continue;
if (pnp_activate_dev(idev) < 0) {
- pr_warning("pnp activate failed (out of resources?)\n");
+ pr_warn("pnp activate failed (out of resources?)\n");
pnp_device_detach(idev);
continue;
}
@@ -659,7 +659,7 @@ static int corkscrew_setup(struct net_device *dev, int ioaddr,
pr_cont(", IRQ %d\n", dev->irq);
/* Tell them about an invalid IRQ. */
if (corkscrew_debug && (dev->irq <= 0 || dev->irq > 15))
- pr_warning(" *** Warning: this IRQ is unlikely to work! ***\n");
+ pr_warn(" *** Warning: this IRQ is unlikely to work! ***\n");
{
static const char * const ram_split[] = {
@@ -967,13 +967,13 @@ static void corkscrew_timeout(struct net_device *dev)
struct corkscrew_private *vp = netdev_priv(dev);
int ioaddr = dev->base_addr;
- pr_warning("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
- dev->name, inb(ioaddr + TxStatus),
- inw(ioaddr + EL3_STATUS));
+ pr_warn("%s: transmit timed out, tx_status %2.2x status %4.4x\n",
+ dev->name, inb(ioaddr + TxStatus),
+ inw(ioaddr + EL3_STATUS));
/* Slight code bloat to be user friendly. */
if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
- pr_warning("%s: Transmitter encountered 16 collisions --"
- " network cable problem?\n", dev->name);
+ pr_warn("%s: Transmitter encountered 16 collisions -- network cable problem?\n",
+ dev->name);
#ifndef final_version
pr_debug(" Flags; bus-master %d, full %d; dirty %d current %d.\n",
vp->full_bus_master_tx, vp->tx_full, vp->dirty_tx,
@@ -1382,13 +1382,10 @@ static int boomerang_rx(struct net_device *dev)
temp = skb_put(skb, pkt_len);
/* Remove this checking code for final release. */
if (isa_bus_to_virt(vp->rx_ring[entry].addr) != temp)
- pr_warning("%s: Warning -- the skbuff addresses do not match"
- " in boomerang_rx: %p vs. %p / %p.\n",
- dev->name,
- isa_bus_to_virt(vp->
- rx_ring[entry].
- addr), skb->head,
- temp);
+ pr_warn("%s: Warning -- the skbuff addresses do not match in boomerang_rx: %p vs. %p / %p\n",
+ dev->name,
+ isa_bus_to_virt(vp->rx_ring[entry].addr),
+ skb->head, temp);
rx_nocopy++;
}
skb->protocol = eth_type_trans(skb, dev);
diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
index 8ca49f04..41095eb 100644
--- a/drivers/net/ethernet/3com/3c59x.c
+++ b/drivers/net/ethernet/3com/3c59x.c
@@ -1310,8 +1310,8 @@ static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq,
pr_cont(", IRQ %d\n", dev->irq);
/* Tell them about an invalid IRQ. */
if (dev->irq <= 0 || dev->irq >= nr_irqs)
- pr_warning(" *** Warning: IRQ %d is unlikely to work! ***\n",
- dev->irq);
+ pr_warn(" *** Warning: IRQ %d is unlikely to work! ***\n",
+ dev->irq);
step = (window_read8(vp, 4, Wn4_NetDiag) & 0x1e) >> 1;
if (print_info) {
@@ -1425,7 +1425,7 @@ static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq,
}
mii_preamble_required--;
if (phy_idx == 0) {
- pr_warning(" ***WARNING*** No MII transceivers found!\n");
+ pr_warn(" ***WARNING*** No MII transceivers found!\n");
vp->phys[0] = 24;
} else {
vp->advertising = mdio_read(dev, vp->phys[0], MII_ADVERTISE);
@@ -1566,8 +1566,7 @@ vortex_up(struct net_device *dev)
pci_restore_state(VORTEX_PCI(vp));
err = pci_enable_device(VORTEX_PCI(vp));
if (err) {
- pr_warning("%s: Could not enable device\n",
- dev->name);
+ pr_warn("%s: Could not enable device\n", dev->name);
goto err_out;
}
}
@@ -2007,8 +2006,8 @@ vortex_error(struct net_device *dev, int status)
/* This occurs when we have the wrong media type! */
if (DoneDidThat == 0 &&
ioread16(ioaddr + EL3_STATUS) & StatsFull) {
- pr_warning("%s: Updating statistics failed, disabling "
- "stats as an interrupt source.\n", dev->name);
+ pr_warn("%s: Updating statistics failed, disabling stats as an interrupt source\n",
+ dev->name);
iowrite16(SetIntrEnb |
(window_read16(vp, 5, 10) & ~StatsFull),
ioaddr + EL3_CMD);
@@ -2148,8 +2147,8 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE) {
if (vortex_debug > 0)
- pr_warning("%s: BUG! Tx Ring full, refusing to send buffer.\n",
- dev->name);
+ pr_warn("%s: BUG! Tx Ring full, refusing to send buffer\n",
+ dev->name);
netif_stop_queue(dev);
return NETDEV_TX_BUSY;
}
@@ -2214,7 +2213,7 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
}
#else
- dma_addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
+ dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE);
if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
goto out_dma_err;
vp->tx_ring[entry].addr = cpu_to_le32(dma_addr);
@@ -2343,7 +2342,7 @@ vortex_interrupt(int irq, void *dev_id)
}
if (--work_done < 0) {
- pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
+ pr_warn("%s: Too much work in interrupt, status %4.4x\n",
dev->name, status);
/* Disable all pending interrupts. */
do {
@@ -2476,7 +2475,7 @@ boomerang_interrupt(int irq, void *dev_id)
vortex_error(dev, status);
if (--work_done < 0) {
- pr_warning("%s: Too much work in interrupt, status %4.4x.\n",
+ pr_warn("%s: Too much work in interrupt, status %4.4x\n",
dev->name, status);
/* Disable all pending interrupts. */
do {
@@ -2652,7 +2651,8 @@ boomerang_rx(struct net_device *dev)
if (skb == NULL) {
static unsigned long last_jif;
if (time_after(jiffies, last_jif + 10 * HZ)) {
- pr_warning("%s: memory shortage\n", dev->name);
+ pr_warn("%s: memory shortage\n",
+ dev->name);
last_jif = jiffies;
}
if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
@@ -2751,7 +2751,8 @@ vortex_close(struct net_device *dev)
if (vp->rx_csumhits &&
(vp->drv_flags & HAS_HWCKSM) == 0 &&
(vp->card_idx >= MAX_UNITS || hw_checksums[vp->card_idx] == -1)) {
- pr_warning("%s supports hardware checksums, and we're not using them!\n", dev->name);
+ pr_warn("%s supports hardware checksums, and we're not using them!\n",
+ dev->name);
}
#endif
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index dc7406c..1ed1fbb 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -20,6 +20,7 @@ config SUNGEM_PHY
source "drivers/net/ethernet/3com/Kconfig"
source "drivers/net/ethernet/adaptec/Kconfig"
source "drivers/net/ethernet/aeroflex/Kconfig"
+source "drivers/net/ethernet/agere/Kconfig"
source "drivers/net/ethernet/allwinner/Kconfig"
source "drivers/net/ethernet/alteon/Kconfig"
source "drivers/net/ethernet/altera/Kconfig"
@@ -150,6 +151,7 @@ config ETHOC
source "drivers/net/ethernet/packetengines/Kconfig"
source "drivers/net/ethernet/pasemi/Kconfig"
source "drivers/net/ethernet/qlogic/Kconfig"
+source "drivers/net/ethernet/qualcomm/Kconfig"
source "drivers/net/ethernet/realtek/Kconfig"
source "drivers/net/ethernet/renesas/Kconfig"
source "drivers/net/ethernet/rdc/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 224a018..6e0b629 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_NET_VENDOR_3COM) += 3com/
obj-$(CONFIG_NET_VENDOR_8390) += 8390/
obj-$(CONFIG_NET_VENDOR_ADAPTEC) += adaptec/
obj-$(CONFIG_GRETH) += aeroflex/
+obj-$(CONFIG_NET_VENDOR_AGERE) += agere/
obj-$(CONFIG_NET_VENDOR_ALLWINNER) += allwinner/
obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/
obj-$(CONFIG_ALTERA_TSE) += altera/
@@ -60,6 +61,7 @@ obj-$(CONFIG_ETHOC) += ethoc.o
obj-$(CONFIG_NET_PACKET_ENGINE) += packetengines/
obj-$(CONFIG_NET_VENDOR_PASEMI) += pasemi/
obj-$(CONFIG_NET_VENDOR_QLOGIC) += qlogic/
+obj-$(CONFIG_NET_VENDOR_QUALCOMM) += qualcomm/
obj-$(CONFIG_NET_VENDOR_REALTEK) += realtek/
obj-$(CONFIG_SH_ETH) += renesas/
obj-$(CONFIG_NET_VENDOR_RDC) += rdc/
diff --git a/drivers/net/ethernet/adi/bfin_mac.c b/drivers/net/ethernet/adi/bfin_mac.c
index afa6684..8ed4d340 100644
--- a/drivers/net/ethernet/adi/bfin_mac.c
+++ b/drivers/net/ethernet/adi/bfin_mac.c
@@ -1692,9 +1692,6 @@ static int bfin_mac_probe(struct platform_device *pdev)
lp->vlan1_mask = ETH_P_8021Q | mii_bus_data->vlan1_mask;
lp->vlan2_mask = ETH_P_8021Q | mii_bus_data->vlan2_mask;
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(ndev);
-
ndev->netdev_ops = &bfin_mac_netdev_ops;
ndev->ethtool_ops = &bfin_mac_ethtool_ops;
diff --git a/drivers/net/ethernet/agere/Kconfig b/drivers/net/ethernet/agere/Kconfig
new file mode 100644
index 0000000..63e805d
--- /dev/null
+++ b/drivers/net/ethernet/agere/Kconfig
@@ -0,0 +1,31 @@
+#
+# Agere device configuration
+#
+
+config NET_VENDOR_AGERE
+ bool "Agere devices"
+ default y
+ depends on PCI
+ ---help---
+ If you have a network (Ethernet) card belonging to this class, say Y
+ and read the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Agere devices. If you say Y, you will be asked
+ for your specific card in the following questions.
+
+if NET_VENDOR_AGERE
+
+config ET131X
+ tristate "Agere ET-1310 Gigabit Ethernet support"
+ depends on PCI
+ select PHYLIB
+ ---help---
+ This driver supports Agere ET-1310 ethernet adapters.
+
+ To compile this driver as a module, choose M here. The module
+ will be called et131x.
+
+endif # NET_VENDOR_AGERE
diff --git a/drivers/net/ethernet/agere/Makefile b/drivers/net/ethernet/agere/Makefile
new file mode 100644
index 0000000..027ff94
--- /dev/null
+++ b/drivers/net/ethernet/agere/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the Agere ET-131x ethernet driver
+#
+
+obj-$(CONFIG_ET131X) += et131x.o
diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
new file mode 100644
index 0000000..384dc16
--- /dev/null
+++ b/drivers/net/ethernet/agere/et131x.c
@@ -0,0 +1,4121 @@
+/* Agere Systems Inc.
+ * 10/100/1000 Base-T Ethernet Driver for the ET1301 and ET131x series MACs
+ *
+ * Copyright © 2005 Agere Systems Inc.
+ * All rights reserved.
+ * http://www.agere.com
+ *
+ * Copyright (c) 2011 Mark Einon <mark.einon@gmail.com>
+ *
+ *------------------------------------------------------------------------------
+ *
+ * SOFTWARE LICENSE
+ *
+ * This software is provided subject to the following terms and conditions,
+ * which you should read carefully before using the software. Using this
+ * software indicates your acceptance of these terms and conditions. If you do
+ * not agree with these terms and conditions, do not use the software.
+ *
+ * Copyright © 2005 Agere Systems Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source or binary forms, with or without
+ * modifications, are permitted provided that the following conditions are met:
+ *
+ * . Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following Disclaimer as comments in the code as
+ * well as in the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * . Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following Disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * . Neither the name of Agere Systems Inc. nor the names of the contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * Disclaimer
+ *
+ * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, INFRINGEMENT AND THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. ANY
+ * USE, MODIFICATION OR DISTRIBUTION OF THIS SOFTWARE IS SOLELY AT THE USERS OWN
+ * RISK. IN NO EVENT SHALL AGERE SYSTEMS INC. OR CONTRIBUTORS BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, INCLUDING, BUT NOT LIMITED TO, CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/sched.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/ctype.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/interrupt.h>
+#include <linux/in.h>
+#include <linux/delay.h>
+#include <linux/bitops.h>
+#include <linux/io.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/if_arp.h>
+#include <linux/ioport.h>
+#include <linux/crc32.h>
+#include <linux/random.h>
+#include <linux/phy.h>
+
+#include "et131x.h"
+
+MODULE_AUTHOR("Victor Soriano <vjsoriano@agere.com>");
+MODULE_AUTHOR("Mark Einon <mark.einon@gmail.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("10/100/1000 Base-T Ethernet Driver for the ET1310 by Agere Systems");
+
+/* EEPROM defines */
+#define MAX_NUM_REGISTER_POLLS 1000
+#define MAX_NUM_WRITE_RETRIES 2
+
+/* MAC defines */
+#define COUNTER_WRAP_16_BIT 0x10000
+#define COUNTER_WRAP_12_BIT 0x1000
+
+/* PCI defines */
+#define INTERNAL_MEM_SIZE 0x400 /* 1024 of internal memory */
+#define INTERNAL_MEM_RX_OFFSET 0x1FF /* 50% Tx, 50% Rx */
+
+/* ISR defines */
+/* For interrupts, normal running is:
+ * rxdma_xfr_done, phy_interrupt, mac_stat_interrupt,
+ * watchdog_interrupt & txdma_xfer_done
+ *
+ * In both cases, when flow control is enabled for either Tx or bi-direction,
+ * we additional enable rx_fbr0_low and rx_fbr1_low, so we know when the
+ * buffer rings are running low.
+ */
+#define INT_MASK_DISABLE 0xffffffff
+
+/* NOTE: Masking out MAC_STAT Interrupt for now...
+ * #define INT_MASK_ENABLE 0xfff6bf17
+ * #define INT_MASK_ENABLE_NO_FLOW 0xfff6bfd7
+ */
+#define INT_MASK_ENABLE 0xfffebf17
+#define INT_MASK_ENABLE_NO_FLOW 0xfffebfd7
+
+/* General defines */
+/* Packet and header sizes */
+#define NIC_MIN_PACKET_SIZE 60
+
+/* Multicast list size */
+#define NIC_MAX_MCAST_LIST 128
+
+/* Supported Filters */
+#define ET131X_PACKET_TYPE_DIRECTED 0x0001
+#define ET131X_PACKET_TYPE_MULTICAST 0x0002
+#define ET131X_PACKET_TYPE_BROADCAST 0x0004
+#define ET131X_PACKET_TYPE_PROMISCUOUS 0x0008
+#define ET131X_PACKET_TYPE_ALL_MULTICAST 0x0010
+
+/* Tx Timeout */
+#define ET131X_TX_TIMEOUT (1 * HZ)
+#define NIC_SEND_HANG_THRESHOLD 0
+
+/* MP_ADAPTER flags */
+#define FMP_ADAPTER_INTERRUPT_IN_USE 0x00000008
+
+/* MP_SHARED flags */
+#define FMP_ADAPTER_LOWER_POWER 0x00200000
+
+#define FMP_ADAPTER_NON_RECOVER_ERROR 0x00800000
+#define FMP_ADAPTER_HARDWARE_ERROR 0x04000000
+
+#define FMP_ADAPTER_FAIL_SEND_MASK 0x3ff00000
+
+/* Some offsets in PCI config space that are actually used. */
+#define ET1310_PCI_MAC_ADDRESS 0xA4
+#define ET1310_PCI_EEPROM_STATUS 0xB2
+#define ET1310_PCI_ACK_NACK 0xC0
+#define ET1310_PCI_REPLAY 0xC2
+#define ET1310_PCI_L0L1LATENCY 0xCF
+
+/* PCI Product IDs */
+#define ET131X_PCI_DEVICE_ID_GIG 0xED00 /* ET1310 1000 Base-T 8 */
+#define ET131X_PCI_DEVICE_ID_FAST 0xED01 /* ET1310 100 Base-T */
+
+/* Define order of magnitude converter */
+#define NANO_IN_A_MICRO 1000
+
+#define PARM_RX_NUM_BUFS_DEF 4
+#define PARM_RX_TIME_INT_DEF 10
+#define PARM_RX_MEM_END_DEF 0x2bc
+#define PARM_TX_TIME_INT_DEF 40
+#define PARM_TX_NUM_BUFS_DEF 4
+#define PARM_DMA_CACHE_DEF 0
+
+/* RX defines */
+#define FBR_CHUNKS 32
+#define MAX_DESC_PER_RING_RX 1024
+
+/* number of RFDs - default and min */
+#define RFD_LOW_WATER_MARK 40
+#define NIC_DEFAULT_NUM_RFD 1024
+#define NUM_FBRS 2
+
+#define MAX_PACKETS_HANDLED 256
+
+#define ALCATEL_MULTICAST_PKT 0x01000000
+#define ALCATEL_BROADCAST_PKT 0x02000000
+
+/* typedefs for Free Buffer Descriptors */
+struct fbr_desc {
+ u32 addr_lo;
+ u32 addr_hi;
+ u32 word2; /* Bits 10-31 reserved, 0-9 descriptor */
+};
+
+/* Packet Status Ring Descriptors
+ *
+ * Word 0:
+ *
+ * top 16 bits are from the Alcatel Status Word as enumerated in
+ * PE-MCXMAC Data Sheet IPD DS54 0210-1 (also IPD-DS80 0205-2)
+ *
+ * 0: hp hash pass
+ * 1: ipa IP checksum assist
+ * 2: ipp IP checksum pass
+ * 3: tcpa TCP checksum assist
+ * 4: tcpp TCP checksum pass
+ * 5: wol WOL Event
+ * 6: rxmac_error RXMAC Error Indicator
+ * 7: drop Drop packet
+ * 8: ft Frame Truncated
+ * 9: jp Jumbo Packet
+ * 10: vp VLAN Packet
+ * 11-15: unused
+ * 16: asw_prev_pkt_dropped e.g. IFG too small on previous
+ * 17: asw_RX_DV_event short receive event detected
+ * 18: asw_false_carrier_event bad carrier since last good packet
+ * 19: asw_code_err one or more nibbles signalled as errors
+ * 20: asw_CRC_err CRC error
+ * 21: asw_len_chk_err frame length field incorrect
+ * 22: asw_too_long frame length > 1518 bytes
+ * 23: asw_OK valid CRC + no code error
+ * 24: asw_multicast has a multicast address
+ * 25: asw_broadcast has a broadcast address
+ * 26: asw_dribble_nibble spurious bits after EOP
+ * 27: asw_control_frame is a control frame
+ * 28: asw_pause_frame is a pause frame
+ * 29: asw_unsupported_op unsupported OP code
+ * 30: asw_VLAN_tag VLAN tag detected
+ * 31: asw_long_evt Rx long event
+ *
+ * Word 1:
+ * 0-15: length length in bytes
+ * 16-25: bi Buffer Index
+ * 26-27: ri Ring Index
+ * 28-31: reserved
+ */
+struct pkt_stat_desc {
+ u32 word0;
+ u32 word1;
+};
+
+/* Typedefs for the RX DMA status word */
+
+/* rx status word 0 holds part of the status bits of the Rx DMA engine
+ * that get copied out to memory by the ET-1310. Word 0 is a 32 bit word
+ * which contains the Free Buffer ring 0 and 1 available offset.
+ *
+ * bit 0-9 FBR1 offset
+ * bit 10 Wrap flag for FBR1
+ * bit 16-25 FBR0 offset
+ * bit 26 Wrap flag for FBR0
+ */
+
+/* RXSTAT_WORD1_t structure holds part of the status bits of the Rx DMA engine
+ * that get copied out to memory by the ET-1310. Word 3 is a 32 bit word
+ * which contains the Packet Status Ring available offset.
+ *
+ * bit 0-15 reserved
+ * bit 16-27 PSRoffset
+ * bit 28 PSRwrap
+ * bit 29-31 unused
+ */
+
+/* struct rx_status_block is a structure representing the status of the Rx
+ * DMA engine it sits in free memory, and is pointed to by 0x101c / 0x1020
+ */
+struct rx_status_block {
+ u32 word0;
+ u32 word1;
+};
+
+/* Structure for look-up table holding free buffer ring pointers, addresses
+ * and state.
+ */
+struct fbr_lookup {
+ void *virt[MAX_DESC_PER_RING_RX];
+ u32 bus_high[MAX_DESC_PER_RING_RX];
+ u32 bus_low[MAX_DESC_PER_RING_RX];
+ void *ring_virtaddr;
+ dma_addr_t ring_physaddr;
+ void *mem_virtaddrs[MAX_DESC_PER_RING_RX / FBR_CHUNKS];
+ dma_addr_t mem_physaddrs[MAX_DESC_PER_RING_RX / FBR_CHUNKS];
+ u32 local_full;
+ u32 num_entries;
+ dma_addr_t buffsize;
+};
+
+/* struct rx_ring is the structure representing the adaptor's local
+ * reference(s) to the rings
+ */
+struct rx_ring {
+ struct fbr_lookup *fbr[NUM_FBRS];
+ void *ps_ring_virtaddr;
+ dma_addr_t ps_ring_physaddr;
+ u32 local_psr_full;
+ u32 psr_entries;
+
+ struct rx_status_block *rx_status_block;
+ dma_addr_t rx_status_bus;
+
+ struct list_head recv_list;
+ u32 num_ready_recv;
+
+ u32 num_rfd;
+
+ bool unfinished_receives;
+};
+
+/* TX defines */
+/* word 2 of the control bits in the Tx Descriptor ring for the ET-1310
+ *
+ * 0-15: length of packet
+ * 16-27: VLAN tag
+ * 28: VLAN CFI
+ * 29-31: VLAN priority
+ *
+ * word 3 of the control bits in the Tx Descriptor ring for the ET-1310
+ *
+ * 0: last packet in the sequence
+ * 1: first packet in the sequence
+ * 2: interrupt the processor when this pkt sent
+ * 3: Control word - no packet data
+ * 4: Issue half-duplex backpressure : XON/XOFF
+ * 5: send pause frame
+ * 6: Tx frame has error
+ * 7: append CRC
+ * 8: MAC override
+ * 9: pad packet
+ * 10: Packet is a Huge packet
+ * 11: append VLAN tag
+ * 12: IP checksum assist
+ * 13: TCP checksum assist
+ * 14: UDP checksum assist
+ */
+#define TXDESC_FLAG_LASTPKT 0x0001
+#define TXDESC_FLAG_FIRSTPKT 0x0002
+#define TXDESC_FLAG_INTPROC 0x0004
+
+/* struct tx_desc represents each descriptor on the ring */
+struct tx_desc {
+ u32 addr_hi;
+ u32 addr_lo;
+ u32 len_vlan; /* control words how to xmit the */
+ u32 flags; /* data (detailed above) */
+};
+
+/* The status of the Tx DMA engine it sits in free memory, and is pointed to
+ * by 0x101c / 0x1020. This is a DMA10 type
+ */
+
+/* TCB (Transmit Control Block: Host Side) */
+struct tcb {
+ struct tcb *next; /* Next entry in ring */
+ u32 count; /* Used to spot stuck/lost packets */
+ u32 stale; /* Used to spot stuck/lost packets */
+ struct sk_buff *skb; /* Network skb we are tied to */
+ u32 index; /* Ring indexes */
+ u32 index_start;
+};
+
+/* Structure representing our local reference(s) to the ring */
+struct tx_ring {
+ /* TCB (Transmit Control Block) memory and lists */
+ struct tcb *tcb_ring;
+
+ /* List of TCBs that are ready to be used */
+ struct tcb *tcb_qhead;
+ struct tcb *tcb_qtail;
+
+ /* list of TCBs that are currently being sent. */
+ struct tcb *send_head;
+ struct tcb *send_tail;
+ int used;
+
+ /* The actual descriptor ring */
+ struct tx_desc *tx_desc_ring;
+ dma_addr_t tx_desc_ring_pa;
+
+ /* send_idx indicates where we last wrote to in the descriptor ring. */
+ u32 send_idx;
+
+ /* The location of the write-back status block */
+ u32 *tx_status;
+ dma_addr_t tx_status_pa;
+
+ /* Packets since the last IRQ: used for interrupt coalescing */
+ int since_irq;
+};
+
+/* Do not change these values: if changed, then change also in respective
+ * TXdma and Rxdma engines
+ */
+#define NUM_DESC_PER_RING_TX 512 /* TX Do not change these values */
+#define NUM_TCB 64
+
+/* These values are all superseded by registry entries to facilitate tuning.
+ * Once the desired performance has been achieved, the optimal registry values
+ * should be re-populated to these #defines:
+ */
+#define TX_ERROR_PERIOD 1000
+
+#define LO_MARK_PERCENT_FOR_PSR 15
+#define LO_MARK_PERCENT_FOR_RX 15
+
+/* RFD (Receive Frame Descriptor) */
+struct rfd {
+ struct list_head list_node;
+ struct sk_buff *skb;
+ u32 len; /* total size of receive frame */
+ u16 bufferindex;
+ u8 ringindex;
+};
+
+/* Flow Control */
+#define FLOW_BOTH 0
+#define FLOW_TXONLY 1
+#define FLOW_RXONLY 2
+#define FLOW_NONE 3
+
+/* Struct to define some device statistics */
+struct ce_stats {
+ u32 multicast_pkts_rcvd;
+ u32 rcvd_pkts_dropped;
+
+ u32 tx_underflows;
+ u32 tx_collisions;
+ u32 tx_excessive_collisions;
+ u32 tx_first_collisions;
+ u32 tx_late_collisions;
+ u32 tx_max_pkt_errs;
+ u32 tx_deferred;
+
+ u32 rx_overflows;
+ u32 rx_length_errs;
+ u32 rx_align_errs;
+ u32 rx_crc_errs;
+ u32 rx_code_violations;
+ u32 rx_other_errs;
+
+ u32 interrupt_status;
+};
+
+/* The private adapter structure */
+struct et131x_adapter {
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+ struct mii_bus *mii_bus;
+ struct phy_device *phydev;
+ struct napi_struct napi;
+
+ /* Flags that indicate current state of the adapter */
+ u32 flags;
+
+ /* local link state, to determine if a state change has occurred */
+ int link;
+
+ /* Configuration */
+ u8 rom_addr[ETH_ALEN];
+ u8 addr[ETH_ALEN];
+ bool has_eeprom;
+ u8 eeprom_data[2];
+
+ spinlock_t tcb_send_qlock; /* protects the tx_ring send tcb list */
+ spinlock_t tcb_ready_qlock; /* protects the tx_ring ready tcb list */
+ spinlock_t rcv_lock; /* protects the rx_ring receive list */
+
+ /* Packet Filter and look ahead size */
+ u32 packet_filter;
+
+ /* multicast list */
+ u32 multicast_addr_count;
+ u8 multicast_list[NIC_MAX_MCAST_LIST][ETH_ALEN];
+
+ /* Pointer to the device's PCI register space */
+ struct address_map __iomem *regs;
+
+ /* Registry parameters */
+ u8 wanted_flow; /* Flow we want for 802.3x flow control */
+ u32 registry_jumbo_packet; /* Max supported ethernet packet size */
+
+ /* Derived from the registry: */
+ u8 flow; /* flow control validated by the far-end */
+
+ /* Minimize init-time */
+ struct timer_list error_timer;
+
+ /* variable putting the phy into coma mode when boot up with no cable
+ * plugged in after 5 seconds
+ */
+ u8 boot_coma;
+
+ /* Tx Memory Variables */
+ struct tx_ring tx_ring;
+
+ /* Rx Memory Variables */
+ struct rx_ring rx_ring;
+
+ struct ce_stats stats;
+};
+
+static int eeprom_wait_ready(struct pci_dev *pdev, u32 *status)
+{
+ u32 reg;
+ int i;
+
+ /* 1. Check LBCIF Status Register for bits 6 & 3:2 all equal to 0 and
+ * bits 7,1:0 both equal to 1, at least once after reset.
+ * Subsequent operations need only to check that bits 1:0 are equal
+ * to 1 prior to starting a single byte read/write
+ */
+ for (i = 0; i < MAX_NUM_REGISTER_POLLS; i++) {
+ if (pci_read_config_dword(pdev, LBCIF_DWORD1_GROUP, &reg))
+ return -EIO;
+
+ /* I2C idle and Phy Queue Avail both true */
+ if ((reg & 0x3000) == 0x3000) {
+ if (status)
+ *status = reg;
+ return reg & 0xFF;
+ }
+ }
+ return -ETIMEDOUT;
+}
+
+static int eeprom_write(struct et131x_adapter *adapter, u32 addr, u8 data)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ int index = 0;
+ int retries;
+ int err = 0;
+ int writeok = 0;
+ u32 status;
+ u32 val = 0;
+
+ /* For an EEPROM, an I2C single byte write is defined as a START
+ * condition followed by the device address, EEPROM address, one byte
+ * of data and a STOP condition. The STOP condition will trigger the
+ * EEPROM's internally timed write cycle to the nonvolatile memory.
+ * All inputs are disabled during this write cycle and the EEPROM will
+ * not respond to any access until the internal write is complete.
+ */
+ err = eeprom_wait_ready(pdev, NULL);
+ if (err < 0)
+ return err;
+
+ /* 2. Write to the LBCIF Control Register: bit 7=1, bit 6=1, bit 3=0,
+ * and bits 1:0 both =0. Bit 5 should be set according to the
+ * type of EEPROM being accessed (1=two byte addressing, 0=one
+ * byte addressing).
+ */
+ if (pci_write_config_byte(pdev, LBCIF_CONTROL_REGISTER,
+ LBCIF_CONTROL_LBCIF_ENABLE |
+ LBCIF_CONTROL_I2C_WRITE))
+ return -EIO;
+
+ /* Prepare EEPROM address for Step 3 */
+ for (retries = 0; retries < MAX_NUM_WRITE_RETRIES; retries++) {
+ if (pci_write_config_dword(pdev, LBCIF_ADDRESS_REGISTER, addr))
+ break;
+ /* Write the data to the LBCIF Data Register (the I2C write
+ * will begin).
+ */
+ if (pci_write_config_byte(pdev, LBCIF_DATA_REGISTER, data))
+ break;
+ /* Monitor bit 1:0 of the LBCIF Status Register. When bits
+ * 1:0 are both equal to 1, the I2C write has completed and the
+ * internal write cycle of the EEPROM is about to start.
+ * (bits 1:0 = 01 is a legal state while waiting from both
+ * equal to 1, but bits 1:0 = 10 is invalid and implies that
+ * something is broken).
+ */
+ err = eeprom_wait_ready(pdev, &status);
+ if (err < 0)
+ return 0;
+
+ /* Check bit 3 of the LBCIF Status Register. If equal to 1,
+ * an error has occurred.Don't break here if we are revision
+ * 1, this is so we do a blind write for load bug.
+ */
+ if ((status & LBCIF_STATUS_GENERAL_ERROR) &&
+ adapter->pdev->revision == 0)
+ break;
+
+ /* Check bit 2 of the LBCIF Status Register. If equal to 1 an
+ * ACK error has occurred on the address phase of the write.
+ * This could be due to an actual hardware failure or the
+ * EEPROM may still be in its internal write cycle from a
+ * previous write. This write operation was ignored and must be
+ *repeated later.
+ */
+ if (status & LBCIF_STATUS_ACK_ERROR) {
+ /* This could be due to an actual hardware failure
+ * or the EEPROM may still be in its internal write
+ * cycle from a previous write. This write operation
+ * was ignored and must be repeated later.
+ */
+ udelay(10);
+ continue;
+ }
+
+ writeok = 1;
+ break;
+ }
+
+ udelay(10);
+
+ while (1) {
+ if (pci_write_config_byte(pdev, LBCIF_CONTROL_REGISTER,
+ LBCIF_CONTROL_LBCIF_ENABLE))
+ writeok = 0;
+
+ /* Do read until internal ACK_ERROR goes away meaning write
+ * completed
+ */
+ do {
+ pci_write_config_dword(pdev,
+ LBCIF_ADDRESS_REGISTER,
+ addr);
+ do {
+ pci_read_config_dword(pdev,
+ LBCIF_DATA_REGISTER,
+ &val);
+ } while ((val & 0x00010000) == 0);
+ } while (val & 0x00040000);
+
+ if ((val & 0xFF00) != 0xC000 || index == 10000)
+ break;
+ index++;
+ }
+ return writeok ? 0 : -EIO;
+}
+
+static int eeprom_read(struct et131x_adapter *adapter, u32 addr, u8 *pdata)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ int err;
+ u32 status;
+
+ /* A single byte read is similar to the single byte write, with the
+ * exception of the data flow:
+ */
+ err = eeprom_wait_ready(pdev, NULL);
+ if (err < 0)
+ return err;
+ /* Write to the LBCIF Control Register: bit 7=1, bit 6=0, bit 3=0,
+ * and bits 1:0 both =0. Bit 5 should be set according to the type
+ * of EEPROM being accessed (1=two byte addressing, 0=one byte
+ * addressing).
+ */
+ if (pci_write_config_byte(pdev, LBCIF_CONTROL_REGISTER,
+ LBCIF_CONTROL_LBCIF_ENABLE))
+ return -EIO;
+ /* Write the address to the LBCIF Address Register (I2C read will
+ * begin).
+ */
+ if (pci_write_config_dword(pdev, LBCIF_ADDRESS_REGISTER, addr))
+ return -EIO;
+ /* Monitor bit 0 of the LBCIF Status Register. When = 1, I2C read
+ * is complete. (if bit 1 =1 and bit 0 stays = 0, a hardware failure
+ * has occurred).
+ */
+ err = eeprom_wait_ready(pdev, &status);
+ if (err < 0)
+ return err;
+ /* Regardless of error status, read data byte from LBCIF Data
+ * Register.
+ */
+ *pdata = err;
+
+ return (status & LBCIF_STATUS_ACK_ERROR) ? -EIO : 0;
+}
+
+static int et131x_init_eeprom(struct et131x_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ u8 eestatus;
+
+ pci_read_config_byte(pdev, ET1310_PCI_EEPROM_STATUS, &eestatus);
+
+ /* THIS IS A WORKAROUND:
+ * I need to call this function twice to get my card in a
+ * LG M1 Express Dual running. I tried also a msleep before this
+ * function, because I thought there could be some time conditions
+ * but it didn't work. Call the whole function twice also work.
+ */
+ if (pci_read_config_byte(pdev, ET1310_PCI_EEPROM_STATUS, &eestatus)) {
+ dev_err(&pdev->dev,
+ "Could not read PCI config space for EEPROM Status\n");
+ return -EIO;
+ }
+
+ /* Determine if the error(s) we care about are present. If they are
+ * present we need to fail.
+ */
+ if (eestatus & 0x4C) {
+ int write_failed = 0;
+
+ if (pdev->revision == 0x01) {
+ int i;
+ static const u8 eedata[4] = { 0xFE, 0x13, 0x10, 0xFF };
+
+ /* Re-write the first 4 bytes if we have an eeprom
+ * present and the revision id is 1, this fixes the
+ * corruption seen with 1310 B Silicon
+ */
+ for (i = 0; i < 3; i++)
+ if (eeprom_write(adapter, i, eedata[i]) < 0)
+ write_failed = 1;
+ }
+ if (pdev->revision != 0x01 || write_failed) {
+ dev_err(&pdev->dev,
+ "Fatal EEPROM Status Error - 0x%04x\n",
+ eestatus);
+
+ /* This error could mean that there was an error
+ * reading the eeprom or that the eeprom doesn't exist.
+ * We will treat each case the same and not try to
+ * gather additional information that normally would
+ * come from the eeprom, like MAC Address
+ */
+ adapter->has_eeprom = 0;
+ return -EIO;
+ }
+ }
+ adapter->has_eeprom = 1;
+
+ /* Read the EEPROM for information regarding LED behavior. Refer to
+ * et131x_xcvr_init() for its use.
+ */
+ eeprom_read(adapter, 0x70, &adapter->eeprom_data[0]);
+ eeprom_read(adapter, 0x71, &adapter->eeprom_data[1]);
+
+ if (adapter->eeprom_data[0] != 0xcd)
+ /* Disable all optional features */
+ adapter->eeprom_data[1] = 0x00;
+
+ return 0;
+}
+
+static void et131x_rx_dma_enable(struct et131x_adapter *adapter)
+{
+ /* Setup the receive dma configuration register for normal operation */
+ u32 csr = ET_RXDMA_CSR_FBR1_ENABLE;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+
+ if (rx_ring->fbr[1]->buffsize == 4096)
+ csr |= ET_RXDMA_CSR_FBR1_SIZE_LO;
+ else if (rx_ring->fbr[1]->buffsize == 8192)
+ csr |= ET_RXDMA_CSR_FBR1_SIZE_HI;
+ else if (rx_ring->fbr[1]->buffsize == 16384)
+ csr |= ET_RXDMA_CSR_FBR1_SIZE_LO | ET_RXDMA_CSR_FBR1_SIZE_HI;
+
+ csr |= ET_RXDMA_CSR_FBR0_ENABLE;
+ if (rx_ring->fbr[0]->buffsize == 256)
+ csr |= ET_RXDMA_CSR_FBR0_SIZE_LO;
+ else if (rx_ring->fbr[0]->buffsize == 512)
+ csr |= ET_RXDMA_CSR_FBR0_SIZE_HI;
+ else if (rx_ring->fbr[0]->buffsize == 1024)
+ csr |= ET_RXDMA_CSR_FBR0_SIZE_LO | ET_RXDMA_CSR_FBR0_SIZE_HI;
+ writel(csr, &adapter->regs->rxdma.csr);
+
+ csr = readl(&adapter->regs->rxdma.csr);
+ if (csr & ET_RXDMA_CSR_HALT_STATUS) {
+ udelay(5);
+ csr = readl(&adapter->regs->rxdma.csr);
+ if (csr & ET_RXDMA_CSR_HALT_STATUS) {
+ dev_err(&adapter->pdev->dev,
+ "RX Dma failed to exit halt state. CSR 0x%08x\n",
+ csr);
+ }
+ }
+}
+
+static void et131x_rx_dma_disable(struct et131x_adapter *adapter)
+{
+ u32 csr;
+ /* Setup the receive dma configuration register */
+ writel(ET_RXDMA_CSR_HALT | ET_RXDMA_CSR_FBR1_ENABLE,
+ &adapter->regs->rxdma.csr);
+ csr = readl(&adapter->regs->rxdma.csr);
+ if (!(csr & ET_RXDMA_CSR_HALT_STATUS)) {
+ udelay(5);
+ csr = readl(&adapter->regs->rxdma.csr);
+ if (!(csr & ET_RXDMA_CSR_HALT_STATUS))
+ dev_err(&adapter->pdev->dev,
+ "RX Dma failed to enter halt state. CSR 0x%08x\n",
+ csr);
+ }
+}
+
+static void et131x_tx_dma_enable(struct et131x_adapter *adapter)
+{
+ /* Setup the transmit dma configuration register for normal
+ * operation
+ */
+ writel(ET_TXDMA_SNGL_EPKT | (PARM_DMA_CACHE_DEF << ET_TXDMA_CACHE_SHIFT),
+ &adapter->regs->txdma.csr);
+}
+
+static inline void add_10bit(u32 *v, int n)
+{
+ *v = INDEX10(*v + n) | (*v & ET_DMA10_WRAP);
+}
+
+static inline void add_12bit(u32 *v, int n)
+{
+ *v = INDEX12(*v + n) | (*v & ET_DMA12_WRAP);
+}
+
+static void et1310_config_mac_regs1(struct et131x_adapter *adapter)
+{
+ struct mac_regs __iomem *macregs = &adapter->regs->mac;
+ u32 station1;
+ u32 station2;
+ u32 ipg;
+
+ /* First we need to reset everything. Write to MAC configuration
+ * register 1 to perform reset.
+ */
+ writel(ET_MAC_CFG1_SOFT_RESET | ET_MAC_CFG1_SIM_RESET |
+ ET_MAC_CFG1_RESET_RXMC | ET_MAC_CFG1_RESET_TXMC |
+ ET_MAC_CFG1_RESET_RXFUNC | ET_MAC_CFG1_RESET_TXFUNC,
+ &macregs->cfg1);
+
+ /* Next lets configure the MAC Inter-packet gap register */
+ ipg = 0x38005860; /* IPG1 0x38 IPG2 0x58 B2B 0x60 */
+ ipg |= 0x50 << 8; /* ifg enforce 0x50 */
+ writel(ipg, &macregs->ipg);
+
+ /* Next lets configure the MAC Half Duplex register */
+ /* BEB trunc 0xA, Ex Defer, Rexmit 0xF Coll 0x37 */
+ writel(0x00A1F037, &macregs->hfdp);
+
+ /* Next lets configure the MAC Interface Control register */
+ writel(0, &macregs->if_ctrl);
+
+ writel(ET_MAC_MIIMGMT_CLK_RST, &macregs->mii_mgmt_cfg);
+
+ /* Next lets configure the MAC Station Address register. These
+ * values are read from the EEPROM during initialization and stored
+ * in the adapter structure. We write what is stored in the adapter
+ * structure to the MAC Station Address registers high and low. This
+ * station address is used for generating and checking pause control
+ * packets.
+ */
+ station2 = (adapter->addr[1] << ET_MAC_STATION_ADDR2_OC2_SHIFT) |
+ (adapter->addr[0] << ET_MAC_STATION_ADDR2_OC1_SHIFT);
+ station1 = (adapter->addr[5] << ET_MAC_STATION_ADDR1_OC6_SHIFT) |
+ (adapter->addr[4] << ET_MAC_STATION_ADDR1_OC5_SHIFT) |
+ (adapter->addr[3] << ET_MAC_STATION_ADDR1_OC4_SHIFT) |
+ adapter->addr[2];
+ writel(station1, &macregs->station_addr_1);
+ writel(station2, &macregs->station_addr_2);
+
+ /* Max ethernet packet in bytes that will be passed by the mac without
+ * being truncated. Allow the MAC to pass 4 more than our max packet
+ * size. This is 4 for the Ethernet CRC.
+ *
+ * Packets larger than (registry_jumbo_packet) that do not contain a
+ * VLAN ID will be dropped by the Rx function.
+ */
+ writel(adapter->registry_jumbo_packet + 4, &macregs->max_fm_len);
+
+ /* clear out MAC config reset */
+ writel(0, &macregs->cfg1);
+}
+
+static void et1310_config_mac_regs2(struct et131x_adapter *adapter)
+{
+ int32_t delay = 0;
+ struct mac_regs __iomem *mac = &adapter->regs->mac;
+ struct phy_device *phydev = adapter->phydev;
+ u32 cfg1;
+ u32 cfg2;
+ u32 ifctrl;
+ u32 ctl;
+
+ ctl = readl(&adapter->regs->txmac.ctl);
+ cfg1 = readl(&mac->cfg1);
+ cfg2 = readl(&mac->cfg2);
+ ifctrl = readl(&mac->if_ctrl);
+
+ /* Set up the if mode bits */
+ cfg2 &= ~ET_MAC_CFG2_IFMODE_MASK;
+ if (phydev->speed == SPEED_1000) {
+ cfg2 |= ET_MAC_CFG2_IFMODE_1000;
+ ifctrl &= ~ET_MAC_IFCTRL_PHYMODE;
+ } else {
+ cfg2 |= ET_MAC_CFG2_IFMODE_100;
+ ifctrl |= ET_MAC_IFCTRL_PHYMODE;
+ }
+
+ cfg1 |= ET_MAC_CFG1_RX_ENABLE | ET_MAC_CFG1_TX_ENABLE |
+ ET_MAC_CFG1_TX_FLOW;
+
+ cfg1 &= ~(ET_MAC_CFG1_LOOPBACK | ET_MAC_CFG1_RX_FLOW);
+ if (adapter->flow == FLOW_RXONLY || adapter->flow == FLOW_BOTH)
+ cfg1 |= ET_MAC_CFG1_RX_FLOW;
+ writel(cfg1, &mac->cfg1);
+
+ /* Now we need to initialize the MAC Configuration 2 register */
+ /* preamble 7, check length, huge frame off, pad crc, crc enable
+ * full duplex off
+ */
+ cfg2 |= 0x7 << ET_MAC_CFG2_PREAMBLE_SHIFT;
+ cfg2 |= ET_MAC_CFG2_IFMODE_LEN_CHECK;
+ cfg2 |= ET_MAC_CFG2_IFMODE_PAD_CRC;
+ cfg2 |= ET_MAC_CFG2_IFMODE_CRC_ENABLE;
+ cfg2 &= ~ET_MAC_CFG2_IFMODE_HUGE_FRAME;
+ cfg2 &= ~ET_MAC_CFG2_IFMODE_FULL_DPLX;
+
+ if (phydev->duplex == DUPLEX_FULL)
+ cfg2 |= ET_MAC_CFG2_IFMODE_FULL_DPLX;
+
+ ifctrl &= ~ET_MAC_IFCTRL_GHDMODE;
+ if (phydev->duplex == DUPLEX_HALF)
+ ifctrl |= ET_MAC_IFCTRL_GHDMODE;
+
+ writel(ifctrl, &mac->if_ctrl);
+ writel(cfg2, &mac->cfg2);
+
+ do {
+ udelay(10);
+ delay++;
+ cfg1 = readl(&mac->cfg1);
+ } while ((cfg1 & ET_MAC_CFG1_WAIT) != ET_MAC_CFG1_WAIT && delay < 100);
+
+ if (delay == 100) {
+ dev_warn(&adapter->pdev->dev,
+ "Syncd bits did not respond correctly cfg1 word 0x%08x\n",
+ cfg1);
+ }
+
+ ctl |= ET_TX_CTRL_TXMAC_ENABLE | ET_TX_CTRL_FC_DISABLE;
+ writel(ctl, &adapter->regs->txmac.ctl);
+
+ if (adapter->flags & FMP_ADAPTER_LOWER_POWER) {
+ et131x_rx_dma_enable(adapter);
+ et131x_tx_dma_enable(adapter);
+ }
+}
+
+static int et1310_in_phy_coma(struct et131x_adapter *adapter)
+{
+ u32 pmcsr = readl(&adapter->regs->global.pm_csr);
+
+ return ET_PM_PHY_SW_COMA & pmcsr ? 1 : 0;
+}
+
+static void et1310_setup_device_for_multicast(struct et131x_adapter *adapter)
+{
+ struct rxmac_regs __iomem *rxmac = &adapter->regs->rxmac;
+ u32 hash1 = 0;
+ u32 hash2 = 0;
+ u32 hash3 = 0;
+ u32 hash4 = 0;
+ u32 pm_csr;
+
+ /* If ET131X_PACKET_TYPE_MULTICAST is specified, then we provision
+ * the multicast LIST. If it is NOT specified (and "ALL" is not
+ * specified either), then we should pass NO multicast addresses
+ * down to the device.
+ */
+ if (adapter->packet_filter & ET131X_PACKET_TYPE_MULTICAST) {
+ int i;
+
+ /* Loop through our multicast array and set up the device */
+ for (i = 0; i < adapter->multicast_addr_count; i++) {
+ u32 result;
+
+ result = ether_crc(6, adapter->multicast_list[i]);
+
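+ /* Bits 29-23 of the CRC select one of 128 hash-table positions */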
+ result = (result & 0x3F800000) >> 23;
+
+ if (result < 32) {
+ hash1 |= (1 << result);
+ } else if ((31 < result) && (result < 64)) {
+ result -= 32;
+ hash2 |= (1 << result);
+ } else if ((63 < result) && (result < 96)) {
+ result -= 64;
+ hash3 |= (1 << result);
+ } else {
+ result -= 96;
+ hash4 |= (1 << result);
+ }
+ }
+ }
+
+ /* Write out the new hash to the device */
+ pm_csr = readl(&adapter->regs->global.pm_csr);
+ if (!et1310_in_phy_coma(adapter)) {
+ writel(hash1, &rxmac->multi_hash1);
+ writel(hash2, &rxmac->multi_hash2);
+ writel(hash3, &rxmac->multi_hash3);
+ writel(hash4, &rxmac->multi_hash4);
+ }
+}
+
+static void et1310_setup_device_for_unicast(struct et131x_adapter *adapter)
+{
+ struct rxmac_regs __iomem *rxmac = &adapter->regs->rxmac;
+ u32 uni_pf1;
+ u32 uni_pf2;
+ u32 uni_pf3;
+ u32 pm_csr;
+
+ /* Set up unicast packet filter reg 3 to be the first two octets of
+ * the MAC address for both addresses
+ *
+ * Set up unicast packet filter reg 2 to be the octets 2 - 5 of the
+ * MAC address for the second address
+ *
+ * Set up unicast packet filter reg 1 to be the octets 2 - 5 of the
+ * MAC address for the first address
+ */
+ uni_pf3 = (adapter->addr[0] << ET_RX_UNI_PF_ADDR2_1_SHIFT) |
+ (adapter->addr[1] << ET_RX_UNI_PF_ADDR2_2_SHIFT) |
+ (adapter->addr[0] << ET_RX_UNI_PF_ADDR1_1_SHIFT) |
+ adapter->addr[1];
+
+ uni_pf2 = (adapter->addr[2] << ET_RX_UNI_PF_ADDR2_3_SHIFT) |
+ (adapter->addr[3] << ET_RX_UNI_PF_ADDR2_4_SHIFT) |
+ (adapter->addr[4] << ET_RX_UNI_PF_ADDR2_5_SHIFT) |
+ adapter->addr[5];
+
+ uni_pf1 = (adapter->addr[2] << ET_RX_UNI_PF_ADDR1_3_SHIFT) |
+ (adapter->addr[3] << ET_RX_UNI_PF_ADDR1_4_SHIFT) |
+ (adapter->addr[4] << ET_RX_UNI_PF_ADDR1_5_SHIFT) |
+ adapter->addr[5];
+
+ pm_csr = readl(&adapter->regs->global.pm_csr);
+ if (!et1310_in_phy_coma(adapter)) {
+ writel(uni_pf1, &rxmac->uni_pf_addr1);
+ writel(uni_pf2, &rxmac->uni_pf_addr2);
+ writel(uni_pf3, &rxmac->uni_pf_addr3);
+ }
+}
+
+static void et1310_config_rxmac_regs(struct et131x_adapter *adapter)
+{
+ struct rxmac_regs __iomem *rxmac = &adapter->regs->rxmac;
+ struct phy_device *phydev = adapter->phydev;
+ u32 sa_lo;
+ u32 sa_hi = 0;
+ u32 pf_ctrl = 0;
+ u32 __iomem *wolw;
+
+ /* Disable the MAC while it is being configured (also disable WOL) */
+ writel(0x8, &rxmac->ctrl);
+
+ /* Initialize WOL to disabled. */
+ writel(0, &rxmac->crc0);
+ writel(0, &rxmac->crc12);
+ writel(0, &rxmac->crc34);
+
+ /* We need to set the WOL mask0 - mask4 next. We initialize them to
+ * their default value of 0x00000000 because there are no WOL masks
+ * as of this time.
+ */
+ for (wolw = &rxmac->mask0_word0; wolw <= &rxmac->mask4_word3; wolw++)
+ writel(0, wolw);
+
+ /* Let's set up the WOL Source Address */
+ sa_lo = (adapter->addr[2] << ET_RX_WOL_LO_SA3_SHIFT) |
+ (adapter->addr[3] << ET_RX_WOL_LO_SA4_SHIFT) |
+ (adapter->addr[4] << ET_RX_WOL_LO_SA5_SHIFT) |
+ adapter->addr[5];
+ writel(sa_lo, &rxmac->sa_lo);
+
+ sa_hi = (u32)(adapter->addr[0] << ET_RX_WOL_HI_SA1_SHIFT) |
+ adapter->addr[1];
+ writel(sa_hi, &rxmac->sa_hi);
+
+ /* Disable all Packet Filtering */
+ writel(0, &rxmac->pf_ctrl);
+
+ /* Let's initialize the Unicast Packet filtering address */
+ if (adapter->packet_filter & ET131X_PACKET_TYPE_DIRECTED) {
+ et1310_setup_device_for_unicast(adapter);
+ pf_ctrl |= ET_RX_PFCTRL_UNICST_FILTER_ENABLE;
+ } else {
+ writel(0, &rxmac->uni_pf_addr1);
+ writel(0, &rxmac->uni_pf_addr2);
+ writel(0, &rxmac->uni_pf_addr3);
+ }
+
+ /* Let's initialize the Multicast hash */
+ if (!(adapter->packet_filter & ET131X_PACKET_TYPE_ALL_MULTICAST)) {
+ pf_ctrl |= ET_RX_PFCTRL_MLTCST_FILTER_ENABLE;
+ et1310_setup_device_for_multicast(adapter);
+ }
+
+ /* Runt packet filtering. Didn't work in version A silicon. */
+ pf_ctrl |= (NIC_MIN_PACKET_SIZE + 4) << ET_RX_PFCTRL_MIN_PKT_SZ_SHIFT;
+ pf_ctrl |= ET_RX_PFCTRL_FRAG_FILTER_ENABLE;
+
+ if (adapter->registry_jumbo_packet > 8192)
+ /* In order to transmit jumbo packets greater than 8k, the
+ * FIFO between RxMAC and RxDMA needs to be reduced in size
+ * to (16k - Jumbo packet size). In order to implement this,
+ * we must use "cut through" mode in the RxMAC, which chops
+ * packets down into segments which are (max_size * 16). In
+ * this case we selected 256 bytes, since this is the size of
+ * the PCI-Express TLP's that the 1310 uses.
+ *
+ * seg_en on, fc_en off, size 0x10
+ */
+ writel(0x41, &rxmac->mcif_ctrl_max_seg);
+ else
+ writel(0, &rxmac->mcif_ctrl_max_seg);
+
+ writel(0, &rxmac->mcif_water_mark);
+ writel(0, &rxmac->mif_ctrl);
+ writel(0, &rxmac->space_avail);
+
+ /* Initialize the mif_ctrl register
+ * bit 3: Receive code error. One or more nibbles were signaled as
+ * errors during the reception of the packet. Clear this
+ * bit in Gigabit, set it in 100Mbit. This was derived
+ * experimentally at UNH.
+ * bit 4: Receive CRC error. The packet's CRC did not match the
+ * internally generated CRC.
+ * bit 5: Receive length check error. Indicates that frame length
+ * field value in the packet does not match the actual data
+ * byte length and is not a type field.
+ * bit 16: Receive frame truncated.
+ * bit 17: Drop packet enable
+ */
+ if (phydev && phydev->speed == SPEED_100)
+ writel(0x30038, &rxmac->mif_ctrl);
+ else
+ writel(0x30030, &rxmac->mif_ctrl);
+
+ /* Finally we initialize RxMac to be enabled & WOL disabled. Packet
+ * filter is always enabled since it is where the runt packets are
+ * supposed to be dropped. For version A silicon, runt packet
+ * dropping doesn't work, so it is disabled in the pf_ctrl register,
+ * but we still leave the packet filter on.
+ */
+ writel(pf_ctrl, &rxmac->pf_ctrl);
+ writel(ET_RX_CTRL_RXMAC_ENABLE | ET_RX_CTRL_WOL_DISABLE, &rxmac->ctrl);
+}
+
+static void et1310_config_txmac_regs(struct et131x_adapter *adapter)
+{
+ struct txmac_regs __iomem *txmac = &adapter->regs->txmac;
+
+ /* We need to update the Control Frame Parameters
+ * cfpt - control frame pause timer set to 64 (0x40)
+ * cfep - control frame extended pause timer set to 0x0
+ */
+ if (adapter->flow == FLOW_NONE)
+ writel(0, &txmac->cf_param);
+ else
+ writel(0x40, &txmac->cf_param);
+}
+
+static void et1310_config_macstat_regs(struct et131x_adapter *adapter)
+{
+ struct macstat_regs __iomem *macstat = &adapter->regs->macstat;
+ u32 __iomem *reg;
+
+ /* initialize all the macstat registers to zero on the device */
+ for (reg = &macstat->txrx_0_64_byte_frames;
+ reg <= &macstat->carry_reg2; reg++)
+ writel(0, reg);
+
+ /* Unmask any counters that we want to track the overflow of.
+ * Initially this will be all counters. It may become clear later
+ * that we do not need to track all counters.
+ */
+ writel(0xFFFFBE32, &macstat->carry_reg1_mask);
+ writel(0xFFFE7E8B, &macstat->carry_reg2_mask);
+}
+
+static int et131x_phy_mii_read(struct et131x_adapter *adapter, u8 addr,
+ u8 reg, u16 *value)
+{
+ struct mac_regs __iomem *mac = &adapter->regs->mac;
+ int status = 0;
+ u32 delay = 0;
+ u32 mii_addr;
+ u32 mii_cmd;
+ u32 mii_indicator;
+
+ /* Save a local copy of the registers we are dealing with so we can
+ * set them back
+ */
+ mii_addr = readl(&mac->mii_mgmt_addr);
+ mii_cmd = readl(&mac->mii_mgmt_cmd);
+
+ /* Stop the current operation */
+ writel(0, &mac->mii_mgmt_cmd);
+
+ /* Set up the register we need to read from on the correct PHY */
+ writel(ET_MAC_MII_ADDR(addr, reg), &mac->mii_mgmt_addr);
+
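+ /* Kick off a single MII management read cycle */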
+ writel(0x1, &mac->mii_mgmt_cmd);
+
+ do {
+ udelay(50);
+ delay++;
+ mii_indicator = readl(&mac->mii_mgmt_indicator);
+ } while ((mii_indicator & ET_MAC_MGMT_WAIT) && delay < 50);
+
+ /* If we hit the max delay, we could not read the register */
+ if (delay == 50) {
+ dev_warn(&adapter->pdev->dev,
+ "reg 0x%08x could not be read\n", reg);
+ dev_warn(&adapter->pdev->dev, "status is 0x%08x\n",
+ mii_indicator);
+
+ status = -EIO;
+ goto out;
+ }
+
+ /* If we hit here we were able to read the register and we need to
+ * return the value to the caller
+ */
+ *value = readl(&mac->mii_mgmt_stat) & ET_MAC_MIIMGMT_STAT_PHYCRTL_MASK;
+
+out:
+ /* Stop the read operation */
+ writel(0, &mac->mii_mgmt_cmd);
+
+ /* set the registers we touched back to the state at which we entered
+ * this function
+ */
+ writel(mii_addr, &mac->mii_mgmt_addr);
+ writel(mii_cmd, &mac->mii_mgmt_cmd);
+
+ return status;
+}
+
+static int et131x_mii_read(struct et131x_adapter *adapter, u8 reg, u16 *value)
+{
+ struct phy_device *phydev = adapter->phydev;
+
+ if (!phydev)
+ return -EIO;
+
+ return et131x_phy_mii_read(adapter, phydev->addr, reg, value);
+}
+
+static int et131x_mii_write(struct et131x_adapter *adapter, u8 addr, u8 reg,
+ u16 value)
+{
+ struct mac_regs __iomem *mac = &adapter->regs->mac;
+ int status = 0;
+ u32 delay = 0;
+ u32 mii_addr;
+ u32 mii_cmd;
+ u32 mii_indicator;
+
+ /* Save a local copy of the registers we are dealing with so we can
+ * set them back
+ */
+ mii_addr = readl(&mac->mii_mgmt_addr);
+ mii_cmd = readl(&mac->mii_mgmt_cmd);
+
+ /* Stop the current operation */
+ writel(0, &mac->mii_mgmt_cmd);
+
+ /* Set up the register we need to write to on the correct PHY */
+ writel(ET_MAC_MII_ADDR(addr, reg), &mac->mii_mgmt_addr);
+
+ /* Add the value to write to the registers to the mac */
+ writel(value, &mac->mii_mgmt_ctrl);
+
+ do {
+ udelay(50);
+ delay++;
+ mii_indicator = readl(&mac->mii_mgmt_indicator);
+ } while ((mii_indicator & ET_MAC_MGMT_BUSY) && delay < 100);
+
+ /* If we hit the max delay, we could not write the register */
+ if (delay == 100) {
+ u16 tmp;
+
+ dev_warn(&adapter->pdev->dev,
+ "reg 0x%08x could not be written", reg);
+ dev_warn(&adapter->pdev->dev, "status is 0x%08x\n",
+ mii_indicator);
+ dev_warn(&adapter->pdev->dev, "command is 0x%08x\n",
+ readl(&mac->mii_mgmt_cmd));
+
+ et131x_mii_read(adapter, reg, &tmp);
+
+ status = -EIO;
+ }
+ /* Stop the write operation */
+ writel(0, &mac->mii_mgmt_cmd);
+
+ /* set the registers we touched back to the state at which we entered
+ * this function
+ */
+ writel(mii_addr, &mac->mii_mgmt_addr);
+ writel(mii_cmd, &mac->mii_mgmt_cmd);
+
+ return status;
+}
+
+static void et1310_phy_read_mii_bit(struct et131x_adapter *adapter,
+ u16 regnum,
+ u16 bitnum,
+ u8 *value)
+{
+ u16 reg;
+ u16 mask = 1 << bitnum;
+
+ et131x_mii_read(adapter, regnum, &reg);
+
+ *value = (reg & mask) >> bitnum;
+}
+
+static void et1310_config_flow_control(struct et131x_adapter *adapter)
+{
+ struct phy_device *phydev = adapter->phydev;
+
+ if (phydev->duplex == DUPLEX_HALF) {
+ adapter->flow = FLOW_NONE;
+ } else {
+ char remote_pause, remote_async_pause;
+
+ et1310_phy_read_mii_bit(adapter, 5, 10, &remote_pause);
+ et1310_phy_read_mii_bit(adapter, 5, 11, &remote_async_pause);
+
+ if (remote_pause && remote_async_pause) {
+ adapter->flow = adapter->wanted_flow;
+ } else if (remote_pause && !remote_async_pause) {
+ if (adapter->wanted_flow == FLOW_BOTH)
+ adapter->flow = FLOW_BOTH;
+ else
+ adapter->flow = FLOW_NONE;
+ } else if (!remote_pause && !remote_async_pause) {
+ adapter->flow = FLOW_NONE;
+ } else {
+ if (adapter->wanted_flow == FLOW_BOTH)
+ adapter->flow = FLOW_RXONLY;
+ else
+ adapter->flow = FLOW_NONE;
+ }
+ }
+}
+
+/* et1310_update_macstat_host_counters - Update local copy of the statistics */
+static void et1310_update_macstat_host_counters(struct et131x_adapter *adapter)
+{
+ struct ce_stats *stats = &adapter->stats;
+ struct macstat_regs __iomem *macstat =
+ &adapter->regs->macstat;
+
+ stats->tx_collisions += readl(&macstat->tx_total_collisions);
+ stats->tx_first_collisions += readl(&macstat->tx_single_collisions);
+ stats->tx_deferred += readl(&macstat->tx_deferred);
+ stats->tx_excessive_collisions +=
+ readl(&macstat->tx_multiple_collisions);
+ stats->tx_late_collisions += readl(&macstat->tx_late_collisions);
+ stats->tx_underflows += readl(&macstat->tx_undersize_frames);
+ stats->tx_max_pkt_errs += readl(&macstat->tx_oversize_frames);
+
+ stats->rx_align_errs += readl(&macstat->rx_align_errs);
+ stats->rx_crc_errs += readl(&macstat->rx_code_errs);
+ stats->rcvd_pkts_dropped += readl(&macstat->rx_drops);
+ stats->rx_overflows += readl(&macstat->rx_oversize_packets);
+ stats->rx_code_violations += readl(&macstat->rx_fcs_errs);
+ stats->rx_length_errs += readl(&macstat->rx_frame_len_errs);
+ stats->rx_other_errs += readl(&macstat->rx_fragment_packets);
+}
+
+/* et1310_handle_macstat_interrupt
+ *
+ * One of the MACSTAT counters has wrapped. Update the local copy of
+ * the statistics held in the adapter structure, checking the "wrap"
+ * bit for each counter.
+ */
+static void et1310_handle_macstat_interrupt(struct et131x_adapter *adapter)
+{
+ u32 carry_reg1;
+ u32 carry_reg2;
+
+ /* Read the interrupt bits from the register(s). These are Clear On
+ * Write.
+ */
+ carry_reg1 = readl(&adapter->regs->macstat.carry_reg1);
+ carry_reg2 = readl(&adapter->regs->macstat.carry_reg2);
+
+ writel(carry_reg1, &adapter->regs->macstat.carry_reg1);
+ writel(carry_reg2, &adapter->regs->macstat.carry_reg2);
+
+ /* We need to update the host copy of all the MAC_STAT counters.
+ * For each counter, check its overflow bit. If the overflow bit is
+ * set, then increment the host version of the count by one complete
+ * revolution of the counter. This routine is called when the counter
+ * block indicates that one of the counters has wrapped.
+ */
+ if (carry_reg1 & (1 << 14))
+ adapter->stats.rx_code_violations += COUNTER_WRAP_16_BIT;
+ if (carry_reg1 & (1 << 8))
+ adapter->stats.rx_align_errs += COUNTER_WRAP_12_BIT;
+ if (carry_reg1 & (1 << 7))
+ adapter->stats.rx_length_errs += COUNTER_WRAP_16_BIT;
+ if (carry_reg1 & (1 << 2))
+ adapter->stats.rx_other_errs += COUNTER_WRAP_16_BIT;
+ if (carry_reg1 & (1 << 6))
+ adapter->stats.rx_crc_errs += COUNTER_WRAP_16_BIT;
+ if (carry_reg1 & (1 << 3))
+ adapter->stats.rx_overflows += COUNTER_WRAP_16_BIT;
+ if (carry_reg1 & (1 << 0))
+ adapter->stats.rcvd_pkts_dropped += COUNTER_WRAP_16_BIT;
+ if (carry_reg2 & (1 << 16))
+ adapter->stats.tx_max_pkt_errs += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 15))
+ adapter->stats.tx_underflows += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 6))
+ adapter->stats.tx_first_collisions += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 8))
+ adapter->stats.tx_deferred += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 5))
+ adapter->stats.tx_excessive_collisions += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 4))
+ adapter->stats.tx_late_collisions += COUNTER_WRAP_12_BIT;
+ if (carry_reg2 & (1 << 2))
+ adapter->stats.tx_collisions += COUNTER_WRAP_12_BIT;
+}
+
+static int et131x_mdio_read(struct mii_bus *bus, int phy_addr, int reg)
+{
+ struct net_device *netdev = bus->priv;
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ u16 value;
+ int ret;
+
+ ret = et131x_phy_mii_read(adapter, phy_addr, reg, &value);
+
+ if (ret < 0)
+ return ret;
+
+ return value;
+}
+
+static int et131x_mdio_write(struct mii_bus *bus, int phy_addr,
+ int reg, u16 value)
+{
+ struct net_device *netdev = bus->priv;
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ return et131x_mii_write(adapter, phy_addr, reg, value);
+}
+
+/* et1310_phy_power_switch - PHY power control
+ * @adapter: device to control
+ * @down: true for off/false for back on
+ *
+ * one hundred, ten, one thousand megs
+ * How would you like to have your LAN accessed
+ * Can't you see that this code processed
+ * Phy power, phy power..
+ */
+static void et1310_phy_power_switch(struct et131x_adapter *adapter, bool down)
+{
+ u16 data;
+ struct phy_device *phydev = adapter->phydev;
+
+ et131x_mii_read(adapter, MII_BMCR, &data);
+ data &= ~BMCR_PDOWN;
+ if (down)
+ data |= BMCR_PDOWN;
+ et131x_mii_write(adapter, phydev->addr, MII_BMCR, data);
+}
+
+/* et131x_xcvr_init - Init the phy if we are setting it into force mode */
+static void et131x_xcvr_init(struct et131x_adapter *adapter)
+{
+ u16 lcr2;
+ struct phy_device *phydev = adapter->phydev;
+
+ /* Set the LED behavior such that LED 1 indicates speed (off =
+ * 10Mbits, blink = 100Mbits, on = 1000Mbits) and LED 2 indicates
+ * link and activity (on for link, blink off for activity).
+ *
+ * NOTE: Some customizations have been added here for specific
+ * vendors; The LED behavior is now determined by vendor data in the
+ * EEPROM. However, the above description is the default.
+ */
+ if ((adapter->eeprom_data[1] & 0x4) == 0) {
+ et131x_mii_read(adapter, PHY_LED_2, &lcr2);
+
+ lcr2 &= (ET_LED2_LED_100TX | ET_LED2_LED_1000T);
+ lcr2 |= (LED_VAL_LINKON_ACTIVE << LED_LINK_SHIFT);
+
+ if ((adapter->eeprom_data[1] & 0x8) == 0)
+ lcr2 |= (LED_VAL_1000BT_100BTX << LED_TXRX_SHIFT);
+ else
+ lcr2 |= (LED_VAL_LINKON << LED_TXRX_SHIFT);
+
+ et131x_mii_write(adapter, phydev->addr, PHY_LED_2, lcr2);
+ }
+}
+
+/* et131x_configure_global_regs - configure JAGCore global regs */
+static void et131x_configure_global_regs(struct et131x_adapter *adapter)
+{
+ struct global_regs __iomem *regs = &adapter->regs->global;
+
+ writel(0, &regs->rxq_start_addr);
+ writel(INTERNAL_MEM_SIZE - 1, &regs->txq_end_addr);
+
+ if (adapter->registry_jumbo_packet < 2048) {
+ /* Tx / RxDMA and Tx/Rx MAC interfaces have a 1k word
+ * block of RAM that the driver can split between Tx
+ * and Rx as it desires. Our default is to split it
+ * 50/50:
+ */
+ writel(PARM_RX_MEM_END_DEF, &regs->rxq_end_addr);
+ writel(PARM_RX_MEM_END_DEF + 1, &regs->txq_start_addr);
+ } else if (adapter->registry_jumbo_packet < 8192) {
+ /* For jumbo packets > 2k but < 8k, split 50-50. */
+ writel(INTERNAL_MEM_RX_OFFSET, &regs->rxq_end_addr);
+ writel(INTERNAL_MEM_RX_OFFSET + 1, &regs->txq_start_addr);
+ } else {
+ /* 9216 is the only packet size greater than 8k that
+ * is available. The Tx buffer has to be big enough
+ * for one whole packet on the Tx side. We'll make
+ * the Tx 9408, and give the rest to Rx.
+ */
+ writel(0x01b3, &regs->rxq_end_addr);
+ writel(0x01b4, &regs->txq_start_addr);
+ }
+
+ /* Initialize the loopback register. Disable all loopbacks. */
+ writel(0, &regs->loopback);
+
+ writel(0, &regs->msi_config);
+
+ /* By default, disable the watchdog timer. It will be enabled when
+ * a packet is queued.
+ */
+ writel(0, &regs->watchdog_timer);
+}
+
+/* et131x_config_rx_dma_regs - Start of Rx_DMA init sequence */
+static void et131x_config_rx_dma_regs(struct et131x_adapter *adapter)
+{
+ struct rxdma_regs __iomem *rx_dma = &adapter->regs->rxdma;
+ struct rx_ring *rx_local = &adapter->rx_ring;
+ struct fbr_desc *fbr_entry;
+ u32 entry;
+ u32 psr_num_des;
+ unsigned long flags;
+ u8 id;
+
+ et131x_rx_dma_disable(adapter);
+
+ /* Load the completion writeback physical address */
+ writel(upper_32_bits(rx_local->rx_status_bus), &rx_dma->dma_wb_base_hi);
+ writel(lower_32_bits(rx_local->rx_status_bus), &rx_dma->dma_wb_base_lo);
+
+ memset(rx_local->rx_status_block, 0, sizeof(struct rx_status_block));
+
+ /* Set the address and parameters of the packet status ring */
+ writel(upper_32_bits(rx_local->ps_ring_physaddr), &rx_dma->psr_base_hi);
+ writel(lower_32_bits(rx_local->ps_ring_physaddr), &rx_dma->psr_base_lo);
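+ /* The ring size register is written as (number of entries - 1) */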
+ writel(rx_local->psr_entries - 1, &rx_dma->psr_num_des);
+ writel(0, &rx_dma->psr_full_offset);
+
+ psr_num_des = readl(&rx_dma->psr_num_des) & ET_RXDMA_PSR_NUM_DES_MASK;
+ writel((psr_num_des * LO_MARK_PERCENT_FOR_PSR) / 100,
+ &rx_dma->psr_min_des);
+
+ spin_lock_irqsave(&adapter->rcv_lock, flags);
+
+ /* These local variables track the PSR in the adapter structure */
+ rx_local->local_psr_full = 0;
+
+ for (id = 0; id < NUM_FBRS; id++) {
+ u32 __iomem *num_des;
+ u32 __iomem *full_offset;
+ u32 __iomem *min_des;
+ u32 __iomem *base_hi;
+ u32 __iomem *base_lo;
+ struct fbr_lookup *fbr = rx_local->fbr[id];
+
+ if (id == 0) {
+ num_des = &rx_dma->fbr0_num_des;
+ full_offset = &rx_dma->fbr0_full_offset;
+ min_des = &rx_dma->fbr0_min_des;
+ base_hi = &rx_dma->fbr0_base_hi;
+ base_lo = &rx_dma->fbr0_base_lo;
+ } else {
+ num_des = &rx_dma->fbr1_num_des;
+ full_offset = &rx_dma->fbr1_full_offset;
+ min_des = &rx_dma->fbr1_min_des;
+ base_hi = &rx_dma->fbr1_base_hi;
+ base_lo = &rx_dma->fbr1_base_lo;
+ }
+
+ /* Now's the best time to initialize FBR contents */
+ fbr_entry = fbr->ring_virtaddr;
+ for (entry = 0; entry < fbr->num_entries; entry++) {
+ fbr_entry->addr_hi = fbr->bus_high[entry];
+ fbr_entry->addr_lo = fbr->bus_low[entry];
+ fbr_entry->word2 = entry;
+ fbr_entry++;
+ }
+
+ /* Set the address and parameters of Free buffer ring 1 and 0 */
+ writel(upper_32_bits(fbr->ring_physaddr), base_hi);
+ writel(lower_32_bits(fbr->ring_physaddr), base_lo);
+ writel(fbr->num_entries - 1, num_des);
+ writel(ET_DMA10_WRAP, full_offset);
+
+ /* This variable tracks the free buffer ring 1 full position,
+ * so it has to match the above.
+ */
+ fbr->local_full = ET_DMA10_WRAP;
+ writel(((fbr->num_entries * LO_MARK_PERCENT_FOR_RX) / 100) - 1,
+ min_des);
+ }
+
+ /* Program the number of packets we will receive before generating an
+ * interrupt.
+ * For version B silicon, this value gets updated once autoneg is
+ * complete.
+ */
+ writel(PARM_RX_NUM_BUFS_DEF, &rx_dma->num_pkt_done);
+
+ /* The "time_done" is not working correctly to coalesce interrupts
+ * after a given time period, but rather is giving us an interrupt
+ * regardless of whether we have received packets.
+ * This value gets updated once autoneg is complete.
+ */
+ writel(PARM_RX_TIME_INT_DEF, &rx_dma->max_pkt_time);
+
+ spin_unlock_irqrestore(&adapter->rcv_lock, flags);
+}
+
+/* et131x_config_tx_dma_regs - Set up the tx dma section of the JAGCore.
+ *
+ * Configure the transmit engine with the ring buffers we have created
+ * and prepare it for use.
+ */
+static void et131x_config_tx_dma_regs(struct et131x_adapter *adapter)
+{
+ struct txdma_regs __iomem *txdma = &adapter->regs->txdma;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* Load the hardware with the start of the transmit descriptor ring. */
+ writel(upper_32_bits(tx_ring->tx_desc_ring_pa), &txdma->pr_base_hi);
+ writel(lower_32_bits(tx_ring->tx_desc_ring_pa), &txdma->pr_base_lo);
+
+ /* Initialise the transmit DMA engine */
+ writel(NUM_DESC_PER_RING_TX - 1, &txdma->pr_num_des);
+
+ /* Load the completion writeback physical address */
+ writel(upper_32_bits(tx_ring->tx_status_pa), &txdma->dma_wb_base_hi);
+ writel(lower_32_bits(tx_ring->tx_status_pa), &txdma->dma_wb_base_lo);
+
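+ /* Clear the Tx status writeback word */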
+ *tx_ring->tx_status = 0;
+
+ writel(0, &txdma->service_request);
+ tx_ring->send_idx = 0;
+}
+
+/* et131x_adapter_setup - Set the adapter up as per cassini+ documentation */
+static void et131x_adapter_setup(struct et131x_adapter *adapter)
+{
+ et131x_configure_global_regs(adapter);
+ et1310_config_mac_regs1(adapter);
+
+ /* Configure the MMC registers */
+ /* All we need to do is initialize the Memory Control Register */
+ writel(ET_MMC_ENABLE, &adapter->regs->mmc.mmc_ctrl);
+
+ et1310_config_rxmac_regs(adapter);
+ et1310_config_txmac_regs(adapter);
+
+ et131x_config_rx_dma_regs(adapter);
+ et131x_config_tx_dma_regs(adapter);
+
+ et1310_config_macstat_regs(adapter);
+
+ et1310_phy_power_switch(adapter, 0);
+ et131x_xcvr_init(adapter);
+}
+
+/* et131x_soft_reset - Issue soft reset to the hardware, complete for ET1310 */
+static void et131x_soft_reset(struct et131x_adapter *adapter)
+{
+ u32 reg;
+
+ /* Disable MAC Core */
+ reg = ET_MAC_CFG1_SOFT_RESET | ET_MAC_CFG1_SIM_RESET |
+ ET_MAC_CFG1_RESET_RXMC | ET_MAC_CFG1_RESET_TXMC |
+ ET_MAC_CFG1_RESET_RXFUNC | ET_MAC_CFG1_RESET_TXFUNC;
+ writel(reg, &adapter->regs->mac.cfg1);
+
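+ /* Perform a global software reset of all the JAGCore blocks */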
+ reg = ET_RESET_ALL;
+ writel(reg, &adapter->regs->global.sw_reset);
+
+ reg = ET_MAC_CFG1_RESET_RXMC | ET_MAC_CFG1_RESET_TXMC |
+ ET_MAC_CFG1_RESET_RXFUNC | ET_MAC_CFG1_RESET_TXFUNC;
+ writel(reg, &adapter->regs->mac.cfg1);
+ writel(0, &adapter->regs->mac.cfg1);
+}
+
+static void et131x_enable_interrupts(struct et131x_adapter *adapter)
+{
+ u32 mask;
+
+ if (adapter->flow == FLOW_TXONLY || adapter->flow == FLOW_BOTH)
+ mask = INT_MASK_ENABLE;
+ else
+ mask = INT_MASK_ENABLE_NO_FLOW;
+
+ writel(mask, &adapter->regs->global.int_mask);
+}
+
+static void et131x_disable_interrupts(struct et131x_adapter *adapter)
+{
+ writel(INT_MASK_DISABLE, &adapter->regs->global.int_mask);
+}
+
+static void et131x_tx_dma_disable(struct et131x_adapter *adapter)
+{
+ /* Setup the transmit dma configuration register */
+ writel(ET_TXDMA_CSR_HALT | ET_TXDMA_SNGL_EPKT,
+ &adapter->regs->txdma.csr);
+}
+
+static void et131x_enable_txrx(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ et131x_rx_dma_enable(adapter);
+ et131x_tx_dma_enable(adapter);
+
+ if (adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE)
+ et131x_enable_interrupts(adapter);
+
+ netif_start_queue(netdev);
+}
+
+static void et131x_disable_txrx(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ netif_stop_queue(netdev);
+
+ et131x_rx_dma_disable(adapter);
+ et131x_tx_dma_disable(adapter);
+
+ et131x_disable_interrupts(adapter);
+}
+
+static void et131x_init_send(struct et131x_adapter *adapter)
+{
+ int i;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+ struct tcb *tcb = tx_ring->tcb_ring;
+
+ tx_ring->tcb_qhead = tcb;
+
+ memset(tcb, 0, sizeof(struct tcb) * NUM_TCB);
+
+ for (i = 0; i < NUM_TCB; i++) {
+ tcb->next = tcb + 1;
+ tcb++;
+ }
+
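+ /* Step back to the last TCB; it becomes the tail and terminates the list */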
+ tcb--;
+ tx_ring->tcb_qtail = tcb;
+ tcb->next = NULL;
+ /* Curr send queue should now be empty */
+ tx_ring->send_head = NULL;
+ tx_ring->send_tail = NULL;
+}
+
+/* et1310_enable_phy_coma
+ *
+ * The driver receives a PHY status change interrupt while in D0 and checks
+ * that phy_status is down.
+ *
+ * -- gate off the JAGCore;
+ * -- set the gigE PHY in Coma mode
+ * -- wake on phy_interrupt; perform a software reset of the JAGCore,
+ *    then re-initialize the JAGCore and gigE PHY
+ */
+static void et1310_enable_phy_coma(struct et131x_adapter *adapter)
+{
+ u32 pmcsr = readl(&adapter->regs->global.pm_csr);
+
+ /* Stop sending packets. */
+ adapter->flags |= FMP_ADAPTER_LOWER_POWER;
+
+ /* Wait for outstanding Receive packets */
+ et131x_disable_txrx(adapter->netdev);
+
+ /* Gate off JAGCore 3 clock domains */
+ pmcsr &= ~ET_PMCSR_INIT;
+ writel(pmcsr, &adapter->regs->global.pm_csr);
+
+ /* Program gigE PHY in to Coma mode */
+ pmcsr |= ET_PM_PHY_SW_COMA;
+ writel(pmcsr, &adapter->regs->global.pm_csr);
+}
+
+static void et1310_disable_phy_coma(struct et131x_adapter *adapter)
+{
+ u32 pmcsr;
+
+ pmcsr = readl(&adapter->regs->global.pm_csr);
+
+ /* Disable phy_sw_coma register and re-enable JAGCore clocks */
+ pmcsr |= ET_PMCSR_INIT;
+ pmcsr &= ~ET_PM_PHY_SW_COMA;
+ writel(pmcsr, &adapter->regs->global.pm_csr);
+
+ /* Restore the GbE PHY speed and duplex modes;
+ * Reset JAGCore; re-configure and initialize JAGCore and gigE PHY
+ */
+
+ /* Re-initialize the send structures */
+ et131x_init_send(adapter);
+
+ /* Bring the device back to the state it was during init prior to
+ * autonegotiation being complete. This way, when we get the auto-neg
+ * complete interrupt, we can complete init by calling ConfigMacREGS2.
+ */
+ et131x_soft_reset(adapter);
+
+ et131x_adapter_setup(adapter);
+
+ /* Allow Tx to restart */
+ adapter->flags &= ~FMP_ADAPTER_LOWER_POWER;
+
+ et131x_enable_txrx(adapter->netdev);
+}
+
+static inline u32 bump_free_buff_ring(u32 *free_buff_ring, u32 limit)
+{
+ u32 tmp_free_buff_ring = *free_buff_ring;
+
+ tmp_free_buff_ring++;
+ /* This works for all cases where limit < 1024. The 1023 case
+ * works because incrementing 1023 gives 1024, so the if condition is
+ * not taken, but the carry into the wrap bit toggles the wrap value
+ * correctly.
+ */
+ if ((tmp_free_buff_ring & ET_DMA10_MASK) > limit) {
+ tmp_free_buff_ring &= ~ET_DMA10_MASK;
+ tmp_free_buff_ring ^= ET_DMA10_WRAP;
+ }
+ /* For the 1023 case */
+ tmp_free_buff_ring &= (ET_DMA10_MASK | ET_DMA10_WRAP);
+ *free_buff_ring = tmp_free_buff_ring;
+ return tmp_free_buff_ring;
+}
+
+/* et131x_rx_dma_memory_alloc
+ *
+ * Allocates Free buffer ring 1 for sure, free buffer ring 0 if required,
+ * and the Packet Status Ring.
+ */
+static int et131x_rx_dma_memory_alloc(struct et131x_adapter *adapter)
+{
+ u8 id;
+ u32 i, j;
+ u32 bufsize;
+ u32 psr_size;
+ u32 fbr_chunksize;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+ struct fbr_lookup *fbr;
+
+ /* Alloc memory for the lookup table */
+ rx_ring->fbr[0] = kzalloc(sizeof(*fbr), GFP_KERNEL);
+ if (rx_ring->fbr[0] == NULL)
+ return -ENOMEM;
+ rx_ring->fbr[1] = kzalloc(sizeof(*fbr), GFP_KERNEL);
+ if (rx_ring->fbr[1] == NULL)
+ return -ENOMEM;
+
+ /* The first thing we will do is configure the sizes of the buffer
+ * rings. These will change based on jumbo packet support. Larger
+ * jumbo packets increase the size of each entry in FBR0, and the
+ * number of entries in FBR0, while at the same time decreasing the
+ * number of entries in FBR1.
+ *
+ * FBR1 holds "large" frames, FBR0 holds "small" frames. If FBR1
+ * entries are huge in order to accommodate a "jumbo" frame, then it
+ * will have fewer entries. Conversely, FBR0 will now be relied upon
+ * to carry more "normal" frames, thus its entry size also increases
+ * and the number of entries goes up too (since it now carries
+ * "small" + "regular" packets).
+ *
+ * In this scheme, we try to maintain 512 entries between the two
+ * rings. Also, FBR1 remains a constant size - when its size doubles
+ * the number of entries halves. FBR0 increases in size, however.
+ */
+ if (adapter->registry_jumbo_packet < 2048) {
+ rx_ring->fbr[0]->buffsize = 256;
+ rx_ring->fbr[0]->num_entries = 512;
+ rx_ring->fbr[1]->buffsize = 2048;
+ rx_ring->fbr[1]->num_entries = 512;
+ } else if (adapter->registry_jumbo_packet < 4096) {
+ rx_ring->fbr[0]->buffsize = 512;
+ rx_ring->fbr[0]->num_entries = 1024;
+ rx_ring->fbr[1]->buffsize = 4096;
+ rx_ring->fbr[1]->num_entries = 512;
+ } else {
+ rx_ring->fbr[0]->buffsize = 1024;
+ rx_ring->fbr[0]->num_entries = 768;
+ rx_ring->fbr[1]->buffsize = 16384;
+ rx_ring->fbr[1]->num_entries = 128;
+ }
+
+ rx_ring->psr_entries = rx_ring->fbr[0]->num_entries +
+ rx_ring->fbr[1]->num_entries;
+
+ for (id = 0; id < NUM_FBRS; id++) {
+ fbr = rx_ring->fbr[id];
+ /* Allocate an area of memory for Free Buffer Ring */
+ bufsize = sizeof(struct fbr_desc) * fbr->num_entries;
+ fbr->ring_virtaddr = dma_alloc_coherent(&adapter->pdev->dev,
+ bufsize,
+ &fbr->ring_physaddr,
+ GFP_KERNEL);
+ if (!fbr->ring_virtaddr) {
+ dev_err(&adapter->pdev->dev,
+ "Cannot alloc memory for Free Buffer Ring %d\n",
+ id);
+ return -ENOMEM;
+ }
+ }
+
+ for (id = 0; id < NUM_FBRS; id++) {
+ fbr = rx_ring->fbr[id];
+ fbr_chunksize = (FBR_CHUNKS * fbr->buffsize);
+
+ for (i = 0; i < fbr->num_entries / FBR_CHUNKS; i++) {
+ dma_addr_t fbr_physaddr;
+
+ fbr->mem_virtaddrs[i] = dma_alloc_coherent(
+ &adapter->pdev->dev, fbr_chunksize,
+ &fbr->mem_physaddrs[i],
+ GFP_KERNEL);
+
+ if (!fbr->mem_virtaddrs[i]) {
+ dev_err(&adapter->pdev->dev,
+ "Could not alloc memory\n");
+ return -ENOMEM;
+ }
+
+ /* See NOTE in "Save Physical Address" comment above */
+ fbr_physaddr = fbr->mem_physaddrs[i];
+
+ for (j = 0; j < FBR_CHUNKS; j++) {
+ u32 k = (i * FBR_CHUNKS) + j;
+
+ /* Save the Virtual address of this index for
+ * quick access later
+ */
+ fbr->virt[k] = (u8 *)fbr->mem_virtaddrs[i] +
+ (j * fbr->buffsize);
+
+ /* now store the physical address in the
+ * descriptor so the device can access it
+ */
+ fbr->bus_high[k] = upper_32_bits(fbr_physaddr);
+ fbr->bus_low[k] = lower_32_bits(fbr_physaddr);
+ fbr_physaddr += fbr->buffsize;
+ }
+ }
+ }
+
+ /* Allocate an area of memory for FIFO of Packet Status ring entries */
+ psr_size = sizeof(struct pkt_stat_desc) * rx_ring->psr_entries;
+
+ rx_ring->ps_ring_virtaddr = dma_alloc_coherent(&adapter->pdev->dev,
+ psr_size,
+ &rx_ring->ps_ring_physaddr,
+ GFP_KERNEL);
+
+ if (!rx_ring->ps_ring_virtaddr) {
+ dev_err(&adapter->pdev->dev,
+ "Cannot alloc memory for Packet Status Ring\n");
+ return -ENOMEM;
+ }
+
+ /* Allocate an area of memory for writeback of status information */
+ rx_ring->rx_status_block = dma_alloc_coherent(&adapter->pdev->dev,
+ sizeof(struct rx_status_block),
+ &rx_ring->rx_status_bus,
+ GFP_KERNEL);
+ if (!rx_ring->rx_status_block) {
+ dev_err(&adapter->pdev->dev,
+ "Cannot alloc memory for Status Block\n");
+ return -ENOMEM;
+ }
+ rx_ring->num_rfd = NIC_DEFAULT_NUM_RFD;
+
+ /* The RFDs are going to be put on lists later on, so initialize the
+ * lists now.
+ */
+ INIT_LIST_HEAD(&rx_ring->recv_list);
+ return 0;
+}
+
+static void et131x_rx_dma_memory_free(struct et131x_adapter *adapter)
+{
+ u8 id;
+ u32 ii;
+ u32 bufsize;
+ u32 psr_size;
+ struct rfd *rfd;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+ struct fbr_lookup *fbr;
+
+ /* Free RFDs and associated packet descriptors */
+ WARN_ON(rx_ring->num_ready_recv != rx_ring->num_rfd);
+
+ while (!list_empty(&rx_ring->recv_list)) {
+ rfd = list_entry(rx_ring->recv_list.next,
+ struct rfd, list_node);
+
+ list_del(&rfd->list_node);
+ rfd->skb = NULL;
+ kfree(rfd);
+ }
+
+ /* Free Free Buffer Rings */
+ for (id = 0; id < NUM_FBRS; id++) {
+ fbr = rx_ring->fbr[id];
+
+ if (!fbr || !fbr->ring_virtaddr)
+ continue;
+
+ /* First the packet memory */
+ for (ii = 0; ii < fbr->num_entries / FBR_CHUNKS; ii++) {
+ if (fbr->mem_virtaddrs[ii]) {
+ bufsize = fbr->buffsize * FBR_CHUNKS;
+
+ dma_free_coherent(&adapter->pdev->dev,
+ bufsize,
+ fbr->mem_virtaddrs[ii],
+ fbr->mem_physaddrs[ii]);
+
+ fbr->mem_virtaddrs[ii] = NULL;
+ }
+ }
+
+ bufsize = sizeof(struct fbr_desc) * fbr->num_entries;
+
+ dma_free_coherent(&adapter->pdev->dev,
+ bufsize,
+ fbr->ring_virtaddr,
+ fbr->ring_physaddr);
+
+ fbr->ring_virtaddr = NULL;
+ }
+
+ /* Free Packet Status Ring */
+ if (rx_ring->ps_ring_virtaddr) {
+ psr_size = sizeof(struct pkt_stat_desc) * rx_ring->psr_entries;
+
+ dma_free_coherent(&adapter->pdev->dev, psr_size,
+ rx_ring->ps_ring_virtaddr,
+ rx_ring->ps_ring_physaddr);
+
+ rx_ring->ps_ring_virtaddr = NULL;
+ }
+
+ /* Free area of memory for the writeback of status information */
+ if (rx_ring->rx_status_block) {
+ dma_free_coherent(&adapter->pdev->dev,
+ sizeof(struct rx_status_block),
+ rx_ring->rx_status_block,
+ rx_ring->rx_status_bus);
+ rx_ring->rx_status_block = NULL;
+ }
+
+ /* Free the FBR Lookup Table */
+ kfree(rx_ring->fbr[0]);
+ kfree(rx_ring->fbr[1]);
+
+ /* Reset Counters */
+ rx_ring->num_ready_recv = 0;
+}
+
+/* et131x_init_recv - Initialize receive data structures */
+static int et131x_init_recv(struct et131x_adapter *adapter)
+{
+ struct rfd *rfd;
+ u32 rfdct;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+
+ /* Setup each RFD */
+ for (rfdct = 0; rfdct < rx_ring->num_rfd; rfdct++) {
+ rfd = kzalloc(sizeof(*rfd), GFP_ATOMIC | GFP_DMA);
+ if (!rfd)
+ return -ENOMEM;
+
+ rfd->skb = NULL;
+
+ /* Add this RFD to the recv_list */
+ list_add_tail(&rfd->list_node, &rx_ring->recv_list);
+
+ /* Increment the available RFD's */
+ rx_ring->num_ready_recv++;
+ }
+
+ return 0;
+}
+
+/* et131x_set_rx_dma_timer - Set the heartbeat timer according to line rate */
+static void et131x_set_rx_dma_timer(struct et131x_adapter *adapter)
+{
+ struct phy_device *phydev = adapter->phydev;
+
+ /* For version B silicon, we do not use the RxDMA timer for 10 and 100
+ * Mbits/s line rates, nor do we enable RxDMA interrupt coalescing.
+ */
+ if ((phydev->speed == SPEED_100) || (phydev->speed == SPEED_10)) {
+ writel(0, &adapter->regs->rxdma.max_pkt_time);
+ writel(1, &adapter->regs->rxdma.num_pkt_done);
+ }
+}
+
+/* nic_return_rfd - Recycle a RFD and put it back onto the receive list */
+static void nic_return_rfd(struct et131x_adapter *adapter, struct rfd *rfd)
+{
+ struct rx_ring *rx_local = &adapter->rx_ring;
+ struct rxdma_regs __iomem *rx_dma = &adapter->regs->rxdma;
+ u16 buff_index = rfd->bufferindex;
+ u8 ring_index = rfd->ringindex;
+ unsigned long flags;
+ struct fbr_lookup *fbr = rx_local->fbr[ring_index];
+
+ /* We don't use any of the OOB data besides status. Otherwise, we
+ * need to clean up OOB data
+ */
+ if (buff_index < fbr->num_entries) {
+ u32 free_buff_ring;
+ u32 __iomem *offset;
+ struct fbr_desc *next;
+
+ if (ring_index == 0)
+ offset = &rx_dma->fbr0_full_offset;
+ else
+ offset = &rx_dma->fbr1_full_offset;
+
+ next = (struct fbr_desc *)(fbr->ring_virtaddr) +
+ INDEX10(fbr->local_full);
+
+ /* Handle the Free Buffer Ring advancement here. Write
+ * the PA / Buffer Index for the returned buffer into
+ * the oldest (next to be freed) FBR entry
+ */
+ next->addr_hi = fbr->bus_high[buff_index];
+ next->addr_lo = fbr->bus_low[buff_index];
+ next->word2 = buff_index;
+
+ free_buff_ring = bump_free_buff_ring(&fbr->local_full,
+ fbr->num_entries - 1);
+ writel(free_buff_ring, offset);
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "%s illegal Buffer Index returned\n", __func__);
+ }
+
+ /* The processing on this RFD is done, so put it back on the tail of
+ * our list
+ */
+ spin_lock_irqsave(&adapter->rcv_lock, flags);
+ list_add_tail(&rfd->list_node, &rx_local->recv_list);
+ rx_local->num_ready_recv++;
+ spin_unlock_irqrestore(&adapter->rcv_lock, flags);
+
+ WARN_ON(rx_local->num_ready_recv > rx_local->num_rfd);
+}
+
+/* nic_rx_pkts - Checks the hardware for available packets
+ *
+ * Checks the hardware for available packets, using completion ring
+ * If packets are available, it gets an RFD from the recv_list, attaches
+ * the packet to it, puts the RFD in the RecvPendList, and also returns
+ * the pointer to the RFD.
+ */
+static struct rfd *nic_rx_pkts(struct et131x_adapter *adapter)
+{
+ struct rx_ring *rx_local = &adapter->rx_ring;
+ struct rx_status_block *status;
+ struct pkt_stat_desc *psr;
+ struct rfd *rfd;
+ unsigned long flags;
+ struct list_head *element;
+ u8 ring_index;
+ u16 buff_index;
+ u32 len;
+ u32 word0;
+ u32 word1;
+ struct sk_buff *skb;
+ struct fbr_lookup *fbr;
+
+ /* RX Status block is written by the DMA engine prior to every
+ * interrupt. It contains the next to be used entry in the Packet
+ * Status Ring, and also the two Free Buffer rings.
+ */
+ status = rx_local->rx_status_block;
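+ /* The upper 16 bits of word1 hold the hardware's PSR full offset:
+ * a 12-bit index plus the wrap bit
+ */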
+ word1 = status->word1 >> 16;
+
+ /* Nothing new has arrived if the hardware's PSR offset and wrap bit match ours */
+ if ((word1 & 0x1FFF) == (rx_local->local_psr_full & 0x1FFF))
+ return NULL; /* Looks like this ring is not updated yet */
+
+ /* The packet status ring indicates that data is available. */
+ psr = (struct pkt_stat_desc *)(rx_local->ps_ring_virtaddr) +
+ (rx_local->local_psr_full & 0xFFF);
+
+ /* Grab any information that is required once the PSR is advanced,
+ * since we can no longer rely on the memory being accurate
+ */
+ len = psr->word1 & 0xFFFF;
+ ring_index = (psr->word1 >> 26) & 0x03;
+ fbr = rx_local->fbr[ring_index];
+ buff_index = (psr->word1 >> 16) & 0x3FF;
+ word0 = psr->word0;
+
+ /* Indicate that we have used this PSR entry. */
+ /* FIXME wrap 12 */
+ add_12bit(&rx_local->local_psr_full, 1);
+ if ((rx_local->local_psr_full & 0xFFF) > rx_local->psr_entries - 1) {
+ /* Clear psr full and toggle the wrap bit */
+ rx_local->local_psr_full &= ~0xFFF;
+ rx_local->local_psr_full ^= 0x1000;
+ }
+
+ writel(rx_local->local_psr_full, &adapter->regs->rxdma.psr_full_offset);
+
+ if (ring_index > 1 || buff_index > fbr->num_entries - 1) {
+ /* Illegal buffer or ring index cannot be used by S/W */
+ dev_err(&adapter->pdev->dev,
+ "NICRxPkts PSR Entry %d indicates length of %d and/or bad bi(%d)\n",
+ rx_local->local_psr_full & 0xFFF, len, buff_index);
+ return NULL;
+ }
+
+ /* Get and fill the RFD. */
+ spin_lock_irqsave(&adapter->rcv_lock, flags);
+
+ element = rx_local->recv_list.next;
+ rfd = list_entry(element, struct rfd, list_node);
+
+ if (!rfd) {
+ spin_unlock_irqrestore(&adapter->rcv_lock, flags);
+ return NULL;
+ }
+
+ list_del(&rfd->list_node);
+ rx_local->num_ready_recv--;
+
+ spin_unlock_irqrestore(&adapter->rcv_lock, flags);
+
+ rfd->bufferindex = buff_index;
+ rfd->ringindex = ring_index;
+
+ /* In V1 silicon, there is a bug which screws up filtering of runt
+ * packets. Therefore runt packet filtering is disabled in the MAC and
+ * the packets are dropped here. They are also counted here.
+ */
+ if (len < (NIC_MIN_PACKET_SIZE + 4)) {
+ adapter->stats.rx_other_errs++;
+ rfd->len = 0;
+ goto out;
+ }
+
+ if ((word0 & ALCATEL_MULTICAST_PKT) && !(word0 & ALCATEL_BROADCAST_PKT))
+ adapter->stats.multicast_pkts_rcvd++;
+
+ rfd->len = len;
+
+ skb = dev_alloc_skb(rfd->len + 2);
+ if (!skb)
+ return NULL;
+
+ adapter->netdev->stats.rx_bytes += rfd->len;
+
+ memcpy(skb_put(skb, rfd->len), fbr->virt[buff_index], rfd->len);
+
+ skb->protocol = eth_type_trans(skb, adapter->netdev);
+ skb->ip_summed = CHECKSUM_NONE;
+ netif_receive_skb(skb);
+
+out:
+ nic_return_rfd(adapter, rfd);
+ return rfd;
+}
+
+static int et131x_handle_recv_pkts(struct et131x_adapter *adapter, int budget)
+{
+ struct rfd *rfd = NULL;
+ int count = 0;
+ int limit = budget;
+ bool done = true;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+
+ if (budget > MAX_PACKETS_HANDLED)
+ limit = MAX_PACKETS_HANDLED;
+
+ /* Process up to available RFD's */
+ while (count < limit) {
+ if (list_empty(&rx_ring->recv_list)) {
+ WARN_ON(rx_ring->num_ready_recv != 0);
+ done = false;
+ break;
+ }
+
+ rfd = nic_rx_pkts(adapter);
+
+ if (rfd == NULL)
+ break;
+
+ /* Do not receive any packets until a filter has been set.
+ * Do not receive any packets until we have link.
+ * If length is zero, return the RFD in order to advance the
+ * Free buffer ring.
+ */
+ if (!adapter->packet_filter ||
+ !netif_carrier_ok(adapter->netdev) ||
+ rfd->len == 0)
+ continue;
+
+ adapter->netdev->stats.rx_packets++;
+
+ if (rx_ring->num_ready_recv < RFD_LOW_WATER_MARK)
+ dev_warn(&adapter->pdev->dev, "RFD's are running out\n");
+
+ count++;
+ }
+
+ if (count == limit || !done) {
+ rx_ring->unfinished_receives = true;
+ writel(PARM_TX_TIME_INT_DEF * NANO_IN_A_MICRO,
+ &adapter->regs->global.watchdog_timer);
+ } else {
+ /* Watchdog timer will disable itself if appropriate. */
+ rx_ring->unfinished_receives = false;
+ }
+
+ return count;
+}
+
+/* et131x_tx_dma_memory_alloc
+ *
+ * Allocates memory that will be visible both to the device and to the CPU.
+ * The OS will pass us packets, pointers to which we will insert in the Tx
+ * Descriptor queue. The device will read this queue to find the packets in
+ * memory. The device will update the "status" in memory each time it xmits a
+ * packet.
+ */
+static int et131x_tx_dma_memory_alloc(struct et131x_adapter *adapter)
+{
+ int desc_size = 0;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* Allocate memory for the TCB's (Transmit Control Block) */
+ tx_ring->tcb_ring = kcalloc(NUM_TCB, sizeof(struct tcb),
+ GFP_ATOMIC | GFP_DMA);
+ if (!tx_ring->tcb_ring)
+ return -ENOMEM;
+
+ desc_size = (sizeof(struct tx_desc) * NUM_DESC_PER_RING_TX);
+ tx_ring->tx_desc_ring = dma_alloc_coherent(&adapter->pdev->dev,
+ desc_size,
+ &tx_ring->tx_desc_ring_pa,
+ GFP_KERNEL);
+ if (!tx_ring->tx_desc_ring) {
+ dev_err(&adapter->pdev->dev,
+ "Cannot alloc memory for Tx Ring\n");
+ return -ENOMEM;
+ }
+
+ tx_ring->tx_status = dma_alloc_coherent(&adapter->pdev->dev,
+ sizeof(u32),
+ &tx_ring->tx_status_pa,
+ GFP_KERNEL);
+ if (!tx_ring->tx_status) {
+ dev_err(&adapter->pdev->dev,
+ "Cannot alloc memory for Tx status block\n");
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+static void et131x_tx_dma_memory_free(struct et131x_adapter *adapter)
+{
+ int desc_size = 0;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ if (tx_ring->tx_desc_ring) {
+ /* Free memory relating to Tx rings here */
+ desc_size = (sizeof(struct tx_desc) * NUM_DESC_PER_RING_TX);
+ dma_free_coherent(&adapter->pdev->dev,
+ desc_size,
+ tx_ring->tx_desc_ring,
+ tx_ring->tx_desc_ring_pa);
+ tx_ring->tx_desc_ring = NULL;
+ }
+
+ /* Free memory for the Tx status block */
+ if (tx_ring->tx_status) {
+ dma_free_coherent(&adapter->pdev->dev,
+ sizeof(u32),
+ tx_ring->tx_status,
+ tx_ring->tx_status_pa);
+
+ tx_ring->tx_status = NULL;
+ }
+ /* Free the memory for the tcb structures */
+ kfree(tx_ring->tcb_ring);
+}
+
+/* nic_send_packet - NIC specific send handler for version B silicon. */
+static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+{
+ u32 i;
+ struct tx_desc desc[24];
+ u32 frag = 0;
+ u32 thiscopy, remainder;
+ struct sk_buff *skb = tcb->skb;
+ u32 nr_frags = skb_shinfo(skb)->nr_frags + 1;
+ struct skb_frag_struct *frags = &skb_shinfo(skb)->frags[0];
+ struct phy_device *phydev = adapter->phydev;
+ dma_addr_t dma_addr;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* Part of the optimizations of this send routine restrict us to
+ * sending 24 fragments at a pass. In practice we should never see
+ * more than 5 fragments.
+ */
+
+ /* nr_frags should be no more than 18. */
+ BUILD_BUG_ON(MAX_SKB_FRAGS + 1 > 23);
+
+ memset(desc, 0, sizeof(struct tx_desc) * (nr_frags + 1));
+
+ for (i = 0; i < nr_frags; i++) {
+ /* If there is something in this element, lets get a
+ * descriptor from the ring and get the necessary data
+ */
+ if (i == 0) {
+ /* If the fragments are smaller than a standard MTU,
+ * then map them to a single descriptor in the Tx
+ * Desc ring. However, if they're larger, as is
+ * possible with support for jumbo packets, then
+ * split them each across 2 descriptors.
+ *
+ * This will work until we determine why the hardware
+ * doesn't seem to like large fragments.
+ */
+ if (skb_headlen(skb) <= 1514) {
+ /* Low 16bits are length, high is vlan and
+ * unused currently so zero
+ */
+ desc[frag].len_vlan = skb_headlen(skb);
+ dma_addr = dma_map_single(&adapter->pdev->dev,
+ skb->data,
+ skb_headlen(skb),
+ DMA_TO_DEVICE);
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+ } else {
+ desc[frag].len_vlan = skb_headlen(skb) / 2;
+ dma_addr = dma_map_single(&adapter->pdev->dev,
+ skb->data,
+ skb_headlen(skb) / 2,
+ DMA_TO_DEVICE);
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+
+ desc[frag].len_vlan = skb_headlen(skb) / 2;
+ dma_addr = dma_map_single(&adapter->pdev->dev,
+ skb->data +
+ skb_headlen(skb) / 2,
+ skb_headlen(skb) / 2,
+ DMA_TO_DEVICE);
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+ }
+ } else {
+ desc[frag].len_vlan = frags[i - 1].size;
+ dma_addr = skb_frag_dma_map(&adapter->pdev->dev,
+ &frags[i - 1],
+ 0,
+ frags[i - 1].size,
+ DMA_TO_DEVICE);
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+ }
+ }
+
+ if (phydev && phydev->speed == SPEED_1000) {
+ if (++tx_ring->since_irq == PARM_TX_NUM_BUFS_DEF) {
+ /* Last element & Interrupt flag */
+ desc[frag - 1].flags =
+ TXDESC_FLAG_INTPROC | TXDESC_FLAG_LASTPKT;
+ tx_ring->since_irq = 0;
+ } else { /* Last element */
+ desc[frag - 1].flags = TXDESC_FLAG_LASTPKT;
+ }
+ } else {
+ desc[frag - 1].flags =
+ TXDESC_FLAG_INTPROC | TXDESC_FLAG_LASTPKT;
+ }
+
+ desc[0].flags |= TXDESC_FLAG_FIRSTPKT;
+
+ tcb->index_start = tx_ring->send_idx;
+ tcb->stale = 0;
+
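+ /* Copy the descriptors into the TX ring, splitting into two copies
+ * if the block would run past the end of the ring
+ */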
+ thiscopy = NUM_DESC_PER_RING_TX - INDEX10(tx_ring->send_idx);
+
+ if (thiscopy >= frag) {
+ remainder = 0;
+ thiscopy = frag;
+ } else {
+ remainder = frag - thiscopy;
+ }
+
+ memcpy(tx_ring->tx_desc_ring + INDEX10(tx_ring->send_idx),
+ desc,
+ sizeof(struct tx_desc) * thiscopy);
+
+ add_10bit(&tx_ring->send_idx, thiscopy);
+
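+ /* When we pass the end of the ring, clear the 10-bit index and
+ * toggle the wrap bit
+ */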
+ if (INDEX10(tx_ring->send_idx) == 0 ||
+ INDEX10(tx_ring->send_idx) == NUM_DESC_PER_RING_TX) {
+ tx_ring->send_idx &= ~ET_DMA10_MASK;
+ tx_ring->send_idx ^= ET_DMA10_WRAP;
+ }
+
+ if (remainder) {
+ memcpy(tx_ring->tx_desc_ring,
+ desc + thiscopy,
+ sizeof(struct tx_desc) * remainder);
+
+ add_10bit(&tx_ring->send_idx, remainder);
+ }
+
+ if (INDEX10(tx_ring->send_idx) == 0) {
+ if (tx_ring->send_idx)
+ tcb->index = NUM_DESC_PER_RING_TX - 1;
+ else
+ tcb->index = ET_DMA10_WRAP|(NUM_DESC_PER_RING_TX - 1);
+ } else {
+ tcb->index = tx_ring->send_idx - 1;
+ }
+
+ spin_lock(&adapter->tcb_send_qlock);
+
+ if (tx_ring->send_tail)
+ tx_ring->send_tail->next = tcb;
+ else
+ tx_ring->send_head = tcb;
+
+ tx_ring->send_tail = tcb;
+
+ WARN_ON(tcb->next != NULL);
+
+ tx_ring->used++;
+
+ spin_unlock(&adapter->tcb_send_qlock);
+
+ /* Write the new write pointer back to the device. */
+ writel(tx_ring->send_idx, &adapter->regs->txdma.service_request);
+
+ /* For Gig only, we use Tx Interrupt coalescing. Enable the software
+ * timer to wake us up if this packet isn't followed by N more.
+ */
+ if (phydev && phydev->speed == SPEED_1000) {
+ writel(PARM_TX_TIME_INT_DEF * NANO_IN_A_MICRO,
+ &adapter->regs->global.watchdog_timer);
+ }
+ return 0;
+}
+
+static int send_packet(struct sk_buff *skb, struct et131x_adapter *adapter)
+{
+ int status;
+ struct tcb *tcb;
+ unsigned long flags;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* All packets must have at least a MAC address and a protocol type */
+ if (skb->len < ETH_HLEN)
+ return -EIO;
+
+ spin_lock_irqsave(&adapter->tcb_ready_qlock, flags);
+
+ tcb = tx_ring->tcb_qhead;
+
+ if (tcb == NULL) {
+ spin_unlock_irqrestore(&adapter->tcb_ready_qlock, flags);
+ return -ENOMEM;
+ }
+
+ tx_ring->tcb_qhead = tcb->next;
+
+ if (tx_ring->tcb_qhead == NULL)
+ tx_ring->tcb_qtail = NULL;
+
+ spin_unlock_irqrestore(&adapter->tcb_ready_qlock, flags);
+
+ tcb->skb = skb;
+ tcb->next = NULL;
+
+ status = nic_send_packet(adapter, tcb);
+
+ if (status != 0) {
+ spin_lock_irqsave(&adapter->tcb_ready_qlock, flags);
+
+ if (tx_ring->tcb_qtail)
+ tx_ring->tcb_qtail->next = tcb;
+ else
+ /* Apparently ready Q is empty. */
+ tx_ring->tcb_qhead = tcb;
+
+ tx_ring->tcb_qtail = tcb;
+ spin_unlock_irqrestore(&adapter->tcb_ready_qlock, flags);
+ return status;
+ }
+ WARN_ON(tx_ring->used > NUM_TCB);
+ return 0;
+}
+
+/* free_send_packet - Recycle a struct tcb */
+static inline void free_send_packet(struct et131x_adapter *adapter,
+ struct tcb *tcb)
+{
+ unsigned long flags;
+ struct tx_desc *desc = NULL;
+ struct net_device_stats *stats = &adapter->netdev->stats;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+ u64 dma_addr;
+
+ if (tcb->skb) {
+ stats->tx_bytes += tcb->skb->len;
+
+ /* Iterate through the TX descriptors on the ring
+ * corresponding to this packet and umap the fragments
+ * they point to
+ */
+ do {
+ desc = tx_ring->tx_desc_ring +
+ INDEX10(tcb->index_start);
+
+ dma_addr = desc->addr_lo;
+ dma_addr |= (u64)desc->addr_hi << 32;
+
+ dma_unmap_single(&adapter->pdev->dev,
+ dma_addr,
+ desc->len_vlan, DMA_TO_DEVICE);
+
+ add_10bit(&tcb->index_start, 1);
+ if (INDEX10(tcb->index_start) >=
+ NUM_DESC_PER_RING_TX) {
+ tcb->index_start &= ~ET_DMA10_MASK;
+ tcb->index_start ^= ET_DMA10_WRAP;
+ }
+ } while (desc != tx_ring->tx_desc_ring + INDEX10(tcb->index));
+
+ dev_kfree_skb_any(tcb->skb);
+ }
+
+ memset(tcb, 0, sizeof(struct tcb));
+
+ /* Add the TCB to the Ready Q */
+ spin_lock_irqsave(&adapter->tcb_ready_qlock, flags);
+
+ stats->tx_packets++;
+
+ if (tx_ring->tcb_qtail)
+ tx_ring->tcb_qtail->next = tcb;
+ else /* Apparently ready Q is empty. */
+ tx_ring->tcb_qhead = tcb;
+
+ tx_ring->tcb_qtail = tcb;
+
+ spin_unlock_irqrestore(&adapter->tcb_ready_qlock, flags);
+ WARN_ON(tx_ring->used < 0);
+}
+
+/* et131x_free_busy_send_packets - Free and complete the stopped active sends */
+static void et131x_free_busy_send_packets(struct et131x_adapter *adapter)
+{
+ struct tcb *tcb;
+ unsigned long flags;
+ u32 freed = 0;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* Any packets being sent? Check the first TCB on the send list */
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+
+ tcb = tx_ring->send_head;
+
+ while (tcb != NULL && freed < NUM_TCB) {
+ struct tcb *next = tcb->next;
+
+ tx_ring->send_head = next;
+
+ if (next == NULL)
+ tx_ring->send_tail = NULL;
+
+ tx_ring->used--;
+
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+
+ freed++;
+ free_send_packet(adapter, tcb);
+
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+
+ tcb = tx_ring->send_head;
+ }
+
+ WARN_ON(freed == NUM_TCB);
+
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+
+ tx_ring->used = 0;
+}
+
+/* et131x_handle_send_pkts
+ *
+ * Re-claim the send resources, complete sends and get more to send from
+ * the send wait queue.
+ */
+static void et131x_handle_send_pkts(struct et131x_adapter *adapter)
+{
+ unsigned long flags;
+ u32 serviced;
+ struct tcb *tcb;
+ u32 index;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ serviced = readl(&adapter->regs->txdma.new_service_complete);
+ index = INDEX10(serviced);
+
+ /* Has the ring wrapped? Process any descriptors that do not have
+ * the same "wrap" indicator as the current completion indicator
+ */
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+
+ tcb = tx_ring->send_head;
+
+ while (tcb &&
+ ((serviced ^ tcb->index) & ET_DMA10_WRAP) &&
+ index < INDEX10(tcb->index)) {
+ tx_ring->used--;
+ tx_ring->send_head = tcb->next;
+ if (tcb->next == NULL)
+ tx_ring->send_tail = NULL;
+
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+ free_send_packet(adapter, tcb);
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+
+ /* Go to the next packet */
+ tcb = tx_ring->send_head;
+ }
+ while (tcb &&
+ !((serviced ^ tcb->index) & ET_DMA10_WRAP) &&
+ index > (tcb->index & ET_DMA10_MASK)) {
+ tx_ring->used--;
+ tx_ring->send_head = tcb->next;
+ if (tcb->next == NULL)
+ tx_ring->send_tail = NULL;
+
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+ free_send_packet(adapter, tcb);
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+
+ /* Go to the next packet */
+ tcb = tx_ring->send_head;
+ }
+
+ /* Wake up the queue when we hit a low-water mark */
+ if (tx_ring->used <= NUM_TCB / 3)
+ netif_wake_queue(adapter->netdev);
+
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+}
+
+static int et131x_get_settings(struct net_device *netdev,
+ struct ethtool_cmd *cmd)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ return phy_ethtool_gset(adapter->phydev, cmd);
+}
+
+static int et131x_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *cmd)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ return phy_ethtool_sset(adapter->phydev, cmd);
+}
+
+static int et131x_get_regs_len(struct net_device *netdev)
+{
+#define ET131X_REGS_LEN 256
+ return ET131X_REGS_LEN * sizeof(u32);
+}
+
+static void et131x_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *regs_data)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct address_map __iomem *aregs = adapter->regs;
+ u32 *regs_buff = regs_data;
+ u32 num = 0;
+ u16 tmp;
+
+ memset(regs_data, 0, et131x_get_regs_len(netdev));
+
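+ /* Encode dump version 1 together with the PCI revision and device ID */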
+ regs->version = (1 << 24) | (adapter->pdev->revision << 16) |
+ adapter->pdev->device;
+
+ /* PHY regs */
+ et131x_mii_read(adapter, MII_BMCR, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_BMSR, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_PHYSID1, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_PHYSID2, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_ADVERTISE, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_LPA, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_EXPANSION, &tmp);
+ regs_buff[num++] = tmp;
+ /* Autoneg next page transmit reg */
+ et131x_mii_read(adapter, 0x07, &tmp);
+ regs_buff[num++] = tmp;
+ /* Link partner next page reg */
+ et131x_mii_read(adapter, 0x08, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_CTRL1000, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_STAT1000, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, 0x0b, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, 0x0c, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_MMD_CTRL, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_MMD_DATA, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, MII_ESTATUS, &tmp);
+ regs_buff[num++] = tmp;
+
+ et131x_mii_read(adapter, PHY_INDEX_REG, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_DATA_REG, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_MPHY_CONTROL_REG, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_LOOPBACK_CONTROL, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_LOOPBACK_CONTROL + 1, &tmp);
+ regs_buff[num++] = tmp;
+
+ et131x_mii_read(adapter, PHY_REGISTER_MGMT_CONTROL, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_CONFIG, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_PHY_CONTROL, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_INTERRUPT_MASK, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_INTERRUPT_STATUS, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_PHY_STATUS, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_LED_1, &tmp);
+ regs_buff[num++] = tmp;
+ et131x_mii_read(adapter, PHY_LED_2, &tmp);
+ regs_buff[num++] = tmp;
+
+ /* Global regs */
+ regs_buff[num++] = readl(&aregs->global.txq_start_addr);
+ regs_buff[num++] = readl(&aregs->global.txq_end_addr);
+ regs_buff[num++] = readl(&aregs->global.rxq_start_addr);
+ regs_buff[num++] = readl(&aregs->global.rxq_end_addr);
+ regs_buff[num++] = readl(&aregs->global.pm_csr);
+ regs_buff[num++] = adapter->stats.interrupt_status;
+ regs_buff[num++] = readl(&aregs->global.int_mask);
+ regs_buff[num++] = readl(&aregs->global.int_alias_clr_en);
+ regs_buff[num++] = readl(&aregs->global.int_status_alias);
+ regs_buff[num++] = readl(&aregs->global.sw_reset);
+ regs_buff[num++] = readl(&aregs->global.slv_timer);
+ regs_buff[num++] = readl(&aregs->global.msi_config);
+ regs_buff[num++] = readl(&aregs->global.loopback);
+ regs_buff[num++] = readl(&aregs->global.watchdog_timer);
+
+ /* TXDMA regs */
+ regs_buff[num++] = readl(&aregs->txdma.csr);
+ regs_buff[num++] = readl(&aregs->txdma.pr_base_hi);
+ regs_buff[num++] = readl(&aregs->txdma.pr_base_lo);
+ regs_buff[num++] = readl(&aregs->txdma.pr_num_des);
+ regs_buff[num++] = readl(&aregs->txdma.txq_wr_addr);
+ regs_buff[num++] = readl(&aregs->txdma.txq_wr_addr_ext);
+ regs_buff[num++] = readl(&aregs->txdma.txq_rd_addr);
+ regs_buff[num++] = readl(&aregs->txdma.dma_wb_base_hi);
+ regs_buff[num++] = readl(&aregs->txdma.dma_wb_base_lo);
+ regs_buff[num++] = readl(&aregs->txdma.service_request);
+ regs_buff[num++] = readl(&aregs->txdma.service_complete);
+ regs_buff[num++] = readl(&aregs->txdma.cache_rd_index);
+ regs_buff[num++] = readl(&aregs->txdma.cache_wr_index);
+ regs_buff[num++] = readl(&aregs->txdma.tx_dma_error);
+ regs_buff[num++] = readl(&aregs->txdma.desc_abort_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.payload_abort_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.writeback_abort_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.desc_timeout_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.payload_timeout_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.writeback_timeout_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.desc_error_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.payload_error_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.writeback_error_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.dropped_tlp_cnt);
+ regs_buff[num++] = readl(&aregs->txdma.new_service_complete);
+ regs_buff[num++] = readl(&aregs->txdma.ethernet_packet_cnt);
+
+ /* RXDMA regs */
+ regs_buff[num++] = readl(&aregs->rxdma.csr);
+ regs_buff[num++] = readl(&aregs->rxdma.dma_wb_base_hi);
+ regs_buff[num++] = readl(&aregs->rxdma.dma_wb_base_lo);
+ regs_buff[num++] = readl(&aregs->rxdma.num_pkt_done);
+ regs_buff[num++] = readl(&aregs->rxdma.max_pkt_time);
+ regs_buff[num++] = readl(&aregs->rxdma.rxq_rd_addr);
+ regs_buff[num++] = readl(&aregs->rxdma.rxq_rd_addr_ext);
+ regs_buff[num++] = readl(&aregs->rxdma.rxq_wr_addr);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_base_hi);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_base_lo);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_num_des);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_avail_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_full_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_access_index);
+ regs_buff[num++] = readl(&aregs->rxdma.psr_min_des);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_base_lo);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_base_hi);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_num_des);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_avail_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_full_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_rd_index);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr0_min_des);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_base_lo);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_base_hi);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_num_des);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_avail_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_full_offset);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_rd_index);
+ regs_buff[num++] = readl(&aregs->rxdma.fbr1_min_des);
+}
+
+static void et131x_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ strlcpy(info->driver, DRIVER_NAME, sizeof(info->driver));
+ strlcpy(info->version, DRIVER_VERSION, sizeof(info->version));
+ strlcpy(info->bus_info, pci_name(adapter->pdev),
+ sizeof(info->bus_info));
+}
+
+static const struct ethtool_ops et131x_ethtool_ops = {
+ .get_settings = et131x_get_settings,
+ .set_settings = et131x_set_settings,
+ .get_drvinfo = et131x_get_drvinfo,
+ .get_regs_len = et131x_get_regs_len,
+ .get_regs = et131x_get_regs,
+ .get_link = ethtool_op_get_link,
+};
+
+/* et131x_hwaddr_init - set up the MAC Address */
+static void et131x_hwaddr_init(struct et131x_adapter *adapter)
+{
+ /* If we have our default MAC from init and no MAC address from
+ * EEPROM, then we need to generate the last octet and set it on
+ * the device
+ */
+ if (is_zero_ether_addr(adapter->rom_addr)) {
+ /* We need to randomly generate the last octet so we decrease our
+ * chances of setting the MAC address to the same as another one of
+ * our cards in the system
+ */
+ get_random_bytes(&adapter->addr[5], 1);
+ /* We have the default value in the register we are
+ * working with so we need to copy the current
+ * address into the permanent address
+ */
+ ether_addr_copy(adapter->rom_addr, adapter->addr);
+ } else {
+ /* We do not have an override address, so set the
+ * current address to the permanent address and add
+ * it to the device
+ */
+ ether_addr_copy(adapter->addr, adapter->rom_addr);
+ }
+}
+
+static int et131x_pci_init(struct et131x_adapter *adapter,
+ struct pci_dev *pdev)
+{
+ u16 max_payload;
+ int i, rc;
+
+ rc = et131x_init_eeprom(adapter);
+ if (rc < 0)
+ goto out;
+
+ if (!pci_is_pcie(pdev)) {
+ dev_err(&pdev->dev, "Missing PCIe capabilities\n");
+ goto err_out;
+ }
+
+ /* Program the Ack/Nak latency and replay timers */
+ max_payload = pdev->pcie_mpss;
+
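+ /* pcie_mpss encodes the max payload size supported as 128 << pcie_mpss
+ * bytes; only the 128- and 256-byte cases need explicit timer values.
+ */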
+ if (max_payload < 2) {
+ static const u16 acknak[2] = { 0x76, 0xD0 };
+ static const u16 replay[2] = { 0x1E0, 0x2ED };
+
+ if (pci_write_config_word(pdev, ET1310_PCI_ACK_NACK,
+ acknak[max_payload])) {
+ dev_err(&pdev->dev,
+ "Could not write PCI config space for ACK/NAK\n");
+ goto err_out;
+ }
+ if (pci_write_config_word(pdev, ET1310_PCI_REPLAY,
+ replay[max_payload])) {
+ dev_err(&pdev->dev,
+ "Could not write PCI config space for Replay Timer\n");
+ goto err_out;
+ }
+ }
+
+ /* L0s and L1 latency timers. We are using the default values:
+ * 001 for L0s and 010 for L1
+ */
+ if (pci_write_config_byte(pdev, ET1310_PCI_L0L1LATENCY, 0x11)) {
+ dev_err(&pdev->dev,
+ "Could not write PCI config space for Latency Timers\n");
+ goto err_out;
+ }
+
+ /* Change the max read size to 2k */
+ if (pcie_set_readrq(pdev, 2048)) {
+ dev_err(&pdev->dev,
+ "Couldn't change PCI config space for Max read size\n");
+ goto err_out;
+ }
+
+ /* Get MAC address from config space if an eeprom exists, otherwise
+ * the MAC address there will not be valid
+ */
+ if (!adapter->has_eeprom) {
+ et131x_hwaddr_init(adapter);
+ return 0;
+ }
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ if (pci_read_config_byte(pdev, ET1310_PCI_MAC_ADDRESS + i,
+ adapter->rom_addr + i)) {
+ dev_err(&pdev->dev, "Could not read PCI config space for MAC address\n");
+ goto err_out;
+ }
+ }
+ ether_addr_copy(adapter->addr, adapter->rom_addr);
+out:
+ return rc;
+err_out:
+ rc = -EIO;
+ goto out;
+}
+
+/* et131x_error_timer_handler
+ * @data: timer-specific variable; here a pointer to our adapter structure
+ *
+ * The routine called when the error timer expires, to track the number of
+ * recurring errors.
+ */
+static void et131x_error_timer_handler(unsigned long data)
+{
+ struct et131x_adapter *adapter = (struct et131x_adapter *)data;
+ struct phy_device *phydev = adapter->phydev;
+
+ if (et1310_in_phy_coma(adapter)) {
+ /* Bring the device immediately out of coma, to
+ * prevent it from sleeping indefinitely, this
+ * mechanism could be improved!
+ */
+ et1310_disable_phy_coma(adapter);
+ adapter->boot_coma = 20;
+ } else {
+ et1310_update_macstat_host_counters(adapter);
+ }
+
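+ /* boot_coma counts error-timer ticks spent without link; setting it to
+ * 20 (above the threshold) effectively disables the countdown.
+ */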
+ if (!phydev->link && adapter->boot_coma < 11)
+ adapter->boot_coma++;
+
+ if (adapter->boot_coma == 10) {
+ if (!phydev->link) {
+ if (!et1310_in_phy_coma(adapter)) {
+ /* NOTE - This was originally a 'sync with
+ * interrupt'. How to do that under Linux?
+ */
+ et131x_enable_interrupts(adapter);
+ et1310_enable_phy_coma(adapter);
+ }
+ }
+ }
+
+ /* This is a periodic timer, so reschedule */
+ mod_timer(&adapter->error_timer, jiffies + TX_ERROR_PERIOD * HZ / 1000);
+}
+
+static void et131x_adapter_memory_free(struct et131x_adapter *adapter)
+{
+ et131x_tx_dma_memory_free(adapter);
+ et131x_rx_dma_memory_free(adapter);
+}
+
+static int et131x_adapter_memory_alloc(struct et131x_adapter *adapter)
+{
+ int status;
+
+ status = et131x_tx_dma_memory_alloc(adapter);
+ if (status) {
+ dev_err(&adapter->pdev->dev,
+ "et131x_tx_dma_memory_alloc FAILED\n");
+ et131x_tx_dma_memory_free(adapter);
+ return status;
+ }
+
+ status = et131x_rx_dma_memory_alloc(adapter);
+ if (status) {
+ dev_err(&adapter->pdev->dev,
+ "et131x_rx_dma_memory_alloc FAILED\n");
+ et131x_adapter_memory_free(adapter);
+ return status;
+ }
+
+ status = et131x_init_recv(adapter);
+ if (status) {
+ dev_err(&adapter->pdev->dev, "et131x_init_recv FAILED\n");
+ et131x_adapter_memory_free(adapter);
+ }
+ return status;
+}
+
+static void et131x_adjust_link(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct phy_device *phydev = adapter->phydev;
+
+ if (!phydev)
+ return;
+ if (phydev->link == adapter->link)
+ return;
+
+ /* Check to see if we are in coma mode and if
+ * so, disable it because we will not be able
+ * to read PHY values until we are out.
+ */
+ if (et1310_in_phy_coma(adapter))
+ et1310_disable_phy_coma(adapter);
+
+ adapter->link = phydev->link;
+ phy_print_status(phydev);
+
+ if (phydev->link) {
+ adapter->boot_coma = 20;
+ if (phydev->speed == SPEED_10) {
+ u16 register18;
+
+ et131x_mii_read(adapter, PHY_MPHY_CONTROL_REG,
+ &register18);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG,
+ register18 | 0x4);
+ et131x_mii_write(adapter, phydev->addr, PHY_INDEX_REG,
+ register18 | 0x8402);
+ et131x_mii_write(adapter, phydev->addr, PHY_DATA_REG,
+ register18 | 511);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18);
+ }
+
+ et1310_config_flow_control(adapter);
+
+ if (phydev->speed == SPEED_1000 &&
+ adapter->registry_jumbo_packet > 2048) {
+ u16 reg;
+
+ et131x_mii_read(adapter, PHY_CONFIG, &reg);
+ reg &= ~ET_PHY_CONFIG_TX_FIFO_DEPTH;
+ reg |= ET_PHY_CONFIG_FIFO_DEPTH_32;
+ et131x_mii_write(adapter, phydev->addr, PHY_CONFIG,
+ reg);
+ }
+
+ et131x_set_rx_dma_timer(adapter);
+ et1310_config_mac_regs2(adapter);
+ } else {
+ adapter->boot_coma = 0;
+
+ if (phydev->speed == SPEED_10) {
+ u16 register18;
+
+ et131x_mii_read(adapter, PHY_MPHY_CONTROL_REG,
+ &register18);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG,
+ register18 | 0x4);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_INDEX_REG, register18 | 0x8402);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_DATA_REG, register18 | 511);
+ et131x_mii_write(adapter, phydev->addr,
+ PHY_MPHY_CONTROL_REG, register18);
+ }
+
+ et131x_free_busy_send_packets(adapter);
+ et131x_init_send(adapter);
+
+ /* Bring the device back to the state it was during
+ * init prior to autonegotiation being complete. This
+ * way, when we get the auto-neg complete interrupt,
+ * we can complete init by calling config_mac_regs2.
+ */
+ et131x_soft_reset(adapter);
+
+ et131x_adapter_setup(adapter);
+
+ et131x_disable_txrx(netdev);
+ et131x_enable_txrx(netdev);
+ }
+}
+
+static int et131x_mii_probe(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct phy_device *phydev = NULL;
+
+ phydev = phy_find_first(adapter->mii_bus);
+ if (!phydev) {
+ dev_err(&adapter->pdev->dev, "no PHY found\n");
+ return -ENODEV;
+ }
+
+ phydev = phy_connect(netdev, dev_name(&phydev->dev),
+ &et131x_adjust_link, PHY_INTERFACE_MODE_MII);
+
+ if (IS_ERR(phydev)) {
+ dev_err(&adapter->pdev->dev, "Could not attach to PHY\n");
+ return PTR_ERR(phydev);
+ }
+
+ phydev->supported &= (SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_Autoneg |
+ SUPPORTED_MII |
+ SUPPORTED_TP);
+
+ if (adapter->pdev->device != ET131X_PCI_DEVICE_ID_FAST)
+ phydev->supported |= SUPPORTED_1000baseT_Half |
+ SUPPORTED_1000baseT_Full;
+
+ phydev->advertising = phydev->supported;
+ phydev->autoneg = AUTONEG_ENABLE;
+ adapter->phydev = phydev;
+
+ dev_info(&adapter->pdev->dev,
+ "attached PHY driver [%s] (mii_bus:phy_addr=%s)\n",
+ phydev->drv->name, dev_name(&phydev->dev));
+
+ return 0;
+}
+
+static struct et131x_adapter *et131x_adapter_init(struct net_device *netdev,
+ struct pci_dev *pdev)
+{
+ static const u8 default_mac[] = { 0x00, 0x05, 0x3d, 0x00, 0x02, 0x00 };
+
+ struct et131x_adapter *adapter;
+
+ adapter = netdev_priv(netdev);
+ adapter->pdev = pci_dev_get(pdev);
+ adapter->netdev = netdev;
+
+ spin_lock_init(&adapter->tcb_send_qlock);
+ spin_lock_init(&adapter->tcb_ready_qlock);
+ spin_lock_init(&adapter->rcv_lock);
+
+ adapter->registry_jumbo_packet = 1514; /* 1514-9216 */
+
+ ether_addr_copy(adapter->addr, default_mac);
+
+ return adapter;
+}
+
+static void et131x_pci_remove(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ unregister_netdev(netdev);
+ netif_napi_del(&adapter->napi);
+ phy_disconnect(adapter->phydev);
+ mdiobus_unregister(adapter->mii_bus);
+ kfree(adapter->mii_bus->irq);
+ mdiobus_free(adapter->mii_bus);
+
+ et131x_adapter_memory_free(adapter);
+ iounmap(adapter->regs);
+ pci_dev_put(pdev);
+
+ free_netdev(netdev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+}
+
+static void et131x_up(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ et131x_enable_txrx(netdev);
+ phy_start(adapter->phydev);
+}
+
+static void et131x_down(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ /* Save the timestamp for the TX watchdog, prevent a timeout */
+ netdev->trans_start = jiffies;
+
+ phy_stop(adapter->phydev);
+ et131x_disable_txrx(netdev);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int et131x_suspend(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct net_device *netdev = pci_get_drvdata(pdev);
+
+ if (netif_running(netdev)) {
+ netif_device_detach(netdev);
+ et131x_down(netdev);
+ pci_save_state(pdev);
+ }
+
+ return 0;
+}
+
+static int et131x_resume(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct net_device *netdev = pci_get_drvdata(pdev);
+
+ if (netif_running(netdev)) {
+ pci_restore_state(pdev);
+ et131x_up(netdev);
+ netif_device_attach(netdev);
+ }
+
+ return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(et131x_pm_ops, et131x_suspend, et131x_resume);
+
+static irqreturn_t et131x_isr(int irq, void *dev_id)
+{
+ bool handled = true;
+ bool enable_interrupts = true;
+ struct net_device *netdev = dev_id;
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct address_map __iomem *iomem = adapter->regs;
+ struct rx_ring *rx_ring = &adapter->rx_ring;
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+ u32 status;
+
+ if (!netif_device_present(netdev)) {
+ handled = false;
+ enable_interrupts = false;
+ goto out;
+ }
+
+ et131x_disable_interrupts(adapter);
+
+ status = readl(&adapter->regs->global.int_status);
+
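+ /* The two masks presumably differ in the free-buffer-ring-low bits,
+ * which are only acted on when Tx flow control lets us request a pause
+ * frame (see the FB_R0/R1_LOW handling below).
+ */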
+ if (adapter->flow == FLOW_TXONLY || adapter->flow == FLOW_BOTH)
+ status &= ~INT_MASK_ENABLE;
+ else
+ status &= ~INT_MASK_ENABLE_NO_FLOW;
+
+ /* Make sure this is our interrupt */
+ if (!status) {
+ handled = false;
+ et131x_enable_interrupts(adapter);
+ goto out;
+ }
+
+ /* This is our interrupt, so process accordingly */
+ if (status & ET_INTR_WATCHDOG) {
+ struct tcb *tcb = tx_ring->send_head;
+
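+ /* A TCB still at the head of the send queue after more than one
+ * watchdog tick suggests a missed Tx completion, so force TXDMA
+ * servicing.
+ */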
+ if (tcb)
+ if (++tcb->stale > 1)
+ status |= ET_INTR_TXDMA_ISR;
+
+ if (rx_ring->unfinished_receives)
+ status |= ET_INTR_RXDMA_XFR_DONE;
+ else if (tcb == NULL)
+ writel(0, &adapter->regs->global.watchdog_timer);
+
+ status &= ~ET_INTR_WATCHDOG;
+ }
+
+ if (status & (ET_INTR_RXDMA_XFR_DONE | ET_INTR_TXDMA_ISR)) {
+ enable_interrupts = false;
+ napi_schedule(&adapter->napi);
+ }
+
+ status &= ~(ET_INTR_TXDMA_ISR | ET_INTR_RXDMA_XFR_DONE);
+
+ if (!status)
+ goto out;
+
+ if (status & ET_INTR_TXDMA_ERR) {
+ /* Following read also clears the register (COR) */
+ u32 txdma_err = readl(&iomem->txdma.tx_dma_error);
+
+ dev_warn(&adapter->pdev->dev,
+ "TXDMA_ERR interrupt, error = %d\n",
+ txdma_err);
+ }
+
+ if (status & (ET_INTR_RXDMA_FB_R0_LOW | ET_INTR_RXDMA_FB_R1_LOW)) {
+ /* This indicates the number of unused buffers in an RXDMA free
+ * buffer ring (0 or 1) is <= the limit you programmed. Free buffer
+ * resources need to be returned. Free buffers are consumed as
+ * packets are passed from the network to the host. The host
+ * becomes aware of the packets from the contents of the packet
+ * status ring. This ring is queried when the packet done
+ * interrupt occurs. Packets are then passed to the OS. When
+ * the OS is done with the packets the resources can be
+ * returned to the ET1310 for re-use. This interrupt is one
+ * method of returning resources.
+ */
+
+ /* If the user has flow control on, then we will
+ * send a pause packet, otherwise just exit
+ */
+ if (adapter->flow == FLOW_TXONLY || adapter->flow == FLOW_BOTH) {
+ u32 pm_csr;
+
+ /* Tell the device to send a pause packet via the back
+ * pressure register (bp req and bp xon/xoff)
+ */
+ pm_csr = readl(&iomem->global.pm_csr);
+ if (!et1310_in_phy_coma(adapter))
+ writel(3, &iomem->txmac.bp_ctrl);
+ }
+ }
+
+ /* Handle Packet Status Ring Low Interrupt */
+ if (status & ET_INTR_RXDMA_STAT_LOW) {
+ /* Same idea as with the two Free Buffer Rings. Packets going
+ * from the network to the host each consume a free buffer
+ * resource and a packet status resource. These resources are
+ * passed to the OS. When the OS is done with the resources,
+ * they need to be returned to the ET1310. This is one method
+ * of returning the resources.
+ */
+ }
+
+ if (status & ET_INTR_RXDMA_ERR) {
+ /* The rxdma_error interrupt is sent when a time-out on a
+ * request issued by the JAGCore has occurred or a completion is
+ * returned with an un-successful status. In both cases the
+ * request is considered complete. The JAGCore will
+ * automatically re-try the request in question. Normally
+ * information on events like these are sent to the host using
+ * the "Advanced Error Reporting" capability. This interrupt is
+ * another way of getting similar information. The only thing
+ * required is to clear the interrupt by reading the ISR in the
+ * global resources. The JAGCore will do a re-try on the
+ * request. Normally you should never see this interrupt. If
+ * you start to see this interrupt occurring frequently then
+ * something bad has occurred. A reset might be the thing to do.
+ */
+ /* TRAP();*/
+
+ dev_warn(&adapter->pdev->dev, "RxDMA_ERR interrupt, error %x\n",
+ readl(&iomem->txmac.tx_test));
+ }
+
+ /* Handle the Wake on LAN Event */
+ if (status & ET_INTR_WOL) {
+ /* This is a secondary interrupt for wake on LAN. The driver
+ * should never see this, if it does, something serious is
+ * wrong.
+ */
+ dev_err(&adapter->pdev->dev, "WAKE_ON_LAN interrupt\n");
+ }
+
+ if (status & ET_INTR_TXMAC) {
+ u32 err = readl(&iomem->txmac.err);
+
+ /* When any of the errors occur and TXMAC generates an
+ * interrupt to report these errors, it usually means that
+ * TXMAC has detected an error in the data stream retrieved
+ * from the on-chip Tx Q. All of these errors are catastrophic
+ * and TXMAC won't be able to recover data when these errors
+ * occur. In a nutshell, the whole Tx path will have to be reset
+ * and re-configured afterwards.
+ */
+ dev_warn(&adapter->pdev->dev, "TXMAC interrupt, error 0x%08x\n",
+ err);
+
+ /* If we are debugging, we want to see this error, otherwise we
+ * just want the device to be reset and continue
+ */
+ }
+
+ if (status & ET_INTR_RXMAC) {
+ /* These interrupts are catastrophic to the device, what we need
+ * to do is disable the interrupts and set the flag to cause us
+ * to reset so we can solve this issue.
+ */
+ dev_warn(&adapter->pdev->dev,
+ "RXMAC interrupt, error 0x%08x. Requesting reset\n",
+ readl(&iomem->rxmac.err_reg));
+
+ dev_warn(&adapter->pdev->dev,
+ "Enable 0x%08x, Diag 0x%08x\n",
+ readl(&iomem->rxmac.ctrl),
+ readl(&iomem->rxmac.rxq_diag));
+
+ /* If we are debugging, we want to see this error, otherwise we
+ * just want the device to be reset and continue
+ */
+ }
+
+ if (status & ET_INTR_MAC_STAT) {
+ /* This means at least one of the un-masked counters in the
+ * MAC_STAT block has rolled over. Use this to maintain the top,
+ * software managed bits of the counter(s).
+ */
+ et1310_handle_macstat_interrupt(adapter);
+ }
+
+ if (status & ET_INTR_SLV_TIMEOUT) {
+ /* This means a timeout has occurred on a read or write request
+ * to one of the JAGCore registers. The Global Resources block
+ * has terminated the request and on a read request, returned a
+ * "fake" value. The most likely reasons are: Bad Address or the
+ * addressed module is in a power-down state and can't respond.
+ */
+ }
+
+out:
+ if (enable_interrupts)
+ et131x_enable_interrupts(adapter);
+
+ return IRQ_RETVAL(handled);
+}
+
+static int et131x_poll(struct napi_struct *napi, int budget)
+{
+ struct et131x_adapter *adapter =
+ container_of(napi, struct et131x_adapter, napi);
+ int work_done = et131x_handle_recv_pkts(adapter, budget);
+
+ et131x_handle_send_pkts(adapter);
+
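+ /* Per NAPI convention, only complete and re-enable interrupts when
+ * less than the full budget was consumed.
+ */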
+ if (work_done < budget) {
+ napi_complete(&adapter->napi);
+ et131x_enable_interrupts(adapter);
+ }
+
+ return work_done;
+}
+
+/* et131x_stats - Return the current device statistics */
+static struct net_device_stats *et131x_stats(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct net_device_stats *stats = &adapter->netdev->stats;
+ struct ce_stats *devstat = &adapter->stats;
+
+ stats->rx_errors = devstat->rx_length_errs +
+ devstat->rx_align_errs +
+ devstat->rx_crc_errs +
+ devstat->rx_code_violations +
+ devstat->rx_other_errs;
+ stats->tx_errors = devstat->tx_max_pkt_errs;
+ stats->multicast = devstat->multicast_pkts_rcvd;
+ stats->collisions = devstat->tx_collisions;
+
+ stats->rx_length_errors = devstat->rx_length_errs;
+ stats->rx_over_errors = devstat->rx_overflows;
+ stats->rx_crc_errors = devstat->rx_crc_errs;
+ stats->rx_dropped = devstat->rcvd_pkts_dropped;
+
+ /* NOTE: Not used, can't find analogous statistics */
+ /* stats->rx_frame_errors = devstat->; */
+ /* stats->rx_fifo_errors = devstat->; */
+ /* stats->rx_missed_errors = devstat->; */
+
+ /* stats->tx_aborted_errors = devstat->; */
+ /* stats->tx_carrier_errors = devstat->; */
+ /* stats->tx_fifo_errors = devstat->; */
+ /* stats->tx_heartbeat_errors = devstat->; */
+ /* stats->tx_window_errors = devstat->; */
+ return stats;
+}
+
+static int et131x_open(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct pci_dev *pdev = adapter->pdev;
+ unsigned int irq = pdev->irq;
+ int result;
+
+ /* Start the timer to track NIC errors */
+ init_timer(&adapter->error_timer);
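+ /* The * HZ / 1000 conversion below treats TX_ERROR_PERIOD as a
+ * period in milliseconds.
+ */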
+ adapter->error_timer.expires = jiffies + TX_ERROR_PERIOD * HZ / 1000;
+ adapter->error_timer.function = et131x_error_timer_handler;
+ adapter->error_timer.data = (unsigned long)adapter;
+ add_timer(&adapter->error_timer);
+
+ result = request_irq(irq, et131x_isr,
+ IRQF_SHARED, netdev->name, netdev);
+ if (result) {
+ dev_err(&pdev->dev, "could not register IRQ %d\n", irq);
+ return result;
+ }
+
+ adapter->flags |= FMP_ADAPTER_INTERRUPT_IN_USE;
+
+ napi_enable(&adapter->napi);
+
+ et131x_up(netdev);
+
+ return result;
+}
+
+static int et131x_close(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ et131x_down(netdev);
+ napi_disable(&adapter->napi);
+
+ adapter->flags &= ~FMP_ADAPTER_INTERRUPT_IN_USE;
+ free_irq(adapter->pdev->irq, netdev);
+
+ /* Stop the error timer */
+ return del_timer_sync(&adapter->error_timer);
+}
+
+static int et131x_ioctl(struct net_device *netdev, struct ifreq *reqbuf,
+ int cmd)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ if (!adapter->phydev)
+ return -EINVAL;
+
+ return phy_mii_ioctl(adapter->phydev, reqbuf, cmd);
+}
+
+/* et131x_set_packet_filter - Configures the Rx Packet filtering */
+static int et131x_set_packet_filter(struct et131x_adapter *adapter)
+{
+ int filter = adapter->packet_filter;
+ u32 ctrl;
+ u32 pf_ctrl;
+
+ ctrl = readl(&adapter->regs->rxmac.ctrl);
+ pf_ctrl = readl(&adapter->regs->rxmac.pf_ctrl);
+
+ /* Default to disabled packet filtering */
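+ /* (bit 2 of rxmac.ctrl is pkt_filter_disable; see et131x.h) */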
+ ctrl |= 0x04;
+
+ /* Set us to be in promiscuous mode so we receive everything, this
+ * is also true when we get a packet filter of 0
+ */
+ if ((filter & ET131X_PACKET_TYPE_PROMISCUOUS) || filter == 0)
+ pf_ctrl &= ~7; /* Clear filter bits */
+ else {
+ /* Set us up with Multicast packet filtering. Three cases are
+ * possible - (1) we have a multi-cast list, (2) we receive ALL
+ * multicast entries or (3) we receive none.
+ */
+ if (filter & ET131X_PACKET_TYPE_ALL_MULTICAST)
+ pf_ctrl &= ~2; /* Multicast filter bit */
+ else {
+ et1310_setup_device_for_multicast(adapter);
+ pf_ctrl |= 2;
+ ctrl &= ~0x04;
+ }
+
+ /* Set us up with Unicast packet filtering */
+ if (filter & ET131X_PACKET_TYPE_DIRECTED) {
+ et1310_setup_device_for_unicast(adapter);
+ pf_ctrl |= 4;
+ ctrl &= ~0x04;
+ }
+
+ /* Set us up with Broadcast packet filtering */
+ if (filter & ET131X_PACKET_TYPE_BROADCAST) {
+ pf_ctrl |= 1; /* Broadcast filter bit */
+ ctrl &= ~0x04;
+ } else {
+ pf_ctrl &= ~1;
+ }
+
+ /* Setup the receive mac configuration registers - Packet
+ * Filter control + the enable / disable for packet filter
+ * in the control reg.
+ */
+ writel(pf_ctrl, &adapter->regs->rxmac.pf_ctrl);
+ writel(ctrl, &adapter->regs->rxmac.ctrl);
+ }
+ return 0;
+}
+
+static void et131x_multicast(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ int packet_filter;
+ struct netdev_hw_addr *ha;
+ int i;
+
+ /* Before we modify the platform-independent filter flags, store them
+ * locally. This allows us to determine if anything's changed and if
+ * we even need to bother the hardware
+ */
+ packet_filter = adapter->packet_filter;
+
+ /* Clear the 'multicast' flag locally; because we only have a single
+ * flag to check multicast, and multiple multicast addresses can be
+ * set, this is the easiest way to determine if more than one
+ * multicast address is being set.
+ */
+ packet_filter &= ~ET131X_PACKET_TYPE_MULTICAST;
+
+ /* Check the net_device flags and set the device independent flags
+ * accordingly
+ */
+ if (netdev->flags & IFF_PROMISC)
+ adapter->packet_filter |= ET131X_PACKET_TYPE_PROMISCUOUS;
+ else
+ adapter->packet_filter &= ~ET131X_PACKET_TYPE_PROMISCUOUS;
+
+ if ((netdev->flags & IFF_ALLMULTI) ||
+ (netdev_mc_count(netdev) > NIC_MAX_MCAST_LIST))
+ adapter->packet_filter |= ET131X_PACKET_TYPE_ALL_MULTICAST;
+
+ if (netdev_mc_count(netdev) < 1) {
+ adapter->packet_filter &= ~ET131X_PACKET_TYPE_ALL_MULTICAST;
+ adapter->packet_filter &= ~ET131X_PACKET_TYPE_MULTICAST;
+ } else {
+ adapter->packet_filter |= ET131X_PACKET_TYPE_MULTICAST;
+ }
+
+ /* Set values in the private adapter struct */
+ i = 0;
+ netdev_for_each_mc_addr(ha, netdev) {
+ if (i == NIC_MAX_MCAST_LIST)
+ break;
+ ether_addr_copy(adapter->multicast_list[i++], ha->addr);
+ }
+ adapter->multicast_addr_count = i;
+
+ /* Are the new flags different from the previous ones? If not, then no
+ * action is required
+ *
+ * NOTE - This block will always update the multicast_list with the
+ * hardware, even if the addresses aren't the same.
+ */
+ if (packet_filter != adapter->packet_filter)
+ et131x_set_packet_filter(adapter);
+}
+
+static netdev_tx_t et131x_tx(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+
+ /* stop the queue if it's getting full */
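+ /* (the Tx completion path wakes it again once usage drops to NUM_TCB / 3) */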
+ if (tx_ring->used >= NUM_TCB - 1 && !netif_queue_stopped(netdev))
+ netif_stop_queue(netdev);
+
+ /* Save the timestamp for the TX timeout watchdog */
+ netdev->trans_start = jiffies;
+
+ /* TCB is not available */
+ if (tx_ring->used >= NUM_TCB)
+ goto drop_err;
+
+ if ((adapter->flags & FMP_ADAPTER_FAIL_SEND_MASK) ||
+ !netif_carrier_ok(netdev))
+ goto drop_err;
+
+ if (send_packet(skb, adapter))
+ goto drop_err;
+
+ return NETDEV_TX_OK;
+
+drop_err:
+ dev_kfree_skb_any(skb);
+ adapter->netdev->stats.tx_dropped++;
+ return NETDEV_TX_OK;
+}
+
+/* et131x_tx_timeout - Timeout handler
+ *
+ * The handler called when a Tx request times out. The timeout period is
+ * specified by the 'watchdog_timeo' element of the net_device structure
+ * (set in et131x_pci_setup()).
+ */
+static void et131x_tx_timeout(struct net_device *netdev)
+{
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+ struct tx_ring *tx_ring = &adapter->tx_ring;
+ struct tcb *tcb;
+ unsigned long flags;
+
+ /* If the device is closed, ignore the timeout */
+ if (!(adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE))
+ return;
+
+ /* Any nonrecoverable hardware error?
+ * Checks adapter->flags for any failure in phy reading
+ */
+ if (adapter->flags & FMP_ADAPTER_NON_RECOVER_ERROR)
+ return;
+
+ /* Hardware failure? */
+ if (adapter->flags & FMP_ADAPTER_HARDWARE_ERROR) {
+ dev_err(&adapter->pdev->dev, "hardware error - reset\n");
+ return;
+ }
+
+ /* Is send stuck? */
+ spin_lock_irqsave(&adapter->tcb_send_qlock, flags);
+ tcb = tx_ring->send_head;
+ spin_unlock_irqrestore(&adapter->tcb_send_qlock, flags);
+
+ if (tcb) {
+ tcb->count++;
+
+ if (tcb->count > NIC_SEND_HANG_THRESHOLD) {
+ dev_warn(&adapter->pdev->dev,
+ "Send stuck - reset. tcb->WrIndex %x\n",
+ tcb->index);
+
+ adapter->netdev->stats.tx_errors++;
+
+ /* perform reset of tx/rx */
+ et131x_disable_txrx(netdev);
+ et131x_enable_txrx(netdev);
+ }
+ }
+}
+
+static int et131x_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ int result = 0;
+ struct et131x_adapter *adapter = netdev_priv(netdev);
+
+ if (new_mtu < 64 || new_mtu > 9216)
+ return -EINVAL;
+
+ et131x_disable_txrx(netdev);
+
+ netdev->mtu = new_mtu;
+
+ et131x_adapter_memory_free(adapter);
+
+ /* Set the config parameter for Jumbo Packet support */
+ adapter->registry_jumbo_packet = new_mtu + 14;
+ et131x_soft_reset(adapter);
+
+ result = et131x_adapter_memory_alloc(adapter);
+ if (result != 0) {
+ dev_warn(&adapter->pdev->dev,
+ "Change MTU failed; couldn't re-alloc DMA memory\n");
+ return result;
+ }
+
+ et131x_init_send(adapter);
+ et131x_hwaddr_init(adapter);
+ ether_addr_copy(netdev->dev_addr, adapter->addr);
+
+ /* Init the device with the new settings */
+ et131x_adapter_setup(adapter);
+ et131x_enable_txrx(netdev);
+
+ return result;
+}
+
+static const struct net_device_ops et131x_netdev_ops = {
+ .ndo_open = et131x_open,
+ .ndo_stop = et131x_close,
+ .ndo_start_xmit = et131x_tx,
+ .ndo_set_rx_mode = et131x_multicast,
+ .ndo_tx_timeout = et131x_tx_timeout,
+ .ndo_change_mtu = et131x_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_get_stats = et131x_stats,
+ .ndo_do_ioctl = et131x_ioctl,
+};
+
+static int et131x_pci_setup(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct et131x_adapter *adapter;
+ int rc;
+ int ii;
+
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "pci_enable_device() failed\n");
+ goto out;
+ }
+
+ /* Perform some basic PCI checks */
+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
+ dev_err(&pdev->dev, "Can't find PCI device's base address\n");
+ rc = -ENODEV;
+ goto err_disable;
+ }
+
+ rc = pci_request_regions(pdev, DRIVER_NAME);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "Can't get PCI resources\n");
+ goto err_disable;
+ }
+
+ pci_set_master(pdev);
+
+ /* Check the DMA addressing support of this device */
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
+ dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+ dev_err(&pdev->dev, "No usable DMA addressing method\n");
+ rc = -EIO;
+ goto err_release_res;
+ }
+
+ netdev = alloc_etherdev(sizeof(struct et131x_adapter));
+ if (!netdev) {
+ dev_err(&pdev->dev, "Couldn't alloc netdev struct\n");
+ rc = -ENOMEM;
+ goto err_release_res;
+ }
+
+ netdev->watchdog_timeo = ET131X_TX_TIMEOUT;
+ netdev->netdev_ops = &et131x_netdev_ops;
+
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ netdev->ethtool_ops = &et131x_ethtool_ops;
+
+ adapter = et131x_adapter_init(netdev, pdev);
+
+ rc = et131x_pci_init(adapter, pdev);
+ if (rc < 0)
+ goto err_free_dev;
+
+ /* Map the bus-relative registers to system virtual memory */
+ adapter->regs = pci_ioremap_bar(pdev, 0);
+ if (!adapter->regs) {
+ dev_err(&pdev->dev, "Cannot map device registers\n");
+ rc = -ENOMEM;
+ goto err_free_dev;
+ }
+
+ /* If Phy COMA mode was enabled when we went down, disable it here. */
+ writel(ET_PMCSR_INIT, &adapter->regs->global.pm_csr);
+
+ et131x_soft_reset(adapter);
+ et131x_disable_interrupts(adapter);
+
+ rc = et131x_adapter_memory_alloc(adapter);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "Could not alloc adapter memory (DMA)\n");
+ goto err_iounmap;
+ }
+
+ et131x_init_send(adapter);
+
+ netif_napi_add(netdev, &adapter->napi, et131x_poll, 64);
+
+ ether_addr_copy(netdev->dev_addr, adapter->addr);
+
+ rc = -ENOMEM;
+
+ adapter->mii_bus = mdiobus_alloc();
+ if (!adapter->mii_bus) {
+ dev_err(&pdev->dev, "Alloc of mii_bus struct failed\n");
+ goto err_mem_free;
+ }
+
+ adapter->mii_bus->name = "et131x_eth_mii";
+ snprintf(adapter->mii_bus->id, MII_BUS_ID_SIZE, "%x",
+ (adapter->pdev->bus->number << 8) | adapter->pdev->devfn);
+ adapter->mii_bus->priv = netdev;
+ adapter->mii_bus->read = et131x_mdio_read;
+ adapter->mii_bus->write = et131x_mdio_write;
+ adapter->mii_bus->irq = kmalloc_array(PHY_MAX_ADDR, sizeof(int),
+ GFP_KERNEL);
+ if (!adapter->mii_bus->irq)
+ goto err_mdio_free;
+
+ for (ii = 0; ii < PHY_MAX_ADDR; ii++)
+ adapter->mii_bus->irq[ii] = PHY_POLL;
+
+ rc = mdiobus_register(adapter->mii_bus);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "failed to register MII bus\n");
+ goto err_mdio_free_irq;
+ }
+
+ rc = et131x_mii_probe(netdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "failed to probe MII bus\n");
+ goto err_mdio_unregister;
+ }
+
+ et131x_adapter_setup(adapter);
+
+ /* Init variable for counting how long we do not have link status */
+ adapter->boot_coma = 0;
+ et1310_disable_phy_coma(adapter);
+
+ /* We can enable interrupts now
+ *
+ * NOTE - Because registration of interrupt handler is done in the
+ * device's open(), defer enabling device interrupts to that
+ * point
+ */
+
+ rc = register_netdev(netdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "register_netdev() failed\n");
+ goto err_phy_disconnect;
+ }
+
+ /* Stash the net_device in the PCI device's driver data so that it can
+ * be retrieved later by remove, suspend and resume.
+ */
+ pci_set_drvdata(pdev, netdev);
+out:
+ return rc;
+
+err_phy_disconnect:
+ phy_disconnect(adapter->phydev);
+err_mdio_unregister:
+ mdiobus_unregister(adapter->mii_bus);
+err_mdio_free_irq:
+ kfree(adapter->mii_bus->irq);
+err_mdio_free:
+ mdiobus_free(adapter->mii_bus);
+err_mem_free:
+ et131x_adapter_memory_free(adapter);
+err_iounmap:
+ iounmap(adapter->regs);
+err_free_dev:
+ pci_dev_put(pdev);
+ free_netdev(netdev);
+err_release_res:
+ pci_release_regions(pdev);
+err_disable:
+ pci_disable_device(pdev);
+ goto out;
+}
+
+static const struct pci_device_id et131x_pci_table[] = {
+ { PCI_VDEVICE(ATT, ET131X_PCI_DEVICE_ID_GIG), 0UL},
+ { PCI_VDEVICE(ATT, ET131X_PCI_DEVICE_ID_FAST), 0UL},
+ { 0,}
+};
+MODULE_DEVICE_TABLE(pci, et131x_pci_table);
+
+static struct pci_driver et131x_driver = {
+ .name = DRIVER_NAME,
+ .id_table = et131x_pci_table,
+ .probe = et131x_pci_setup,
+ .remove = et131x_pci_remove,
+ .driver.pm = &et131x_pm_ops,
+};
+
+module_pci_driver(et131x_driver);
diff --git a/drivers/net/ethernet/agere/et131x.h b/drivers/net/ethernet/agere/et131x.h
new file mode 100644
index 0000000..be9a11c
--- /dev/null
+++ b/drivers/net/ethernet/agere/et131x.h
@@ -0,0 +1,1433 @@
+/* Copyright © 2005 Agere Systems Inc.
+ * All rights reserved.
+ * http://www.agere.com
+ *
+ * SOFTWARE LICENSE
+ *
+ * This software is provided subject to the following terms and conditions,
+ * which you should read carefully before using the software. Using this
+ * software indicates your acceptance of these terms and conditions. If you do
+ * not agree with these terms and conditions, do not use the software.
+ *
+ * Copyright © 2005 Agere Systems Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source or binary forms, with or without
+ * modifications, are permitted provided that the following conditions are met:
+ *
+ * . Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following Disclaimer as comments in the code as
+ * well as in the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * . Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following Disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * . Neither the name of Agere Systems Inc. nor the names of the contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * Disclaimer
+ *
+ * THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
+ * INCLUDING, BUT NOT LIMITED TO, INFRINGEMENT AND THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. ANY
+ * USE, MODIFICATION OR DISTRIBUTION OF THIS SOFTWARE IS SOLELY AT THE USERS OWN
+ * RISK. IN NO EVENT SHALL AGERE SYSTEMS INC. OR CONTRIBUTORS BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, INCLUDING, BUT NOT LIMITED TO, CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ *
+ */
+
+#define DRIVER_NAME "et131x"
+#define DRIVER_VERSION "v2.0"
+
+/* EEPROM registers */
+
+/* LBCIF Register Groups (addressed via 32-bit offsets) */
+#define LBCIF_DWORD0_GROUP 0xAC
+#define LBCIF_DWORD1_GROUP 0xB0
+
+/* LBCIF Registers (addressed via 8-bit offsets) */
+#define LBCIF_ADDRESS_REGISTER 0xAC
+#define LBCIF_DATA_REGISTER 0xB0
+#define LBCIF_CONTROL_REGISTER 0xB1
+#define LBCIF_STATUS_REGISTER 0xB2
+
+/* LBCIF Control Register Bits */
+#define LBCIF_CONTROL_SEQUENTIAL_READ 0x01
+#define LBCIF_CONTROL_PAGE_WRITE 0x02
+#define LBCIF_CONTROL_EEPROM_RELOAD 0x08
+#define LBCIF_CONTROL_TWO_BYTE_ADDR 0x20
+#define LBCIF_CONTROL_I2C_WRITE 0x40
+#define LBCIF_CONTROL_LBCIF_ENABLE 0x80
+
+/* LBCIF Status Register Bits */
+#define LBCIF_STATUS_PHY_QUEUE_AVAIL 0x01
+#define LBCIF_STATUS_I2C_IDLE 0x02
+#define LBCIF_STATUS_ACK_ERROR 0x04
+#define LBCIF_STATUS_GENERAL_ERROR 0x08
+#define LBCIF_STATUS_CHECKSUM_ERROR 0x40
+#define LBCIF_STATUS_EEPROM_PRESENT 0x80
+
+/* START OF GLOBAL REGISTER ADDRESS MAP */
+/* 10bit registers
+ *
+ * Tx queue start address reg in global address map at address 0x0000
+ * tx queue end address reg in global address map at address 0x0004
+ * rx queue start address reg in global address map at address 0x0008
+ * rx queue end address reg in global address map at address 0x000C
+ */
+
+/* structure for power management control status reg in global address map
+ * located at address 0x0010
+ * jagcore_rx_rdy bit 9
+ * jagcore_tx_rdy bit 8
+ * phy_lped_en bit 7
+ * phy_sw_coma bit 6
+ * rxclk_gate bit 5
+ * txclk_gate bit 4
+ * sysclk_gate bit 3
+ * jagcore_rx_en bit 2
+ * jagcore_tx_en bit 1
+ * gigephy_en bit 0
+ */
+#define ET_PM_PHY_SW_COMA 0x40
+#define ET_PMCSR_INIT 0x38
+
+/* Interrupt status reg at address 0x0018
+ */
+#define ET_INTR_TXDMA_ISR 0x00000008
+#define ET_INTR_TXDMA_ERR 0x00000010
+#define ET_INTR_RXDMA_XFR_DONE 0x00000020
+#define ET_INTR_RXDMA_FB_R0_LOW 0x00000040
+#define ET_INTR_RXDMA_FB_R1_LOW 0x00000080
+#define ET_INTR_RXDMA_STAT_LOW 0x00000100
+#define ET_INTR_RXDMA_ERR 0x00000200
+#define ET_INTR_WATCHDOG 0x00004000
+#define ET_INTR_WOL 0x00008000
+#define ET_INTR_PHY 0x00010000
+#define ET_INTR_TXMAC 0x00020000
+#define ET_INTR_RXMAC 0x00040000
+#define ET_INTR_MAC_STAT 0x00080000
+#define ET_INTR_SLV_TIMEOUT 0x00100000
+
+/* Interrupt mask register at address 0x001C
+ * Interrupt alias clear mask reg at address 0x0020
+ * Interrupt status alias reg at address 0x0024
+ *
+ * Same masks as above
+ */
+
+/* Software reset reg at address 0x0028
+ * 0: txdma_sw_reset
+ * 1: rxdma_sw_reset
+ * 2: txmac_sw_reset
+ * 3: rxmac_sw_reset
+ * 4: mac_sw_reset
+ * 5: mac_stat_sw_reset
+ * 6: mmc_sw_reset
+ *31: selfclr_disable
+ */
+#define ET_RESET_ALL 0x007F
+
+/* SLV Timer reg at address 0x002C (low 24 bits)
+ */
+
+/* MSI Configuration reg at address 0x0030
+ */
+#define ET_MSI_VECTOR 0x0000001F
+#define ET_MSI_TC 0x00070000
+
+/* Loopback reg located at address 0x0034
+ */
+#define ET_LOOP_MAC 0x00000001
+#define ET_LOOP_DMA 0x00000002
+
+/* GLOBAL Module of JAGCore Address Mapping
+ * Located at address 0x0000
+ */
+struct global_regs { /* Location: */
+ u32 txq_start_addr; /* 0x0000 */
+ u32 txq_end_addr; /* 0x0004 */
+ u32 rxq_start_addr; /* 0x0008 */
+ u32 rxq_end_addr; /* 0x000C */
+ u32 pm_csr; /* 0x0010 */
+ u32 unused; /* 0x0014 */
+ u32 int_status; /* 0x0018 */
+ u32 int_mask; /* 0x001C */
+ u32 int_alias_clr_en; /* 0x0020 */
+ u32 int_status_alias; /* 0x0024 */
+ u32 sw_reset; /* 0x0028 */
+ u32 slv_timer; /* 0x002C */
+ u32 msi_config; /* 0x0030 */
+ u32 loopback; /* 0x0034 */
+ u32 watchdog_timer; /* 0x0038 */
+};
+
+/* START OF TXDMA REGISTER ADDRESS MAP */
+/* txdma control status reg at address 0x1000
+ */
+#define ET_TXDMA_CSR_HALT 0x00000001
+#define ET_TXDMA_DROP_TLP 0x00000002
+#define ET_TXDMA_CACHE_THRS 0x000000F0
+#define ET_TXDMA_CACHE_SHIFT 4
+#define ET_TXDMA_SNGL_EPKT 0x00000100
+#define ET_TXDMA_CLASS 0x00001E00
+
+/* structure for txdma packet ring base address hi reg in txdma address map
+ * located at address 0x1004
+ * Defined earlier (u32)
+ */
+
+/* structure for txdma packet ring base address low reg in txdma address map
+ * located at address 0x1008
+ * Defined earlier (u32)
+ */
+
+/* structure for txdma packet ring number of descriptor reg in txdma address
+ * map. Located at address 0x100C
+ *
+ * 31-10: unused
+ * 9-0: pr ndes
+ */
+#define ET_DMA12_MASK 0x0FFF /* 12 bit mask for DMA12W types */
+#define ET_DMA12_WRAP 0x1000
+#define ET_DMA10_MASK 0x03FF /* 10 bit mask for DMA10W types */
+#define ET_DMA10_WRAP 0x0400
+#define ET_DMA4_MASK 0x000F /* 4 bit mask for DMA4W types */
+#define ET_DMA4_WRAP 0x0010
+
+#define INDEX12(x) ((x) & ET_DMA12_MASK)
+#define INDEX10(x) ((x) & ET_DMA10_MASK)
+#define INDEX4(x) ((x) & ET_DMA4_MASK)
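+
+ /* The *_WRAP bit sits just above the index field and toggles on each
+ * wrap, so a full ring can be distinguished from an empty one when the
+ * masked indices are equal.
+ */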
+
+/* 10bit DMA with wrap
+ * txdma tx queue write address reg in txdma address map at 0x1010
+ * txdma tx queue write address external reg in txdma address map at 0x1014
+ * txdma tx queue read address reg in txdma address map at 0x1018
+ *
+ * u32
+ * txdma status writeback address hi reg in txdma address map at 0x101C
+ * txdma status writeback address lo reg in txdma address map at 0x1020
+ *
+ * 10bit DMA with wrap
+ * txdma service request reg in txdma address map at 0x1024
+ * structure for txdma service complete reg in txdma address map at 0x1028
+ *
+ * 4bit DMA with wrap
+ * txdma tx descriptor cache read index reg in txdma address map at 0x102C
+ * txdma tx descriptor cache write index reg in txdma address map at 0x1030
+ *
+ * txdma error reg in txdma address map at address 0x1034
+ * 0: PyldResend
+ * 1: PyldRewind
+ * 4: DescrResend
+ * 5: DescrRewind
+ * 8: WrbkResend
+ * 9: WrbkRewind
+ */
+
+/* Tx DMA Module of JAGCore Address Mapping
+ * Located at address 0x1000
+ */
+struct txdma_regs { /* Location: */
+ u32 csr; /* 0x1000 */
+ u32 pr_base_hi; /* 0x1004 */
+ u32 pr_base_lo; /* 0x1008 */
+ u32 pr_num_des; /* 0x100C */
+ u32 txq_wr_addr; /* 0x1010 */
+ u32 txq_wr_addr_ext; /* 0x1014 */
+ u32 txq_rd_addr; /* 0x1018 */
+ u32 dma_wb_base_hi; /* 0x101C */
+ u32 dma_wb_base_lo; /* 0x1020 */
+ u32 service_request; /* 0x1024 */
+ u32 service_complete; /* 0x1028 */
+ u32 cache_rd_index; /* 0x102C */
+ u32 cache_wr_index; /* 0x1030 */
+ u32 tx_dma_error; /* 0x1034 */
+ u32 desc_abort_cnt; /* 0x1038 */
+ u32 payload_abort_cnt; /* 0x103c */
+ u32 writeback_abort_cnt; /* 0x1040 */
+ u32 desc_timeout_cnt; /* 0x1044 */
+ u32 payload_timeout_cnt; /* 0x1048 */
+ u32 writeback_timeout_cnt; /* 0x104c */
+ u32 desc_error_cnt; /* 0x1050 */
+ u32 payload_error_cnt; /* 0x1054 */
+ u32 writeback_error_cnt; /* 0x1058 */
+ u32 dropped_tlp_cnt; /* 0x105c */
+ u32 new_service_complete; /* 0x1060 */
+ u32 ethernet_packet_cnt; /* 0x1064 */
+};
+
+/* END OF TXDMA REGISTER ADDRESS MAP */
+
+/* START OF RXDMA REGISTER ADDRESS MAP */
+/* structure for control status reg in rxdma address map
+ * Located at address 0x2000
+ *
+ * CSR
+ * 0: halt
+ * 1-3: tc
+ * 4: fbr_big_endian
+ * 5: psr_big_endian
+ * 6: pkt_big_endian
+ * 7: dma_big_endian
+ * 8-9: fbr0_size
+ * 10: fbr0_enable
+ * 11-12: fbr1_size
+ * 13: fbr1_enable
+ * 14: unused
+ * 15: pkt_drop_disable
+ * 16: pkt_done_flush
+ * 17: halt_status
+ * 18-31: unused
+ */
+#define ET_RXDMA_CSR_HALT 0x0001
+#define ET_RXDMA_CSR_FBR0_SIZE_LO 0x0100
+#define ET_RXDMA_CSR_FBR0_SIZE_HI 0x0200
+#define ET_RXDMA_CSR_FBR0_ENABLE 0x0400
+#define ET_RXDMA_CSR_FBR1_SIZE_LO 0x0800
+#define ET_RXDMA_CSR_FBR1_SIZE_HI 0x1000
+#define ET_RXDMA_CSR_FBR1_ENABLE 0x2000
+#define ET_RXDMA_CSR_HALT_STATUS 0x00020000
+
+/* structure for dma writeback lo reg in rxdma address map
+ * located at address 0x2004
+ * Defined earlier (u32)
+ */
+
+/* structure for dma writeback hi reg in rxdma address map
+ * located at address 0x2008
+ * Defined earlier (u32)
+ */
+
+/* structure for number of packets done reg in rxdma address map
+ * located at address 0x200C
+ *
+ * 31-8: unused
+ * 7-0: num done
+ */
+
+/* structure for max packet time reg in rxdma address map
+ * located at address 0x2010
+ *
+ * 31-18: unused
+ * 17-0: time done
+ */
+
+/* structure for rx queue read address reg in rxdma address map
+ * located at address 0x2014
+ * Defined earlier (u32)
+ */
+
+/* structure for rx queue read address external reg in rxdma address map
+ * located at address 0x2018
+ * Defined earlier (u32)
+ */
+
+/* structure for rx queue write address reg in rxdma address map
+ * located at address 0x201C
+ * Defined earlier (u32)
+ */
+
+/* structure for packet status ring base address lo reg in rxdma address map
+ * located at address 0x2020
+ * Defined earlier (u32)
+ */
+
+/* structure for packet status ring base address hi reg in rxdma address map
+ * located at address 0x2024
+ * Defined earlier (u32)
+ */
+
+/* structure for packet status ring number of descriptors reg in rxdma address
+ * map. Located at address 0x2028
+ *
+ * 31-12: unused
+ * 11-0: psr ndes
+ */
+#define ET_RXDMA_PSR_NUM_DES_MASK 0xFFF
+
+/* structure for packet status ring available offset reg in rxdma address map
+ * located at address 0x202C
+ *
+ * 31-13: unused
+ * 12: psr avail wrap
+ * 11-0: psr avail
+ */
+
+/* structure for packet status ring full offset reg in rxdma address map
+ * located at address 0x2030
+ *
+ * 31-13: unused
+ * 12: psr full wrap
+ * 11-0: psr full
+ */
+
+/* structure for packet status ring access index reg in rxdma address map
+ * located at address 0x2034
+ *
+ * 31-5: unused
+ * 4-0: psr_ai
+ */
+
+/* structure for packet status ring minimum descriptors reg in rxdma address
+ * map. Located at address 0x2038
+ *
+ * 31-12: unused
+ * 11-0: psr_min
+ */
+
+/* structure for free buffer ring base lo address reg in rxdma address map
+ * located at address 0x203C
+ * Defined earlier (u32)
+ */
+
+/* structure for free buffer ring base hi address reg in rxdma address map
+ * located at address 0x2040
+ * Defined earlier (u32)
+ */
+
+/* structure for free buffer ring number of descriptors reg in rxdma address
+ * map. Located at address 0x2044
+ *
+ * 31-10: unused
+ * 9-0: fbr ndesc
+ */
+
+/* structure for free buffer ring 0 available offset reg in rxdma address map
+ * located at address 0x2048
+ * Defined earlier (u32)
+ */
+
+/* structure for free buffer ring 0 full offset reg in rxdma address map
+ * located at address 0x204C
+ * Defined earlier (u32)
+ */
+
+/* structure for free buffer cache 0 full offset reg in rxdma address map
+ * located at address 0x2050
+ *
+ * 31-5: unused
+ * 4-0: fbc rdi
+ */
+
+/* structure for free buffer ring 0 minimum descriptor reg in rxdma address map
+ * located at address 0x2054
+ *
+ * 31-10: unused
+ * 9-0: fbr min
+ */
+
+/* structure for free buffer ring 1 base address lo reg in rxdma address map
+ * located at address 0x2058 - 0x205C
+ * Defined earlier (RXDMA_FBR_BASE_LO_t and RXDMA_FBR_BASE_HI_t)
+ */
+
+/* structure for free buffer ring 1 number of descriptors reg in rxdma address
+ * map. Located at address 0x2060
+ * Defined earlier (RXDMA_FBR_NUM_DES_t)
+ */
+
+/* structure for free buffer ring 1 available offset reg in rxdma address map
+ * located at address 0x2064
+ * Defined Earlier (RXDMA_FBR_AVAIL_OFFSET_t)
+ */
+
+/* structure for free buffer ring 1 full offset reg in rxdma address map
+ * located at address 0x2068
+ * Defined Earlier (RXDMA_FBR_FULL_OFFSET_t)
+ */
+
+/* structure for free buffer cache 1 read index reg in rxdma address map
+ * located at address 0x206C
+ * Defined Earlier (RXDMA_FBC_RD_INDEX_t)
+ */
+
+/* structure for free buffer ring 1 minimum descriptor reg in rxdma address map
+ * located at address 0x2070
+ * Defined Earlier (RXDMA_FBR_MIN_DES_t)
+ */
+
+/* Rx DMA Module of JAGCore Address Mapping
+ * Located at address 0x2000
+ */
+struct rxdma_regs { /* Location: */
+ u32 csr; /* 0x2000 */
+ u32 dma_wb_base_lo; /* 0x2004 */
+ u32 dma_wb_base_hi; /* 0x2008 */
+ u32 num_pkt_done; /* 0x200C */
+ u32 max_pkt_time; /* 0x2010 */
+ u32 rxq_rd_addr; /* 0x2014 */
+ u32 rxq_rd_addr_ext; /* 0x2018 */
+ u32 rxq_wr_addr; /* 0x201C */
+ u32 psr_base_lo; /* 0x2020 */
+ u32 psr_base_hi; /* 0x2024 */
+ u32 psr_num_des; /* 0x2028 */
+ u32 psr_avail_offset; /* 0x202C */
+ u32 psr_full_offset; /* 0x2030 */
+ u32 psr_access_index; /* 0x2034 */
+ u32 psr_min_des; /* 0x2038 */
+ u32 fbr0_base_lo; /* 0x203C */
+ u32 fbr0_base_hi; /* 0x2040 */
+ u32 fbr0_num_des; /* 0x2044 */
+ u32 fbr0_avail_offset; /* 0x2048 */
+ u32 fbr0_full_offset; /* 0x204C */
+ u32 fbr0_rd_index; /* 0x2050 */
+ u32 fbr0_min_des; /* 0x2054 */
+ u32 fbr1_base_lo; /* 0x2058 */
+ u32 fbr1_base_hi; /* 0x205C */
+ u32 fbr1_num_des; /* 0x2060 */
+ u32 fbr1_avail_offset; /* 0x2064 */
+ u32 fbr1_full_offset; /* 0x2068 */
+ u32 fbr1_rd_index; /* 0x206C */
+ u32 fbr1_min_des; /* 0x2070 */
+};
+
+/* END OF RXDMA REGISTER ADDRESS MAP */
+
+/* START OF TXMAC REGISTER ADDRESS MAP */
+/* structure for control reg in txmac address map
+ * located at address 0x3000
+ *
+ * bits
+ * 31-8: unused
+ * 7: cklseg_disable
+ * 6: ckbcnt_disable
+ * 5: cksegnum
+ * 4: async_disable
+ * 3: fc_disable
+ * 2: mcif_disable
+ * 1: mif_disable
+ * 0: txmac_en
+ */
+#define ET_TX_CTRL_FC_DISABLE 0x0008
+#define ET_TX_CTRL_TXMAC_ENABLE 0x0001
+
+/* structure for shadow pointer reg in txmac address map
+ * located at address 0x3004
+ * 31-27: reserved
+ * 26-16: txq rd ptr
+ * 15-11: reserved
+ * 10-0: txq wr ptr
+ */
+
+/* structure for error count reg in txmac address map
+ * located at address 0x3008
+ *
+ * 31-12: unused
+ * 11-8: reserved
+ * 7-4: txq_underrun
+ * 3-0: fifo_underrun
+ */
+
+/* structure for max fill reg in txmac address map
+ * located at address 0x300C
+ * 31-12: unused
+ * 11-0: max fill
+ */
+
+/* structure for cf parameter reg in txmac address map
+ * located at address 0x3010
+ * 31-16: cfep
+ * 15-0: cfpt
+ */
+
+/* structure for tx test reg in txmac address map
+ * located at address 0x3014
+ * 31-17: unused
+ * 16: reserved
+ * 15: txtest_en
+ * 14-11: unused
+ * 10-0: txq test pointer
+ */
+
+/* structure for error reg in txmac address map
+ * located at address 0x3018
+ *
+ * 31-9: unused
+ * 8: fifo_underrun
+ * 7-6: unused
+ * 5: ctrl2_err
+ * 4: txq_underrun
+ * 3: bcnt_err
+ * 2: lseg_err
+ * 1: segnum_err
+ * 0: seg0_err
+ */
+
+/* structure for error interrupt reg in txmac address map
+ * located at address 0x301C
+ *
+ * 31-9: unused
+ * 8: fifo_underrun
+ * 7-6: unused
+ * 5: ctrl2_err
+ * 4: txq_underrun
+ * 3: bcnt_err
+ * 2: lseg_err
+ * 1: segnum_err
+ * 0: seg0_err
+ */
+
+/* structure for error interrupt reg in txmac address map
+ * located at address 0x3020
+ *
+ * 31-2: unused
+ * 1: bp_req
+ * 0: bp_xonxoff
+ */
+
+/* Tx MAC Module of JAGCore Address Mapping
+ */
+struct txmac_regs { /* Location: */
+ u32 ctl; /* 0x3000 */
+ u32 shadow_ptr; /* 0x3004 */
+ u32 err_cnt; /* 0x3008 */
+ u32 max_fill; /* 0x300C */
+ u32 cf_param; /* 0x3010 */
+ u32 tx_test; /* 0x3014 */
+ u32 err; /* 0x3018 */
+ u32 err_int; /* 0x301C */
+ u32 bp_ctrl; /* 0x3020 */
+};
+
+/* END OF TXMAC REGISTER ADDRESS MAP */
+
+/* START OF RXMAC REGISTER ADDRESS MAP */
+
+/* structure for rxmac control reg in rxmac address map
+ * located at address 0x4000
+ *
+ * 31-7: reserved
+ * 6: rxmac_int_disable
+ * 5: async_disable
+ * 4: mif_disable
+ * 3: wol_disable
+ * 2: pkt_filter_disable
+ * 1: mcif_disable
+ * 0: rxmac_en
+ */
+#define ET_RX_CTRL_WOL_DISABLE 0x0008
+#define ET_RX_CTRL_RXMAC_ENABLE 0x0001
+
+/* structure for Wake On Lan Control and CRC 0 reg in rxmac address map
+ * located at address 0x4004
+ * 31-16: crc
+ * 15-12: reserved
+ * 11: ignore_pp
+ * 10: ignore_mp
+ * 9: clr_intr
+ * 8: ignore_link_chg
+ * 7: ignore_uni
+ * 6: ignore_multi
+ * 5: ignore_broad
+ * 4-0: valid_crc 4-0
+ */
+
+/* structure for CRC 1 and CRC 2 reg in rxmac address map
+ * located at address 0x4008
+ *
+ * 31-16: crc2
+ * 15-0: crc1
+ */
+
+/* structure for CRC 3 and CRC 4 reg in rxmac address map
+ * located at address 0x400C
+ *
+ * 31-16: crc4
+ * 15-0: crc3
+ */
+
+/* structure for Wake On Lan Source Address Lo reg in rxmac address map
+ * located at address 0x4010
+ *
+ * 31-24: sa3
+ * 23-16: sa4
+ * 15-8: sa5
+ * 7-0: sa6
+ */
+#define ET_RX_WOL_LO_SA3_SHIFT 24
+#define ET_RX_WOL_LO_SA4_SHIFT 16
+#define ET_RX_WOL_LO_SA5_SHIFT 8
+
+/* structure for Wake On Lan Source Address Hi reg in rxmac address map
+ * located at address 0x4014
+ *
+ * 31-16: reserved
+ * 15-8: sa1
+ * 7-0: sa2
+ */
+#define ET_RX_WOL_HI_SA1_SHIFT 8
+
+/* structure for Wake On Lan mask reg in rxmac address map
+ * located at address 0x4018 - 0x4064
+ * Defined earlier (u32)
+ */
+
+/* structure for Unicast Packet Filter Address 1 reg in rxmac address map
+ * located at address 0x4068
+ *
+ * 31-24: addr1_3
+ * 23-16: addr1_4
+ * 15-8: addr1_5
+ * 7-0: addr1_6
+ */
+#define ET_RX_UNI_PF_ADDR1_3_SHIFT 24
+#define ET_RX_UNI_PF_ADDR1_4_SHIFT 16
+#define ET_RX_UNI_PF_ADDR1_5_SHIFT 8
+
+/* structure for Unicast Packet Filter Address 2 reg in rxmac address map
+ * located at address 0x406C
+ *
+ * 31-24: addr2_3
+ * 23-16: addr2_4
+ * 15-8: addr2_5
+ * 7-0: addr2_6
+ */
+#define ET_RX_UNI_PF_ADDR2_3_SHIFT 24
+#define ET_RX_UNI_PF_ADDR2_4_SHIFT 16
+#define ET_RX_UNI_PF_ADDR2_5_SHIFT 8
+
+/* structure for Unicast Packet Filter Address 1 & 2 reg in rxmac address map
+ * located at address 0x4070
+ *
+ * 31-24: addr2_1
+ * 23-16: addr2_2
+ * 15-8: addr1_1
+ * 7-0: addr1_2
+ */
+#define ET_RX_UNI_PF_ADDR2_1_SHIFT 24
+#define ET_RX_UNI_PF_ADDR2_2_SHIFT 16
+#define ET_RX_UNI_PF_ADDR1_1_SHIFT 8
+
+/* structure for Multicast Hash reg in rxmac address map
+ * located at address 0x4074 - 0x4080
+ * Defined earlier (u32)
+ */
+
+/* structure for Packet Filter Control reg in rxmac address map
+ * located at address 0x4084
+ *
+ * 31-23: unused
+ * 22-16: min_pkt_size
+ * 15-4: unused
+ * 3: filter_frag_en
+ * 2: filter_uni_en
+ * 1: filter_multi_en
+ * 0: filter_broad_en
+ */
+#define ET_RX_PFCTRL_MIN_PKT_SZ_SHIFT 16
+#define ET_RX_PFCTRL_FRAG_FILTER_ENABLE 0x0008
+#define ET_RX_PFCTRL_UNICST_FILTER_ENABLE 0x0004
+#define ET_RX_PFCTRL_MLTCST_FILTER_ENABLE 0x0002
+#define ET_RX_PFCTRL_BRDCST_FILTER_ENABLE 0x0001
+
+/* structure for Memory Controller Interface Control Max Segment reg in rxmac
+ * address map. Located at address 0x4088
+ *
+ * 31-10: reserved
+ * 9-2: max_size
+ * 1: fc_en
+ * 0: seg_en
+ */
+#define ET_RX_MCIF_CTRL_MAX_SEG_SIZE_SHIFT 2
+#define ET_RX_MCIF_CTRL_MAX_SEG_FC_ENABLE 0x0002
+#define ET_RX_MCIF_CTRL_MAX_SEG_ENABLE 0x0001
+
+/* structure for Memory Controller Interface Water Mark reg in rxmac address
+ * map. Located at address 0x408C
+ *
+ * 31-26: unused
+ * 25-16: mark_hi
+ * 15-10: unused
+ * 9-0: mark_lo
+ */
+
+/* structure for Rx Queue Dialog reg in rxmac address map.
+ * located at address 0x4090
+ *
+ * 31-26: reserved
+ * 25-16: rd_ptr
+ * 15-10: reserved
+ * 9-0: wr_ptr
+ */
+
+/* structure for space available reg in rxmac address map.
+ * located at address 0x4094
+ *
+ * 31-17: reserved
+ * 16: space_avail_en
+ * 15-10: reserved
+ * 9-0: space_avail
+ */
+
+/* structure for management interface reg in rxmac address map.
+ * located at address 0x4098
+ *
+ * 31-18: reserved
+ * 17: drop_pkt_en
+ * 16-0: drop_pkt_mask
+ */
+
+/* structure for Error reg in rxmac address map.
+ * located at address 0x409C
+ *
+ * 31-4: unused
+ * 3: mif
+ * 2: async
+ * 1: pkt_filter
+ * 0: mcif
+ */
+
+/* Rx MAC Module of JAGCore Address Mapping
+ */
+struct rxmac_regs { /* Location: */
+ u32 ctrl; /* 0x4000 */
+ u32 crc0; /* 0x4004 */
+ u32 crc12; /* 0x4008 */
+ u32 crc34; /* 0x400C */
+ u32 sa_lo; /* 0x4010 */
+ u32 sa_hi; /* 0x4014 */
+ u32 mask0_word0; /* 0x4018 */
+ u32 mask0_word1; /* 0x401C */
+ u32 mask0_word2; /* 0x4020 */
+ u32 mask0_word3; /* 0x4024 */
+ u32 mask1_word0; /* 0x4028 */
+ u32 mask1_word1; /* 0x402C */
+ u32 mask1_word2; /* 0x4030 */
+ u32 mask1_word3; /* 0x4034 */
+ u32 mask2_word0; /* 0x4038 */
+ u32 mask2_word1; /* 0x403C */
+ u32 mask2_word2; /* 0x4040 */
+ u32 mask2_word3; /* 0x4044 */
+ u32 mask3_word0; /* 0x4048 */
+ u32 mask3_word1; /* 0x404C */
+ u32 mask3_word2; /* 0x4050 */
+ u32 mask3_word3; /* 0x4054 */
+ u32 mask4_word0; /* 0x4058 */
+ u32 mask4_word1; /* 0x405C */
+ u32 mask4_word2; /* 0x4060 */
+ u32 mask4_word3; /* 0x4064 */
+ u32 uni_pf_addr1; /* 0x4068 */
+ u32 uni_pf_addr2; /* 0x406C */
+ u32 uni_pf_addr3; /* 0x4070 */
+ u32 multi_hash1; /* 0x4074 */
+ u32 multi_hash2; /* 0x4078 */
+ u32 multi_hash3; /* 0x407C */
+ u32 multi_hash4; /* 0x4080 */
+ u32 pf_ctrl; /* 0x4084 */
+ u32 mcif_ctrl_max_seg; /* 0x4088 */
+ u32 mcif_water_mark; /* 0x408C */
+ u32 rxq_diag; /* 0x4090 */
+ u32 space_avail; /* 0x4094 */
+
+ u32 mif_ctrl; /* 0x4098 */
+ u32 err_reg; /* 0x409C */
+};
+
+/* END OF RXMAC REGISTER ADDRESS MAP */
+
+/* START OF MAC REGISTER ADDRESS MAP */
+/* structure for configuration #1 reg in mac address map.
+ * located at address 0x5000
+ *
+ * 31: soft reset
+ * 30: sim reset
+ * 29-20: reserved
+ * 19: reset rx mc
+ * 18: reset tx mc
+ * 17: reset rx func
+ * 16: reset tx fnc
+ * 15-9: reserved
+ * 8: loopback
+ * 7-6: reserved
+ * 5: rx flow
+ * 4: tx flow
+ * 3: syncd rx en
+ * 2: rx enable
+ * 1: syncd tx en
+ * 0: tx enable
+ */
+#define ET_MAC_CFG1_SOFT_RESET 0x80000000
+#define ET_MAC_CFG1_SIM_RESET 0x40000000
+#define ET_MAC_CFG1_RESET_RXMC 0x00080000
+#define ET_MAC_CFG1_RESET_TXMC 0x00040000
+#define ET_MAC_CFG1_RESET_RXFUNC 0x00020000
+#define ET_MAC_CFG1_RESET_TXFUNC 0x00010000
+#define ET_MAC_CFG1_LOOPBACK 0x00000100
+#define ET_MAC_CFG1_RX_FLOW 0x00000020
+#define ET_MAC_CFG1_TX_FLOW 0x00000010
+#define ET_MAC_CFG1_RX_ENABLE 0x00000004
+#define ET_MAC_CFG1_TX_ENABLE 0x00000001
+#define ET_MAC_CFG1_WAIT 0x0000000A /* RX & TX syncd */
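An illustrative sketch of how the config #1 bits above compose; the `mac` pointer (a mapped `struct mac_regs`) and the exact reset sequence are assumptions for the example, not taken from this patch:

        /* pulse the MAC soft reset, then enable RX/TX with flow control */
        writel(ET_MAC_CFG1_SOFT_RESET, &mac->cfg1);
        writel(ET_MAC_CFG1_RX_FLOW | ET_MAC_CFG1_TX_FLOW |
               ET_MAC_CFG1_RX_ENABLE | ET_MAC_CFG1_TX_ENABLE, &mac->cfg1);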
+
+/* structure for configuration #2 reg in mac address map.
+ * located at address 0x5004
+ * 31-16: reserved
+ * 15-12: preamble
+ * 11-10: reserved
+ * 9-8: if mode
+ * 7-6: reserved
+ * 5: huge frame
+ * 4: length check
+ * 3: undefined
+ * 2: pad crc
+ * 1: crc enable
+ * 0: full duplex
+ */
+#define ET_MAC_CFG2_PREAMBLE_SHIFT 12
+#define ET_MAC_CFG2_IFMODE_MASK 0x0300
+#define ET_MAC_CFG2_IFMODE_1000 0x0200
+#define ET_MAC_CFG2_IFMODE_100 0x0100
+#define ET_MAC_CFG2_IFMODE_HUGE_FRAME 0x0020
+#define ET_MAC_CFG2_IFMODE_LEN_CHECK 0x0010
+#define ET_MAC_CFG2_IFMODE_PAD_CRC 0x0004
+#define ET_MAC_CFG2_IFMODE_CRC_ENABLE 0x0002
+#define ET_MAC_CFG2_IFMODE_FULL_DPLX 0x0001
+
+/* structure for Interpacket gap reg in mac address map.
+ * located at address 0x5008
+ *
+ * 31: reserved
+ * 30-24: non B2B ipg 1
+ * 23: undefined
+ * 22-16: non B2B ipg 2
+ * 15-8: Min ifg enforce
+ * 7-0: B2B ipg
+ *
+ * structure for half duplex reg in mac address map.
+ * located at address 0x500C
+ * 31-24: reserved
+ * 23-20: Alt BEB trunc
+ * 19: Alt BEB enable
+ * 18: BP no backoff
+ * 17: no backoff
+ * 16: excess defer
+ * 15-12: re-xmit max
+ * 11-10: reserved
+ * 9-0: collision window
+ */
+
+/* structure for Maximum Frame Length reg in mac address map.
+ * located at address 0x5010: bits 0-15 hold the length.
+ */
+
+/* structure for Reserve 1 reg in mac address map.
+ * located at address 0x5014 - 0x5018
+ * Defined earlier (u32)
+ */
+
+/* structure for Test reg in mac address map.
+ * located at address 0x501C
+ * test: bits 0-2, rest unused
+ */
+
+/* structure for MII Management Configuration reg in mac address map.
+ * located at address 0x5020
+ *
+ * 31: reset MII mgmt
+ * 30-6: unused
+ * 5: scan auto increment
+ * 4: preamble suppress
+ * 3: undefined
+ * 2-0: mgmt clock reset
+ */
+#define ET_MAC_MIIMGMT_CLK_RST 0x0007
+
+/* structure for MII Management Command reg in mac address map.
+ * located at address 0x5024
+ * bit 1: scan cycle
+ * bit 0: read cycle
+ */
+
+/* structure for MII Management Address reg in mac address map.
+ * located at address 0x5028
+ * 31-13: reserved
+ * 12-8: phy addr
+ * 7-5: reserved
+ * 4-0: register
+ */
+#define ET_MAC_MII_ADDR(phy, reg) ((phy) << 8 | (reg))
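A hedged usage sketch for the macro above; the `mac` pointer is hypothetical, and MII_BMSR (register 1) comes from include/linux/mii.h as noted further down:

        /* select PHY address 3, register 1 (BMSR) for the next MII management cycle */
        writel(ET_MAC_MII_ADDR(3, MII_BMSR), &mac->mii_mgmt_addr);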
+
+/* structure for MII Management Control reg in mac address map.
+ * located at address 0x502C
+ * 31-16: reserved
+ * 15-0: phy control
+ */
+
+/* structure for MII Management Status reg in mac address map.
+ * located at address 0x5030
+ * 31-16: reserved
+ * 15-0: phy status (MII read data)
+ */
+#define ET_MAC_MIIMGMT_STAT_PHYCRTL_MASK 0xFFFF
+
+/* structure for MII Management Indicators reg in mac address map.
+ * located at address 0x5034
+ * 31-3: reserved
+ * 2: not valid
+ * 1: scanning
+ * 0: busy
+ */
+#define ET_MAC_MGMT_BUSY 0x00000001 /* busy */
+#define ET_MAC_MGMT_WAIT 0x00000005 /* busy | not valid */
+
+/* structure for Interface Control reg in mac address map.
+ * located at address 0x5038
+ *
+ * 31: reset if module
+ * 30-28: reserved
+ * 27: tbi mode
+ * 26: ghd mode
+ * 25: lhd mode
+ * 24: phy mode
+ * 23: reset per mii
+ * 22-17: reserved
+ * 16: speed
+ * 15: reset pe100x
+ * 14-11: reserved
+ * 10: force quiet
+ * 9: no cipher
+ * 8: disable link fail
+ * 7: reset gpsi
+ * 6-1: reserved
+ * 0: enable jabber protection
+ */
+#define ET_MAC_IFCTRL_GHDMODE (1 << 26)
+#define ET_MAC_IFCTRL_PHYMODE (1 << 24)
+
+/* structure for Interface Status reg in mac address map.
+ * located at address 0x503C
+ *
+ * 31-10: reserved
+ * 9: excess_defer
+ * 8: clash
+ * 7: phy_jabber
+ * 6: phy_link_ok
+ * 5: phy_full_duplex
+ * 4: phy_speed
+ * 3: pe100x_link_fail
+ * 2: pe10t_loss_carrier
+ * 1: pe10t_sqe_error
+ * 0: pe10t_jabber
+ */
+
+/* structure for Mac Station Address, Part 1 reg in mac address map.
+ * located at address 0x5040
+ *
+ * 31-24: Octet6
+ * 23-16: Octet5
+ * 15-8: Octet4
+ * 7-0: Octet3
+ */
+#define ET_MAC_STATION_ADDR1_OC6_SHIFT 24
+#define ET_MAC_STATION_ADDR1_OC5_SHIFT 16
+#define ET_MAC_STATION_ADDR1_OC4_SHIFT 8
+
+/* structure for Mac Station Address, Part 2 reg in mac address map.
+ * located at address 0x5044
+ *
+ * 31-24: Octet2
+ * 23-16: Octet1
+ * 15-0: reserved
+ */
+#define ET_MAC_STATION_ADDR2_OC2_SHIFT 24
+#define ET_MAC_STATION_ADDR2_OC1_SHIFT 16
+
+/* MAC Module of JAGCore Address Mapping
+ */
+struct mac_regs { /* Location: */
+ u32 cfg1; /* 0x5000 */
+ u32 cfg2; /* 0x5004 */
+ u32 ipg; /* 0x5008 */
+ u32 hfdp; /* 0x500C */
+ u32 max_fm_len; /* 0x5010 */
+ u32 rsv1; /* 0x5014 */
+ u32 rsv2; /* 0x5018 */
+ u32 mac_test; /* 0x501C */
+ u32 mii_mgmt_cfg; /* 0x5020 */
+ u32 mii_mgmt_cmd; /* 0x5024 */
+ u32 mii_mgmt_addr; /* 0x5028 */
+ u32 mii_mgmt_ctrl; /* 0x502C */
+ u32 mii_mgmt_stat; /* 0x5030 */
+ u32 mii_mgmt_indicator; /* 0x5034 */
+ u32 if_ctrl; /* 0x5038 */
+ u32 if_stat; /* 0x503C */
+ u32 station_addr_1; /* 0x5040 */
+ u32 station_addr_2; /* 0x5044 */
+};
+
+/* END OF MAC REGISTER ADDRESS MAP */
+
+/* START OF MAC STAT REGISTER ADDRESS MAP */
+/* structure for Carry Register One and its Mask Register, located in the mac
+ * stat address map at addresses 0x6130 and 0x6138.
+ *
+ * 31: tr64
+ * 30: tr127
+ * 29: tr255
+ * 28: tr511
+ * 27: tr1k
+ * 26: trmax
+ * 25: trmgv
+ * 24-17: unused
+ * 16: rbyt
+ * 15: rpkt
+ * 14: rfcs
+ * 13: rmca
+ * 12: rbca
+ * 11: rxcf
+ * 10: rxpf
+ * 9: rxuo
+ * 8: raln
+ * 7: rflr
+ * 6: rcde
+ * 5: rcse
+ * 4: rund
+ * 3: rovr
+ * 2: rfrg
+ * 1: rjbr
+ * 0: rdrp
+ */
+
+/* structure for Carry Register Two Mask Register reg in mac stat address map.
+ * located at address 0x613C
+ *
+ * 31-20: unused
+ * 19: tjbr
+ * 18: tfcs
+ * 17: txcf
+ * 16: tovr
+ * 15: tund
+ * 14: trfg
+ * 13: tbyt
+ * 12: tpkt
+ * 11: tmca
+ * 10: tbca
+ * 9: txpf
+ * 8: tdfr
+ * 7: tedf
+ * 6: tscl
+ * 5: tmcl
+ * 4: tlcl
+ * 3: txcl
+ * 2: tncl
+ * 1: tpfh
+ * 0: tdrp
+ */
+
+/* MAC STATS Module of JAGCore Address Mapping
+ */
+struct macstat_regs { /* Location: */
+ u32 pad[32]; /* 0x6000 - 607C */
+
+ /* counters */
+ u32 txrx_0_64_byte_frames; /* 0x6080 */
+ u32 txrx_65_127_byte_frames; /* 0x6084 */
+ u32 txrx_128_255_byte_frames; /* 0x6088 */
+ u32 txrx_256_511_byte_frames; /* 0x608C */
+ u32 txrx_512_1023_byte_frames; /* 0x6090 */
+ u32 txrx_1024_1518_byte_frames; /* 0x6094 */
+ u32 txrx_1519_1522_gvln_frames; /* 0x6098 */
+ u32 rx_bytes; /* 0x609C */
+ u32 rx_packets; /* 0x60A0 */
+ u32 rx_fcs_errs; /* 0x60A4 */
+ u32 rx_multicast_packets; /* 0x60A8 */
+ u32 rx_broadcast_packets; /* 0x60AC */
+ u32 rx_control_frames; /* 0x60B0 */
+ u32 rx_pause_frames; /* 0x60B4 */
+ u32 rx_unknown_opcodes; /* 0x60B8 */
+ u32 rx_align_errs; /* 0x60BC */
+ u32 rx_frame_len_errs; /* 0x60C0 */
+ u32 rx_code_errs; /* 0x60C4 */
+ u32 rx_carrier_sense_errs; /* 0x60C8 */
+ u32 rx_undersize_packets; /* 0x60CC */
+ u32 rx_oversize_packets; /* 0x60D0 */
+ u32 rx_fragment_packets; /* 0x60D4 */
+ u32 rx_jabbers; /* 0x60D8 */
+ u32 rx_drops; /* 0x60DC */
+ u32 tx_bytes; /* 0x60E0 */
+ u32 tx_packets; /* 0x60E4 */
+ u32 tx_multicast_packets; /* 0x60E8 */
+ u32 tx_broadcast_packets; /* 0x60EC */
+ u32 tx_pause_frames; /* 0x60F0 */
+ u32 tx_deferred; /* 0x60F4 */
+ u32 tx_excessive_deferred; /* 0x60F8 */
+ u32 tx_single_collisions; /* 0x60FC */
+ u32 tx_multiple_collisions; /* 0x6100 */
+ u32 tx_late_collisions; /* 0x6104 */
+ u32 tx_excessive_collisions; /* 0x6108 */
+ u32 tx_total_collisions; /* 0x610C */
+ u32 tx_pause_honored_frames; /* 0x6110 */
+ u32 tx_drops; /* 0x6114 */
+ u32 tx_jabbers; /* 0x6118 */
+ u32 tx_fcs_errs; /* 0x611C */
+ u32 tx_control_frames; /* 0x6120 */
+ u32 tx_oversize_frames; /* 0x6124 */
+ u32 tx_undersize_frames; /* 0x6128 */
+ u32 tx_fragments; /* 0x612C */
+ u32 carry_reg1; /* 0x6130 */
+ u32 carry_reg2; /* 0x6134 */
+ u32 carry_reg1_mask; /* 0x6138 */
+ u32 carry_reg2_mask; /* 0x613C */
+};
+
+/* END OF MAC STAT REGISTER ADDRESS MAP */
+
+/* START OF MMC REGISTER ADDRESS MAP */
+/* Main Memory Controller Control reg in mmc address map.
+ * located at address 0x7000
+ */
+#define ET_MMC_ENABLE 1
+#define ET_MMC_ARB_DISABLE 2
+#define ET_MMC_RXMAC_DISABLE 4
+#define ET_MMC_TXMAC_DISABLE 8
+#define ET_MMC_TXDMA_DISABLE 16
+#define ET_MMC_RXDMA_DISABLE 32
+#define ET_MMC_FORCE_CE 64
+
+/* Main Memory Controller Host Memory Access Address reg in mmc
+ * address map. Located at address 0x7004. Top 16 bits hold the address bits
+ */
+#define ET_SRAM_REQ_ACCESS 1
+#define ET_SRAM_WR_ACCESS 2
+#define ET_SRAM_IS_CTRL 4
+
+/* structure for Main Memory Controller Host Memory Access Data reg in mmc
+ * address map. Located at address 0x7008 - 0x7014
+ * Defined earlier (u32)
+ */
+
+/* Memory Control Module of JAGCore Address Mapping
+ */
+struct mmc_regs { /* Location: */
+ u32 mmc_ctrl; /* 0x7000 */
+ u32 sram_access; /* 0x7004 */
+ u32 sram_word1; /* 0x7008 */
+ u32 sram_word2; /* 0x700C */
+ u32 sram_word3; /* 0x7010 */
+ u32 sram_word4; /* 0x7014 */
+};
+
+/* END OF MMC REGISTER ADDRESS MAP */
+
+/* JAGCore Address Mapping
+ */
+struct address_map {
+ struct global_regs global;
+ /* unused section of global address map */
+ u8 unused_global[4096 - sizeof(struct global_regs)];
+ struct txdma_regs txdma;
+ /* unused section of txdma address map */
+ u8 unused_txdma[4096 - sizeof(struct txdma_regs)];
+ struct rxdma_regs rxdma;
+ /* unused section of rxdma address map */
+ u8 unused_rxdma[4096 - sizeof(struct rxdma_regs)];
+ struct txmac_regs txmac;
+ /* unused section of txmac address map */
+ u8 unused_txmac[4096 - sizeof(struct txmac_regs)];
+ struct rxmac_regs rxmac;
+ /* unused section of rxmac address map */
+ u8 unused_rxmac[4096 - sizeof(struct rxmac_regs)];
+ struct mac_regs mac;
+ /* unused section of mac address map */
+ u8 unused_mac[4096 - sizeof(struct mac_regs)];
+ struct macstat_regs macstat;
+ /* unused section of mac stat address map */
+ u8 unused_mac_stat[4096 - sizeof(struct macstat_regs)];
+ struct mmc_regs mmc;
+ /* unused section of mmc address map */
+ u8 unused_mmc[4096 - sizeof(struct mmc_regs)];
+ /* unused section of address map */
+ u8 unused_[1015808];
+ u8 unused_exp_rom[4096]; /* MGS-size TBD */
+ u8 unused__[524288]; /* unused section of address map */
+};
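A minimal sketch (illustrative only; BUILD_BUG_ON() would have to sit in function scope, e.g. somewhere in the probe path) of how the documented locations can be cross-checked against this layout at build time:

        /* each block must start on its documented 4 KB boundary */
        BUILD_BUG_ON(offsetof(struct address_map, txmac) != 0x3000);
        BUILD_BUG_ON(offsetof(struct address_map, mac) != 0x5000);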
+
+/* Defines for generic MII registers 0x00 -> 0x0F can be found in
+ * include/linux/mii.h
+ */
+/* some defines for modem registers that seem to be 'reserved' */
+#define PHY_INDEX_REG 0x10
+#define PHY_DATA_REG 0x11
+#define PHY_MPHY_CONTROL_REG 0x12
+
+/* defines for specified registers */
+#define PHY_LOOPBACK_CONTROL 0x13 /* TRU_VMI_LOOPBACK_CONTROL_1_REG 19 */
+ /* TRU_VMI_LOOPBACK_CONTROL_2_REG 20 */
+#define PHY_REGISTER_MGMT_CONTROL 0x15 /* TRU_VMI_MI_SEQ_CONTROL_REG 21 */
+#define PHY_CONFIG 0x16 /* TRU_VMI_CONFIGURATION_REG 22 */
+#define PHY_PHY_CONTROL 0x17 /* TRU_VMI_PHY_CONTROL_REG 23 */
+#define PHY_INTERRUPT_MASK 0x18 /* TRU_VMI_INTERRUPT_MASK_REG 24 */
+#define PHY_INTERRUPT_STATUS 0x19 /* TRU_VMI_INTERRUPT_STATUS_REG 25 */
+#define PHY_PHY_STATUS 0x1A /* TRU_VMI_PHY_STATUS_REG 26 */
+#define PHY_LED_1 0x1B /* TRU_VMI_LED_CONTROL_1_REG 27 */
+#define PHY_LED_2 0x1C /* TRU_VMI_LED_CONTROL_2_REG 28 */
+ /* TRU_VMI_LINK_CONTROL_REG 29 */
+ /* TRU_VMI_TIMING_CONTROL_REG */
+
+/* MI Register 10: Gigabit basic mode status reg(Reg 0x0A) */
+#define ET_1000BT_MSTR_SLV 0x4000
+
+/* MI Register 16 - 18: Reserved Reg(0x10-0x12) */
+
+/* MI Register 19: Loopback Control Reg(0x13)
+ * 15: mii_en
+ * 14: pcs_en
+ * 13: pmd_en
+ * 12: all_digital_en
+ * 11: replica_en
+ * 10: line_driver_en
+ * 9-0: reserved
+ */
+
+/* MI Register 20: Reserved Reg(0x14) */
+
+/* MI Register 21: Management Interface Control Reg(0x15)
+ * 15-11: reserved
+ * 10-4: mi_error_count
+ * 3: reserved
+ * 2: ignore_10g_fr
+ * 1: reserved
+ * 0: preamble_suppress_en
+ */
+
+/* MI Register 22: PHY Configuration Reg(0x16)
+ * 15: crs_tx_en
+ * 14: reserved
+ * 13-12: tx_fifo_depth
+ * 11-10: speed_downshift
+ * 9: pbi_detect
+ * 8: tbi_rate
+ * 7: alternate_np
+ * 6: group_mdio_en
+ * 5: tx_clock_en
+ * 4: sys_clock_en
+ * 3: reserved
+ * 2-0: mac_if_mode
+ */
+#define ET_PHY_CONFIG_TX_FIFO_DEPTH 0x3000
+
+#define ET_PHY_CONFIG_FIFO_DEPTH_8 0x0000
+#define ET_PHY_CONFIG_FIFO_DEPTH_16 0x1000
+#define ET_PHY_CONFIG_FIFO_DEPTH_32 0x2000
+#define ET_PHY_CONFIG_FIFO_DEPTH_64 0x3000
+
+/* MI Register 23: PHY CONTROL Reg(0x17)
+ * 15: reserved
+ * 14: tdr_en
+ * 13: reserved
+ * 12-11: downshift_attempts
+ * 10-6: reserved
+ * 5: jabber_10baseT
+ * 4: sqe_10baseT
+ * 3: tp_loopback_10baseT
+ * 2: preamble_gen_en
+ * 1: reserved
+ * 0: force_int
+ */
+
+/* MI Register 24: Interrupt Mask Reg(0x18)
+ * 15-10: reserved
+ * 9: mdio_sync_lost
+ * 8: autoneg_status
+ * 7: hi_bit_err
+ * 6: np_rx
+ * 5: err_counter_full
+ * 4: fifo_over_underflow
+ * 3: rx_status
+ * 2: link_status
+ * 1: automatic_speed
+ * 0: int_en
+ */
+
+/* MI Register 25: Interrupt Status Reg(0x19)
+ * 15-10: reserved
+ * 9: mdio_sync_lost
+ * 8: autoneg_status
+ * 7: hi_bit_err
+ * 6: np_rx
+ * 5: err_counter_full
+ * 4: fifo_over_underflow
+ * 3: rx_status
+ * 2: link_status
+ * 1: automatic_speed
+ * 0: int_en
+ */
+
+/* MI Register 26: PHY Status Reg(0x1A)
+ * 15: reserved
+ * 14-13: autoneg_fault
+ * 12: autoneg_status
+ * 11: mdi_x_status
+ * 10: polarity_status
+ * 9-8: speed_status
+ * 7: duplex_status
+ * 6: link_status
+ * 5: tx_status
+ * 4: rx_status
+ * 3: collision_status
+ * 2: autoneg_en
+ * 1: pause_en
+ * 0: asymmetric_dir
+ */
+#define ET_PHY_AUTONEG_STATUS 0x1000
+#define ET_PHY_POLARITY_STATUS 0x0400
+#define ET_PHY_SPEED_STATUS 0x0300
+#define ET_PHY_DUPLEX_STATUS 0x0080
+#define ET_PHY_LSTATUS 0x0040
+#define ET_PHY_AUTONEG_ENABLE 0x0020
+
+/* MI Register 27: LED Control Reg 1(0x1B)
+ * 15-14: reserved
+ * 13-12: led_dup_indicate
+ * 11-10: led_10baseT
+ * 9-8: led_collision
+ * 7-4: reserved
+ * 3-2: pulse_dur
+ * 1: pulse_stretch1
+ * 0: pulse_stretch0
+ */
+
+/* MI Register 28: LED Control Reg 2(0x1C)
+ * 15-12: led_link
+ * 11-8: led_tx_rx
+ * 7-4: led_100BaseTX
+ * 3-0: led_1000BaseT
+ */
+#define ET_LED2_LED_LINK 0xF000
+#define ET_LED2_LED_TXRX 0x0F00
+#define ET_LED2_LED_100TX 0x00F0
+#define ET_LED2_LED_1000T 0x000F
+
+/* defines for LED control reg 2 values */
+#define LED_VAL_1000BT 0x0
+#define LED_VAL_100BTX 0x1
+#define LED_VAL_10BT 0x2
+#define LED_VAL_1000BT_100BTX 0x3 /* 1000BT on, 100BTX blink */
+#define LED_VAL_LINKON 0x4
+#define LED_VAL_TX 0x5
+#define LED_VAL_RX 0x6
+#define LED_VAL_TXRX 0x7 /* TX or RX */
+#define LED_VAL_DUPLEXFULL 0x8
+#define LED_VAL_COLLISION 0x9
+#define LED_VAL_LINKON_ACTIVE 0xA /* Link on, activity blink */
+#define LED_VAL_LINKON_RECV 0xB /* Link on, receive blink */
+#define LED_VAL_DUPLEXFULL_COLLISION 0xC /* Duplex on, collision blink */
+#define LED_VAL_BLINK 0xD
+#define LED_VAL_ON 0xE
+#define LED_VAL_OFF 0xF
+
+#define LED_LINK_SHIFT 12
+#define LED_TXRX_SHIFT 8
+#define LED_100TX_SHIFT 4
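For illustration only, an LED control reg 2 value that maps the LINK LED to "link on" and the TX/RX LED to "TX or RX activity" can be composed from the values and shifts above:

        u16 led2 = (LED_VAL_LINKON << LED_LINK_SHIFT) |
                   (LED_VAL_TXRX << LED_TXRX_SHIFT);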
+
+/* MI Register 29 - 31: Reserved Reg(0x1D - 0x1F) */
diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
index 29b9f08..1fcd556 100644
--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
+++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
@@ -878,8 +878,6 @@ static int emac_probe(struct platform_device *pdev)
emac_powerup(ndev);
emac_reset(db);
- ether_setup(ndev);
-
ndev->netdev_ops = &emac_netdev_ops;
ndev->watchdog_timeo = msecs_to_jiffies(watchdog);
ndev->ethtool_ops = &emac_ethtool_ops;
diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
index 7330681..4efc435 100644
--- a/drivers/net/ethernet/altera/altera_tse_main.c
+++ b/drivers/net/ethernet/altera/altera_tse_main.c
@@ -728,6 +728,44 @@ static struct phy_device *connect_local_phy(struct net_device *dev)
return phydev;
}
+static int altera_tse_phy_get_addr_mdio_create(struct net_device *dev)
+{
+ struct altera_tse_private *priv = netdev_priv(dev);
+ struct device_node *np = priv->device->of_node;
+ int ret = 0;
+
+ priv->phy_iface = of_get_phy_mode(np);
+
+ /* Skip phy addr lookup and mdio bus creation if no phy is present */
+ if (!priv->phy_iface)
+ return 0;
+
+ /* try to get PHY address from device tree, use PHY autodetection if
+ * no valid address is given
+ */
+
+ if (of_property_read_u32(priv->device->of_node, "phy-addr",
+ &priv->phy_addr)) {
+ priv->phy_addr = POLL_PHY;
+ }
+
+ if (!((priv->phy_addr == POLL_PHY) ||
+ ((priv->phy_addr >= 0) && (priv->phy_addr < PHY_MAX_ADDR)))) {
+ netdev_err(dev, "invalid phy-addr specified %d\n",
+ priv->phy_addr);
+ return -ENODEV;
+ }
+
+ /* Create/attach to MDIO bus */
+ ret = altera_tse_mdio_create(dev,
+ atomic_add_return(1, &instance_count));
+
+ if (ret)
+ return -ENODEV;
+
+ return 0;
+}
+
/* Initialize driver's PHY state, and attach to the PHY
*/
static int init_phy(struct net_device *dev)
@@ -736,6 +774,10 @@ static int init_phy(struct net_device *dev)
struct phy_device *phydev;
struct device_node *phynode;
+ /* Skip phy init if no phy is present */
+ if (!priv->phy_iface)
+ return 0;
+
priv->oldlink = 0;
priv->oldspeed = 0;
priv->oldduplex = -1;
@@ -1231,7 +1273,6 @@ static int altera_tse_probe(struct platform_device *pdev)
struct resource *dma_res;
struct altera_tse_private *priv;
const unsigned char *macaddr;
- struct device_node *np = pdev->dev.of_node;
void __iomem *descmap;
const struct of_device_id *of_id = NULL;
@@ -1408,32 +1449,13 @@ static int altera_tse_probe(struct platform_device *pdev)
else
eth_hw_addr_random(ndev);
- priv->phy_iface = of_get_phy_mode(np);
-
- /* try to get PHY address from device tree, use PHY autodetection if
- * no valid address is given
- */
- if (of_property_read_u32(pdev->dev.of_node, "phy-addr",
- &priv->phy_addr)) {
- priv->phy_addr = POLL_PHY;
- }
-
- if (!((priv->phy_addr == POLL_PHY) ||
- ((priv->phy_addr >= 0) && (priv->phy_addr < PHY_MAX_ADDR)))) {
- dev_err(&pdev->dev, "invalid phy-addr specified %d\n",
- priv->phy_addr);
- goto err_free_netdev;
- }
-
- /* Create/attach to MDIO bus */
- ret = altera_tse_mdio_create(ndev,
- atomic_add_return(1, &instance_count));
+ /* get phy addr and create mdio */
+ ret = altera_tse_phy_get_addr_mdio_create(ndev);
if (ret)
goto err_free_netdev;
/* initialize netdev */
- ether_setup(ndev);
ndev->mem_start = control_port->start;
ndev->mem_end = control_port->end;
ndev->netdev_ops = &altera_tse_netdev_ops;
diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c
index 31c48a7..6c323f4 100644
--- a/drivers/net/ethernet/amd/au1000_eth.c
+++ b/drivers/net/ethernet/amd/au1000_eth.c
@@ -1140,7 +1140,6 @@ static const struct net_device_ops au1000_netdev_ops = {
static int au1000_probe(struct platform_device *pdev)
{
- static unsigned version_printed;
struct au1000_private *aup = NULL;
struct au1000_eth_platform_data *pd;
struct net_device *dev = NULL;
@@ -1371,9 +1370,8 @@ static int au1000_probe(struct platform_device *pdev)
netdev_info(dev, "Au1xx0 Ethernet found at 0x%lx, irq %d\n",
(unsigned long)base->start, irq);
- if (version_printed++ == 0)
- pr_info("%s version %s %s\n",
- DRV_NAME, DRV_VERSION, DRV_AUTHOR);
+
+ pr_info_once("%s version %s %s\n", DRV_NAME, DRV_VERSION, DRV_AUTHOR);
return 0;
diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c
index abf3b15..5b22764 100644
--- a/drivers/net/ethernet/amd/nmclan_cs.c
+++ b/drivers/net/ethernet/amd/nmclan_cs.c
@@ -621,7 +621,7 @@ static int nmclan_config(struct pcmcia_device *link)
ret = pcmcia_request_io(link);
if (ret)
goto failed;
- ret = pcmcia_request_exclusive_irq(link, mace_interrupt);
+ ret = pcmcia_request_irq(link, mace_interrupt);
if (ret)
goto failed;
ret = pcmcia_enable_device(link);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
index cc25a3a..caade30 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
@@ -271,7 +271,6 @@
#define DMA_PBL_X8_DISABLE 0x00
#define DMA_PBL_X8_ENABLE 0x01
-
/* MAC register offsets */
#define MAC_TCR 0x0000
#define MAC_RCR 0x0004
@@ -792,7 +791,6 @@
#define MTL_Q_DISABLED 0x00
#define MTL_Q_ENABLED 0x02
-
/* MTL traffic class register offsets
* Multiple traffic classes can be active. The first class has registers
* that begin at 0x1100. Each subsequent queue has registers that
@@ -815,7 +813,6 @@
#define MTL_TSA_SP 0x00
#define MTL_TSA_ETS 0x02
-
/* PCS MMD select register offset
* The MMD select register is used for accessing PCS registers
* when the underlying APB3 interface is using indirect addressing.
@@ -825,7 +822,6 @@
*/
#define PCS_MMD_SELECT 0xff
-
/* Descriptor/Packet entry bit positions and sizes */
#define RX_PACKET_ERRORS_CRC_INDEX 2
#define RX_PACKET_ERRORS_CRC_WIDTH 1
@@ -929,7 +925,6 @@
#define MDIO_AN_COMP_STAT 0x0030
#endif
-
/* Bit setting and getting macros
* The get macro will extract the current bit field value from within
* the variable
@@ -957,7 +952,6 @@ do { \
((0x1 << (_width)) - 1)) << (_index))); \
} while (0)
-
/* Bit setting and getting macros based on register fields
* The get macro uses the bit field definitions formed using the input
* names to extract the current bit field value from within the
@@ -986,7 +980,6 @@ do { \
_prefix##_##_field##_INDEX, \
_prefix##_##_field##_WIDTH, (_val))
-
/* Macros for reading or writing registers
* The ioread macros will get bit fields or full values using the
* register definitions formed using the input names
@@ -1014,7 +1007,6 @@ do { \
XGMAC_IOWRITE((_pdata), _reg, reg_val); \
} while (0)
-
/* Macros for reading or writing MTL queue or traffic class registers
* Similar to the standard read and write macros except that the
* base register value is calculated by the queue or traffic class number
@@ -1041,7 +1033,6 @@ do { \
XGMAC_MTL_IOWRITE((_pdata), (_n), _reg, reg_val); \
} while (0)
-
/* Macros for reading or writing DMA channel registers
* Similar to the standard read and write macros except that the
* base register value is obtained from the ring
@@ -1066,7 +1057,6 @@ do { \
XGMAC_DMA_IOWRITE((_channel), _reg, reg_val); \
} while (0)
-
/* Macros for building, reading or writing register values or bits
* within the register values of XPCS registers.
*/
@@ -1076,7 +1066,6 @@ do { \
#define XPCS_IOREAD(_pdata, _off) \
ioread32((_pdata)->xpcs_regs + (_off))
-
/* Macros for building, reading or writing register values or bits
* using MDIO. Different from above because of the use of standardized
* Linux include values. No shifting is performed with the bit
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
index 7d6a49b..8a50b01 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dcb.c
@@ -120,7 +120,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static int xgbe_dcb_ieee_getets(struct net_device *netdev,
struct ieee_ets *ets)
{
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c b/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
index a3c1135..76479d0 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
@@ -121,7 +121,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static ssize_t xgbe_common_read(char __user *buffer, size_t count,
loff_t *ppos, unsigned int value)
{
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
index 1c5d62e..6fc5da0 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
@@ -117,7 +117,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static void xgbe_unmap_skb(struct xgbe_prv_data *, struct xgbe_ring_data *);
static void xgbe_free_ring(struct xgbe_prv_data *pdata,
@@ -524,11 +523,8 @@ static void xgbe_realloc_skb(struct xgbe_channel *channel)
/* Allocate skb & assign to each rdesc */
skb = dev_alloc_skb(pdata->rx_buf_size);
- if (skb == NULL) {
- netdev_alert(pdata->netdev,
- "failed to allocate skb\n");
+ if (skb == NULL)
break;
- }
skb_dma = dma_map_single(pdata->dev, skb->data,
pdata->rx_buf_size, DMA_FROM_DEVICE);
if (dma_mapping_error(pdata->dev, skb_dma)) {
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index ea27383..9da3a03 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -122,7 +122,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static unsigned int xgbe_usec_to_riwt(struct xgbe_prv_data *pdata,
unsigned int usec)
{
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index b26d758..2955499 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -126,7 +126,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static int xgbe_poll(struct napi_struct *, int);
static void xgbe_set_rx_mode(struct net_device *);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
index 46f6130..49508ec 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
@@ -121,7 +121,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
struct xgbe_stats {
char stat_string[ETH_GSTRING_LEN];
int stat_size;
@@ -173,6 +172,7 @@ static const struct xgbe_stats xgbe_gstring_stats[] = {
XGMAC_MMC_STAT("rx_watchdog_errors", rxwatchdogerror),
XGMAC_MMC_STAT("rx_pause_frames", rxpauseframes),
};
+
#define XGBE_STATS_COUNT ARRAY_SIZE(xgbe_gstring_stats)
static void xgbe_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-main.c b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
index bdf9cfa..f5a8fa0 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -128,7 +128,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
MODULE_AUTHOR("Tom Lendacky <thomas.lendacky@amd.com>");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(XGBE_DRV_VERSION);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 6d2221e..363b210 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -123,7 +123,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static int xgbe_mdio_read(struct mii_bus *mii, int prtad, int mmd_reg)
{
struct xgbe_prv_data *pdata = mii->priv;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
index 37e64cf..a1bf9d1c 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-ptp.c
@@ -122,7 +122,6 @@
#include "xgbe.h"
#include "xgbe-common.h"
-
static cycle_t xgbe_cc_read(const struct cyclecounter *cc)
{
struct xgbe_prv_data *pdata = container_of(cc,
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index e9fe6e6..789957d 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -128,7 +128,6 @@
#include <linux/net_tstamp.h>
#include <net/dcbnl.h>
-
#define XGBE_DRV_NAME "amd-xgbe"
#define XGBE_DRV_VERSION "1.0.0-a"
#define XGBE_DRV_DESC "AMD 10 Gigabit Ethernet Driver"
@@ -199,7 +198,6 @@
((_ring)->rdata + \
((_idx) & ((_ring)->rdesc_count - 1)))
-
/* Default coalescing parameters */
#define XGMAC_INIT_DMA_TX_USECS 50
#define XGMAC_INIT_DMA_TX_FRAMES 25
diff --git a/drivers/net/ethernet/arc/Kconfig b/drivers/net/ethernet/arc/Kconfig
index 514c57f..8e262e2 100644
--- a/drivers/net/ethernet/arc/Kconfig
+++ b/drivers/net/ethernet/arc/Kconfig
@@ -17,10 +17,14 @@ config NET_VENDOR_ARC
if NET_VENDOR_ARC
-config ARC_EMAC
- tristate "ARC EMAC support"
+config ARC_EMAC_CORE
+ tristate
select MII
select PHYLIB
+
+config ARC_EMAC
+ tristate "ARC EMAC support"
+ select ARC_EMAC_CORE
depends on OF_IRQ
depends on OF_NET
---help---
@@ -28,4 +32,14 @@ config ARC_EMAC
non-standard on-chip ethernet device ARC EMAC 10/100 is used.
Say Y here if you have such a board. If unsure, say N.
+config EMAC_ROCKCHIP
+ tristate "Rockchip EMAC support"
+ select ARC_EMAC_CORE
+ depends on OF_IRQ && OF_NET && REGULATOR
+ ---help---
+ Support for the Rockchip RK3066/RK3188 EMAC ethernet controllers.
+ This selects the Rockchip SoC glue layer for the ARC EMAC core
+ driver (ARC_EMAC_CORE) used by these SoCs.
+
endif # NET_VENDOR_ARC
diff --git a/drivers/net/ethernet/arc/Makefile b/drivers/net/ethernet/arc/Makefile
index 00c8657..79108af 100644
--- a/drivers/net/ethernet/arc/Makefile
+++ b/drivers/net/ethernet/arc/Makefile
@@ -3,4 +3,6 @@
#
arc_emac-objs := emac_main.o emac_mdio.o
-obj-$(CONFIG_ARC_EMAC) += arc_emac.o
+obj-$(CONFIG_ARC_EMAC_CORE) += arc_emac.o
+obj-$(CONFIG_ARC_EMAC) += emac_arc.o
+obj-$(CONFIG_EMAC_ROCKCHIP) += emac_rockchip.o
diff --git a/drivers/net/ethernet/arc/emac.h b/drivers/net/ethernet/arc/emac.h
index 36cc9bd..dae1ac3 100644
--- a/drivers/net/ethernet/arc/emac.h
+++ b/drivers/net/ethernet/arc/emac.h
@@ -123,6 +123,10 @@ struct buffer_state {
* @speed: PHY's last set speed.
*/
struct arc_emac_priv {
+ const char *drv_name;
+ const char *drv_version;
+ void (*set_mac_speed)(void *priv, unsigned int speed);
+
/* Devices */
struct device *dev;
struct phy_device *phy_dev;
@@ -204,7 +208,9 @@ static inline void arc_reg_clr(struct arc_emac_priv *priv, int reg, int mask)
arc_reg_set(priv, reg, value & ~mask);
}
-int arc_mdio_probe(struct platform_device *pdev, struct arc_emac_priv *priv);
+int arc_mdio_probe(struct arc_emac_priv *priv);
int arc_mdio_remove(struct arc_emac_priv *priv);
+int arc_emac_probe(struct net_device *ndev, int interface);
+int arc_emac_remove(struct net_device *ndev);
#endif /* ARC_EMAC_H */
diff --git a/drivers/net/ethernet/arc/emac_arc.c b/drivers/net/ethernet/arc/emac_arc.c
new file mode 100644
index 0000000..f9cb99b
--- /dev/null
+++ b/drivers/net/ethernet/arc/emac_arc.c
@@ -0,0 +1,95 @@
+/**
+ * emac_arc.c - ARC EMAC specific glue layer
+ *
+ * Copyright (C) 2014 Romain Perier
+ *
+ * Romain Perier <romain.perier@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/module.h>
+#include <linux/of_net.h>
+#include <linux/platform_device.h>
+
+#include "emac.h"
+
+#define DRV_NAME "emac_arc"
+#define DRV_VERSION "1.0"
+
+static int emac_arc_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct net_device *ndev;
+ struct arc_emac_priv *priv;
+ int interface, err;
+
+ if (!dev->of_node)
+ return -ENODEV;
+
+ ndev = alloc_etherdev(sizeof(struct arc_emac_priv));
+ if (!ndev)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, ndev);
+ SET_NETDEV_DEV(ndev, dev);
+
+ priv = netdev_priv(ndev);
+ priv->drv_name = DRV_NAME;
+ priv->drv_version = DRV_VERSION;
+
+ interface = of_get_phy_mode(dev->of_node);
+ if (interface < 0)
+ interface = PHY_INTERFACE_MODE_MII;
+
+ priv->clk = devm_clk_get(dev, "hclk");
+ if (IS_ERR(priv->clk)) {
+ dev_err(dev, "failed to retrieve host clock from device tree\n");
+ err = -EINVAL;
+ goto out_netdev;
+ }
+
+ err = arc_emac_probe(ndev, interface);
+out_netdev:
+ if (err)
+ free_netdev(ndev);
+ return err;
+}
+
+static int emac_arc_remove(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ int err;
+
+ err = arc_emac_remove(ndev);
+ free_netdev(ndev);
+ return err;
+}
+
+static const struct of_device_id emac_arc_dt_ids[] = {
+ { .compatible = "snps,arc-emac" },
+ { /* Sentinel */ }
+};
+
+static struct platform_driver emac_arc_driver = {
+ .probe = emac_arc_probe,
+ .remove = emac_arc_remove,
+ .driver = {
+ .name = DRV_NAME,
+ .of_match_table = emac_arc_dt_ids,
+ },
+};
+
+module_platform_driver(emac_arc_driver);
+
+MODULE_AUTHOR("Romain Perier <romain.perier@gmail.com>");
+MODULE_DESCRIPTION("ARC EMAC platform driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c
index 5919394..abe1eab 100644
--- a/drivers/net/ethernet/arc/emac_main.c
+++ b/drivers/net/ethernet/arc/emac_main.c
@@ -26,8 +26,6 @@
#include "emac.h"
-#define DRV_NAME "arc_emac"
-#define DRV_VERSION "1.0"
/**
* arc_emac_tx_avail - Return the number of available slots in the tx ring.
@@ -61,6 +59,8 @@ static void arc_emac_adjust_link(struct net_device *ndev)
if (priv->speed != phy_dev->speed) {
priv->speed = phy_dev->speed;
state_changed = 1;
+ if (priv->set_mac_speed)
+ priv->set_mac_speed(priv, priv->speed);
}
if (priv->duplex != phy_dev->duplex) {
@@ -131,8 +131,10 @@ static int arc_emac_set_settings(struct net_device *ndev,
static void arc_emac_get_drvinfo(struct net_device *ndev,
struct ethtool_drvinfo *info)
{
- strlcpy(info->driver, DRV_NAME, sizeof(info->driver));
- strlcpy(info->version, DRV_VERSION, sizeof(info->version));
+ struct arc_emac_priv *priv = netdev_priv(ndev);
+
+ strlcpy(info->driver, priv->drv_name, sizeof(info->driver));
+ strlcpy(info->version, priv->drv_version, sizeof(info->version));
}
static const struct ethtool_ops arc_emac_ethtool_ops = {
@@ -692,46 +694,38 @@ static const struct net_device_ops arc_emac_netdev_ops = {
#endif
};
-static int arc_emac_probe(struct platform_device *pdev)
+int arc_emac_probe(struct net_device *ndev, int interface)
{
+ struct device *dev = ndev->dev.parent;
struct resource res_regs;
struct device_node *phy_node;
struct arc_emac_priv *priv;
- struct net_device *ndev;
const char *mac_addr;
unsigned int id, clock_frequency, irq;
int err;
- if (!pdev->dev.of_node)
- return -ENODEV;
/* Get PHY from device tree */
- phy_node = of_parse_phandle(pdev->dev.of_node, "phy", 0);
+ phy_node = of_parse_phandle(dev->of_node, "phy", 0);
if (!phy_node) {
- dev_err(&pdev->dev, "failed to retrieve phy description from device tree\n");
+ dev_err(dev, "failed to retrieve phy description from device tree\n");
return -ENODEV;
}
/* Get EMAC registers base address from device tree */
- err = of_address_to_resource(pdev->dev.of_node, 0, &res_regs);
+ err = of_address_to_resource(dev->of_node, 0, &res_regs);
if (err) {
- dev_err(&pdev->dev, "failed to retrieve registers base from device tree\n");
+ dev_err(dev, "failed to retrieve registers base from device tree\n");
return -ENODEV;
}
/* Get IRQ from device tree */
- irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+ irq = irq_of_parse_and_map(dev->of_node, 0);
if (!irq) {
- dev_err(&pdev->dev, "failed to retrieve <irq> value from device tree\n");
+ dev_err(dev, "failed to retrieve <irq> value from device tree\n");
return -ENODEV;
}
- ndev = alloc_etherdev(sizeof(struct arc_emac_priv));
- if (!ndev)
- return -ENOMEM;
-
- platform_set_drvdata(pdev, ndev);
- SET_NETDEV_DEV(ndev, &pdev->dev);
ndev->netdev_ops = &arc_emac_netdev_ops;
ndev->ethtool_ops = &arc_emac_ethtool_ops;
@@ -740,60 +734,57 @@ static int arc_emac_probe(struct platform_device *pdev)
ndev->flags &= ~IFF_MULTICAST;
priv = netdev_priv(ndev);
- priv->dev = &pdev->dev;
+ priv->dev = dev;
- priv->regs = devm_ioremap_resource(&pdev->dev, &res_regs);
+ priv->regs = devm_ioremap_resource(dev, &res_regs);
if (IS_ERR(priv->regs)) {
- err = PTR_ERR(priv->regs);
- goto out_netdev;
+ return PTR_ERR(priv->regs);
}
- dev_dbg(&pdev->dev, "Registers base address is 0x%p\n", priv->regs);
+ dev_dbg(dev, "Registers base address is 0x%p\n", priv->regs);
- priv->clk = of_clk_get(pdev->dev.of_node, 0);
- if (IS_ERR(priv->clk)) {
- /* Get CPU clock frequency from device tree */
- if (of_property_read_u32(pdev->dev.of_node, "clock-frequency",
- &clock_frequency)) {
- dev_err(&pdev->dev, "failed to retrieve <clock-frequency> from device tree\n");
- err = -EINVAL;
- goto out_netdev;
- }
- } else {
+ if (priv->clk) {
err = clk_prepare_enable(priv->clk);
if (err) {
- dev_err(&pdev->dev, "failed to enable clock\n");
- goto out_clkget;
+ dev_err(dev, "failed to enable clock\n");
+ return err;
}
clock_frequency = clk_get_rate(priv->clk);
+ } else {
+ /* Get CPU clock frequency from device tree */
+ if (of_property_read_u32(dev->of_node, "clock-frequency",
+ &clock_frequency)) {
+ dev_err(dev, "failed to retrieve <clock-frequency> from device tree\n");
+ return -EINVAL;
+ }
}
id = arc_reg_get(priv, R_ID);
/* Check for EMAC revision 5 or 7, magic number */
if (!(id == 0x0005fd02 || id == 0x0007fd02)) {
- dev_err(&pdev->dev, "ARC EMAC not detected, id=0x%x\n", id);
+ dev_err(dev, "ARC EMAC not detected, id=0x%x\n", id);
err = -ENODEV;
goto out_clken;
}
- dev_info(&pdev->dev, "ARC EMAC detected with id: 0x%x\n", id);
+ dev_info(dev, "ARC EMAC detected with id: 0x%x\n", id);
/* Set poll rate so that it polls every 1 ms */
arc_reg_set(priv, R_POLLRATE, clock_frequency / 1000000);
ndev->irq = irq;
- dev_info(&pdev->dev, "IRQ is %d\n", ndev->irq);
+ dev_info(dev, "IRQ is %d\n", ndev->irq);
/* Register interrupt handler for device */
- err = devm_request_irq(&pdev->dev, ndev->irq, arc_emac_intr, 0,
+ err = devm_request_irq(dev, ndev->irq, arc_emac_intr, 0,
ndev->name, ndev);
if (err) {
- dev_err(&pdev->dev, "could not allocate IRQ\n");
+ dev_err(dev, "could not allocate IRQ\n");
goto out_clken;
}
/* Get MAC address from device tree */
- mac_addr = of_get_mac_address(pdev->dev.of_node);
+ mac_addr = of_get_mac_address(dev->of_node);
if (mac_addr)
memcpy(ndev->dev_addr, mac_addr, ETH_ALEN);
@@ -801,14 +792,14 @@ static int arc_emac_probe(struct platform_device *pdev)
eth_hw_addr_random(ndev);
arc_emac_set_address_internal(ndev);
- dev_info(&pdev->dev, "MAC address is now %pM\n", ndev->dev_addr);
+ dev_info(dev, "MAC address is now %pM\n", ndev->dev_addr);
/* Do 1 allocation instead of 2 separate ones for Rx and Tx BD rings */
- priv->rxbd = dmam_alloc_coherent(&pdev->dev, RX_RING_SZ + TX_RING_SZ,
+ priv->rxbd = dmam_alloc_coherent(dev, RX_RING_SZ + TX_RING_SZ,
&priv->rxbd_dma, GFP_KERNEL);
if (!priv->rxbd) {
- dev_err(&pdev->dev, "failed to allocate data buffers\n");
+ dev_err(dev, "failed to allocate data buffers\n");
err = -ENOMEM;
goto out_clken;
}
@@ -816,31 +807,31 @@ static int arc_emac_probe(struct platform_device *pdev)
priv->txbd = priv->rxbd + RX_BD_NUM;
priv->txbd_dma = priv->rxbd_dma + RX_RING_SZ;
- dev_dbg(&pdev->dev, "EMAC Device addr: Rx Ring [0x%x], Tx Ring[%x]\n",
+ dev_dbg(dev, "EMAC Device addr: Rx Ring [0x%x], Tx Ring[%x]\n",
(unsigned int)priv->rxbd_dma, (unsigned int)priv->txbd_dma);
- err = arc_mdio_probe(pdev, priv);
+ err = arc_mdio_probe(priv);
if (err) {
- dev_err(&pdev->dev, "failed to probe MII bus\n");
+ dev_err(dev, "failed to probe MII bus\n");
goto out_clken;
}
priv->phy_dev = of_phy_connect(ndev, phy_node, arc_emac_adjust_link, 0,
- PHY_INTERFACE_MODE_MII);
+ interface);
if (!priv->phy_dev) {
- dev_err(&pdev->dev, "of_phy_connect() failed\n");
+ dev_err(dev, "of_phy_connect() failed\n");
err = -ENODEV;
goto out_mdio;
}
- dev_info(&pdev->dev, "connected to %s phy with id 0x%x\n",
+ dev_info(dev, "connected to %s phy with id 0x%x\n",
priv->phy_dev->drv->name, priv->phy_dev->phy_id);
netif_napi_add(ndev, &priv->napi, arc_emac_poll, ARC_EMAC_NAPI_WEIGHT);
err = register_netdev(ndev);
if (err) {
- dev_err(&pdev->dev, "failed to register network device\n");
+ dev_err(dev, "failed to register network device\n");
goto out_netif_api;
}
@@ -853,19 +844,14 @@ out_netif_api:
out_mdio:
arc_mdio_remove(priv);
out_clken:
- if (!IS_ERR(priv->clk))
+ if (priv->clk)
clk_disable_unprepare(priv->clk);
-out_clkget:
- if (!IS_ERR(priv->clk))
- clk_put(priv->clk);
-out_netdev:
- free_netdev(ndev);
return err;
}
+EXPORT_SYMBOL_GPL(arc_emac_probe);
-static int arc_emac_remove(struct platform_device *pdev)
+int arc_emac_remove(struct net_device *ndev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
struct arc_emac_priv *priv = netdev_priv(ndev);
phy_disconnect(priv->phy_dev);
@@ -876,31 +862,12 @@ static int arc_emac_remove(struct platform_device *pdev)
if (!IS_ERR(priv->clk)) {
clk_disable_unprepare(priv->clk);
- clk_put(priv->clk);
}
- free_netdev(ndev);
return 0;
}
-
-static const struct of_device_id arc_emac_dt_ids[] = {
- { .compatible = "snps,arc-emac" },
- { /* Sentinel */ }
-};
-MODULE_DEVICE_TABLE(of, arc_emac_dt_ids);
-
-static struct platform_driver arc_emac_driver = {
- .probe = arc_emac_probe,
- .remove = arc_emac_remove,
- .driver = {
- .name = DRV_NAME,
- .owner = THIS_MODULE,
- .of_match_table = arc_emac_dt_ids,
- },
-};
-
-module_platform_driver(arc_emac_driver);
+EXPORT_SYMBOL_GPL(arc_emac_remove);
MODULE_AUTHOR("Alexey Brodkin <abrodkin@synopsys.com>");
MODULE_DESCRIPTION("ARC EMAC driver");
diff --git a/drivers/net/ethernet/arc/emac_mdio.c b/drivers/net/ethernet/arc/emac_mdio.c
index 26ba242..d5ee986 100644
--- a/drivers/net/ethernet/arc/emac_mdio.c
+++ b/drivers/net/ethernet/arc/emac_mdio.c
@@ -100,7 +100,6 @@ static int arc_mdio_write(struct mii_bus *bus, int phy_addr,
/**
* arc_mdio_probe - MDIO probe function.
- * @pdev: Pointer to platform device.
* @priv: Pointer to ARC EMAC private data structure.
*
* returns: 0 on success, -ENOMEM when mdiobus_alloc
@@ -108,7 +107,7 @@ static int arc_mdio_write(struct mii_bus *bus, int phy_addr,
*
* Sets up and registers the MDIO interface.
*/
-int arc_mdio_probe(struct platform_device *pdev, struct arc_emac_priv *priv)
+int arc_mdio_probe(struct arc_emac_priv *priv)
{
struct mii_bus *bus;
int error;
@@ -124,9 +123,9 @@ int arc_mdio_probe(struct platform_device *pdev, struct arc_emac_priv *priv)
bus->read = &arc_mdio_read;
bus->write = &arc_mdio_write;
- snprintf(bus->id, MII_BUS_ID_SIZE, "%s", pdev->name);
+ snprintf(bus->id, MII_BUS_ID_SIZE, "%s", bus->name);
- error = of_mdiobus_register(bus, pdev->dev.of_node);
+ error = of_mdiobus_register(bus, priv->dev->of_node);
if (error) {
dev_err(priv->dev, "cannot register MDIO bus %s\n", bus->name);
mdiobus_free(bus);
diff --git a/drivers/net/ethernet/arc/emac_rockchip.c b/drivers/net/ethernet/arc/emac_rockchip.c
new file mode 100644
index 0000000..c31c740
--- /dev/null
+++ b/drivers/net/ethernet/arc/emac_rockchip.c
@@ -0,0 +1,229 @@
+/**
+ * emac_rockchip.c - Rockchip EMAC specific glue layer
+ *
+ * Copyright (C) 2014 Romain Perier <romain.perier@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of_net.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/regulator/consumer.h>
+
+#include "emac.h"
+
+#define DRV_NAME "rockchip_emac"
+#define DRV_VERSION "1.0"
+
+#define GRF_MODE_MII (1UL << 0)
+#define GRF_MODE_RMII (0UL << 0)
+#define GRF_SPEED_10M (0UL << 1)
+#define GRF_SPEED_100M (1UL << 1)
+#define GRF_SPEED_ENABLE_BIT (1UL << 17)
+#define GRF_MODE_ENABLE_BIT (1UL << 16)
+
+struct emac_rockchip_soc_data {
+ int grf_offset;
+};
+
+struct rockchip_priv_data {
+ struct arc_emac_priv emac;
+ struct regmap *grf;
+ const struct emac_rockchip_soc_data *soc_data;
+ struct regulator *regulator;
+ struct clk *refclk;
+};
+
+static void emac_rockchip_set_mac_speed(void *priv, unsigned int speed)
+{
+ struct rockchip_priv_data *emac = priv;
+ u32 data;
+ int err = 0;
+
+ /* write-enable bits */
+ data = GRF_SPEED_ENABLE_BIT;
+
+ switch(speed) {
+ case 10:
+ data |= GRF_SPEED_10M;
+ break;
+ case 100:
+ data |= GRF_SPEED_100M;
+ break;
+ default:
+ pr_err("speed %u not supported\n", speed);
+ return;
+ }
+
+ err = regmap_write(emac->grf, emac->soc_data->grf_offset, data);
+ if (err)
+ pr_err("unable to apply speed %u to grf (%d)\n", speed, err);
+}
+
+static const struct emac_rockchip_soc_data emac_rockchip_dt_data[] = {
+ { .grf_offset = 0x154 }, /* rk3066 */
+ { .grf_offset = 0x0a4 }, /* rk3188 */
+};
+
+static const struct of_device_id emac_rockchip_dt_ids[] = {
+ { .compatible = "rockchip,rk3066-emac", .data = &emac_rockchip_dt_data[0] },
+ { .compatible = "rockchip,rk3188-emac", .data = &emac_rockchip_dt_data[1] },
+ { /* Sentinel */ }
+};
+
+MODULE_DEVICE_TABLE(of, emac_rockchip_dt_ids);
+
+static int emac_rockchip_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct net_device *ndev;
+ struct rockchip_priv_data *priv;
+ const struct of_device_id *match;
+ u32 data;
+ int err, interface;
+
+ if (!pdev->dev.of_node)
+ return -ENODEV;
+
+ ndev = alloc_etherdev(sizeof(struct rockchip_priv_data));
+ if (!ndev)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, ndev);
+ SET_NETDEV_DEV(ndev, dev);
+
+ priv = netdev_priv(ndev);
+ priv->emac.drv_name = DRV_NAME;
+ priv->emac.drv_version = DRV_VERSION;
+ priv->emac.set_mac_speed = emac_rockchip_set_mac_speed;
+
+ interface = of_get_phy_mode(dev->of_node);
+
+ /* RK3066 and RK3188 SoCs only support RMII */
+ if (interface != PHY_INTERFACE_MODE_RMII) {
+ dev_err(dev, "unsupported phy interface mode %d\n", interface);
+ err = -ENOTSUPP;
+ goto out_netdev;
+ }
+
+ priv->grf = syscon_regmap_lookup_by_phandle(dev->of_node, "rockchip,grf");
+ if (IS_ERR(priv->grf)) {
+ dev_err(dev, "failed to retrieve global register file (%ld)\n", PTR_ERR(priv->grf));
+ err = PTR_ERR(priv->grf);
+ goto out_netdev;
+ }
+
+ match = of_match_node(emac_rockchip_dt_ids, dev->of_node);
+ priv->soc_data = match->data;
+
+ priv->emac.clk = devm_clk_get(dev, "hclk");
+ if (IS_ERR(priv->emac.clk)) {
+ dev_err(dev, "failed to retrieve host clock (%ld)\n", PTR_ERR(priv->emac.clk));
+ err = PTR_ERR(priv->emac.clk);
+ goto out_netdev;
+ }
+
+ priv->refclk = devm_clk_get(dev, "macref");
+ if (IS_ERR(priv->refclk)) {
+ dev_err(dev, "failed to retrieve reference clock (%ld)\n", PTR_ERR(priv->refclk));
+ err = PTR_ERR(priv->refclk);
+ goto out_netdev;
+ }
+
+ err = clk_prepare_enable(priv->refclk);
+ if (err) {
+ dev_err(dev, "failed to enable reference clock (%d)\n", err);
+ goto out_netdev;
+ }
+
+ /* Optional regulator for PHY */
+ priv->regulator = devm_regulator_get_optional(dev, "phy");
+ if (IS_ERR(priv->regulator)) {
+ if (PTR_ERR(priv->regulator) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ dev_err(dev, "no regulator found\n");
+ priv->regulator = NULL;
+ }
+
+ if (priv->regulator) {
+ err = regulator_enable(priv->regulator);
+ if (err) {
+ dev_err(dev, "failed to enable phy-supply (%d)\n", err);
+ goto out_clk_disable;
+ }
+ }
+
+ err = arc_emac_probe(ndev, interface);
+ if (err)
+ goto out_regulator_disable;
+
+ /* write-enable bits */
+ data = GRF_MODE_ENABLE_BIT | GRF_SPEED_ENABLE_BIT;
+
+ data |= GRF_SPEED_100M;
+ data |= GRF_MODE_RMII;
+
+ err = regmap_write(priv->grf, priv->soc_data->grf_offset, data);
+ if (err) {
+ dev_err(dev, "unable to apply initial settings to grf (%d)\n", err);
+ goto out_regulator_disable;
+ }
+
+ /* The RMII interface always needs a 50MHz reference clock rate */
+ err = clk_set_rate(priv->refclk, 50000000);
+ if (err)
+ dev_err(dev, "failed to change reference clock rate (%d)\n", err);
+ return 0;
+
+out_regulator_disable:
+ if (priv->regulator)
+ regulator_disable(priv->regulator);
+out_clk_disable:
+ clk_disable_unprepare(priv->refclk);
+out_netdev:
+ free_netdev(ndev);
+ return err;
+}
+
+static int emac_rockchip_remove(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct rockchip_priv_data *priv = netdev_priv(ndev);
+ int err;
+
+ err = arc_emac_remove(ndev);
+
+ clk_disable_unprepare(priv->refclk);
+
+ if (priv->regulator)
+ regulator_disable(priv->regulator);
+
+ free_netdev(ndev);
+ return err;
+}
+
+static struct platform_driver emac_rockchip_driver = {
+ .probe = emac_rockchip_probe,
+ .remove = emac_rockchip_remove,
+ .driver = {
+ .name = DRV_NAME,
+ .of_match_table = emac_rockchip_dt_ids,
+ },
+};
+
+module_platform_driver(emac_rockchip_driver);
+
+MODULE_AUTHOR("Romain Perier <romain.perier@gmail.com>");
+MODULE_DESCRIPTION("Rockchip EMAC platform driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index d8d07a8..c3e260c 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -122,6 +122,7 @@ config TIGON3
config BNX2X
tristate "Broadcom NetXtremeII 10Gb support"
depends on PCI
+ select PTP_1588_CLOCK
select FW_LOADER
select ZLIB_INFLATE
select LIBCRC32C
diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c
index d588136..416620f 100644
--- a/drivers/net/ethernet/broadcom/b44.c
+++ b/drivers/net/ethernet/broadcom/b44.c
@@ -427,7 +427,7 @@ static void b44_wap54g10_workaround(struct b44 *bp)
}
return;
error:
- pr_warning("PHY: cannot reset MII transceiver isolate bit\n");
+ pr_warn("PHY: cannot reset MII transceiver isolate bit\n");
}
#else
static inline void b44_wap54g10_workaround(struct b44 *bp)
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
index d9b9170e..0756881 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -139,6 +139,15 @@ static int bcm_sysport_set_rx_csum(struct net_device *dev,
else
reg &= ~RXCHK_SKIP_FCS;
+ /* If Broadcom tags are enabled (e.g: using a switch), make
+ * sure we tell the RXCHK hardware to expect a 4-bytes Broadcom
+ * tag after the Ethernet MAC Source Address.
+ */
+ if (netdev_uses_dsa(dev))
+ reg |= RXCHK_BRCM_TAG_EN;
+ else
+ reg &= ~RXCHK_BRCM_TAG_EN;
+
rxchk_writel(priv, reg, RXCHK_CONTROL);
return 0;
@@ -848,7 +857,8 @@ static irqreturn_t bcm_sysport_wol_isr(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int bcm_sysport_insert_tsb(struct sk_buff *skb, struct net_device *dev)
+static struct sk_buff *bcm_sysport_insert_tsb(struct sk_buff *skb,
+ struct net_device *dev)
{
struct sk_buff *nskb;
struct bcm_tsb *tsb;
@@ -864,7 +874,7 @@ static int bcm_sysport_insert_tsb(struct sk_buff *skb, struct net_device *dev)
if (!nskb) {
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
- return -ENOMEM;
+ return NULL;
}
skb = nskb;
}
@@ -883,7 +893,7 @@ static int bcm_sysport_insert_tsb(struct sk_buff *skb, struct net_device *dev)
ip_proto = ipv6_hdr(skb)->nexthdr;
break;
default:
- return 0;
+ return skb;
}
/* Get the checksum offset and the L4 (transport) offset */
@@ -902,7 +912,7 @@ static int bcm_sysport_insert_tsb(struct sk_buff *skb, struct net_device *dev)
tsb->l4_ptr_dest_map = csum_info;
}
- return 0;
+ return skb;
}
static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
@@ -936,8 +946,8 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
/* Insert TSB and checksum infos */
if (priv->tsb_en) {
- ret = bcm_sysport_insert_tsb(skb, dev);
- if (ret) {
+ skb = bcm_sysport_insert_tsb(skb, dev);
+ if (!skb) {
ret = NETDEV_TX_OK;
goto out;
}
@@ -1069,16 +1079,19 @@ static void bcm_sysport_adj_link(struct net_device *dev)
if (!phydev->pause)
cmd_bits |= CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE;
- if (changed) {
+ if (!changed)
+ return;
+
+ if (phydev->link) {
reg = umac_readl(priv, UMAC_CMD);
reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) |
CMD_HD_EN | CMD_RX_PAUSE_IGNORE |
CMD_TX_PAUSE_IGNORE);
reg |= cmd_bits;
umac_writel(priv, reg, UMAC_CMD);
-
- phy_print_status(priv->phydev);
}
+
+ phy_print_status(priv->phydev);
}
static int bcm_sysport_init_tx_ring(struct bcm_sysport_priv *priv,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index d777fae..c3a6072 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -20,13 +20,17 @@
#include <linux/types.h>
#include <linux/pci_regs.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/net_tstamp.h>
+#include <linux/clocksource.h>
+
/* compilation time flags */
/* define this to make the driver freeze on error to allow getting debug info
* (you will need to reboot afterwards) */
/* #define BNX2X_STOP_ON_ERROR */
-#define DRV_MODULE_VERSION "1.78.19-0"
+#define DRV_MODULE_VERSION "1.710.51-0"
#define DRV_MODULE_RELDATE "2014/02/10"
#define BNX2X_BC_VER 0x040200
@@ -70,6 +74,7 @@ enum bnx2x_int_mode {
#define BNX2X_MSG_SP 0x0100000 /* was: NETIF_MSG_INTR */
#define BNX2X_MSG_FP 0x0200000 /* was: NETIF_MSG_INTR */
#define BNX2X_MSG_IOV 0x0800000
+#define BNX2X_MSG_PTP 0x1000000
#define BNX2X_MSG_IDLE 0x2000000 /* used for idle check*/
#define BNX2X_MSG_ETHTOOL 0x4000000
#define BNX2X_MSG_DCB 0x8000000
@@ -1443,6 +1448,12 @@ struct bnx2x_fp_stats {
struct bnx2x_eth_q_stats_old eth_q_stats_old;
};
+enum {
+ SUB_MF_MODE_UNKNOWN = 0,
+ SUB_MF_MODE_UFP,
+ SUB_MF_MODE_NPAR1_DOT_5,
+};
+
struct bnx2x {
/* Fields used in the tx and intr/napi performance paths
* are grouped together in the beginning of the structure
@@ -1587,10 +1598,11 @@ struct bnx2x {
#define USING_SINGLE_MSIX_FLAG (1 << 20)
#define BC_SUPPORTS_DCBX_MSG_NON_PMF (1 << 21)
#define IS_VF_FLAG (1 << 22)
-#define INTERRUPTS_ENABLED_FLAG (1 << 23)
-#define BC_SUPPORTS_RMMOD_CMD (1 << 24)
-#define HAS_PHYS_PORT_ID (1 << 25)
-#define AER_ENABLED (1 << 26)
+#define BC_SUPPORTS_RMMOD_CMD (1 << 23)
+#define HAS_PHYS_PORT_ID (1 << 24)
+#define AER_ENABLED (1 << 25)
+#define PTP_SUPPORTED (1 << 26)
+#define TX_TIMESTAMPING_EN (1 << 27)
#define BP_NOMCP(bp) ((bp)->flags & NO_MCP_FLAG)
@@ -1653,6 +1665,9 @@ struct bnx2x {
#define IS_MF_SI(bp) (bp->mf_mode == MULTI_FUNCTION_SI)
#define IS_MF_SD(bp) (bp->mf_mode == MULTI_FUNCTION_SD)
#define IS_MF_AFEX(bp) (bp->mf_mode == MULTI_FUNCTION_AFEX)
+ u8 mf_sub_mode;
+#define IS_MF_UFP(bp) (IS_MF_SD(bp) && \
+ bp->mf_sub_mode == SUB_MF_MODE_UFP)
u8 wol;
@@ -1684,13 +1699,9 @@ struct bnx2x {
#define BNX2X_STATE_ERROR 0xf000
#define BNX2X_MAX_PRIORITY 8
-#define BNX2X_MAX_ENTRIES_PER_PRI 16
-#define BNX2X_MAX_COS 3
-#define BNX2X_MAX_TX_COS 2
int num_queues;
uint num_ethernet_queues;
uint num_cnic_queues;
- int num_napi_queues;
int disable_tpa;
u32 rx_mode;
@@ -1933,6 +1944,19 @@ struct bnx2x {
u8 phys_port_id[ETH_ALEN];
+ /* PTP related context */
+ struct ptp_clock *ptp_clock;
+ struct ptp_clock_info ptp_clock_info;
+ struct work_struct ptp_task;
+ struct cyclecounter cyclecounter;
+ struct timecounter timecounter;
+ bool timecounter_init_done;
+ struct sk_buff *ptp_tx_skb;
+ unsigned long ptp_tx_start;
+ bool hwtstamp_ioctl_called;
+ u16 tx_type;
+ u16 rx_filter;
+
struct bnx2x_link_report_data vf_link_vars;
};
@@ -2346,7 +2370,7 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
#define ATTN_HARD_WIRED_MASK 0xff00
#define ATTENTION_ID 4
-#define IS_MF_STORAGE_ONLY(bp) (IS_MF_STORAGE_SD(bp) || \
+#define IS_MF_STORAGE_ONLY(bp) (IS_MF_STORAGE_PERSONALITY_ONLY(bp) || \
IS_MF_FCOE_AFEX(bp))
/* stuff added to make the code fit 80Col */
@@ -2522,14 +2546,44 @@ void bnx2x_notify_link_changed(struct bnx2x *bp);
#define IS_MF_ISCSI_SD(bp) (IS_MF_SD(bp) && BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp))
#define IS_MF_FCOE_SD(bp) (IS_MF_SD(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp))
+#define IS_MF_ISCSI_SI(bp) (IS_MF_SI(bp) && BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp))
+
+#define IS_MF_ISCSI_ONLY(bp) (IS_MF_ISCSI_SD(bp) || IS_MF_ISCSI_SI(bp))
+
+#define BNX2X_MF_EXT_PROTOCOL_MASK \
+ (MACP_FUNC_CFG_FLAGS_ETHERNET | \
+ MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD | \
+ MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_MF_EXT_PROT(bp) ((bp)->mf_ext_config & \
+ BNX2X_MF_EXT_PROTOCOL_MASK)
-#define BNX2X_MF_EXT_PROTOCOL_FCOE(bp) ((bp)->mf_ext_config & \
- MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+#define BNX2X_HAS_MF_EXT_PROTOCOL_FCOE(bp) \
+ (BNX2X_MF_EXT_PROT(bp) & MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp) \
+ (BNX2X_MF_EXT_PROT(bp) == MACP_FUNC_CFG_FLAGS_FCOE_OFFLOAD)
+
+#define BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp) \
+ (BNX2X_MF_EXT_PROT(bp) == MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD)
+
+#define IS_MF_FCOE_AFEX(bp) \
+ (IS_MF_AFEX(bp) && BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp))
+
+#define IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SD(bp) && \
+ (BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp) || \
+ BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
+
+#define IS_MF_SI_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SI(bp) && \
+ (BNX2X_IS_MF_EXT_PROTOCOL_ISCSI(bp) || \
+ BNX2X_IS_MF_EXT_PROTOCOL_FCOE(bp)))
+
+#define IS_MF_STORAGE_PERSONALITY_ONLY(bp) \
+ (IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp) || \
+ IS_MF_SI_STORAGE_PERSONALITY_ONLY(bp))
-#define IS_MF_FCOE_AFEX(bp) (IS_MF_AFEX(bp) && BNX2X_MF_EXT_PROTOCOL_FCOE(bp))
-#define IS_MF_STORAGE_SD(bp) (IS_MF_SD(bp) && \
- (BNX2X_IS_MF_SD_PROTOCOL_ISCSI(bp) || \
- BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
#define SET_FLAG(value, mask, flag) \
do {\
@@ -2559,4 +2613,11 @@ void bnx2x_update_mng_version(struct bnx2x *bp);
#define E1H_MAX_MF_SB_COUNT (HC_SB_MAX_SB_E1X/(E1HVN_MAX * PORT_MAX))
+void bnx2x_init_ptp(struct bnx2x *bp);
+int bnx2x_configure_ptp_filters(struct bnx2x *bp);
+void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb);
+
+#define BNX2X_MAX_PHC_DRIFT 31000000
+#define BNX2X_PTP_TX_TIMEOUT
+
#endif /* bnx2x.h */
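The PTP context added to struct bnx2x above follows the standard cyclecounter/timecounter pattern: the cyclecounter reads the free-running hardware clock and the timecounter converts raw cycle counts into nanoseconds. A minimal sketch of how a raw hardware timestamp is typically turned into an skb hardware timestamp under that scheme (the helper name is an assumption, not taken from this patch):

	static void bnx2x_stamp_skb_sketch(struct bnx2x *bp, struct sk_buff *skb,
					   u64 raw_cycles)
	{
		struct skb_shared_hwtstamps *hwtstamps = skb_hwtstamps(skb);
		u64 ns = timecounter_cyc2time(&bp->timecounter, raw_cycles);

		memset(hwtstamps, 0, sizeof(*hwtstamps));
		hwtstamps->hwtstamp = ns_to_ktime(ns);
	}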
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 4ccc806..40beef5 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -21,6 +21,7 @@
#include <linux/if_vlan.h>
#include <linux/interrupt.h>
#include <linux/ip.h>
+#include <linux/crash_dump.h>
#include <net/tcp.h>
#include <net/ipv6.h>
#include <net/ip6_checksum.h>
@@ -64,7 +65,7 @@ static int bnx2x_calc_num_queues(struct bnx2x *bp)
int nq = bnx2x_num_queues ? : netif_get_num_default_rss_queues();
/* Reduce memory usage in kdump environment by using only one queue */
- if (reset_devices)
+ if (is_kdump_kernel())
nq = 1;
nq = clamp(nq, 1, BNX2X_MAX_QUEUES(bp));
@@ -1063,6 +1064,11 @@ reuse_rx:
skb_record_rx_queue(skb, fp->rx_queue);
+ /* Check if this packet was timestamped */
+ if (unlikely(cqe->fast_path_cqe.type_error_flags &
+ (1 << ETH_FAST_PATH_RX_CQE_PTP_PKT_SHIFT)))
+ bnx2x_set_rx_ts(bp, skb);
+
if (le16_to_cpu(cqe_fp->pars_flags.flags) &
PARSING_FLAGS_VLAN)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
@@ -1932,7 +1938,7 @@ void bnx2x_set_num_queues(struct bnx2x *bp)
bp->num_ethernet_queues = bnx2x_calc_num_queues(bp);
/* override in STORAGE SD modes */
- if (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))
+ if (IS_MF_STORAGE_ONLY(bp))
bp->num_ethernet_queues = 1;
/* Add special queues */
@@ -2078,6 +2084,10 @@ int bnx2x_rss(struct bnx2x *bp, struct bnx2x_rss_config_obj *rss_obj,
__set_bit(BNX2X_RSS_IPV4_UDP, &params.rss_flags);
if (rss_obj->udp_rss_v6)
__set_bit(BNX2X_RSS_IPV6_UDP, &params.rss_flags);
+
+ if (!CHIP_IS_E1x(bp))
+ /* valid only for TUNN_MODE_GRE tunnel mode */
+ __set_bit(BNX2X_RSS_GRE_INNER_HDRS, &params.rss_flags);
} else {
__set_bit(BNX2X_RSS_MODE_DISABLED, &params.rss_flags);
}
@@ -2800,7 +2810,11 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
/* Initialize Rx filter. */
bnx2x_set_rx_mode_inner(bp);
- /* Start the Tx */
+ if (bp->flags & PTP_SUPPORTED) {
+ bnx2x_init_ptp(bp);
+ bnx2x_configure_ptp_filters(bp);
+ }
+ /* Start Tx */
switch (load_mode) {
case LOAD_NORMAL:
/* Tx queue should be only re-enabled */
@@ -3437,26 +3451,6 @@ exit_lbl:
}
#endif
-static void bnx2x_set_pbd_gso_e2(struct sk_buff *skb, u32 *parsing_data,
- u32 xmit_type)
-{
- struct ipv6hdr *ipv6;
-
- *parsing_data |= (skb_shinfo(skb)->gso_size <<
- ETH_TX_PARSE_BD_E2_LSO_MSS_SHIFT) &
- ETH_TX_PARSE_BD_E2_LSO_MSS;
-
- if (xmit_type & XMIT_GSO_ENC_V6)
- ipv6 = inner_ipv6_hdr(skb);
- else if (xmit_type & XMIT_GSO_V6)
- ipv6 = ipv6_hdr(skb);
- else
- ipv6 = NULL;
-
- if (ipv6 && ipv6->nexthdr == NEXTHDR_IPV6)
- *parsing_data |= ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR;
-}
-
/**
* bnx2x_set_pbd_gso - update PBD in GSO case.
*
@@ -3466,7 +3460,6 @@ static void bnx2x_set_pbd_gso_e2(struct sk_buff *skb, u32 *parsing_data,
*/
static void bnx2x_set_pbd_gso(struct sk_buff *skb,
struct eth_tx_parse_bd_e1x *pbd,
- struct eth_tx_start_bd *tx_start_bd,
u32 xmit_type)
{
pbd->lso_mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
@@ -3479,9 +3472,6 @@ static void bnx2x_set_pbd_gso(struct sk_buff *skb,
bswab16(~csum_tcpudp_magic(ip_hdr(skb)->saddr,
ip_hdr(skb)->daddr,
0, IPPROTO_TCP, 0));
-
- /* GSO on 57710/57711 needs FW to calculate IP checksum */
- tx_start_bd->bd_flags.as_bitfield |= ETH_TX_BD_FLAGS_IP_CSUM;
} else {
pbd->tcp_pseudo_csum =
bswab16(~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
@@ -3653,18 +3643,23 @@ static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
(__force u32)iph->tot_len -
(__force u32)iph->frag_off;
+ outerip_len = iph->ihl << 1;
+
pbd2->fw_ip_csum_wo_len_flags_frag =
bswab16(csum_fold((__force __wsum)csum));
} else {
pbd2->fw_ip_hdr_to_payload_w =
hlen_w - ((sizeof(struct ipv6hdr)) >> 1);
+ pbd_e2->data.tunnel_data.flags |=
+ ETH_TUNNEL_DATA_IP_HDR_TYPE_OUTER;
}
pbd2->tcp_send_seq = bswab32(inner_tcp_hdr(skb)->seq);
pbd2->tcp_flags = pbd_tcp_flags(inner_tcp_hdr(skb));
- if (xmit_type & XMIT_GSO_V4) {
+ /* inner IP header info */
+ if (xmit_type & XMIT_CSUM_ENC_V4) {
pbd2->hw_ip_id = bswab16(inner_ip_hdr(skb)->id);
pbd_e2->data.tunnel_data.pseudo_csum =
@@ -3672,8 +3667,6 @@ static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
inner_ip_hdr(skb)->saddr,
inner_ip_hdr(skb)->daddr,
0, IPPROTO_TCP, 0));
-
- outerip_len = ip_hdr(skb)->ihl << 1;
} else {
pbd_e2->data.tunnel_data.pseudo_csum =
bswab16(~csum_ipv6_magic(
@@ -3686,8 +3679,6 @@ static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
*global_data |=
outerip_off |
- (!!(xmit_type & XMIT_CSUM_V6) <<
- ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER_SHIFT) |
(outerip_len <<
ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W_SHIFT) |
((skb->protocol == cpu_to_be16(ETH_P_8021Q)) <<
@@ -3699,6 +3690,23 @@ static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
}
}
+static inline void bnx2x_set_ipv6_ext_e2(struct sk_buff *skb, u32 *parsing_data,
+ u32 xmit_type)
+{
+ struct ipv6hdr *ipv6;
+
+ if (!(xmit_type & (XMIT_GSO_ENC_V6 | XMIT_GSO_V6)))
+ return;
+
+ if (xmit_type & XMIT_GSO_ENC_V6)
+ ipv6 = inner_ipv6_hdr(skb);
+ else /* XMIT_GSO_V6 */
+ ipv6 = ipv6_hdr(skb);
+
+ if (ipv6->nexthdr == NEXTHDR_IPV6)
+ *parsing_data |= ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR;
+}
+
/* called with netif_tx_lock
* bnx2x_tx_int() runs without netif_tx_lock unless it needs to call
* netif_wake_queue()
@@ -3831,6 +3839,20 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
tx_start_bd->bd_flags.as_bitfield = ETH_TX_BD_FLAGS_START_BD;
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
+ if (!(bp->flags & TX_TIMESTAMPING_EN)) {
+ BNX2X_ERR("Tx timestamping was not enabled, this packet will not be timestamped\n");
+ } else if (bp->ptp_tx_skb) {
+ BNX2X_ERR("The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+ } else {
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ /* schedule check for Tx timestamp */
+ bp->ptp_tx_skb = skb_get(skb);
+ bp->ptp_tx_start = jiffies;
+ schedule_work(&bp->ptp_task);
+ }
+ }
+
/* header nbd: indirectly zero other flags! */
tx_start_bd->general_data = 1 << ETH_TX_START_BD_HDR_NBDS_SHIFT;
@@ -3852,12 +3874,16 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* when transmitting in a vf, start bd must hold the ethertype
* for fw to enforce it
*/
+#ifndef BNX2X_STOP_ON_ERROR
if (IS_VF(bp))
+#endif
tx_start_bd->vlan_or_ethertype =
cpu_to_le16(ntohs(eth->h_proto));
+#ifndef BNX2X_STOP_ON_ERROR
else
/* used by FW for packet accounting */
tx_start_bd->vlan_or_ethertype = cpu_to_le16(pkt_prod);
+#endif
}
nbd = 2; /* start_bd + pbd + frags (updated when pages are mapped) */
@@ -3915,6 +3941,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
xmit_type);
}
+ bnx2x_set_ipv6_ext_e2(skb, &pbd_e2_parsing_data, xmit_type);
/* Add the macs to the parsing BD if this is a vf or if
* Tx Switching is enabled.
*/
@@ -3929,11 +3956,22 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
&pbd_e2->data.mac_addr.dst_mid,
&pbd_e2->data.mac_addr.dst_lo,
eth->h_dest);
- } else if (bp->flags & TX_SWITCHING) {
- bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.dst_hi,
- &pbd_e2->data.mac_addr.dst_mid,
- &pbd_e2->data.mac_addr.dst_lo,
- eth->h_dest);
+ } else {
+ if (bp->flags & TX_SWITCHING)
+ bnx2x_set_fw_mac_addr(
+ &pbd_e2->data.mac_addr.dst_hi,
+ &pbd_e2->data.mac_addr.dst_mid,
+ &pbd_e2->data.mac_addr.dst_lo,
+ eth->h_dest);
+#ifdef BNX2X_STOP_ON_ERROR
+ /* Enforce security is always set in Stop on Error -
+ * source mac should be present in the parsing BD
+ */
+ bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.src_hi,
+ &pbd_e2->data.mac_addr.src_mid,
+ &pbd_e2->data.mac_addr.src_lo,
+ eth->h_source);
+#endif
}
SET_FLAG(pbd_e2_parsing_data,
@@ -3980,10 +4018,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
bd_prod);
}
if (!CHIP_IS_E1x(bp))
- bnx2x_set_pbd_gso_e2(skb, &pbd_e2_parsing_data,
- xmit_type);
+ pbd_e2_parsing_data |=
+ (skb_shinfo(skb)->gso_size <<
+ ETH_TX_PARSE_BD_E2_LSO_MSS_SHIFT) &
+ ETH_TX_PARSE_BD_E2_LSO_MSS;
else
- bnx2x_set_pbd_gso(skb, pbd_e1x, first_bd, xmit_type);
+ bnx2x_set_pbd_gso(skb, pbd_e1x, xmit_type);
}
/* Set the PBD's parsing_data field if not zero
@@ -4191,14 +4231,13 @@ int bnx2x_change_mac_addr(struct net_device *dev, void *p)
struct bnx2x *bp = netdev_priv(dev);
int rc = 0;
- if (!bnx2x_is_valid_ether_addr(bp, addr->sa_data)) {
+ if (!is_valid_ether_addr(addr->sa_data)) {
BNX2X_ERR("Requested MAC address is not valid\n");
return -EINVAL;
}
- if ((IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp)) &&
- !is_zero_ether_addr(addr->sa_data)) {
- BNX2X_ERR("Can't configure non-zero address on iSCSI or FCoE functions in MF-SD mode\n");
+ if (IS_MF_STORAGE_ONLY(bp)) {
+ BNX2X_ERR("Can't change address on STORAGE ONLY function\n");
return -EINVAL;
}
@@ -4377,8 +4416,7 @@ static int bnx2x_alloc_fp_mem_at(struct bnx2x *bp, int index)
u8 cos;
int rx_ring_size = 0;
- if (!bp->rx_ring_size &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) {
+ if (!bp->rx_ring_size && IS_MF_STORAGE_ONLY(bp)) {
rx_ring_size = MIN_RX_SIZE_NONTPA;
bp->rx_ring_size = rx_ring_size;
} else if (!bp->rx_ring_size) {
@@ -4771,11 +4809,15 @@ netdev_features_t bnx2x_fix_features(struct net_device *dev,
struct bnx2x *bp = netdev_priv(dev);
/* TPA requires Rx CSUM offloading */
- if (!(features & NETIF_F_RXCSUM) || bp->disable_tpa) {
+ if (!(features & NETIF_F_RXCSUM)) {
features &= ~NETIF_F_LRO;
features &= ~NETIF_F_GRO;
}
+ /* Note: do not disable SW GRO in kernel when HW GRO is off */
+ if (bp->disable_tpa)
+ features &= ~NETIF_F_LRO;
+
return features;
}
@@ -4814,6 +4856,10 @@ int bnx2x_set_features(struct net_device *dev, netdev_features_t features)
if ((changes & GRO_ENABLE_FLAG) && (flags & TPA_ENABLE_FLAG))
changes &= ~GRO_ENABLE_FLAG;
+ /* if GRO is changed while HW TPA is off, don't force a reload */
+ if ((changes & GRO_ENABLE_FLAG) && bp->disable_tpa)
+ changes &= ~GRO_ENABLE_FLAG;
+
if (changes)
bnx2x_reload = true;
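In the bnx2x_start_xmit() hunk above, a hardware Tx timestamp request takes an extra reference on the skb with skb_get(), records the start time in jiffies, and schedules ptp_task; since the device can track only one outstanding packet, bp->ptp_tx_skb doubles as the busy flag. A sketch of the worker-side timeout handling this implies (the function name and the 2*HZ budget are assumptions, not taken from this patch):

	static void bnx2x_ptp_tx_timeout_sketch(struct bnx2x *bp)
	{
		if (!bp->ptp_tx_skb)
			return;

		if (time_after(jiffies, bp->ptp_tx_start + 2 * HZ)) {
			/* give up on the hardware timestamp and drop the
			 * reference taken with skb_get() at transmit time
			 */
			dev_kfree_skb_any(bp->ptp_tx_skb);
			bp->ptp_tx_skb = NULL;
		}
	}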
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index 571427c..adcacda 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -932,8 +932,15 @@ static inline int bnx2x_func_start(struct bnx2x *bp)
else /* CHIP_IS_E1X */
start_params->network_cos_mode = FW_WRR;
- start_params->gre_tunnel_mode = L2GRE_TUNNEL;
- start_params->gre_tunnel_rss = GRE_INNER_HEADERS_RSS;
+ start_params->tunnel_mode = TUNN_MODE_GRE;
+ start_params->gre_tunnel_type = IPGRE_TUNNEL;
+ start_params->inner_gre_rss_en = 1;
+
+ if (IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)) {
+ start_params->class_fail_ethtype = ETH_P_FIP;
+ start_params->class_fail = 1;
+ start_params->no_added_tags = 1;
+ }
return bnx2x_func_state_change(bp, &func_params);
}
@@ -1297,15 +1304,7 @@ static inline void bnx2x_update_drv_flags(struct bnx2x *bp, u32 flags, u32 set)
}
}
-static inline bool bnx2x_is_valid_ether_addr(struct bnx2x *bp, u8 *addr)
-{
- if (is_valid_ether_addr(addr) ||
- (is_zero_ether_addr(addr) &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))))
- return true;
- return false;
-}
/**
* bnx2x_fill_fw_str - Fill buffer with FW version string
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
index fb26bc4..6e4294e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
@@ -2092,7 +2092,6 @@ static void bnx2x_dcbnl_get_pfc_cfg(struct net_device *netdev, int prio,
static u8 bnx2x_dcbnl_set_all(struct net_device *netdev)
{
struct bnx2x *bp = netdev_priv(netdev);
- int rc = 0;
DP(BNX2X_MSG_DCB, "SET-ALL\n");
@@ -2110,9 +2109,7 @@ static u8 bnx2x_dcbnl_set_all(struct net_device *netdev)
1);
bnx2x_dcbx_init(bp, true);
}
- DP(BNX2X_MSG_DCB, "set_dcbx_params done (%d)\n", rc);
- if (rc)
- return 1;
+ DP(BNX2X_MSG_DCB, "set_dcbx_params done\n");
return 0;
}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h
index 12eb4ba..741aa13 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dump.h
@@ -40,7 +40,7 @@ struct dump_header {
u32 dump_meta_data; /* OR of CHIP and PATH. */
};
-#define BNX2X_DUMP_VERSION 0x50acff01
+#define BNX2X_DUMP_VERSION 0x61111111
struct reg_addr {
u32 addr;
u32 size;
@@ -1464,7 +1464,6 @@ static const struct reg_addr reg_addrs[] = {
{ 0x180398, 1, 0x1c, 0x924},
{ 0x1803a0, 5, 0x1c, 0x924},
{ 0x1803b4, 2, 0x18, 0x924},
- { 0x180400, 256, 0x3, 0xfff},
{ 0x181000, 4, 0x1f, 0x93c},
{ 0x181010, 1020, 0x1f, 0x38},
{ 0x182000, 4, 0x18, 0x924},
@@ -1576,7 +1575,6 @@ static const struct reg_addr reg_addrs[] = {
{ 0x200398, 1, 0x1c, 0x924},
{ 0x2003a0, 1, 0x1c, 0x924},
{ 0x2003a8, 2, 0x1c, 0x924},
- { 0x200400, 256, 0x3, 0xfff},
{ 0x202000, 4, 0x1f, 0x1927},
{ 0x202010, 2044, 0x1f, 0x1007},
{ 0x204000, 4, 0x18, 0x924},
@@ -1688,7 +1686,6 @@ static const struct reg_addr reg_addrs[] = {
{ 0x280398, 1, 0x1c, 0x924},
{ 0x2803a0, 1, 0x1c, 0x924},
{ 0x2803a8, 2, 0x1c, 0x924},
- { 0x280400, 256, 0x3, 0xfff},
{ 0x282000, 4, 0x1f, 0x9e4},
{ 0x282010, 2044, 0x1f, 0x1c0},
{ 0x284000, 4, 0x18, 0x924},
@@ -1800,7 +1797,6 @@ static const struct reg_addr reg_addrs[] = {
{ 0x300398, 1, 0x1c, 0x924},
{ 0x3003a0, 1, 0x1c, 0x924},
{ 0x3003a8, 2, 0x1c, 0x924},
- { 0x300400, 256, 0x3, 0xfff},
{ 0x302000, 4, 0x1f, 0xf24},
{ 0x302010, 2044, 0x1f, 0xe00},
{ 0x304000, 4, 0x18, 0x924},
@@ -2206,10 +2202,10 @@ static const struct wreg_addr wreg_addr_e3b0 = {
0x1b0c00, 128, 2, read_reg_e3b0, 0x1f, 0x1fff};
static const unsigned int dump_num_registers[NUM_CHIPS][NUM_PRESETS] = {
- {20782, 18567, 27975, 19729, 18311, 27719, 20836, 32391, 41799, 20812,
- 26247, 35655, 19074},
- {32774, 19297, 33277, 31721, 19041, 33021, 32828, 33121, 47101, 32804,
- 26977, 40957, 35895},
+ {19758, 17543, 26951, 18705, 17287, 26695, 19812, 31367, 40775, 19788,
+ 25223, 34631, 19074},
+ {31750, 18273, 32253, 30697, 18017, 31997, 31804, 32097, 46077, 31780,
+ 25953, 39933, 35895},
{36527, 17928, 33697, 35474, 18700, 34466, 36581, 31752, 47521, 36557,
25608, 41377, 43903},
{45239, 17936, 34387, 44186, 18708, 35156, 45293, 31760, 48211, 45269,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index 92fee84..1edc931 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -1852,7 +1852,7 @@ static int bnx2x_set_ringparam(struct net_device *dev,
if ((ering->rx_pending > MAX_RX_AVAIL) ||
(ering->rx_pending < (bp->disable_tpa ? MIN_RX_SIZE_NONTPA :
MIN_RX_SIZE_TPA)) ||
- (ering->tx_pending > (IS_MF_FCOE_AFEX(bp) ? 0 : MAX_TX_AVAIL)) ||
+ (ering->tx_pending > (IS_MF_STORAGE_ONLY(bp) ? 0 : MAX_TX_AVAIL)) ||
(ering->tx_pending <= MAX_SKB_FRAGS + 4)) {
DP(BNX2X_MSG_ETHTOOL, "Command parameters not supported\n");
return -EINVAL;
@@ -3481,6 +3481,46 @@ static int bnx2x_set_channels(struct net_device *dev,
return bnx2x_nic_load(bp, LOAD_NORMAL);
}
+static int bnx2x_get_ts_info(struct net_device *dev,
+ struct ethtool_ts_info *info)
+{
+ struct bnx2x *bp = netdev_priv(dev);
+
+ if (bp->flags & PTP_SUPPORTED) {
+ info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ if (bp->ptp_clock)
+ info->phc_index = ptp_clock_index(bp->ptp_clock);
+ else
+ info->phc_index = -1;
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_PTP_V1_L4_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ);
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF)|(1 << HWTSTAMP_TX_ON);
+
+ return 0;
+ }
+
+ return ethtool_op_get_ts_info(dev, info);
+}
+
static const struct ethtool_ops bnx2x_ethtool_ops = {
.get_settings = bnx2x_get_settings,
.set_settings = bnx2x_set_settings,
@@ -3522,7 +3562,7 @@ static const struct ethtool_ops bnx2x_ethtool_ops = {
.get_module_eeprom = bnx2x_get_module_eeprom,
.get_eee = bnx2x_get_eee,
.set_eee = bnx2x_set_eee,
- .get_ts_info = ethtool_op_get_ts_info,
+ .get_ts_info = bnx2x_get_ts_info,
};
static const struct ethtool_ops bnx2x_vf_ethtool_ops = {
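bnx2x_get_ts_info() is what lets tools such as 'ethtool -T' and ptp4l discover the PHC index and the supported timestamping modes. A rough userspace sketch of enabling the advertised capabilities through SIOCSHWTSTAMP (illustrative only, not part of this patch):

	#include <linux/net_tstamp.h>
	#include <linux/sockios.h>
	#include <net/if.h>
	#include <string.h>
	#include <sys/ioctl.h>

	static int enable_hw_tstamp(int sock, const char *ifname)
	{
		struct hwtstamp_config cfg = {
			.tx_type   = HWTSTAMP_TX_ON,
			.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
		};
		struct ifreq ifr;

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		ifr.ifr_data = (char *)&cfg;

		return ioctl(sock, SIOCSHWTSTAMP, &ifr);
	}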
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
index 95dc365..7636e3c 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
@@ -10,170 +10,170 @@
#ifndef BNX2X_FW_DEFS_H
#define BNX2X_FW_DEFS_H
-#define CSTORM_ASSERT_LIST_INDEX_OFFSET (IRO[148].base)
+#define CSTORM_ASSERT_LIST_INDEX_OFFSET (IRO[152].base)
#define CSTORM_ASSERT_LIST_OFFSET(assertListEntry) \
- (IRO[147].base + ((assertListEntry) * IRO[147].m1))
+ (IRO[151].base + ((assertListEntry) * IRO[151].m1))
#define CSTORM_EVENT_RING_DATA_OFFSET(pfId) \
- (IRO[153].base + (((pfId)>>1) * IRO[153].m1) + (((pfId)&1) * \
- IRO[153].m2))
+ (IRO[157].base + (((pfId)>>1) * IRO[157].m1) + (((pfId)&1) * \
+ IRO[157].m2))
#define CSTORM_EVENT_RING_PROD_OFFSET(pfId) \
- (IRO[154].base + (((pfId)>>1) * IRO[154].m1) + (((pfId)&1) * \
- IRO[154].m2))
+ (IRO[158].base + (((pfId)>>1) * IRO[158].m1) + (((pfId)&1) * \
+ IRO[158].m2))
#define CSTORM_FINAL_CLEANUP_COMPLETE_OFFSET(funcId) \
- (IRO[159].base + ((funcId) * IRO[159].m1))
+ (IRO[163].base + ((funcId) * IRO[163].m1))
#define CSTORM_FUNC_EN_OFFSET(funcId) \
- (IRO[149].base + ((funcId) * IRO[149].m1))
+ (IRO[153].base + ((funcId) * IRO[153].m1))
#define CSTORM_HC_SYNC_LINE_INDEX_E1X_OFFSET(hcIndex, sbId) \
- (IRO[139].base + ((hcIndex) * IRO[139].m1) + ((sbId) * IRO[139].m2))
+ (IRO[143].base + ((hcIndex) * IRO[143].m1) + ((sbId) * IRO[143].m2))
#define CSTORM_HC_SYNC_LINE_INDEX_E2_OFFSET(hcIndex, sbId) \
- (IRO[138].base + (((hcIndex)>>2) * IRO[138].m1) + (((hcIndex)&3) \
- * IRO[138].m2) + ((sbId) * IRO[138].m3))
-#define CSTORM_IGU_MODE_OFFSET (IRO[157].base)
+ (IRO[142].base + (((hcIndex)>>2) * IRO[142].m1) + (((hcIndex)&3) \
+ * IRO[142].m2) + ((sbId) * IRO[142].m3))
+#define CSTORM_IGU_MODE_OFFSET (IRO[161].base)
#define CSTORM_ISCSI_CQ_SIZE_OFFSET(pfId) \
- (IRO[317].base + ((pfId) * IRO[317].m1))
+ (IRO[323].base + ((pfId) * IRO[323].m1))
#define CSTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
- (IRO[318].base + ((pfId) * IRO[318].m1))
+ (IRO[324].base + ((pfId) * IRO[324].m1))
#define CSTORM_ISCSI_EQ_CONS_OFFSET(pfId, iscsiEqId) \
- (IRO[310].base + ((pfId) * IRO[310].m1) + ((iscsiEqId) * IRO[310].m2))
+ (IRO[316].base + ((pfId) * IRO[316].m1) + ((iscsiEqId) * IRO[316].m2))
#define CSTORM_ISCSI_EQ_NEXT_EQE_ADDR_OFFSET(pfId, iscsiEqId) \
- (IRO[312].base + ((pfId) * IRO[312].m1) + ((iscsiEqId) * IRO[312].m2))
+ (IRO[318].base + ((pfId) * IRO[318].m1) + ((iscsiEqId) * IRO[318].m2))
#define CSTORM_ISCSI_EQ_NEXT_PAGE_ADDR_OFFSET(pfId, iscsiEqId) \
- (IRO[311].base + ((pfId) * IRO[311].m1) + ((iscsiEqId) * IRO[311].m2))
+ (IRO[317].base + ((pfId) * IRO[317].m1) + ((iscsiEqId) * IRO[317].m2))
#define CSTORM_ISCSI_EQ_NEXT_PAGE_ADDR_VALID_OFFSET(pfId, iscsiEqId) \
- (IRO[313].base + ((pfId) * IRO[313].m1) + ((iscsiEqId) * IRO[313].m2))
+ (IRO[319].base + ((pfId) * IRO[319].m1) + ((iscsiEqId) * IRO[319].m2))
#define CSTORM_ISCSI_EQ_PROD_OFFSET(pfId, iscsiEqId) \
- (IRO[309].base + ((pfId) * IRO[309].m1) + ((iscsiEqId) * IRO[309].m2))
-#define CSTORM_ISCSI_EQ_SB_INDEX_OFFSET(pfId, iscsiEqId) \
(IRO[315].base + ((pfId) * IRO[315].m1) + ((iscsiEqId) * IRO[315].m2))
+#define CSTORM_ISCSI_EQ_SB_INDEX_OFFSET(pfId, iscsiEqId) \
+ (IRO[321].base + ((pfId) * IRO[321].m1) + ((iscsiEqId) * IRO[321].m2))
#define CSTORM_ISCSI_EQ_SB_NUM_OFFSET(pfId, iscsiEqId) \
- (IRO[314].base + ((pfId) * IRO[314].m1) + ((iscsiEqId) * IRO[314].m2))
+ (IRO[320].base + ((pfId) * IRO[320].m1) + ((iscsiEqId) * IRO[320].m2))
#define CSTORM_ISCSI_HQ_SIZE_OFFSET(pfId) \
- (IRO[316].base + ((pfId) * IRO[316].m1))
+ (IRO[322].base + ((pfId) * IRO[322].m1))
#define CSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
- (IRO[308].base + ((pfId) * IRO[308].m1))
+ (IRO[314].base + ((pfId) * IRO[314].m1))
#define CSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
- (IRO[307].base + ((pfId) * IRO[307].m1))
+ (IRO[313].base + ((pfId) * IRO[313].m1))
#define CSTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
- (IRO[306].base + ((pfId) * IRO[306].m1))
+ (IRO[312].base + ((pfId) * IRO[312].m1))
#define CSTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
- (IRO[151].base + ((funcId) * IRO[151].m1))
+ (IRO[155].base + ((funcId) * IRO[155].m1))
#define CSTORM_SP_STATUS_BLOCK_DATA_OFFSET(pfId) \
- (IRO[142].base + ((pfId) * IRO[142].m1))
+ (IRO[146].base + ((pfId) * IRO[146].m1))
#define CSTORM_SP_STATUS_BLOCK_DATA_STATE_OFFSET(pfId) \
- (IRO[143].base + ((pfId) * IRO[143].m1))
+ (IRO[147].base + ((pfId) * IRO[147].m1))
#define CSTORM_SP_STATUS_BLOCK_OFFSET(pfId) \
- (IRO[141].base + ((pfId) * IRO[141].m1))
-#define CSTORM_SP_STATUS_BLOCK_SIZE (IRO[141].size)
+ (IRO[145].base + ((pfId) * IRO[145].m1))
+#define CSTORM_SP_STATUS_BLOCK_SIZE (IRO[145].size)
#define CSTORM_SP_SYNC_BLOCK_OFFSET(pfId) \
- (IRO[144].base + ((pfId) * IRO[144].m1))
-#define CSTORM_SP_SYNC_BLOCK_SIZE (IRO[144].size)
+ (IRO[148].base + ((pfId) * IRO[148].m1))
+#define CSTORM_SP_SYNC_BLOCK_SIZE (IRO[148].size)
#define CSTORM_STATUS_BLOCK_DATA_FLAGS_OFFSET(sbId, hcIndex) \
- (IRO[136].base + ((sbId) * IRO[136].m1) + ((hcIndex) * IRO[136].m2))
+ (IRO[140].base + ((sbId) * IRO[140].m1) + ((hcIndex) * IRO[140].m2))
#define CSTORM_STATUS_BLOCK_DATA_OFFSET(sbId) \
- (IRO[133].base + ((sbId) * IRO[133].m1))
+ (IRO[137].base + ((sbId) * IRO[137].m1))
#define CSTORM_STATUS_BLOCK_DATA_STATE_OFFSET(sbId) \
- (IRO[134].base + ((sbId) * IRO[134].m1))
+ (IRO[138].base + ((sbId) * IRO[138].m1))
#define CSTORM_STATUS_BLOCK_DATA_TIMEOUT_OFFSET(sbId, hcIndex) \
- (IRO[135].base + ((sbId) * IRO[135].m1) + ((hcIndex) * IRO[135].m2))
+ (IRO[139].base + ((sbId) * IRO[139].m1) + ((hcIndex) * IRO[139].m2))
#define CSTORM_STATUS_BLOCK_OFFSET(sbId) \
- (IRO[132].base + ((sbId) * IRO[132].m1))
-#define CSTORM_STATUS_BLOCK_SIZE (IRO[132].size)
+ (IRO[136].base + ((sbId) * IRO[136].m1))
+#define CSTORM_STATUS_BLOCK_SIZE (IRO[136].size)
#define CSTORM_SYNC_BLOCK_OFFSET(sbId) \
- (IRO[137].base + ((sbId) * IRO[137].m1))
-#define CSTORM_SYNC_BLOCK_SIZE (IRO[137].size)
+ (IRO[141].base + ((sbId) * IRO[141].m1))
+#define CSTORM_SYNC_BLOCK_SIZE (IRO[141].size)
#define CSTORM_VF_PF_CHANNEL_STATE_OFFSET(vfId) \
- (IRO[155].base + ((vfId) * IRO[155].m1))
+ (IRO[159].base + ((vfId) * IRO[159].m1))
#define CSTORM_VF_PF_CHANNEL_VALID_OFFSET(vfId) \
- (IRO[156].base + ((vfId) * IRO[156].m1))
+ (IRO[160].base + ((vfId) * IRO[160].m1))
#define CSTORM_VF_TO_PF_OFFSET(funcId) \
- (IRO[150].base + ((funcId) * IRO[150].m1))
+ (IRO[154].base + ((funcId) * IRO[154].m1))
#define TSTORM_APPROXIMATE_MATCH_MULTICAST_FILTERING_OFFSET(pfId) \
- (IRO[203].base + ((pfId) * IRO[203].m1))
+ (IRO[207].base + ((pfId) * IRO[207].m1))
#define TSTORM_ASSERT_LIST_INDEX_OFFSET (IRO[102].base)
#define TSTORM_ASSERT_LIST_OFFSET(assertListEntry) \
(IRO[101].base + ((assertListEntry) * IRO[101].m1))
#define TSTORM_FUNCTION_COMMON_CONFIG_OFFSET(pfId) \
- (IRO[201].base + ((pfId) * IRO[201].m1))
+ (IRO[205].base + ((pfId) * IRO[205].m1))
#define TSTORM_FUNC_EN_OFFSET(funcId) \
- (IRO[103].base + ((funcId) * IRO[103].m1))
+ (IRO[107].base + ((funcId) * IRO[107].m1))
#define TSTORM_ISCSI_ERROR_BITMAP_OFFSET(pfId) \
- (IRO[272].base + ((pfId) * IRO[272].m1))
+ (IRO[278].base + ((pfId) * IRO[278].m1))
#define TSTORM_ISCSI_L2_ISCSI_OOO_CID_TABLE_OFFSET(pfId) \
- (IRO[273].base + ((pfId) * IRO[273].m1))
+ (IRO[279].base + ((pfId) * IRO[279].m1))
#define TSTORM_ISCSI_L2_ISCSI_OOO_CLIENT_ID_TABLE_OFFSET(pfId) \
- (IRO[274].base + ((pfId) * IRO[274].m1))
+ (IRO[280].base + ((pfId) * IRO[280].m1))
#define TSTORM_ISCSI_L2_ISCSI_OOO_PROD_OFFSET(pfId) \
- (IRO[275].base + ((pfId) * IRO[275].m1))
+ (IRO[281].base + ((pfId) * IRO[281].m1))
#define TSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
- (IRO[271].base + ((pfId) * IRO[271].m1))
+ (IRO[277].base + ((pfId) * IRO[277].m1))
#define TSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
- (IRO[270].base + ((pfId) * IRO[270].m1))
+ (IRO[276].base + ((pfId) * IRO[276].m1))
#define TSTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
- (IRO[269].base + ((pfId) * IRO[269].m1))
+ (IRO[275].base + ((pfId) * IRO[275].m1))
#define TSTORM_ISCSI_RQ_SIZE_OFFSET(pfId) \
- (IRO[268].base + ((pfId) * IRO[268].m1))
+ (IRO[274].base + ((pfId) * IRO[274].m1))
#define TSTORM_ISCSI_TCP_LOCAL_ADV_WND_OFFSET(pfId) \
- (IRO[278].base + ((pfId) * IRO[278].m1))
+ (IRO[284].base + ((pfId) * IRO[284].m1))
#define TSTORM_ISCSI_TCP_VARS_FLAGS_OFFSET(pfId) \
- (IRO[264].base + ((pfId) * IRO[264].m1))
+ (IRO[270].base + ((pfId) * IRO[270].m1))
#define TSTORM_ISCSI_TCP_VARS_LSB_LOCAL_MAC_ADDR_OFFSET(pfId) \
- (IRO[265].base + ((pfId) * IRO[265].m1))
+ (IRO[271].base + ((pfId) * IRO[271].m1))
#define TSTORM_ISCSI_TCP_VARS_MID_LOCAL_MAC_ADDR_OFFSET(pfId) \
- (IRO[266].base + ((pfId) * IRO[266].m1))
+ (IRO[272].base + ((pfId) * IRO[272].m1))
#define TSTORM_ISCSI_TCP_VARS_MSB_LOCAL_MAC_ADDR_OFFSET(pfId) \
- (IRO[267].base + ((pfId) * IRO[267].m1))
+ (IRO[273].base + ((pfId) * IRO[273].m1))
#define TSTORM_MAC_FILTER_CONFIG_OFFSET(pfId) \
- (IRO[202].base + ((pfId) * IRO[202].m1))
+ (IRO[206].base + ((pfId) * IRO[206].m1))
#define TSTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
- (IRO[105].base + ((funcId) * IRO[105].m1))
+ (IRO[109].base + ((funcId) * IRO[109].m1))
#define TSTORM_TCP_MAX_CWND_OFFSET(pfId) \
- (IRO[217].base + ((pfId) * IRO[217].m1))
+ (IRO[223].base + ((pfId) * IRO[223].m1))
#define TSTORM_VF_TO_PF_OFFSET(funcId) \
- (IRO[104].base + ((funcId) * IRO[104].m1))
-#define USTORM_AGG_DATA_OFFSET (IRO[206].base)
-#define USTORM_AGG_DATA_SIZE (IRO[206].size)
-#define USTORM_ASSERT_LIST_INDEX_OFFSET (IRO[177].base)
+ (IRO[108].base + ((funcId) * IRO[108].m1))
+#define USTORM_AGG_DATA_OFFSET (IRO[212].base)
+#define USTORM_AGG_DATA_SIZE (IRO[212].size)
+#define USTORM_ASSERT_LIST_INDEX_OFFSET (IRO[181].base)
#define USTORM_ASSERT_LIST_OFFSET(assertListEntry) \
- (IRO[176].base + ((assertListEntry) * IRO[176].m1))
+ (IRO[180].base + ((assertListEntry) * IRO[180].m1))
#define USTORM_ETH_PAUSE_ENABLED_OFFSET(portId) \
- (IRO[183].base + ((portId) * IRO[183].m1))
+ (IRO[187].base + ((portId) * IRO[187].m1))
#define USTORM_FCOE_EQ_PROD_OFFSET(pfId) \
- (IRO[319].base + ((pfId) * IRO[319].m1))
+ (IRO[325].base + ((pfId) * IRO[325].m1))
#define USTORM_FUNC_EN_OFFSET(funcId) \
- (IRO[178].base + ((funcId) * IRO[178].m1))
+ (IRO[182].base + ((funcId) * IRO[182].m1))
#define USTORM_ISCSI_CQ_SIZE_OFFSET(pfId) \
- (IRO[283].base + ((pfId) * IRO[283].m1))
+ (IRO[289].base + ((pfId) * IRO[289].m1))
#define USTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
- (IRO[284].base + ((pfId) * IRO[284].m1))
+ (IRO[290].base + ((pfId) * IRO[290].m1))
#define USTORM_ISCSI_ERROR_BITMAP_OFFSET(pfId) \
- (IRO[288].base + ((pfId) * IRO[288].m1))
+ (IRO[294].base + ((pfId) * IRO[294].m1))
#define USTORM_ISCSI_GLOBAL_BUF_PHYS_ADDR_OFFSET(pfId) \
- (IRO[285].base + ((pfId) * IRO[285].m1))
+ (IRO[291].base + ((pfId) * IRO[291].m1))
#define USTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
- (IRO[281].base + ((pfId) * IRO[281].m1))
+ (IRO[287].base + ((pfId) * IRO[287].m1))
#define USTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
- (IRO[280].base + ((pfId) * IRO[280].m1))
+ (IRO[286].base + ((pfId) * IRO[286].m1))
#define USTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
- (IRO[279].base + ((pfId) * IRO[279].m1))
+ (IRO[285].base + ((pfId) * IRO[285].m1))
#define USTORM_ISCSI_R2TQ_SIZE_OFFSET(pfId) \
- (IRO[282].base + ((pfId) * IRO[282].m1))
+ (IRO[288].base + ((pfId) * IRO[288].m1))
#define USTORM_ISCSI_RQ_BUFFER_SIZE_OFFSET(pfId) \
- (IRO[286].base + ((pfId) * IRO[286].m1))
+ (IRO[292].base + ((pfId) * IRO[292].m1))
#define USTORM_ISCSI_RQ_SIZE_OFFSET(pfId) \
- (IRO[287].base + ((pfId) * IRO[287].m1))
+ (IRO[293].base + ((pfId) * IRO[293].m1))
#define USTORM_MEM_WORKAROUND_ADDRESS_OFFSET(pfId) \
- (IRO[182].base + ((pfId) * IRO[182].m1))
+ (IRO[186].base + ((pfId) * IRO[186].m1))
#define USTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
- (IRO[180].base + ((funcId) * IRO[180].m1))
+ (IRO[184].base + ((funcId) * IRO[184].m1))
#define USTORM_RX_PRODS_E1X_OFFSET(portId, clientId) \
- (IRO[209].base + ((portId) * IRO[209].m1) + ((clientId) * \
- IRO[209].m2))
+ (IRO[215].base + ((portId) * IRO[215].m1) + ((clientId) * \
+ IRO[215].m2))
#define USTORM_RX_PRODS_E2_OFFSET(qzoneId) \
- (IRO[210].base + ((qzoneId) * IRO[210].m1))
-#define USTORM_TPA_BTR_OFFSET (IRO[207].base)
-#define USTORM_TPA_BTR_SIZE (IRO[207].size)
+ (IRO[216].base + ((qzoneId) * IRO[216].m1))
+#define USTORM_TPA_BTR_OFFSET (IRO[213].base)
+#define USTORM_TPA_BTR_SIZE (IRO[213].size)
#define USTORM_VF_TO_PF_OFFSET(funcId) \
- (IRO[179].base + ((funcId) * IRO[179].m1))
+ (IRO[183].base + ((funcId) * IRO[183].m1))
#define XSTORM_AGG_INT_FINAL_CLEANUP_COMP_TYPE (IRO[67].base)
#define XSTORM_AGG_INT_FINAL_CLEANUP_INDEX (IRO[66].base)
#define XSTORM_ASSERT_LIST_INDEX_OFFSET (IRO[51].base)
@@ -186,39 +186,39 @@
#define XSTORM_FUNC_EN_OFFSET(funcId) \
(IRO[47].base + ((funcId) * IRO[47].m1))
#define XSTORM_ISCSI_HQ_SIZE_OFFSET(pfId) \
- (IRO[296].base + ((pfId) * IRO[296].m1))
+ (IRO[302].base + ((pfId) * IRO[302].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR0_OFFSET(pfId) \
- (IRO[299].base + ((pfId) * IRO[299].m1))
+ (IRO[305].base + ((pfId) * IRO[305].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR1_OFFSET(pfId) \
- (IRO[300].base + ((pfId) * IRO[300].m1))
+ (IRO[306].base + ((pfId) * IRO[306].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR2_OFFSET(pfId) \
- (IRO[301].base + ((pfId) * IRO[301].m1))
+ (IRO[307].base + ((pfId) * IRO[307].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR3_OFFSET(pfId) \
- (IRO[302].base + ((pfId) * IRO[302].m1))
+ (IRO[308].base + ((pfId) * IRO[308].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR4_OFFSET(pfId) \
- (IRO[303].base + ((pfId) * IRO[303].m1))
+ (IRO[309].base + ((pfId) * IRO[309].m1))
#define XSTORM_ISCSI_LOCAL_MAC_ADDR5_OFFSET(pfId) \
- (IRO[304].base + ((pfId) * IRO[304].m1))
+ (IRO[310].base + ((pfId) * IRO[310].m1))
#define XSTORM_ISCSI_LOCAL_VLAN_OFFSET(pfId) \
- (IRO[305].base + ((pfId) * IRO[305].m1))
+ (IRO[311].base + ((pfId) * IRO[311].m1))
#define XSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
- (IRO[295].base + ((pfId) * IRO[295].m1))
+ (IRO[301].base + ((pfId) * IRO[301].m1))
#define XSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
- (IRO[294].base + ((pfId) * IRO[294].m1))
+ (IRO[300].base + ((pfId) * IRO[300].m1))
#define XSTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
- (IRO[293].base + ((pfId) * IRO[293].m1))
+ (IRO[299].base + ((pfId) * IRO[299].m1))
#define XSTORM_ISCSI_R2TQ_SIZE_OFFSET(pfId) \
- (IRO[298].base + ((pfId) * IRO[298].m1))
+ (IRO[304].base + ((pfId) * IRO[304].m1))
#define XSTORM_ISCSI_SQ_SIZE_OFFSET(pfId) \
- (IRO[297].base + ((pfId) * IRO[297].m1))
+ (IRO[303].base + ((pfId) * IRO[303].m1))
#define XSTORM_ISCSI_TCP_VARS_ADV_WND_SCL_OFFSET(pfId) \
- (IRO[292].base + ((pfId) * IRO[292].m1))
+ (IRO[298].base + ((pfId) * IRO[298].m1))
#define XSTORM_ISCSI_TCP_VARS_FLAGS_OFFSET(pfId) \
- (IRO[291].base + ((pfId) * IRO[291].m1))
+ (IRO[297].base + ((pfId) * IRO[297].m1))
#define XSTORM_ISCSI_TCP_VARS_TOS_OFFSET(pfId) \
- (IRO[290].base + ((pfId) * IRO[290].m1))
+ (IRO[296].base + ((pfId) * IRO[296].m1))
#define XSTORM_ISCSI_TCP_VARS_TTL_OFFSET(pfId) \
- (IRO[289].base + ((pfId) * IRO[289].m1))
+ (IRO[295].base + ((pfId) * IRO[295].m1))
#define XSTORM_RATE_SHAPING_PER_VN_VARS_OFFSET(pfId) \
(IRO[44].base + ((pfId) * IRO[44].m1))
#define XSTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
@@ -231,16 +231,19 @@
#define XSTORM_SPQ_PROD_OFFSET(funcId) \
(IRO[31].base + ((funcId) * IRO[31].m1))
#define XSTORM_TCP_GLOBAL_DEL_ACK_COUNTER_ENABLED_OFFSET(portId) \
- (IRO[211].base + ((portId) * IRO[211].m1))
+ (IRO[217].base + ((portId) * IRO[217].m1))
#define XSTORM_TCP_GLOBAL_DEL_ACK_COUNTER_MAX_COUNT_OFFSET(portId) \
- (IRO[212].base + ((portId) * IRO[212].m1))
+ (IRO[218].base + ((portId) * IRO[218].m1))
#define XSTORM_TCP_TX_SWS_TIMER_VAL_OFFSET(pfId) \
- (IRO[214].base + (((pfId)>>1) * IRO[214].m1) + (((pfId)&1) * \
- IRO[214].m2))
+ (IRO[220].base + (((pfId)>>1) * IRO[220].m1) + (((pfId)&1) * \
+ IRO[220].m2))
#define XSTORM_VF_TO_PF_OFFSET(funcId) \
(IRO[48].base + ((funcId) * IRO[48].m1))
#define COMMON_ASM_INVALID_ASSERT_OPCODE 0x0
+/* eth hsi version */
+#define ETH_FP_HSI_VERSION (ETH_FP_HSI_VER_2)
+
/* Ethernet Ring parameters */
#define X_ETH_LOCAL_RING_SIZE 13
#define FIRST_BD_IN_PKT 0
@@ -356,6 +359,7 @@
#define XSEMI_CLK1_RESUL_CHIP (1e-3)
#define SDM_TIMER_TICK_RESUL_CHIP (4 * (1e-6))
+#define TSDM_TIMER_TICK_RESUL_CHIP (1 * (1e-6))
/**** END DEFINES FOR TIMERS/CLOCKS RESOLUTIONS ****/
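The wholesale IRO renumbering in this file is mechanical fallout of moving to the 7.10.51 firmware: each symbol keeps its meaning, only its slot in the IRO table shifts. The way the offsets are consumed elsewhere in the driver is unchanged, for example (a sketch built from macros in this file plus REG_RD/BAR_CSTRORM_INTMEM as used in bnx2x_main.c):

	u32 sp_sb_state = REG_RD(bp, BAR_CSTRORM_INTMEM +
				 CSTORM_SP_STATUS_BLOCK_DATA_STATE_OFFSET(BP_FUNC(bp)));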
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
index c4daa06..583591d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
@@ -280,17 +280,11 @@ struct shared_hw_cfg { /* NVRAM Offset */
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_BOTH 0x60000000
#define SHARED_HW_CFG_MDC_MDIO_ACCESS2_SWAPPED 0x80000000
-
- u32 power_dissipated; /* 0x11c */
- #define SHARED_HW_CFG_POWER_MGNT_SCALE_MASK 0x00ff0000
- #define SHARED_HW_CFG_POWER_MGNT_SCALE_SHIFT 16
- #define SHARED_HW_CFG_POWER_MGNT_UNKNOWN_SCALE 0x00000000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_1_WATT 0x00010000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_01_WATT 0x00020000
- #define SHARED_HW_CFG_POWER_MGNT_DOT_001_WATT 0x00030000
-
- #define SHARED_HW_CFG_POWER_DIS_CMN_MASK 0xff000000
- #define SHARED_HW_CFG_POWER_DIS_CMN_SHIFT 24
+ u32 config_3; /* 0x11C */
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_MASK 0x00000F00
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_SHIFT 8
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR1_DOT_5 0x00000000
+ #define SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR2_DOT_0 0x00000100
u32 ump_nc_si_config; /* 0x120 */
#define SHARED_HW_CFG_UMP_NC_SI_MII_MODE_MASK 0x00000003
@@ -859,6 +853,8 @@ struct shared_feat_cfg { /* NVRAM Offset */
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SPIO4 0x00000200
#define SHARED_FEAT_CFG_FORCE_SF_MODE_SWITCH_INDEPT 0x00000300
#define SHARED_FEAT_CFG_FORCE_SF_MODE_AFEX_MODE 0x00000400
+ #define SHARED_FEAT_CFG_FORCE_SF_MODE_UFP_MODE 0x00000600
+ #define SHARED_FEAT_CFG_FORCE_SF_MODE_EXTENDED_MODE 0x00000700
/* The interval in seconds between sending LLDP packets. Set to zero
to disable the feature */
@@ -1268,6 +1264,10 @@ struct drv_func_mb {
#define DRV_MSG_CODE_GET_UPGRADE_KEY 0x81000000
#define DRV_MSG_CODE_GET_MANUF_KEY 0x82000000
#define DRV_MSG_CODE_LOAD_L2B_PRAM 0x90000000
+ #define DRV_MSG_CODE_OEM_OK 0x00010000
+ #define DRV_MSG_CODE_OEM_FAILURE 0x00020000
+ #define DRV_MSG_CODE_OEM_UPDATE_SVID_OK 0x00030000
+ #define DRV_MSG_CODE_OEM_UPDATE_SVID_FAILURE 0x00040000
/*
* The optic module verification command requires bootcode
* v5.0.6 or later, the specific optic module verification command
@@ -1423,6 +1423,12 @@ struct drv_func_mb {
#define DRV_STATUS_SET_MF_BW 0x00000004
#define DRV_STATUS_LINK_EVENT 0x00000008
+ #define DRV_STATUS_OEM_EVENT_MASK 0x00000070
+ #define DRV_STATUS_OEM_DISABLE_ENABLE_PF 0x00000010
+ #define DRV_STATUS_OEM_BANDWIDTH_ALLOCATION 0x00000020
+
+ #define DRV_STATUS_OEM_UPDATE_SVID 0x00000080
+
#define DRV_STATUS_DCC_EVENT_MASK 0x0000ff00
#define DRV_STATUS_DCC_DISABLE_ENABLE_PF 0x00000100
#define DRV_STATUS_DCC_BANDWIDTH_ALLOCATION 0x00000200
@@ -2881,8 +2887,8 @@ struct afex_stats {
};
#define BCM_5710_FW_MAJOR_VERSION 7
-#define BCM_5710_FW_MINOR_VERSION 8
-#define BCM_5710_FW_REVISION_VERSION 19
+#define BCM_5710_FW_MINOR_VERSION 10
+#define BCM_5710_FW_REVISION_VERSION 51
#define BCM_5710_FW_ENGINEERING_VERSION 0
#define BCM_5710_FW_COMPILE_FLAGS 1
@@ -3451,6 +3457,7 @@ enum classify_rule {
CLASSIFY_RULE_OPCODE_MAC,
CLASSIFY_RULE_OPCODE_VLAN,
CLASSIFY_RULE_OPCODE_PAIR,
+ CLASSIFY_RULE_OPCODE_VXLAN,
MAX_CLASSIFY_RULE
};
@@ -3480,7 +3487,8 @@ struct client_init_general_data {
u8 func_id;
u8 cos;
u8 traffic_type;
- u32 reserved0;
+ u8 fp_hsi_ver;
+ u8 reserved0[3];
};
@@ -3550,7 +3558,9 @@ struct client_init_rx_data {
__le16 rx_cos_mask;
__le16 silent_vlan_value;
__le16 silent_vlan_mask;
- __le32 reserved6[2];
+ u8 handle_ptp_pkts_flg;
+ u8 reserved6[3];
+ __le32 reserved7;
};
/*
@@ -3581,7 +3591,7 @@ struct client_init_tx_data {
u8 tunnel_lso_inc_ip_id;
u8 refuse_outband_vlan_flg;
u8 tunnel_non_lso_pcsum_location;
- u8 reserved1;
+ u8 tunnel_non_lso_outer_ip_csum_location;
};
/*
@@ -3619,7 +3629,9 @@ struct client_update_ramrod_data {
u8 refuse_outband_vlan_change_flg;
u8 tx_switching_flg;
u8 tx_switching_change_flg;
- __le32 reserved1;
+ u8 handle_ptp_pkts_flg;
+ u8 handle_ptp_pkts_change_flg;
+ __le16 reserved1;
__le32 echo;
};
@@ -3639,6 +3651,11 @@ struct double_regpair {
u32 regpair1_hi;
};
+/* 2nd parse bd type used in ethernet tx BDs */
+enum eth_2nd_parse_bd_type {
+ ETH_2ND_PARSE_BD_TYPE_LSO_TUNNEL,
+ MAX_ETH_2ND_PARSE_BD_TYPE
+};
/*
* Ethernet address types used in ethernet tx BDs
@@ -3724,12 +3741,25 @@ struct eth_classify_vlan_cmd {
};
/*
+ * Command for adding/removing a VXLAN classification rule
+ */
+struct eth_classify_vxlan_cmd {
+ struct eth_classify_cmd_header header;
+ __le32 vni;
+ __le16 inner_mac_lsb;
+ __le16 inner_mac_mid;
+ __le16 inner_mac_msb;
+ __le16 reserved1;
+};
+
+/*
* union for eth classification rule
*/
union eth_classify_rule_cmd {
struct eth_classify_mac_cmd mac;
struct eth_classify_vlan_cmd vlan;
struct eth_classify_pair_cmd pair;
+ struct eth_classify_vxlan_cmd vxlan;
};
/*
@@ -3835,8 +3865,10 @@ struct eth_fast_path_rx_cqe {
#define ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG_SHIFT 4
#define ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG (0x1<<5)
#define ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG_SHIFT 5
-#define ETH_FAST_PATH_RX_CQE_RESERVED0 (0x3<<6)
-#define ETH_FAST_PATH_RX_CQE_RESERVED0_SHIFT 6
+#define ETH_FAST_PATH_RX_CQE_PTP_PKT (0x1<<6)
+#define ETH_FAST_PATH_RX_CQE_PTP_PKT_SHIFT 6
+#define ETH_FAST_PATH_RX_CQE_RESERVED0 (0x1<<7)
+#define ETH_FAST_PATH_RX_CQE_RESERVED0_SHIFT 7
u8 status_flags;
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_TYPE (0x7<<0)
#define ETH_FAST_PATH_RX_CQE_RSS_HASH_TYPE_SHIFT 0
@@ -3907,6 +3939,13 @@ struct eth_filter_rules_ramrod_data {
struct eth_filter_rules_cmd rules[FILTER_RULES_COUNT];
};
+/* Hsi version */
+enum eth_fp_hsi_ver {
+ ETH_FP_HSI_VER_0,
+ ETH_FP_HSI_VER_1,
+ ETH_FP_HSI_VER_2,
+ MAX_ETH_FP_HSI_VER
+};
/*
* parameters for eth classification configuration ramrod
@@ -3955,29 +3994,17 @@ struct eth_mac_addresses {
/* tunneling related data */
struct eth_tunnel_data {
-#if defined(__BIG_ENDIAN)
- __le16 dst_mid;
- __le16 dst_lo;
-#elif defined(__LITTLE_ENDIAN)
__le16 dst_lo;
__le16 dst_mid;
-#endif
-#if defined(__BIG_ENDIAN)
- __le16 reserved0;
- __le16 dst_hi;
-#elif defined(__LITTLE_ENDIAN)
__le16 dst_hi;
- __le16 reserved0;
-#endif
-#if defined(__BIG_ENDIAN)
- u8 reserved1;
- u8 ip_hdr_start_inner_w;
- __le16 pseudo_csum;
-#elif defined(__LITTLE_ENDIAN)
+ __le16 fw_ip_hdr_csum;
__le16 pseudo_csum;
u8 ip_hdr_start_inner_w;
- u8 reserved1;
-#endif
+ u8 flags;
+#define ETH_TUNNEL_DATA_IP_HDR_TYPE_OUTER (0x1<<0)
+#define ETH_TUNNEL_DATA_IP_HDR_TYPE_OUTER_SHIFT 0
+#define ETH_TUNNEL_DATA_RESERVED (0x7F<<1)
+#define ETH_TUNNEL_DATA_RESERVED_SHIFT 1
};
/* union for mac addresses and for tunneling data.
@@ -4064,31 +4091,41 @@ enum eth_rss_mode {
*/
struct eth_rss_update_ramrod_data {
u8 rss_engine_id;
- u8 capabilities;
+ u8 rss_mode;
+ __le16 capabilities;
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY (0x1<<0)
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY_SHIFT 0
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY (0x1<<1)
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY_SHIFT 1
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY (0x1<<2)
#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY_SHIFT 2
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY (0x1<<3)
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY_SHIFT 3
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY (0x1<<4)
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY_SHIFT 4
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY (0x1<<5)
-#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY_SHIFT 5
-#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY (0x1<<6)
-#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY_SHIFT 6
-#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY (0x1<<7)
-#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY_SHIFT 7
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_VXLAN_CAPABILITY (0x1<<3)
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV4_VXLAN_CAPABILITY_SHIFT 3
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY (0x1<<4)
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY_SHIFT 4
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY (0x1<<5)
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY_SHIFT 5
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY (0x1<<6)
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY_SHIFT 6
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_VXLAN_CAPABILITY (0x1<<7)
+#define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_VXLAN_CAPABILITY_SHIFT 7
+#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY (0x1<<8)
+#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY_SHIFT 8
+#define ETH_RSS_UPDATE_RAMROD_DATA_NVGRE_KEY_ENTROPY_CAPABILITY (0x1<<9)
+#define ETH_RSS_UPDATE_RAMROD_DATA_NVGRE_KEY_ENTROPY_CAPABILITY_SHIFT 9
+#define ETH_RSS_UPDATE_RAMROD_DATA_GRE_INNER_HDRS_CAPABILITY (0x1<<10)
+#define ETH_RSS_UPDATE_RAMROD_DATA_GRE_INNER_HDRS_CAPABILITY_SHIFT 10
+#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY (0x1<<11)
+#define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY_SHIFT 11
+#define ETH_RSS_UPDATE_RAMROD_DATA_RESERVED (0xF<<12)
+#define ETH_RSS_UPDATE_RAMROD_DATA_RESERVED_SHIFT 12
u8 rss_result_mask;
- u8 rss_mode;
- __le16 udp_4tuple_dst_port_mask;
- __le16 udp_4tuple_dst_port_value;
+ u8 reserved3;
+ __le16 reserved4;
u8 indirection_table[T_ETH_INDIRECTION_TABLE_SIZE];
__le32 rss_key[T_ETH_RSS_KEY];
__le32 echo;
- __le32 reserved3;
+ __le32 reserved5;
};
@@ -4260,10 +4297,10 @@ enum eth_tunnel_lso_inc_ip_id {
/* In case tunnel exist and L4 checksum offload,
* the pseudo checksum location, on packet or on BD.
*/
-enum eth_tunnel_non_lso_pcsum_location {
- PCSUM_ON_PKT,
- PCSUM_ON_BD,
- MAX_ETH_TUNNEL_NON_LSO_PCSUM_LOCATION
+enum eth_tunnel_non_lso_csum_location {
+ CSUM_ON_PKT,
+ CSUM_ON_BD,
+ MAX_ETH_TUNNEL_NON_LSO_CSUM_LOCATION
};
/*
@@ -4310,8 +4347,10 @@ struct eth_tx_start_bd {
__le16 vlan_or_ethertype;
struct eth_tx_bd_flags bd_flags;
u8 general_data;
-#define ETH_TX_START_BD_HDR_NBDS (0xF<<0)
+#define ETH_TX_START_BD_HDR_NBDS (0x7<<0)
#define ETH_TX_START_BD_HDR_NBDS_SHIFT 0
+#define ETH_TX_START_BD_NO_ADDED_TAGS (0x1<<3)
+#define ETH_TX_START_BD_NO_ADDED_TAGS_SHIFT 3
#define ETH_TX_START_BD_FORCE_VLAN_MODE (0x1<<4)
#define ETH_TX_START_BD_FORCE_VLAN_MODE_SHIFT 4
#define ETH_TX_START_BD_PARSE_NBDS (0x3<<5)
@@ -4387,8 +4426,8 @@ struct eth_tx_parse_2nd_bd {
__le16 global_data;
#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W (0xF<<0)
#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W_SHIFT 0
-#define ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER (0x1<<4)
-#define ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER_SHIFT 4
+#define ETH_TX_PARSE_2ND_BD_RESERVED0 (0x1<<4)
+#define ETH_TX_PARSE_2ND_BD_RESERVED0_SHIFT 4
#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN (0x1<<5)
#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN_SHIFT 5
#define ETH_TX_PARSE_2ND_BD_NS_FLG (0x1<<6)
@@ -4397,9 +4436,14 @@ struct eth_tx_parse_2nd_bd {
#define ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST_SHIFT 7
#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W (0x1F<<8)
#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W_SHIFT 8
-#define ETH_TX_PARSE_2ND_BD_RESERVED0 (0x7<<13)
-#define ETH_TX_PARSE_2ND_BD_RESERVED0_SHIFT 13
- __le16 reserved1;
+#define ETH_TX_PARSE_2ND_BD_RESERVED1 (0x7<<13)
+#define ETH_TX_PARSE_2ND_BD_RESERVED1_SHIFT 13
+ u8 bd_type;
+#define ETH_TX_PARSE_2ND_BD_TYPE (0xF<<0)
+#define ETH_TX_PARSE_2ND_BD_TYPE_SHIFT 0
+#define ETH_TX_PARSE_2ND_BD_RESERVED2 (0xF<<4)
+#define ETH_TX_PARSE_2ND_BD_RESERVED2_SHIFT 4
+ u8 reserved3;
u8 tcp_flags;
#define ETH_TX_PARSE_2ND_BD_FIN_FLG (0x1<<0)
#define ETH_TX_PARSE_2ND_BD_FIN_FLG_SHIFT 0
@@ -4417,7 +4461,7 @@ struct eth_tx_parse_2nd_bd {
#define ETH_TX_PARSE_2ND_BD_ECE_FLG_SHIFT 6
#define ETH_TX_PARSE_2ND_BD_CWR_FLG (0x1<<7)
#define ETH_TX_PARSE_2ND_BD_CWR_FLG_SHIFT 7
- u8 reserved2;
+ u8 reserved4;
u8 tunnel_udp_hdr_start_w;
u8 fw_ip_hdr_to_payload_w;
__le16 fw_ip_csum_wo_len_flags_frag;
@@ -5205,10 +5249,18 @@ struct function_start_data {
u8 path_id;
u8 network_cos_mode;
u8 dmae_cmd_id;
- u8 gre_tunnel_mode;
- u8 gre_tunnel_rss;
- u8 nvgre_clss_en;
- __le16 reserved1[2];
+ u8 tunnel_mode;
+ u8 gre_tunnel_type;
+ u8 tunn_clss_en;
+ u8 inner_gre_rss_en;
+ u8 sd_accept_mf_clss_fail;
+ __le16 vxlan_dst_port;
+ __le16 sd_accept_mf_clss_fail_ethtype;
+ __le16 sd_vlan_eth_type;
+ u8 sd_vlan_force_pri_flg;
+ u8 sd_vlan_force_pri_val;
+ u8 sd_accept_mf_clss_fail_match_ethtype;
+ u8 no_added_tags;
};
struct function_update_data {
@@ -5225,12 +5277,20 @@ struct function_update_data {
u8 tx_switch_suspend_change_flg;
u8 tx_switch_suspend;
u8 echo;
+ u8 update_tunn_cfg_flg;
+ u8 tunnel_mode;
+ u8 gre_tunnel_type;
+ u8 tunn_clss_en;
+ u8 inner_gre_rss_en;
+ __le16 vxlan_dst_port;
+ u8 sd_vlan_force_pri_change_flg;
+ u8 sd_vlan_force_pri_flg;
+ u8 sd_vlan_force_pri_val;
+ u8 sd_vlan_tag_change_flg;
+ u8 sd_vlan_eth_type_change_flg;
u8 reserved1;
- u8 update_gre_cfg_flg;
- u8 gre_tunnel_mode;
- u8 gre_tunnel_rss;
- u8 nvgre_clss_en;
- u32 reserved3;
+ __le16 sd_vlan_tag;
+ __le16 sd_vlan_eth_type;
};
/*
@@ -5259,17 +5319,9 @@ struct fw_version {
#define __FW_VERSION_RESERVED_SHIFT 4
};
-/* GRE RSS Mode */
-enum gre_rss_mode {
- GRE_OUTER_HEADERS_RSS,
- GRE_INNER_HEADERS_RSS,
- NVGRE_KEY_ENTROPY_RSS,
- MAX_GRE_RSS_MODE
-};
/* GRE Tunnel Mode */
enum gre_tunnel_type {
- NO_GRE_TUNNEL,
NVGRE_TUNNEL,
L2GRE_TUNNEL,
IPGRE_TUNNEL,
@@ -5442,6 +5494,7 @@ enum ip_ver {
* Malicious VF error ID
*/
enum malicious_vf_error_id {
+ MALICIOUS_VF_NO_ERROR,
VF_PF_CHANNEL_NOT_READY,
ETH_ILLEGAL_BD_LENGTHS,
ETH_PACKET_TOO_SHORT,
@@ -5602,6 +5655,16 @@ struct protocol_common_spe {
union protocol_common_specific_data data;
};
+/* The data for the Set Timesync Ramrod */
+struct set_timesync_ramrod_data {
+ u8 drift_adjust_cmd;
+ u8 offset_cmd;
+ u8 add_sub_drift_adjust_value;
+ u8 drift_adjust_value;
+ u32 drift_adjust_period;
+ struct regpair offset_delta;
+};
+
/*
* The send queue element
*/
@@ -5724,10 +5787,38 @@ struct tstorm_vf_zone_data {
struct regpair reserved;
};
+/* Add or Subtract Value for Set Timesync Ramrod */
+enum ts_add_sub_value {
+ TS_SUB_VALUE,
+ TS_ADD_VALUE,
+ MAX_TS_ADD_SUB_VALUE
+};
-/*
- * zone A per-queue data
- */
+/* Drift-Adjust Commands for Set Timesync Ramrod */
+enum ts_drift_adjust_cmd {
+ TS_DRIFT_ADJUST_KEEP,
+ TS_DRIFT_ADJUST_SET,
+ TS_DRIFT_ADJUST_RESET,
+ MAX_TS_DRIFT_ADJUST_CMD
+};
+
+/* Offset Commands for Set Timesync Ramrod */
+enum ts_offset_cmd {
+ TS_OFFSET_KEEP,
+ TS_OFFSET_INC,
+ TS_OFFSET_DEC,
+ MAX_TS_OFFSET_CMD
+};
+
+/* Tunnel Mode */
+enum tunnel_mode {
+ TUNN_MODE_NONE,
+ TUNN_MODE_VXLAN,
+ TUNN_MODE_GRE,
+ MAX_TUNNEL_MODE
+};
+
+ /* zone A per-queue data */
struct ustorm_queue_zone_data {
struct ustorm_eth_rx_producers eth_rx_producers;
struct regpair reserved[3];
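One detail worth noting in the HSI changes above: eth_rss_update_ramrod_data grows its capabilities field from u8 to __le16 to make room for the VXLAN, NVGRE key-entropy and GRE inner-header bits, so callers now need an explicit endian conversion when storing the flags. A sketch of setting one of the new bits (assumed usage, not lifted from this patch):

	u16 caps = 0;

	caps |= ETH_RSS_UPDATE_RAMROD_DATA_GRE_INNER_HDRS_CAPABILITY;
	data->capabilities = cpu_to_le16(caps);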
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index d1c093d..74fbf9e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -41,6 +41,7 @@
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/if_vlan.h>
+#include <linux/crash_dump.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <net/tcp.h>
@@ -63,7 +64,6 @@
#include "bnx2x_vfpf.h"
#include "bnx2x_dcb.h"
#include "bnx2x_sp.h"
-
#include <linux/firmware.h>
#include "bnx2x_fw_file_hdr.h"
/* FW files */
@@ -290,6 +290,8 @@ static int bnx2x_set_storm_rx_mode(struct bnx2x *bp);
* General service functions
****************************************************************************/
+static int bnx2x_hwtstamp_ioctl(struct bnx2x *bp, struct ifreq *ifr);
+
static void __storm_memset_dma_mapping(struct bnx2x *bp,
u32 addr, dma_addr_t mapping)
{
@@ -523,6 +525,7 @@ int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
* as long as this code is called both from syscall context and
* from ndo_set_rx_mode() flow that may be called from BH.
*/
+
spin_lock_bh(&bp->dmae_lock);
/* reset completion */
@@ -551,7 +554,9 @@ int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
}
unlock:
+
spin_unlock_bh(&bp->dmae_lock);
+
return rc;
}
@@ -646,119 +651,98 @@ static void bnx2x_write_dmae_phys_len(struct bnx2x *bp, dma_addr_t phys_addr,
bnx2x_write_dmae(bp, phys_addr + offset, addr + offset, len);
}
+enum storms {
+ XSTORM,
+ TSTORM,
+ CSTORM,
+ USTORM,
+ MAX_STORMS
+};
+
+#define STORMS_NUM 4
+#define REGS_IN_ENTRY 4
+
+static inline int bnx2x_get_assert_list_entry(struct bnx2x *bp,
+ enum storms storm,
+ int entry)
+{
+ switch (storm) {
+ case XSTORM:
+ return XSTORM_ASSERT_LIST_OFFSET(entry);
+ case TSTORM:
+ return TSTORM_ASSERT_LIST_OFFSET(entry);
+ case CSTORM:
+ return CSTORM_ASSERT_LIST_OFFSET(entry);
+ case USTORM:
+ return USTORM_ASSERT_LIST_OFFSET(entry);
+ case MAX_STORMS:
+ default:
+ BNX2X_ERR("unknown storm\n");
+ }
+ return -EINVAL;
+}
+
static int bnx2x_mc_assert(struct bnx2x *bp)
{
char last_idx;
- int i, rc = 0;
- u32 row0, row1, row2, row3;
-
- /* XSTORM */
- last_idx = REG_RD8(bp, BAR_XSTRORM_INTMEM +
- XSTORM_ASSERT_LIST_INDEX_OFFSET);
- if (last_idx)
- BNX2X_ERR("XSTORM_ASSERT_LIST_INDEX 0x%x\n", last_idx);
-
- /* print the asserts */
- for (i = 0; i < STROM_ASSERT_ARRAY_SIZE; i++) {
-
- row0 = REG_RD(bp, BAR_XSTRORM_INTMEM +
- XSTORM_ASSERT_LIST_OFFSET(i));
- row1 = REG_RD(bp, BAR_XSTRORM_INTMEM +
- XSTORM_ASSERT_LIST_OFFSET(i) + 4);
- row2 = REG_RD(bp, BAR_XSTRORM_INTMEM +
- XSTORM_ASSERT_LIST_OFFSET(i) + 8);
- row3 = REG_RD(bp, BAR_XSTRORM_INTMEM +
- XSTORM_ASSERT_LIST_OFFSET(i) + 12);
-
- if (row0 != COMMON_ASM_INVALID_ASSERT_OPCODE) {
- BNX2X_ERR("XSTORM_ASSERT_INDEX 0x%x = 0x%08x 0x%08x 0x%08x 0x%08x\n",
- i, row3, row2, row1, row0);
- rc++;
- } else {
- break;
- }
- }
-
- /* TSTORM */
- last_idx = REG_RD8(bp, BAR_TSTRORM_INTMEM +
- TSTORM_ASSERT_LIST_INDEX_OFFSET);
- if (last_idx)
- BNX2X_ERR("TSTORM_ASSERT_LIST_INDEX 0x%x\n", last_idx);
-
- /* print the asserts */
- for (i = 0; i < STROM_ASSERT_ARRAY_SIZE; i++) {
-
- row0 = REG_RD(bp, BAR_TSTRORM_INTMEM +
- TSTORM_ASSERT_LIST_OFFSET(i));
- row1 = REG_RD(bp, BAR_TSTRORM_INTMEM +
- TSTORM_ASSERT_LIST_OFFSET(i) + 4);
- row2 = REG_RD(bp, BAR_TSTRORM_INTMEM +
- TSTORM_ASSERT_LIST_OFFSET(i) + 8);
- row3 = REG_RD(bp, BAR_TSTRORM_INTMEM +
- TSTORM_ASSERT_LIST_OFFSET(i) + 12);
-
- if (row0 != COMMON_ASM_INVALID_ASSERT_OPCODE) {
- BNX2X_ERR("TSTORM_ASSERT_INDEX 0x%x = 0x%08x 0x%08x 0x%08x 0x%08x\n",
- i, row3, row2, row1, row0);
- rc++;
- } else {
- break;
- }
- }
+ int i, j, rc = 0;
+ enum storms storm;
+ u32 regs[REGS_IN_ENTRY];
+ u32 bar_storm_intmem[STORMS_NUM] = {
+ BAR_XSTRORM_INTMEM,
+ BAR_TSTRORM_INTMEM,
+ BAR_CSTRORM_INTMEM,
+ BAR_USTRORM_INTMEM
+ };
+ u32 storm_assert_list_index[STORMS_NUM] = {
+ XSTORM_ASSERT_LIST_INDEX_OFFSET,
+ TSTORM_ASSERT_LIST_INDEX_OFFSET,
+ CSTORM_ASSERT_LIST_INDEX_OFFSET,
+ USTORM_ASSERT_LIST_INDEX_OFFSET
+ };
+ char *storms_string[STORMS_NUM] = {
+ "XSTORM",
+ "TSTORM",
+ "CSTORM",
+ "USTORM"
+ };
- /* CSTORM */
- last_idx = REG_RD8(bp, BAR_CSTRORM_INTMEM +
- CSTORM_ASSERT_LIST_INDEX_OFFSET);
- if (last_idx)
- BNX2X_ERR("CSTORM_ASSERT_LIST_INDEX 0x%x\n", last_idx);
-
- /* print the asserts */
- for (i = 0; i < STROM_ASSERT_ARRAY_SIZE; i++) {
-
- row0 = REG_RD(bp, BAR_CSTRORM_INTMEM +
- CSTORM_ASSERT_LIST_OFFSET(i));
- row1 = REG_RD(bp, BAR_CSTRORM_INTMEM +
- CSTORM_ASSERT_LIST_OFFSET(i) + 4);
- row2 = REG_RD(bp, BAR_CSTRORM_INTMEM +
- CSTORM_ASSERT_LIST_OFFSET(i) + 8);
- row3 = REG_RD(bp, BAR_CSTRORM_INTMEM +
- CSTORM_ASSERT_LIST_OFFSET(i) + 12);
-
- if (row0 != COMMON_ASM_INVALID_ASSERT_OPCODE) {
- BNX2X_ERR("CSTORM_ASSERT_INDEX 0x%x = 0x%08x 0x%08x 0x%08x 0x%08x\n",
- i, row3, row2, row1, row0);
- rc++;
- } else {
- break;
+ for (storm = XSTORM; storm < MAX_STORMS; storm++) {
+ last_idx = REG_RD8(bp, bar_storm_intmem[storm] +
+ storm_assert_list_index[storm]);
+ if (last_idx)
+ BNX2X_ERR("%s_ASSERT_LIST_INDEX 0x%x\n",
+ storms_string[storm], last_idx);
+
+ /* print the asserts */
+ for (i = 0; i < STROM_ASSERT_ARRAY_SIZE; i++) {
+ /* read a single assert entry */
+ for (j = 0; j < REGS_IN_ENTRY; j++)
+ regs[j] = REG_RD(bp, bar_storm_intmem[storm] +
+ bnx2x_get_assert_list_entry(bp,
+ storm,
+ i) +
+ sizeof(u32) * j);
+
+ /* log entry if it contains a valid assert */
+ if (regs[0] != COMMON_ASM_INVALID_ASSERT_OPCODE) {
+ BNX2X_ERR("%s_ASSERT_INDEX 0x%x = 0x%08x 0x%08x 0x%08x 0x%08x\n",
+ storms_string[storm], i, regs[3],
+ regs[2], regs[1], regs[0]);
+ rc++;
+ } else {
+ break;
+ }
}
}
- /* USTORM */
- last_idx = REG_RD8(bp, BAR_USTRORM_INTMEM +
- USTORM_ASSERT_LIST_INDEX_OFFSET);
- if (last_idx)
- BNX2X_ERR("USTORM_ASSERT_LIST_INDEX 0x%x\n", last_idx);
-
- /* print the asserts */
- for (i = 0; i < STROM_ASSERT_ARRAY_SIZE; i++) {
-
- row0 = REG_RD(bp, BAR_USTRORM_INTMEM +
- USTORM_ASSERT_LIST_OFFSET(i));
- row1 = REG_RD(bp, BAR_USTRORM_INTMEM +
- USTORM_ASSERT_LIST_OFFSET(i) + 4);
- row2 = REG_RD(bp, BAR_USTRORM_INTMEM +
- USTORM_ASSERT_LIST_OFFSET(i) + 8);
- row3 = REG_RD(bp, BAR_USTRORM_INTMEM +
- USTORM_ASSERT_LIST_OFFSET(i) + 12);
-
- if (row0 != COMMON_ASM_INVALID_ASSERT_OPCODE) {
- BNX2X_ERR("USTORM_ASSERT_INDEX 0x%x = 0x%08x 0x%08x 0x%08x 0x%08x\n",
- i, row3, row2, row1, row0);
- rc++;
- } else {
- break;
- }
- }
+ BNX2X_ERR("Chip Revision: %s, FW Version: %d_%d_%d\n",
+ CHIP_IS_E1(bp) ? "everest1" :
+ CHIP_IS_E1H(bp) ? "everest1h" :
+ CHIP_IS_E2(bp) ? "everest2" : "everest3",
+ BCM_5710_FW_MAJOR_VERSION,
+ BCM_5710_FW_MINOR_VERSION,
+ BCM_5710_FW_REVISION_VERSION);
return rc;
}
@@ -983,6 +967,12 @@ void bnx2x_panic_dump(struct bnx2x *bp, bool disable_int)
u32 *sb_data_p;
struct bnx2x_fp_txdata txdata;
+ if (!bp->fp)
+ break;
+
+ if (!fp->rx_cons_sb)
+ continue;
+
/* Rx */
BNX2X_ERR("fp%d: rx_bd_prod(0x%x) rx_bd_cons(0x%x) rx_comp_prod(0x%x) rx_comp_cons(0x%x) *rx_cons_sb(0x%x)\n",
i, fp->rx_bd_prod, fp->rx_bd_cons,
@@ -995,7 +985,14 @@ void bnx2x_panic_dump(struct bnx2x *bp, bool disable_int)
/* Tx */
for_each_cos_in_tx_queue(fp, cos)
{
+ if (!fp->txdata_ptr[cos])
+ break;
+
txdata = *fp->txdata_ptr[cos];
+
+ if (!txdata.tx_cons_sb)
+ continue;
+
BNX2X_ERR("fp%d: tx_pkt_prod(0x%x) tx_pkt_cons(0x%x) tx_bd_prod(0x%x) tx_bd_cons(0x%x) *tx_cons_sb(0x%x)\n",
i, txdata.tx_pkt_prod,
txdata.tx_pkt_cons, txdata.tx_bd_prod,
@@ -1097,6 +1094,12 @@ void bnx2x_panic_dump(struct bnx2x *bp, bool disable_int)
for_each_valid_rx_queue(bp, i) {
struct bnx2x_fastpath *fp = &bp->fp[i];
+ if (!bp->fp)
+ break;
+
+ if (!fp->rx_cons_sb)
+ continue;
+
start = RX_BD(le16_to_cpu(*fp->rx_cons_sb) - 10);
end = RX_BD(le16_to_cpu(*fp->rx_cons_sb) + 503);
for (j = start; j != end; j = RX_BD(j + 1)) {
@@ -1130,9 +1133,19 @@ void bnx2x_panic_dump(struct bnx2x *bp, bool disable_int)
/* Tx */
for_each_valid_tx_queue(bp, i) {
struct bnx2x_fastpath *fp = &bp->fp[i];
+
+ if (!bp->fp)
+ break;
+
for_each_cos_in_tx_queue(fp, cos) {
struct bnx2x_fp_txdata *txdata = fp->txdata_ptr[cos];
+ if (!fp->txdata_ptr[cos])
+ break;
+
+ if (!txdata->tx_cons_sb)
+ continue;
+
start = TX_BD(le16_to_cpu(*txdata->tx_cons_sb) - 10);
end = TX_BD(le16_to_cpu(*txdata->tx_cons_sb) + 245);
for (j = start; j != end; j = TX_BD(j + 1)) {
@@ -2071,8 +2084,6 @@ int bnx2x_get_gpio(struct bnx2x *bp, int gpio_num, u8 port)
else
value = 0;
- DP(NETIF_MSG_LINK, "pin %d value 0x%x\n", gpio_num, value);
-
return value;
}
@@ -2894,6 +2905,57 @@ static void bnx2x_handle_afex_cmd(struct bnx2x *bp, u32 cmd)
}
}
+static void bnx2x_handle_update_svid_cmd(struct bnx2x *bp)
+{
+ struct bnx2x_func_switch_update_params *switch_update_params;
+ struct bnx2x_func_state_params func_params;
+
+ memset(&func_params, 0, sizeof(struct bnx2x_func_state_params));
+ switch_update_params = &func_params.params.switch_update;
+ func_params.f_obj = &bp->func_obj;
+ func_params.cmd = BNX2X_F_CMD_SWITCH_UPDATE;
+
+ if (IS_MF_UFP(bp)) {
+ int func = BP_ABS_FUNC(bp);
+ u32 val;
+
+ /* Re-learn the S-tag from shmem */
+ val = MF_CFG_RD(bp, func_mf_config[func].e1hov_tag) &
+ FUNC_MF_CFG_E1HOV_TAG_MASK;
+ if (val != FUNC_MF_CFG_E1HOV_TAG_DEFAULT) {
+ bp->mf_ov = val;
+ } else {
+ BNX2X_ERR("Got an SVID event, but no tag is configured in shmem\n");
+ goto fail;
+ }
+
+ /* Configure new S-tag in LLH */
+ REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + BP_PORT(bp) * 8,
+ bp->mf_ov);
+
+ /* Send Ramrod to update FW of change */
+ __set_bit(BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ &switch_update_params->changes);
+ switch_update_params->vlan = bp->mf_ov;
+
+ if (bnx2x_func_state_change(bp, &func_params) < 0) {
+ BNX2X_ERR("Failed to configure FW of S-tag Change to %02x\n",
+ bp->mf_ov);
+ goto fail;
+ }
+
+ DP(BNX2X_MSG_MCP, "Configured S-tag %02x\n", bp->mf_ov);
+
+ bnx2x_fw_command(bp, DRV_MSG_CODE_OEM_UPDATE_SVID_OK, 0);
+
+ return;
+ }
+
+ /* not supported by SW yet */
+fail:
+ bnx2x_fw_command(bp, DRV_MSG_CODE_OEM_UPDATE_SVID_FAILURE, 0);
+}
+
static void bnx2x_pmf_update(struct bnx2x *bp)
{
int port = BP_PORT(bp);
@@ -3286,7 +3348,8 @@ static void bnx2x_e1h_enable(struct bnx2x *bp)
{
int port = BP_PORT(bp);
- REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port*8, 1);
+ if (!(IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp)))
+ REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port * 8, 1);
/* Tx queue should be only re-enabled */
netif_tx_wake_all_queues(bp->dev);
@@ -3641,14 +3704,30 @@ out:
ethver, iscsiver, fcoever);
}
-static void bnx2x_dcc_event(struct bnx2x *bp, u32 dcc_event)
+static void bnx2x_oem_event(struct bnx2x *bp, u32 event)
{
- DP(BNX2X_MSG_MCP, "dcc_event 0x%x\n", dcc_event);
+ u32 cmd_ok, cmd_fail;
- if (dcc_event & DRV_STATUS_DCC_DISABLE_ENABLE_PF) {
+ /* sanity */
+ if (event & DRV_STATUS_DCC_EVENT_MASK &&
+ event & DRV_STATUS_OEM_EVENT_MASK) {
+ BNX2X_ERR("Received simultaneous events %08x\n", event);
+ return;
+ }
- /*
- * This is the only place besides the function initialization
+ if (event & DRV_STATUS_DCC_EVENT_MASK) {
+ cmd_fail = DRV_MSG_CODE_DCC_FAILURE;
+ cmd_ok = DRV_MSG_CODE_DCC_OK;
+ } else /* if (event & DRV_STATUS_OEM_EVENT_MASK) */ {
+ cmd_fail = DRV_MSG_CODE_OEM_FAILURE;
+ cmd_ok = DRV_MSG_CODE_OEM_OK;
+ }
+
+ DP(BNX2X_MSG_MCP, "oem_event 0x%x\n", event);
+
+ if (event & (DRV_STATUS_DCC_DISABLE_ENABLE_PF |
+ DRV_STATUS_OEM_DISABLE_ENABLE_PF)) {
+ /* This is the only place besides the function initialization
* where the bp->flags can change so it is done without any
* locks
*/
@@ -3663,18 +3742,22 @@ static void bnx2x_dcc_event(struct bnx2x *bp, u32 dcc_event)
bnx2x_e1h_enable(bp);
}
- dcc_event &= ~DRV_STATUS_DCC_DISABLE_ENABLE_PF;
+ event &= ~(DRV_STATUS_DCC_DISABLE_ENABLE_PF |
+ DRV_STATUS_OEM_DISABLE_ENABLE_PF);
}
- if (dcc_event & DRV_STATUS_DCC_BANDWIDTH_ALLOCATION) {
+
+ if (event & (DRV_STATUS_DCC_BANDWIDTH_ALLOCATION |
+ DRV_STATUS_OEM_BANDWIDTH_ALLOCATION)) {
bnx2x_config_mf_bw(bp);
- dcc_event &= ~DRV_STATUS_DCC_BANDWIDTH_ALLOCATION;
+ event &= ~(DRV_STATUS_DCC_BANDWIDTH_ALLOCATION |
+ DRV_STATUS_OEM_BANDWIDTH_ALLOCATION);
}
/* Report results to MCP */
- if (dcc_event)
- bnx2x_fw_command(bp, DRV_MSG_CODE_DCC_FAILURE, 0);
+ if (event)
+ bnx2x_fw_command(bp, cmd_fail, 0);
else
- bnx2x_fw_command(bp, DRV_MSG_CODE_DCC_OK, 0);
+ bnx2x_fw_command(bp, cmd_ok, 0);
}
/* must be called under the spq lock */
@@ -4156,9 +4239,12 @@ static void bnx2x_attn_int_deasserted3(struct bnx2x *bp, u32 attn)
func_mf_config[BP_ABS_FUNC(bp)].config);
val = SHMEM_RD(bp,
func_mb[BP_FW_MB_IDX(bp)].drv_status);
- if (val & DRV_STATUS_DCC_EVENT_MASK)
- bnx2x_dcc_event(bp,
- (val & DRV_STATUS_DCC_EVENT_MASK));
+
+ if (val & (DRV_STATUS_DCC_EVENT_MASK |
+ DRV_STATUS_OEM_EVENT_MASK))
+ bnx2x_oem_event(bp,
+ (val & (DRV_STATUS_DCC_EVENT_MASK |
+ DRV_STATUS_OEM_EVENT_MASK)));
if (val & DRV_STATUS_SET_MF_BW)
bnx2x_set_mf_bw(bp);
@@ -4184,6 +4270,10 @@ static void bnx2x_attn_int_deasserted3(struct bnx2x *bp, u32 attn)
val & DRV_STATUS_AFEX_EVENT_MASK);
if (val & DRV_STATUS_EEE_NEGOTIATION_RESULTS)
bnx2x_handle_eee_event(bp);
+
+ if (val & DRV_STATUS_OEM_UPDATE_SVID)
+ bnx2x_handle_update_svid_cmd(bp);
+
if (bp->link_vars.periodic_flags &
PERIODIC_FLAGS_LINK_EVENT) {
/* sync with link */
@@ -4678,7 +4768,7 @@ static bool bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig,
for (i = 0; sig; i++) {
cur_bit = (0x1UL << i);
if (sig & cur_bit) {
- res |= true; /* Each bit is real error! */
+ res = true; /* Each bit is real error! */
if (print) {
switch (cur_bit) {
case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR:
@@ -4757,21 +4847,21 @@ static bool bnx2x_check_blocks_with_parity3(struct bnx2x *bp, u32 sig,
_print_next_block((*par_num)++,
"MCP ROM");
*global = true;
- res |= true;
+ res = true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY:
if (print)
_print_next_block((*par_num)++,
"MCP UMP RX");
*global = true;
- res |= true;
+ res = true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY:
if (print)
_print_next_block((*par_num)++,
"MCP UMP TX");
*global = true;
- res |= true;
+ res = true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY:
if (print)
@@ -4803,7 +4893,7 @@ static bool bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig,
for (i = 0; sig; i++) {
cur_bit = (0x1UL << i);
if (sig & cur_bit) {
- res |= true; /* Each bit is real error! */
+ res = true; /* Each bit is real error! */
if (print) {
switch (cur_bit) {
case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR:
@@ -5452,6 +5542,14 @@ static void bnx2x_eq_int(struct bnx2x *bp)
break;
goto next_spqe;
+
+ case EVENT_RING_OPCODE_SET_TIMESYNC:
+ DP(BNX2X_MSG_SP | BNX2X_MSG_PTP,
+ "got set_timesync ramrod completion\n");
+ if (f_obj->complete_cmd(bp, f_obj,
+ BNX2X_F_CMD_SET_TIMESYNC))
+ break;
+ goto next_spqe;
}
switch (opcode | bp->state) {
@@ -6102,7 +6200,7 @@ static int bnx2x_fill_accept_flags(struct bnx2x *bp, u32 rx_mode,
}
/* Set ACCEPT_ANY_VLAN as we do not enable filtering by VLAN */
- if (bp->rx_mode != BNX2X_RX_MODE_NONE) {
+ if (rx_mode != BNX2X_RX_MODE_NONE) {
__set_bit(BNX2X_ACCEPT_ANY_VLAN, rx_accept_flags);
__set_bit(BNX2X_ACCEPT_ANY_VLAN, tx_accept_flags);
}
@@ -7662,7 +7760,11 @@ static inline int bnx2x_func_switch_update(struct bnx2x *bp, int suspend)
func_params.cmd = BNX2X_F_CMD_SWITCH_UPDATE;
/* Function parameters */
- switch_update_params->suspend = suspend;
+ __set_bit(BNX2X_F_UPDATE_TX_SWITCH_SUSPEND_CHNG,
+ &switch_update_params->changes);
+ if (suspend)
+ __set_bit(BNX2X_F_UPDATE_TX_SWITCH_SUSPEND,
+ &switch_update_params->changes);
rc = bnx2x_func_state_change(bp, &func_params);
@@ -7907,8 +8009,11 @@ static int bnx2x_init_hw_func(struct bnx2x *bp)
REG_WR(bp, CFC_REG_WEAK_ENABLE_PF, 1);
if (IS_MF(bp)) {
- REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port*8, 1);
- REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + port*8, bp->mf_ov);
+ if (!(IS_MF_UFP(bp) && BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp))) {
+ REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port * 8, 1);
+ REG_WR(bp, NIG_REG_LLH0_FUNC_VLAN_ID + port * 8,
+ bp->mf_ov);
+ }
}
bnx2x_init_block(bp, BLOCK_MISC_AEU, init_phase);
@@ -8300,13 +8405,6 @@ int bnx2x_del_all_macs(struct bnx2x *bp,
int bnx2x_set_eth_mac(struct bnx2x *bp, bool set)
{
- if (is_zero_ether_addr(bp->dev->dev_addr) &&
- (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) {
- DP(NETIF_MSG_IFUP | NETIF_MSG_IFDOWN,
- "Ignoring Zero MAC for STORAGE SD mode\n");
- return 0;
- }
-
if (IS_PF(bp)) {
unsigned long ramrod_flags = 0;
@@ -9025,7 +9123,7 @@ static int bnx2x_func_wait_started(struct bnx2x *bp)
struct bnx2x_func_state_params func_params = {NULL};
DP(NETIF_MSG_IFDOWN,
- "Hmmm... Unexpected function state! Forcing STARTED-->TX_ST0PPED-->STARTED\n");
+ "Hmmm... Unexpected function state! Forcing STARTED-->TX_STOPPED-->STARTED\n");
func_params.f_obj = &bp->func_obj;
__set_bit(RAMROD_DRV_CLR_ONLY,
@@ -9044,6 +9142,48 @@ static int bnx2x_func_wait_started(struct bnx2x *bp)
return 0;
}
+static void bnx2x_disable_ptp(struct bnx2x *bp)
+{
+ int port = BP_PORT(bp);
+
+ /* Disable sending PTP packets to host */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_TO_HOST :
+ NIG_REG_P0_LLH_PTP_TO_HOST, 0x0);
+
+ /* Reset PTP event detection rules */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x7FF);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3FFF);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_PARAM_MASK :
+ NIG_REG_P0_TLLH_PTP_PARAM_MASK, 0x7FF);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_RULE_MASK :
+ NIG_REG_P0_TLLH_PTP_RULE_MASK, 0x3FFF);
+
+ /* Disable the PTP feature */
+ REG_WR(bp, port ? NIG_REG_P1_PTP_EN :
+ NIG_REG_P0_PTP_EN, 0x0);
+}
+
+/* Called during unload, to stop PTP-related stuff */
+void bnx2x_stop_ptp(struct bnx2x *bp)
+{
+ /* Cancel PTP work queue. Should be done after the Tx queues are
+ * drained to prevent additional scheduling.
+ */
+ cancel_work_sync(&bp->ptp_task);
+
+ if (bp->ptp_tx_skb) {
+ dev_kfree_skb_any(bp->ptp_tx_skb);
+ bp->ptp_tx_skb = NULL;
+ }
+
+ /* Disable PTP in HW */
+ bnx2x_disable_ptp(bp);
+
+ DP(BNX2X_MSG_PTP, "PTP stop ended successfully\n");
+}
+
void bnx2x_chip_cleanup(struct bnx2x *bp, int unload_mode, bool keep_link)
{
int port = BP_PORT(bp);
@@ -9162,6 +9302,13 @@ unload_error:
#endif
}
+ /* stop_ptp should be after the Tx queues are drained to prevent
+ * scheduling to the cancelled PTP work queue. It should also be after
* function stop ramrod is sent, since the FW accesses PTP registers
* as part of this ramrod.
+ */
+ bnx2x_stop_ptp(bp);
+
/* Disable HW interrupts, NAPI */
bnx2x_netif_stop(bp, 1);
/* Delete all NAPI objects */
@@ -11283,15 +11430,14 @@ static void bnx2x_get_fcoe_info(struct bnx2x *bp)
dev_info.port_hw_config[port].
fcoe_wwn_node_name_lower);
} else if (!IS_MF_SD(bp)) {
- /*
- * Read the WWN info only if the FCoE feature is enabled for
+ /* Read the WWN info only if the FCoE feature is enabled for
* this function.
*/
- if (BNX2X_MF_EXT_PROTOCOL_FCOE(bp) && !CHIP_IS_E1x(bp))
+ if (BNX2X_HAS_MF_EXT_PROTOCOL_FCOE(bp))
+ bnx2x_get_ext_wwn_info(bp, func);
+ } else {
+ if (BNX2X_IS_MF_SD_PROTOCOL_FCOE(bp) && !CHIP_IS_E1x(bp))
bnx2x_get_ext_wwn_info(bp, func);
-
- } else if (IS_MF_FCOE_SD(bp) && !CHIP_IS_E1x(bp)) {
- bnx2x_get_ext_wwn_info(bp, func);
}
BNX2X_DEV_INFO("max_fcoe_conn 0x%x\n", bp->cnic_eth_dev.max_fcoe_conn);
@@ -11329,7 +11475,7 @@ static void bnx2x_get_cnic_mac_hwinfo(struct bnx2x *bp)
* In non SD mode features configuration comes from struct
* func_ext_config.
*/
- if (!IS_MF_SD(bp) && !CHIP_IS_E1x(bp)) {
+ if (!IS_MF_SD(bp)) {
u32 cfg = MF_CFG_RD(bp, func_ext_config[func].func_cfg);
if (cfg & MACP_FUNC_CFG_FLAGS_ISCSI_OFFLOAD) {
val2 = MF_CFG_RD(bp, func_ext_config[func].
@@ -11448,7 +11594,7 @@ static void bnx2x_get_mac_hwinfo(struct bnx2x *bp)
memcpy(bp->link_params.mac_addr, bp->dev->dev_addr, ETH_ALEN);
- if (!bnx2x_is_valid_ether_addr(bp, bp->dev->dev_addr))
+ if (!is_valid_ether_addr(bp->dev->dev_addr))
dev_err(&bp->pdev->dev,
"bad Ethernet MAC address configuration: %pM\n"
"change it manually before bringing up the appropriate network interface\n",
@@ -11478,11 +11624,27 @@ static bool bnx2x_get_dropless_info(struct bnx2x *bp)
return cfg;
}
+static void validate_set_si_mode(struct bnx2x *bp)
+{
+ u8 func = BP_ABS_FUNC(bp);
+ u32 val;
+
+ val = MF_CFG_RD(bp, func_mf_config[func].mac_upper);
+
+ /* check for legal mac (upper bytes) */
+ if (val != 0xffff) {
+ bp->mf_mode = MULTI_FUNCTION_SI;
+ bp->mf_config[BP_VN(bp)] =
+ MF_CFG_RD(bp, func_mf_config[func].config);
+ } else
+ BNX2X_DEV_INFO("illegal MAC address for SI\n");
+}
+
static int bnx2x_get_hwinfo(struct bnx2x *bp)
{
int /*abs*/func = BP_ABS_FUNC(bp);
int vn;
- u32 val = 0;
+ u32 val = 0, val2 = 0;
int rc = 0;
bnx2x_get_common_hwinfo(bp);
@@ -11562,6 +11724,7 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
bp->mf_ov = 0;
bp->mf_mode = 0;
+ bp->mf_sub_mode = 0;
vn = BP_VN(bp);
if (!CHIP_IS_E1(bp) && !BP_NOMCP(bp)) {
@@ -11591,15 +11754,7 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
switch (val) {
case SHARED_FEAT_CFG_FORCE_SF_MODE_SWITCH_INDEPT:
- val = MF_CFG_RD(bp, func_mf_config[func].
- mac_upper);
- /* check for legal mac (upper bytes)*/
- if (val != 0xffff) {
- bp->mf_mode = MULTI_FUNCTION_SI;
- bp->mf_config[vn] = MF_CFG_RD(bp,
- func_mf_config[func].config);
- } else
- BNX2X_DEV_INFO("illegal MAC address for SI\n");
+ validate_set_si_mode(bp);
break;
case SHARED_FEAT_CFG_FORCE_SF_MODE_AFEX_MODE:
if ((!CHIP_IS_E1x(bp)) &&
@@ -11627,9 +11782,33 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
} else
BNX2X_DEV_INFO("illegal OV for SD\n");
break;
+ case SHARED_FEAT_CFG_FORCE_SF_MODE_UFP_MODE:
+ bp->mf_mode = MULTI_FUNCTION_SD;
+ bp->mf_sub_mode = SUB_MF_MODE_UFP;
+ bp->mf_config[vn] =
+ MF_CFG_RD(bp,
+ func_mf_config[func].config);
+ break;
case SHARED_FEAT_CFG_FORCE_SF_MODE_FORCED_SF:
bp->mf_config[vn] = 0;
break;
+ case SHARED_FEAT_CFG_FORCE_SF_MODE_EXTENDED_MODE:
+ val2 = SHMEM_RD(bp,
+ dev_info.shared_hw_config.config_3);
+ val2 &= SHARED_HW_CFG_EXTENDED_MF_MODE_MASK;
+ switch (val2) {
+ case SHARED_HW_CFG_EXTENDED_MF_MODE_NPAR1_DOT_5:
+ validate_set_si_mode(bp);
+ bp->mf_sub_mode =
+ SUB_MF_MODE_NPAR1_DOT_5;
+ break;
+ default:
+ /* Unknown configuration */
+ bp->mf_config[vn] = 0;
+ BNX2X_DEV_INFO("unknown extended MF mode 0x%x\n",
+ val2);
+ }
+ break;
default:
/* Unknown configuration: reset mf_config */
bp->mf_config[vn] = 0;
@@ -11650,6 +11829,11 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
BNX2X_DEV_INFO("MF OV for func %d is %d (0x%04x)\n",
func, bp->mf_ov, bp->mf_ov);
+ } else if (bp->mf_sub_mode == SUB_MF_MODE_UFP) {
+ dev_err(&bp->pdev->dev,
+ "Unexpected - no valid MF OV for func %d in UFP mode\n",
+ func);
+ bp->path_has_ovlan = true;
} else {
dev_err(&bp->pdev->dev,
"No valid MF OV for func %d, aborting\n",
@@ -11898,9 +12082,9 @@ static int bnx2x_init_bp(struct bnx2x *bp)
dev_err(&bp->pdev->dev, "MCP disabled, must load devices in order!\n");
bp->disable_tpa = disable_tpa;
- bp->disable_tpa |= IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp);
+ bp->disable_tpa |= !!IS_MF_STORAGE_ONLY(bp);
/* Reduce memory usage in kdump environment by disabling TPA */
- bp->disable_tpa |= reset_devices;
+ bp->disable_tpa |= is_kdump_kernel();
/* Set TPA flags */
if (bp->disable_tpa) {
@@ -11918,7 +12102,7 @@ static int bnx2x_init_bp(struct bnx2x *bp)
bp->mrrs = mrrs;
- bp->tx_ring_size = IS_MF_FCOE_AFEX(bp) ? 0 : MAX_TX_AVAIL;
+ bp->tx_ring_size = IS_MF_STORAGE_ONLY(bp) ? 0 : MAX_TX_AVAIL;
if (IS_VF(bp))
bp->rx_ring_size = MAX_RX_AVAIL;
@@ -11976,6 +12160,9 @@ static int bnx2x_init_bp(struct bnx2x *bp)
bp->dump_preset_idx = 1;
+ if (CHIP_IS_E3B0(bp))
+ bp->flags |= PTP_SUPPORTED;
+
return rc;
}
@@ -12235,7 +12422,7 @@ void bnx2x_set_rx_mode_inner(struct bnx2x *bp)
bp->rx_mode = rx_mode;
/* handle ISCSI SD mode */
- if (IS_MF_ISCSI_SD(bp))
+ if (IS_MF_ISCSI_ONLY(bp))
bp->rx_mode = BNX2X_RX_MODE_NONE;
/* Schedule the rx_mode command */
@@ -12308,13 +12495,17 @@ static int bnx2x_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
struct bnx2x *bp = netdev_priv(dev);
struct mii_ioctl_data *mdio = if_mii(ifr);
- DP(NETIF_MSG_LINK, "ioctl: phy id 0x%x, reg 0x%x, val_in 0x%x\n",
- mdio->phy_id, mdio->reg_num, mdio->val_in);
-
if (!netif_running(dev))
return -EAGAIN;
- return mdio_mii_ioctl(&bp->mdio, mdio, cmd);
+ switch (cmd) {
+ case SIOCSHWTSTAMP:
+ return bnx2x_hwtstamp_ioctl(bp, ifr);
+ default:
+ DP(NETIF_MSG_LINK, "ioctl: phy id 0x%x, reg 0x%x, val_in 0x%x\n",
+ mdio->phy_id, mdio->reg_num, mdio->val_in);
+ return mdio_mii_ioctl(&bp->mdio, mdio, cmd);
+ }
}
#ifdef CONFIG_NET_POLL_CONTROLLER
@@ -12338,7 +12529,7 @@ static int bnx2x_validate_addr(struct net_device *dev)
if (IS_VF(bp))
bnx2x_sample_bulletin(bp);
- if (!bnx2x_is_valid_ether_addr(bp, dev->dev_addr)) {
+ if (!is_valid_ether_addr(dev->dev_addr)) {
BNX2X_ERR("Non-valid Ethernet address\n");
return -EADDRNOTAVAIL;
}
@@ -12958,6 +13149,191 @@ static int set_is_vf(int chip_id)
}
}
+/* nig_tsgen registers relative address */
+#define tsgen_ctrl 0x0
+#define tsgen_freecount 0x10
+#define tsgen_synctime_t0 0x20
+#define tsgen_offset_t0 0x28
+#define tsgen_drift_t0 0x30
+#define tsgen_synctime_t1 0x58
+#define tsgen_offset_t1 0x60
+#define tsgen_drift_t1 0x68
+
+/* FW workaround for setting drift */
+static int bnx2x_send_update_drift_ramrod(struct bnx2x *bp, int drift_dir,
+ int best_val, int best_period)
+{
+ struct bnx2x_func_state_params func_params = {NULL};
+ struct bnx2x_func_set_timesync_params *set_timesync_params =
+ &func_params.params.set_timesync;
+
+ /* Prepare parameters for function state transitions */
+ __set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags);
+ __set_bit(RAMROD_RETRY, &func_params.ramrod_flags);
+
+ func_params.f_obj = &bp->func_obj;
+ func_params.cmd = BNX2X_F_CMD_SET_TIMESYNC;
+
+ /* Function parameters */
+ set_timesync_params->drift_adjust_cmd = TS_DRIFT_ADJUST_SET;
+ set_timesync_params->offset_cmd = TS_OFFSET_KEEP;
+ set_timesync_params->add_sub_drift_adjust_value =
+ drift_dir ? TS_ADD_VALUE : TS_SUB_VALUE;
+ set_timesync_params->drift_adjust_value = best_val;
+ set_timesync_params->drift_adjust_period = best_period;
+
+ return bnx2x_func_state_change(bp, &func_params);
+}
+
+static int bnx2x_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+ struct bnx2x *bp = container_of(ptp, struct bnx2x, ptp_clock_info);
+ int rc;
+ int drift_dir = 1;
+ int val, period, period1, period2, dif, dif1, dif2;
+ int best_dif = BNX2X_MAX_PHC_DRIFT, best_period = 0, best_val = 0;
+
+ DP(BNX2X_MSG_PTP, "PTP adjfreq called, ppb = %d\n", ppb);
+
+ if (!netif_running(bp->dev)) {
+ DP(BNX2X_MSG_PTP,
+ "PTP adjfreq called while the interface is down\n");
+ return -EFAULT;
+ }
+
+ if (ppb < 0) {
+ ppb = -ppb;
+ drift_dir = 0;
+ }
+
+ if (ppb == 0) {
+ best_val = 1;
+ best_period = 0x1FFFFFF;
+ } else if (ppb >= BNX2X_MAX_PHC_DRIFT) {
+ best_val = 31;
+ best_period = 1;
+ } else {
+ /* Do not allow val = 8, 16 or 24, as these values
+ * are not supported by the drift workaround.
+ */
+ for (val = 0; val <= 31; val++) {
+ if ((val & 0x7) == 0)
+ continue;
+ period1 = val * 1000000 / ppb;
+ period2 = period1 + 1;
+ if (period1 != 0)
+ dif1 = ppb - (val * 1000000 / period1);
+ else
+ dif1 = BNX2X_MAX_PHC_DRIFT;
+ if (dif1 < 0)
+ dif1 = -dif1;
+ dif2 = ppb - (val * 1000000 / period2);
+ if (dif2 < 0)
+ dif2 = -dif2;
+ dif = (dif1 < dif2) ? dif1 : dif2;
+ period = (dif1 < dif2) ? period1 : period2;
+ if (dif < best_dif) {
+ best_dif = dif;
+ best_val = val;
+ best_period = period;
+ }
+ }
+ }
+
+ rc = bnx2x_send_update_drift_ramrod(bp, drift_dir, best_val,
+ best_period);
+ if (rc) {
+ BNX2X_ERR("Failed to set drift\n");
+ return -EFAULT;
+ }
+
+ DP(BNX2X_MSG_PTP, "Configrued val = %d, period = %d\n", best_val,
+ best_period);
+
+ return 0;
+}
+
+static int bnx2x_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct bnx2x *bp = container_of(ptp, struct bnx2x, ptp_clock_info);
+ u64 now;
+
+ DP(BNX2X_MSG_PTP, "PTP adjtime called, delta = %llx\n", delta);
+
+ now = timecounter_read(&bp->timecounter);
+ now += delta;
+ /* Re-init the timecounter */
+ timecounter_init(&bp->timecounter, &bp->cyclecounter, now);
+
+ return 0;
+}
+
+static int bnx2x_ptp_gettime(struct ptp_clock_info *ptp, struct timespec *ts)
+{
+ struct bnx2x *bp = container_of(ptp, struct bnx2x, ptp_clock_info);
+ u64 ns;
+ u32 remainder;
+
+ ns = timecounter_read(&bp->timecounter);
+
+ DP(BNX2X_MSG_PTP, "PTP gettime called, ns = %llu\n", ns);
+
+ ts->tv_sec = div_u64_rem(ns, 1000000000ULL, &remainder);
+ ts->tv_nsec = remainder;
+
+ return 0;
+}
+
+static int bnx2x_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec *ts)
+{
+ struct bnx2x *bp = container_of(ptp, struct bnx2x, ptp_clock_info);
+ u64 ns;
+
+ ns = ts->tv_sec * 1000000000ULL;
+ ns += ts->tv_nsec;
+
+ DP(BNX2X_MSG_PTP, "PTP settime called, ns = %llu\n", ns);
+
+ /* Re-init the timecounter */
+ timecounter_init(&bp->timecounter, &bp->cyclecounter, ns);
+
+ return 0;
+}
+
+/* Enable (or disable) ancillary features of the phc subsystem */
+static int bnx2x_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct bnx2x *bp = container_of(ptp, struct bnx2x, ptp_clock_info);
+
+ BNX2X_ERR("PHC ancillary features are not supported\n");
+ return -ENOTSUPP;
+}
+
+void bnx2x_register_phc(struct bnx2x *bp)
+{
+ /* Fill the ptp_clock_info struct and register PTP clock */
+ bp->ptp_clock_info.owner = THIS_MODULE;
+ snprintf(bp->ptp_clock_info.name, 16, "%s", bp->dev->name);
+ bp->ptp_clock_info.max_adj = BNX2X_MAX_PHC_DRIFT; /* In PPB */
+ bp->ptp_clock_info.n_alarm = 0;
+ bp->ptp_clock_info.n_ext_ts = 0;
+ bp->ptp_clock_info.n_per_out = 0;
+ bp->ptp_clock_info.pps = 0;
+ bp->ptp_clock_info.adjfreq = bnx2x_ptp_adjfreq;
+ bp->ptp_clock_info.adjtime = bnx2x_ptp_adjtime;
+ bp->ptp_clock_info.gettime = bnx2x_ptp_gettime;
+ bp->ptp_clock_info.settime = bnx2x_ptp_settime;
+ bp->ptp_clock_info.enable = bnx2x_ptp_enable;
+
+ bp->ptp_clock = ptp_clock_register(&bp->ptp_clock_info, &bp->pdev->dev);
+ if (IS_ERR(bp->ptp_clock)) {
+ bp->ptp_clock = NULL;
+ BNX2X_ERR("PTP clock registeration failed\n");
+ }
+}
+
static int bnx2x_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
@@ -13129,6 +13505,8 @@ static int bnx2x_init_one(struct pci_dev *pdev,
"Unknown",
dev->base_addr, bp->pdev->irq, dev->dev_addr);
+ bnx2x_register_phc(bp);
+
return 0;
init_one_exit:
@@ -13155,6 +13533,11 @@ static void __bnx2x_remove(struct pci_dev *pdev,
struct bnx2x *bp,
bool remove_netdev)
{
+ if (bp->ptp_clock) {
+ ptp_clock_unregister(bp->ptp_clock);
+ bp->ptp_clock = NULL;
+ }
+
/* Delete storage MAC address */
if (!NO_FCOE(bp)) {
rtnl_lock();
@@ -14136,3 +14519,332 @@ int bnx2x_pretend_func(struct bnx2x *bp, u16 pretend_func_val)
REG_RD(bp, pretend_reg);
return 0;
}
+
+static void bnx2x_ptp_task(struct work_struct *work)
+{
+ struct bnx2x *bp = container_of(work, struct bnx2x, ptp_task);
+ int port = BP_PORT(bp);
+ u32 val_seq;
+ u64 timestamp, ns;
+ struct skb_shared_hwtstamps shhwtstamps;
+
+ /* Read Tx timestamp registers */
+ val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
+ NIG_REG_P0_TLLH_PTP_BUF_SEQID);
+ if (val_seq & 0x10000) {
+ /* There is a valid timestamp value */
+ timestamp = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_MSB :
+ NIG_REG_P0_TLLH_PTP_BUF_TS_MSB);
+ timestamp <<= 32;
+ timestamp |= REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_LSB :
+ NIG_REG_P0_TLLH_PTP_BUF_TS_LSB);
+ /* Reset timestamp register to allow new timestamp */
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
+ NIG_REG_P0_TLLH_PTP_BUF_SEQID, 0x10000);
+ ns = timecounter_cyc2time(&bp->timecounter, timestamp);
+
+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ shhwtstamps.hwtstamp = ns_to_ktime(ns);
+ skb_tstamp_tx(bp->ptp_tx_skb, &shhwtstamps);
+ dev_kfree_skb_any(bp->ptp_tx_skb);
+ bp->ptp_tx_skb = NULL;
+
+ DP(BNX2X_MSG_PTP, "Tx timestamp, timestamp cycles = %llu, ns = %llu\n",
+ timestamp, ns);
+ } else {
+ DP(BNX2X_MSG_PTP, "There is no valid Tx timestamp yet\n");
+ /* Reschedule to keep checking for a valid timestamp value */
+ schedule_work(&bp->ptp_task);
+ }
+}
+
+void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb)
+{
+ int port = BP_PORT(bp);
+ u64 timestamp, ns;
+
+ timestamp = REG_RD(bp, port ? NIG_REG_P1_LLH_PTP_HOST_BUF_TS_MSB :
+ NIG_REG_P0_LLH_PTP_HOST_BUF_TS_MSB);
+ timestamp <<= 32;
+ timestamp |= REG_RD(bp, port ? NIG_REG_P1_LLH_PTP_HOST_BUF_TS_LSB :
+ NIG_REG_P0_LLH_PTP_HOST_BUF_TS_LSB);
+
+ /* Reset timestamp register to allow new timestamp */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_HOST_BUF_SEQID :
+ NIG_REG_P0_LLH_PTP_HOST_BUF_SEQID, 0x10000);
+
+ ns = timecounter_cyc2time(&bp->timecounter, timestamp);
+
+ skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns);
+
+ DP(BNX2X_MSG_PTP, "Rx timestamp, timestamp cycles = %llu, ns = %llu\n",
+ timestamp, ns);
+}
+
+/* Read the PHC */
+static cycle_t bnx2x_cyclecounter_read(const struct cyclecounter *cc)
+{
+ struct bnx2x *bp = container_of(cc, struct bnx2x, cyclecounter);
+ int port = BP_PORT(bp);
+ u32 wb_data[2];
+ u64 phc_cycles;
+
+ REG_RD_DMAE(bp, port ? NIG_REG_TIMESYNC_GEN_REG + tsgen_synctime_t1 :
+ NIG_REG_TIMESYNC_GEN_REG + tsgen_synctime_t0, wb_data, 2);
+ phc_cycles = wb_data[1];
+ phc_cycles = (phc_cycles << 32) + wb_data[0];
+
+ DP(BNX2X_MSG_PTP, "PHC read cycles = %llu\n", phc_cycles);
+
+ return phc_cycles;
+}
+
+static void bnx2x_init_cyclecounter(struct bnx2x *bp)
+{
+ memset(&bp->cyclecounter, 0, sizeof(bp->cyclecounter));
+ bp->cyclecounter.read = bnx2x_cyclecounter_read;
+ bp->cyclecounter.mask = CLOCKSOURCE_MASK(64);
+ bp->cyclecounter.shift = 1;
+ bp->cyclecounter.mult = 1;
+}
+
+static int bnx2x_send_reset_timesync_ramrod(struct bnx2x *bp)
+{
+ struct bnx2x_func_state_params func_params = {NULL};
+ struct bnx2x_func_set_timesync_params *set_timesync_params =
+ &func_params.params.set_timesync;
+
+ /* Prepare parameters for function state transitions */
+ __set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags);
+ __set_bit(RAMROD_RETRY, &func_params.ramrod_flags);
+
+ func_params.f_obj = &bp->func_obj;
+ func_params.cmd = BNX2X_F_CMD_SET_TIMESYNC;
+
+ /* Function parameters */
+ set_timesync_params->drift_adjust_cmd = TS_DRIFT_ADJUST_RESET;
+ set_timesync_params->offset_cmd = TS_OFFSET_KEEP;
+
+ return bnx2x_func_state_change(bp, &func_params);
+}
+
+int bnx2x_enable_ptp_packets(struct bnx2x *bp)
+{
+ struct bnx2x_queue_state_params q_params;
+ int rc, i;
+
+ /* send queue update ramrod to enable PTP packets */
+ memset(&q_params, 0, sizeof(q_params));
+ __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags);
+ q_params.cmd = BNX2X_Q_CMD_UPDATE;
+ __set_bit(BNX2X_Q_UPDATE_PTP_PKTS_CHNG,
+ &q_params.params.update.update_flags);
+ __set_bit(BNX2X_Q_UPDATE_PTP_PKTS,
+ &q_params.params.update.update_flags);
+
+ /* send the ramrod on all the queues of the PF */
+ for_each_eth_queue(bp, i) {
+ struct bnx2x_fastpath *fp = &bp->fp[i];
+
+ /* Set the appropriate Queue object */
+ q_params.q_obj = &bnx2x_sp_obj(bp, fp).q_obj;
+
+ /* Update the Queue state */
+ rc = bnx2x_queue_state_change(bp, &q_params);
+ if (rc) {
+ BNX2X_ERR("Failed to enable PTP packets\n");
+ return rc;
+ }
+ }
+
+ return 0;
+}
+
+int bnx2x_configure_ptp_filters(struct bnx2x *bp)
+{
+ int port = BP_PORT(bp);
+ int rc;
+
+ if (!bp->hwtstamp_ioctl_called)
+ return 0;
+
+ switch (bp->tx_type) {
+ case HWTSTAMP_TX_ON:
+ bp->flags |= TX_TIMESTAMPING_EN;
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_PARAM_MASK :
+ NIG_REG_P0_TLLH_PTP_PARAM_MASK, 0x6AA);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_RULE_MASK :
+ NIG_REG_P0_TLLH_PTP_RULE_MASK, 0x3EEE);
+ break;
+ case HWTSTAMP_TX_ONESTEP_SYNC:
+ BNX2X_ERR("One-step timestamping is not supported\n");
+ return -ERANGE;
+ }
+
+ switch (bp->rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ break;
+ case HWTSTAMP_FILTER_ALL:
+ case HWTSTAMP_FILTER_SOME:
+ bp->rx_filter = HWTSTAMP_FILTER_NONE;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ bp->rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
+ /* Initialize PTP detection for UDP/IPv4 events */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x7EE);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3FFE);
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ bp->rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
+ /* Initialize PTP detection for UDP/IPv4 or UDP/IPv6 events */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x7EA);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3FEE);
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ bp->rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ /* Initialize PTP detection for L2 events */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x6BF);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3EFF);
+
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ bp->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ /* Initialize PTP detection for L2, UDP/IPv4 or UDP/IPv6 events */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x6AA);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3EEE);
+ break;
+ }
+
+ /* Indicate to FW that this PF expects recorded PTP packets */
+ rc = bnx2x_enable_ptp_packets(bp);
+ if (rc)
+ return rc;
+
+ /* Enable sending PTP packets to host */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_TO_HOST :
+ NIG_REG_P0_LLH_PTP_TO_HOST, 0x1);
+
+ return 0;
+}
+
+static int bnx2x_hwtstamp_ioctl(struct bnx2x *bp, struct ifreq *ifr)
+{
+ struct hwtstamp_config config;
+ int rc;
+
+ DP(BNX2X_MSG_PTP, "HWTSTAMP IOCTL called\n");
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ DP(BNX2X_MSG_PTP, "Requested tx_type: %d, requested rx_filters = %d\n",
+ config.tx_type, config.rx_filter);
+
+ if (config.flags) {
+ BNX2X_ERR("config.flags is reserved for future use\n");
+ return -EINVAL;
+ }
+
+ bp->hwtstamp_ioctl_called = 1;
+ bp->tx_type = config.tx_type;
+ bp->rx_filter = config.rx_filter;
+
+ rc = bnx2x_configure_ptp_filters(bp);
+ if (rc)
+ return rc;
+
+ config.rx_filter = bp->rx_filter;
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+}
+
+/* Configures HW for PTP */
+static int bnx2x_configure_ptp(struct bnx2x *bp)
+{
+ int rc, port = BP_PORT(bp);
+ u32 wb_data[2];
+
+ /* Reset PTP event detection rules - will be configured in the IOCTL */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_PARAM_MASK :
+ NIG_REG_P0_LLH_PTP_PARAM_MASK, 0x7FF);
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_RULE_MASK :
+ NIG_REG_P0_LLH_PTP_RULE_MASK, 0x3FFF);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_PARAM_MASK :
+ NIG_REG_P0_TLLH_PTP_PARAM_MASK, 0x7FF);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_RULE_MASK :
+ NIG_REG_P0_TLLH_PTP_RULE_MASK, 0x3FFF);
+
+ /* Disable PTP packets to host - will be configured in the IOCTL */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_TO_HOST :
+ NIG_REG_P0_LLH_PTP_TO_HOST, 0x0);
+
+ /* Enable the PTP feature */
+ REG_WR(bp, port ? NIG_REG_P1_PTP_EN :
+ NIG_REG_P0_PTP_EN, 0x3F);
+
+ /* Enable the free-running counter */
+ wb_data[0] = 0;
+ wb_data[1] = 0;
+ REG_WR_DMAE(bp, NIG_REG_TIMESYNC_GEN_REG + tsgen_ctrl, wb_data, 2);
+
+ /* Reset drift register (offset register is not reset) */
+ rc = bnx2x_send_reset_timesync_ramrod(bp);
+ if (rc) {
+ BNX2X_ERR("Failed to reset PHC drift register\n");
+ return -EFAULT;
+ }
+
+ /* Reset possibly old timestamps */
+ REG_WR(bp, port ? NIG_REG_P1_LLH_PTP_HOST_BUF_SEQID :
+ NIG_REG_P0_LLH_PTP_HOST_BUF_SEQID, 0x10000);
+ REG_WR(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
+ NIG_REG_P0_TLLH_PTP_BUF_SEQID, 0x10000);
+
+ return 0;
+}
+
+/* Called during load, to initialize PTP-related stuff */
+void bnx2x_init_ptp(struct bnx2x *bp)
+{
+ int rc;
+
+ /* Configure PTP in HW */
+ rc = bnx2x_configure_ptp(bp);
+ if (rc) {
+ BNX2X_ERR("Stopping PTP initialization\n");
+ return;
+ }
+
+ /* Init work queue for Tx timestamping */
+ INIT_WORK(&bp->ptp_task, bnx2x_ptp_task);
+
+ /* Init cyclecounter and timecounter. This is done only in the first
+ * load. If done in every load, PTP application will fail when doing
+ * unload / load (e.g. MTU change) while it is running.
+ */
+ if (!bp->timecounter_init_done) {
+ bnx2x_init_cyclecounter(bp);
+ timecounter_init(&bp->timecounter, &bp->cyclecounter,
+ ktime_to_ns(ktime_get_real()));
+ bp->timecounter_init_done = 1;
+ }
+
+ DP(BNX2X_MSG_PTP, "PTP initialization ended successfully\n");
+}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 2beb543..b0779d7 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -2182,6 +2182,45 @@
#define NIG_REG_P0_HWPFC_ENABLE 0x18078
#define NIG_REG_P0_LLH_FUNC_MEM2 0x18480
#define NIG_REG_P0_LLH_FUNC_MEM2_ENABLE 0x18440
+/* [RW 17] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. Bits [15:0] return the sequence ID of the packet. Bit 16
+ * indicates the validity of the data in the buffer. Writing a 1 to bit 16
+ * will clear the buffer.
+ */
+#define NIG_REG_P0_LLH_PTP_HOST_BUF_SEQID 0x1875c
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. This location returns the lower 32 bits of timestamp value.
+ */
+#define NIG_REG_P0_LLH_PTP_HOST_BUF_TS_LSB 0x18754
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. This location returns the upper 32 bits of timestamp value.
+ */
+#define NIG_REG_P0_LLH_PTP_HOST_BUF_TS_MSB 0x18758
+/* [RW 11] Mask register for the various parameters used in determining PTP
+ * packet presence. Set each bit to 1 to mask out the particular parameter.
+ * 0-IPv4 DA 0 of 224.0.1.129. 1-IPv4 DA 1 of 224.0.0.107. 2-IPv6 DA 0 of
+ * 0xFF0*:0:0:0:0:0:0:181. 3-IPv6 DA 1 of 0xFF02:0:0:0:0:0:0:6B. 4-UDP
+ * destination port 0 of 319. 5-UDP destination port 1 of 320. 6-MAC
+ * Ethertype 0 of 0x88F7. 7-configurable MAC Ethertype 1. 8-MAC DA 0 of
+ * 0x01-1B-19-00-00-00. 9-MAC DA 1 of 0x01-80-C2-00-00-0E. 10-configurable
+ * MAC DA 2. The reset default is set to mask out all parameters.
+ */
+#define NIG_REG_P0_LLH_PTP_PARAM_MASK 0x187a0
+/* [RW 14] Mask register for the rules used in detecting PTP packets. Set
+ * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} .
+ * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP
+ * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1;
+ * UDP DP 0} . 7-{IPv6 DA 1; UDP DP 1} . 8-{MAC DA 0; Ethertype 0} . 9-{MAC
+ * DA 1; Ethertype 0} . 10-{MAC DA 0; Ethertype 1} . 11-{MAC DA 1; Ethertype
+ * 1} . 12-{MAC DA 2; Ethertype 0} . 13-{MAC DA 2; Ethertype 1} . The reset
+ * default is to mask out all of the rules. Note that rules 0-3 are for IPv4
+ * packets only and require that the packet is IPv4 for the rules to match.
+ * Note that rules 4-7 are for IPv6 packets only and require that the packet
+ * is IPv6 for the rules to match.
+ */
+#define NIG_REG_P0_LLH_PTP_RULE_MASK 0x187a4
+/* [RW 1] Set to 1 to enable PTP packets to be forwarded to the host. */
+#define NIG_REG_P0_LLH_PTP_TO_HOST 0x187ac
/* [RW 1] Input enable for RX MAC interface. */
#define NIG_REG_P0_MAC_IN_EN 0x185ac
/* [RW 1] Output enable for TX MAC interface */
@@ -2194,6 +2233,17 @@
* priority field is extracted from the outer-most VLAN in receive packet.
* Only COS 0 and COS 1 are supported in E2. */
#define NIG_REG_P0_PKT_PRIORITY_TO_COS 0x18054
+/* [RW 6] Enable for TimeSync feature. Bits [2:0] are for RX side. Bits
+ * [5:3] are for TX side. Bit 0 enables TimeSync on RX side. Bit 1 enables
+ * V1 frame format in timesync event detection on RX side. Bit 2 enables V2
+ * frame format in timesync event detection on RX side. Bit 3 enables
+ * TimeSync on TX side. Bit 4 enables V1 frame format in timesync event
+ * detection on TX side. Bit 5 enables V2 frame format in timesync event
+ * detection on TX side. Note that for HW to detect PTP packet and extract
+ * data from the packet, at least one of the version bits of that traffic
+ * direction has to be enabled.
+ */
+#define NIG_REG_P0_PTP_EN 0x18788
/* [RW 16] Bit-map indicating which SAFC/PFC priorities to map to COS 0. A
* priority is mapped to COS 0 when the corresponding mask bit is 1. More
* than one bit may be set; allowing multiple priorities to be mapped to one
@@ -2300,7 +2350,46 @@
* Ethernet header. */
#define NIG_REG_P1_HDRS_AFTER_BASIC 0x1818c
#define NIG_REG_P1_LLH_FUNC_MEM2 0x184c0
-#define NIG_REG_P1_LLH_FUNC_MEM2_ENABLE 0x18460
+#define NIG_REG_P1_LLH_FUNC_MEM2_ENABLE 0x18460a
+/* [RW 17] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. Bits [15:0] return the sequence ID of the packet. Bit 16
+ * indicates the validity of the data in the buffer. Writing a 1 to bit 16
+ * will clear the buffer.
+ */
+#define NIG_REG_P1_LLH_PTP_HOST_BUF_SEQID 0x18774
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. This location returns the lower 32 bits of timestamp value.
+ */
+#define NIG_REG_P1_LLH_PTP_HOST_BUF_TS_LSB 0x1876c
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * the host. This location returns the upper 32 bits of timestamp value.
+ */
+#define NIG_REG_P1_LLH_PTP_HOST_BUF_TS_MSB 0x18770
+/* [RW 11] Mask register for the various parameters used in determining PTP
+ * packet presence. Set each bit to 1 to mask out the particular parameter.
+ * 0-IPv4 DA 0 of 224.0.1.129. 1-IPv4 DA 1 of 224.0.0.107. 2-IPv6 DA 0 of
+ * 0xFF0*:0:0:0:0:0:0:181. 3-IPv6 DA 1 of 0xFF02:0:0:0:0:0:0:6B. 4-UDP
+ * destination port 0 of 319. 5-UDP destination port 1 of 320. 6-MAC
+ * Ethertype 0 of 0x88F7. 7-configurable MAC Ethertype 1. 8-MAC DA 0 of
+ * 0x01-1B-19-00-00-00. 9-MAC DA 1 of 0x01-80-C2-00-00-0E. 10-configurable
+ * MAC DA 2. The reset default is set to mask out all parameters.
+ */
+#define NIG_REG_P1_LLH_PTP_PARAM_MASK 0x187c8
+/* [RW 14] Mask register for the rules used in detecting PTP packets. Set
+ * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} .
+ * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP
+ * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1;
+ * UDP DP 0} . 7-{IPv6 DA 1; UDP DP 1} . 8-{MAC DA 0; Ethertype 0} . 9-{MAC
+ * DA 1; Ethertype 0} . 10-{MAC DA 0; Ethertype 1} . 11-{MAC DA 1; Ethertype
+ * 1} . 12-{MAC DA 2; Ethertype 0} . 13-{MAC DA 2; Ethertype 1} . The reset
+ * default is to mask out all of the rules. Note that rules 0-3 are for IPv4
+ * packets only and require that the packet is IPv4 for the rules to match.
+ * Note that rules 4-7 are for IPv6 packets only and require that the packet
+ * is IPv6 for the rules to match.
+ */
+#define NIG_REG_P1_LLH_PTP_RULE_MASK 0x187cc
+/* [RW 1] Set to 1 to enable PTP packets to be forwarded to the host. */
+#define NIG_REG_P1_LLH_PTP_TO_HOST 0x187d4
/* [RW 32] Specify the client number to be assigned to each priority of the
* strict priority arbiter. This register specifies bits 31:0 of the 36-bit
* value. Priority 0 is the highest priority. Bits [3:0] are for priority 0
@@ -2342,6 +2431,17 @@
* priority field is extracted from the outer-most VLAN in receive packet.
* Only COS 0 and COS 1 are supported in E2. */
#define NIG_REG_P1_PKT_PRIORITY_TO_COS 0x181a8
+/* [RW 6] Enable for TimeSync feature. Bits [2:0] are for RX side. Bits
+ * [5:3] are for TX side. Bit 0 enables TimeSync on RX side. Bit 1 enables
+ * V1 frame format in timesync event detection on RX side. Bit 2 enables V2
+ * frame format in timesync event detection on RX side. Bit 3 enables
+ * TimeSync on TX side. Bit 4 enables V1 frame format in timesync event
+ * detection on TX side. Bit 5 enables V2 frame format in timesync event
+ * detection on TX side. Note that for HW to detect PTP packet and extract
+ * data from the packet, at least one of the version bits of that traffic
+ * direction has to be enabled.
+ */
+#define NIG_REG_P1_PTP_EN 0x187b0
/* [RW 16] Bit-map indicating which SAFC/PFC priorities to map to COS 0. A
* priority is mapped to COS 0 when the corresponding mask bit is 1. More
* than one bit may be set; allowing multiple priorities to be mapped to one
@@ -2361,6 +2461,78 @@
#define NIG_REG_P1_RX_MACFIFO_EMPTY 0x1858c
/* [R 1] TLLH FIFO is empty. */
#define NIG_REG_P1_TLLH_FIFO_EMPTY 0x18338
+/* [RW 19] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * TX side. Bits [15:0] reflect the sequence ID of the packet. Bit 16
+ * indicates the validity of the data in the buffer. Bit 17 indicates that
+ * the sequence ID is valid and it is waiting for the TX timestamp value.
+ * Bit 18 indicates whether the timestamp is from a SW request (value of 1)
+ * or HW request (value of 0). Writing a 1 to bit 16 will clear the buffer.
+ */
+#define NIG_REG_P0_TLLH_PTP_BUF_SEQID 0x187e0
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * MCP. This location returns the lower 32 bits of timestamp value.
+ */
+#define NIG_REG_P0_TLLH_PTP_BUF_TS_LSB 0x187d8
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * MCP. This location returns the upper 32 bits of timestamp value.
+ */
+#define NIG_REG_P0_TLLH_PTP_BUF_TS_MSB 0x187dc
+/* [RW 11] Mask register for the various parameters used in determining PTP
+ * packet presence. Set each bit to 1 to mask out the particular parameter.
+ * 0-IPv4 DA 0 of 224.0.1.129. 1-IPv4 DA 1 of 224.0.0.107. 2-IPv6 DA 0 of
+ * 0xFF0*:0:0:0:0:0:0:181. 3-IPv6 DA 1 of 0xFF02:0:0:0:0:0:0:6B. 4-UDP
+ * destination port 0 of 319. 5-UDP destination port 1 of 320. 6-MAC
+ * Ethertype 0 of 0x88F7. 7-configurable MAC Ethertype 1. 8-MAC DA 0 of
+ * 0x01-1B-19-00-00-00. 9-MAC DA 1 of 0x01-80-C2-00-00-0E. 10-configurable
+ * MAC DA 2. The reset default is set to mask out all parameters.
+ */
+#define NIG_REG_P0_TLLH_PTP_PARAM_MASK 0x187f0
+/* [RW 14] Mask register for the rules used in detecting PTP packets. Set
+ * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} .
+ * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP
+ * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1;
+ * UDP DP 0} . 7-{IPv6 DA 1; UDP DP 1} . 8-{MAC DA 0; Ethertype 0} . 9-{MAC
+ * DA 1; Ethertype 0} . 10-{MAC DA 0; Ethertype 1} . 11-{MAC DA 1; Ethertype
+ * 1} . 12-{MAC DA 2; Ethertype 0} . 13-{MAC DA 2; Ethertype 1} . The reset
+ * default is to mask out all of the rules.
+ */
+#define NIG_REG_P0_TLLH_PTP_RULE_MASK 0x187f4
+/* [RW 19] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * TX side. Bits [15:0] reflect the sequence ID of the packet. Bit 16
+ * indicates the validity of the data in the buffer. Bit 17 indicates that
+ * the sequence ID is valid and it is waiting for the TX timestamp value.
+ * Bit 18 indicates whether the timestamp is from a SW request (value of 1)
+ * or HW request (value of 0). Writing a 1 to bit 16 will clear the buffer.
+ */
+#define NIG_REG_P1_TLLH_PTP_BUF_SEQID 0x187ec
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * MCP. This location returns the lower 32 bits of timestamp value.
+ */
+#define NIG_REG_P1_TLLH_PTP_BUF_TS_LSB 0x187e4
+/* [R 32] Packet TimeSync information that is buffered in 1-deep FIFOs for
+ * MCP. This location returns the upper 32 bits of timestamp value.
+ */
+#define NIG_REG_P1_TLLH_PTP_BUF_TS_MSB 0x187e8
+/* [RW 11] Mask register for the various parameters used in determining PTP
+ * packet presence. Set each bit to 1 to mask out the particular parameter.
+ * 0-IPv4 DA 0 of 224.0.1.129. 1-IPv4 DA 1 of 224.0.0.107. 2-IPv6 DA 0 of
+ * 0xFF0*:0:0:0:0:0:0:181. 3-IPv6 DA 1 of 0xFF02:0:0:0:0:0:0:6B. 4-UDP
+ * destination port 0 of 319. 5-UDP destination port 1 of 320. 6-MAC
+ * Ethertype 0 of 0x88F7. 7-configurable MAC Ethertype 1. 8-MAC DA 0 of
+ * 0x01-1B-19-00-00-00. 9-MAC DA 1 of 0x01-80-C2-00-00-0E. 10-configurable
+ * MAC DA 2. The reset default is set to mask out all parameters.
+ */
+#define NIG_REG_P1_TLLH_PTP_PARAM_MASK 0x187f8
+/* [RW 14] Mask register for the rules used in detecting PTP packets. Set
+ * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} .
+ * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP
+ * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1;
+ * UDP DP 0} . 7-{IPv6 DA 1; UDP DP 1} . 8-{MAC DA 0; Ethertype 0} . 9-{MAC
+ * DA 1; Ethertype 0} . 10-{MAC DA 0; Ethertype 1} . 11-{MAC DA 1; Ethertype
+ * 1} . 12-{MAC DA 2; Ethertype 0} . 13-{MAC DA 2; Ethertype 1} . The reset
+ * default is to mask out all of the rules.
+ */
+#define NIG_REG_P1_TLLH_PTP_RULE_MASK 0x187fc
/* [RW 32] Specify which of the credit registers the client is to be mapped
* to. This register specifies bits 31:0 of the 36-bit value. Bits[3:0] are
* for client 0; bits [35:32] are for client 8. For clients that are not
@@ -2513,6 +2685,10 @@
swap is equal to SPIO pin that inputs from ifmux_serdes_swap. If 1 then
port swap is equal to ~nig_registers_port_swap.port_swap */
#define NIG_REG_STRAP_OVERRIDE 0x10398
+/* [WB 64] Addresses for TimeSync related registers in the timesync
+ * generator sub-module.
+ */
+#define NIG_REG_TIMESYNC_GEN_REG 0x18800
/* [RW 1] output enable for RX_XCM0 IF */
#define NIG_REG_XCM0_OUT_EN 0x100f0
/* [RW 1] output enable for RX_XCM1 IF */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
index b193604..7bc2924 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
@@ -4019,6 +4019,7 @@ static int bnx2x_setup_rss(struct bnx2x *bp,
struct bnx2x_raw_obj *r = &o->raw;
struct eth_rss_update_ramrod_data *data =
(struct eth_rss_update_ramrod_data *)(r->rdata);
+ u16 caps = 0;
u8 rss_mode = 0;
int rc;
@@ -4042,28 +4043,34 @@ static int bnx2x_setup_rss(struct bnx2x *bp,
/* RSS capabilities */
if (test_bit(BNX2X_RSS_IPV4, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV4_CAPABILITY;
if (test_bit(BNX2X_RSS_IPV4_TCP, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV4_TCP_CAPABILITY;
if (test_bit(BNX2X_RSS_IPV4_UDP, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV4_UDP_CAPABILITY;
if (test_bit(BNX2X_RSS_IPV6, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV6_CAPABILITY;
if (test_bit(BNX2X_RSS_IPV6_TCP, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY;
if (test_bit(BNX2X_RSS_IPV6_UDP, &p->rss_flags))
- data->capabilities |=
- ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY;
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY;
+
+ if (test_bit(BNX2X_RSS_GRE_INNER_HDRS, &p->rss_flags))
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_GRE_INNER_HDRS_CAPABILITY;
+
+ /* RSS keys */
+ if (test_bit(BNX2X_RSS_SET_SRCH, &p->rss_flags)) {
+ memcpy(&data->rss_key[0], &p->rss_key[0],
+ sizeof(data->rss_key));
+ caps |= ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY;
+ }
+
+ data->capabilities = cpu_to_le16(caps);
/* Hashing mask */
data->rss_result_mask = p->rss_result_mask;
@@ -4084,13 +4091,6 @@ static int bnx2x_setup_rss(struct bnx2x *bp,
if (netif_msg_ifup(bp))
bnx2x_debug_print_ind_table(bp, p);
- /* RSS keys */
- if (test_bit(BNX2X_RSS_SET_SRCH, &p->rss_flags)) {
- memcpy(&data->rss_key[0], &p->rss_key[0],
- sizeof(data->rss_key));
- data->capabilities |= ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY;
- }
-
/* No need for an explicit memory barrier here as long as we
* ensure the ordering of writing to the SPQ element
* and updating of the SPQ producer which involves a memory
@@ -4336,6 +4336,8 @@ static void bnx2x_q_fill_init_general_data(struct bnx2x *bp,
test_bit(BNX2X_Q_FLG_FCOE, flags) ?
LLFC_TRAFFIC_TYPE_FCOE : LLFC_TRAFFIC_TYPE_NW;
+ gen_data->fp_hsi_ver = ETH_FP_HSI_VERSION;
+
DP(BNX2X_MSG_SP, "flags: active %d, cos %d, stats en %d\n",
gen_data->activate_flg, gen_data->cos, gen_data->statistics_en_flg);
}
@@ -4357,12 +4359,13 @@ static void bnx2x_q_fill_init_tx_data(struct bnx2x_queue_sp_obj *o,
test_bit(BNX2X_Q_FLG_ANTI_SPOOF, flags);
tx_data->force_default_pri_flg =
test_bit(BNX2X_Q_FLG_FORCE_DEFAULT_PRI, flags);
-
+ tx_data->refuse_outband_vlan_flg =
+ test_bit(BNX2X_Q_FLG_REFUSE_OUTBAND_VLAN, flags);
tx_data->tunnel_lso_inc_ip_id =
test_bit(BNX2X_Q_FLG_TUN_INC_INNER_IP_ID, flags);
tx_data->tunnel_non_lso_pcsum_location =
- test_bit(BNX2X_Q_FLG_PCSUM_ON_PKT, flags) ? PCSUM_ON_PKT :
- PCSUM_ON_BD;
+ test_bit(BNX2X_Q_FLG_PCSUM_ON_PKT, flags) ? CSUM_ON_PKT :
+ CSUM_ON_BD;
tx_data->tx_status_block_id = params->fw_sb_id;
tx_data->tx_sb_index_number = params->sb_cq_index;
@@ -4722,6 +4725,12 @@ static void bnx2x_q_fill_update_data(struct bnx2x *bp,
data->tx_switching_change_flg =
test_bit(BNX2X_Q_UPDATE_TX_SWITCHING_CHNG,
&params->update_flags);
+
+ /* PTP */
+ data->handle_ptp_pkts_flg =
+ test_bit(BNX2X_Q_UPDATE_PTP_PKTS, &params->update_flags);
+ data->handle_ptp_pkts_change_flg =
+ test_bit(BNX2X_Q_UPDATE_PTP_PKTS_CHNG, &params->update_flags);
}
static inline int bnx2x_q_send_update(struct bnx2x *bp,
@@ -5376,6 +5385,10 @@ static int bnx2x_func_chk_transition(struct bnx2x *bp,
(!test_bit(BNX2X_F_CMD_STOP, &o->pending)))
next_state = BNX2X_F_STATE_STARTED;
+ else if ((cmd == BNX2X_F_CMD_SET_TIMESYNC) &&
+ (!test_bit(BNX2X_F_CMD_STOP, &o->pending)))
+ next_state = BNX2X_F_STATE_STARTED;
+
else if (cmd == BNX2X_F_CMD_TX_STOP)
next_state = BNX2X_F_STATE_TX_STOPPED;
@@ -5385,6 +5398,10 @@ static int bnx2x_func_chk_transition(struct bnx2x *bp,
(!test_bit(BNX2X_F_CMD_STOP, &o->pending)))
next_state = BNX2X_F_STATE_TX_STOPPED;
+ else if ((cmd == BNX2X_F_CMD_SET_TIMESYNC) &&
+ (!test_bit(BNX2X_F_CMD_STOP, &o->pending)))
+ next_state = BNX2X_F_STATE_TX_STOPPED;
+
else if (cmd == BNX2X_F_CMD_TX_START)
next_state = BNX2X_F_STATE_STARTED;
@@ -5652,9 +5669,27 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp,
rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag);
rdata->path_id = BP_PATH(bp);
rdata->network_cos_mode = start_params->network_cos_mode;
- rdata->gre_tunnel_mode = start_params->gre_tunnel_mode;
- rdata->gre_tunnel_rss = start_params->gre_tunnel_rss;
+ rdata->tunnel_mode = start_params->tunnel_mode;
+ rdata->gre_tunnel_type = start_params->gre_tunnel_type;
+ rdata->inner_gre_rss_en = start_params->inner_gre_rss_en;
+ rdata->vxlan_dst_port = cpu_to_le16(4789);
+ rdata->sd_accept_mf_clss_fail = start_params->class_fail;
+ if (start_params->class_fail_ethtype) {
+ rdata->sd_accept_mf_clss_fail_match_ethtype = 1;
+ rdata->sd_accept_mf_clss_fail_ethtype =
+ cpu_to_le16(start_params->class_fail_ethtype);
+ }
+ rdata->sd_vlan_force_pri_flg = start_params->sd_vlan_force_pri;
+ rdata->sd_vlan_force_pri_val = start_params->sd_vlan_force_pri_val;
+ if (start_params->sd_vlan_eth_type)
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(start_params->sd_vlan_eth_type);
+ else
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(0x8100);
+
+ rdata->no_added_tags = start_params->no_added_tags;
/* No need for an explicit memory barrier here as long we would
* need to ensure the ordering of writing to the SPQ element
* and updating of the SPQ producer which involves a memory
@@ -5680,8 +5715,52 @@ static inline int bnx2x_func_send_switch_update(struct bnx2x *bp,
memset(rdata, 0, sizeof(*rdata));
/* Fill the ramrod data with provided parameters */
- rdata->tx_switch_suspend_change_flg = 1;
- rdata->tx_switch_suspend = switch_update_params->suspend;
+ if (test_bit(BNX2X_F_UPDATE_TX_SWITCH_SUSPEND_CHNG,
+ &switch_update_params->changes)) {
+ rdata->tx_switch_suspend_change_flg = 1;
+ rdata->tx_switch_suspend =
+ test_bit(BNX2X_F_UPDATE_TX_SWITCH_SUSPEND,
+ &switch_update_params->changes);
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_tag_change_flg = 1;
+ rdata->sd_vlan_tag =
+ cpu_to_le16(switch_update_params->vlan);
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_SD_VLAN_ETH_TYPE_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_eth_type_change_flg = 1;
+ rdata->sd_vlan_eth_type =
+ cpu_to_le16(switch_update_params->vlan_eth_type);
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_VLAN_FORCE_PRIO_CHNG,
+ &switch_update_params->changes)) {
+ rdata->sd_vlan_force_pri_change_flg = 1;
+ if (test_bit(BNX2X_F_UPDATE_VLAN_FORCE_PRIO_FLAG,
+ &switch_update_params->changes))
+ rdata->sd_vlan_force_pri_flg = 1;
+ rdata->sd_vlan_force_pri_flg =
+ switch_update_params->vlan_force_prio;
+ }
+
+ if (test_bit(BNX2X_F_UPDATE_TUNNEL_CFG_CHNG,
+ &switch_update_params->changes)) {
+ rdata->update_tunn_cfg_flg = 1;
+ if (test_bit(BNX2X_F_UPDATE_TUNNEL_CLSS_EN,
+ &switch_update_params->changes))
+ rdata->tunn_clss_en = 1;
+ if (test_bit(BNX2X_F_UPDATE_TUNNEL_INNER_GRE_RSS_EN,
+ &switch_update_params->changes))
+ rdata->inner_gre_rss_en = 1;
+ rdata->tunnel_mode = switch_update_params->tunnel_mode;
+ rdata->gre_tunnel_type = switch_update_params->gre_tunnel_type;
+ rdata->vxlan_dst_port = cpu_to_le16(4789);
+ }
+
rdata->echo = SWITCH_UPDATE;
/* No need for an explicit memory barrier here as long as we
@@ -5817,6 +5896,42 @@ static inline int bnx2x_func_send_tx_start(struct bnx2x *bp,
U64_LO(data_mapping), NONE_CONNECTION_TYPE);
}
+static inline
+int bnx2x_func_send_set_timesync(struct bnx2x *bp,
+ struct bnx2x_func_state_params *params)
+{
+ struct bnx2x_func_sp_obj *o = params->f_obj;
+ struct set_timesync_ramrod_data *rdata =
+ (struct set_timesync_ramrod_data *)o->rdata;
+ dma_addr_t data_mapping = o->rdata_mapping;
+ struct bnx2x_func_set_timesync_params *set_timesync_params =
+ &params->params.set_timesync;
+
+ memset(rdata, 0, sizeof(*rdata));
+
+ /* Fill the ramrod data with provided parameters */
+ rdata->drift_adjust_cmd = set_timesync_params->drift_adjust_cmd;
+ rdata->offset_cmd = set_timesync_params->offset_cmd;
+ rdata->add_sub_drift_adjust_value =
+ set_timesync_params->add_sub_drift_adjust_value;
+ rdata->drift_adjust_value = set_timesync_params->drift_adjust_value;
+ rdata->drift_adjust_period = set_timesync_params->drift_adjust_period;
+ rdata->offset_delta.lo =
+ cpu_to_le32(U64_LO(set_timesync_params->offset_delta));
+ rdata->offset_delta.hi =
+ cpu_to_le32(U64_HI(set_timesync_params->offset_delta));
+
+ DP(BNX2X_MSG_SP, "Set timesync command params: drift_cmd = %d, offset_cmd = %d, add_sub_drift = %d, drift_val = %d, drift_period = %d, offset_lo = %d, offset_hi = %d\n",
+ rdata->drift_adjust_cmd, rdata->offset_cmd,
+ rdata->add_sub_drift_adjust_value, rdata->drift_adjust_value,
+ rdata->drift_adjust_period, rdata->offset_delta.lo,
+ rdata->offset_delta.hi);
+
+ return bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_SET_TIMESYNC, 0,
+ U64_HI(data_mapping),
+ U64_LO(data_mapping), NONE_CONNECTION_TYPE);
+}
+
static int bnx2x_func_send_cmd(struct bnx2x *bp,
struct bnx2x_func_state_params *params)
{
@@ -5839,6 +5954,8 @@ static int bnx2x_func_send_cmd(struct bnx2x *bp,
return bnx2x_func_send_tx_start(bp, params);
case BNX2X_F_CMD_SWITCH_UPDATE:
return bnx2x_func_send_switch_update(bp, params);
+ case BNX2X_F_CMD_SET_TIMESYNC:
+ return bnx2x_func_send_set_timesync(bp, params);
default:
BNX2X_ERR("Unknown command: %d\n", params->cmd);
return -EINVAL;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
index 718ecd2..e97275f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
@@ -711,6 +711,7 @@ enum {
BNX2X_RSS_IPV6,
BNX2X_RSS_IPV6_TCP,
BNX2X_RSS_IPV6_UDP,
+ BNX2X_RSS_GRE_INNER_HDRS,
};
struct bnx2x_config_rss_params {
@@ -769,7 +770,9 @@ enum {
BNX2X_Q_UPDATE_SILENT_VLAN_REM_CHNG,
BNX2X_Q_UPDATE_SILENT_VLAN_REM,
BNX2X_Q_UPDATE_TX_SWITCHING_CHNG,
- BNX2X_Q_UPDATE_TX_SWITCHING
+ BNX2X_Q_UPDATE_TX_SWITCHING,
+ BNX2X_Q_UPDATE_PTP_PKTS_CHNG,
+ BNX2X_Q_UPDATE_PTP_PKTS,
};
/* Allowed Queue states */
@@ -831,6 +834,7 @@ enum {
BNX2X_Q_FLG_ANTI_SPOOF,
BNX2X_Q_FLG_SILENT_VLAN_REM,
BNX2X_Q_FLG_FORCE_DEFAULT_PRI,
+ BNX2X_Q_FLG_REFUSE_OUTBAND_VLAN,
BNX2X_Q_FLG_PCSUM_ON_PKT,
BNX2X_Q_FLG_TUN_INC_INNER_IP_ID
};
@@ -851,6 +855,10 @@ enum bnx2x_q_type {
#define BNX2X_MULTI_TX_COS 3 /* Maximum possible */
#define MAC_PAD (ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN)
+/* DMAE channel to be used by FW for timesync workaround. A driver that sends
+ * timesync-related ramrods must not use this DMAE command ID.
+ */
+#define FW_DMAE_CMD_ID 6
struct bnx2x_queue_init_params {
struct {
@@ -1085,6 +1093,20 @@ struct bnx2x_queue_sp_obj {
};
/********************** Function state update *********************************/
+
+/* UPDATE command options */
+enum {
+ BNX2X_F_UPDATE_TX_SWITCH_SUSPEND_CHNG,
+ BNX2X_F_UPDATE_TX_SWITCH_SUSPEND,
+ BNX2X_F_UPDATE_SD_VLAN_TAG_CHNG,
+ BNX2X_F_UPDATE_SD_VLAN_ETH_TYPE_CHNG,
+ BNX2X_F_UPDATE_VLAN_FORCE_PRIO_CHNG,
+ BNX2X_F_UPDATE_VLAN_FORCE_PRIO_FLAG,
+ BNX2X_F_UPDATE_TUNNEL_CFG_CHNG,
+ BNX2X_F_UPDATE_TUNNEL_CLSS_EN,
+ BNX2X_F_UPDATE_TUNNEL_INNER_GRE_RSS_EN,
+};
+
/* Allowed Function states */
enum bnx2x_func_state {
BNX2X_F_STATE_RESET,
@@ -1105,6 +1127,7 @@ enum bnx2x_func_cmd {
BNX2X_F_CMD_TX_STOP,
BNX2X_F_CMD_TX_START,
BNX2X_F_CMD_SWITCH_UPDATE,
+ BNX2X_F_CMD_SET_TIMESYNC,
BNX2X_F_CMD_MAX,
};
@@ -1146,18 +1169,44 @@ struct bnx2x_func_start_params {
/* Function cos mode */
u8 network_cos_mode;
- /* NVGRE classification enablement */
- u8 nvgre_clss_en;
+ /* TUNN_MODE_NONE/TUNN_MODE_VXLAN/TUNN_MODE_GRE */
+ u8 tunnel_mode;
- /* NO_GRE_TUNNEL/NVGRE_TUNNEL/L2GRE_TUNNEL/IPGRE_TUNNEL */
- u8 gre_tunnel_mode;
+ /* tunneling classification enablement */
+ u8 tunn_clss_en;
+
+ /* NVGRE_TUNNEL/L2GRE_TUNNEL/IPGRE_TUNNEL */
+ u8 gre_tunnel_type;
+
+ /* Enables Inner GRE RSS on the function, depends on the client RSS
+ * capabilities
+ */
+ u8 inner_gre_rss_en;
+
+ /* Allows accepting of packets failing MF classification, possibly
+ * only matching a given ethertype
+ */
+ u8 class_fail;
+ u16 class_fail_ethtype;
- /* GRE_OUTER_HEADERS_RSS/GRE_INNER_HEADERS_RSS/NVGRE_KEY_ENTROPY_RSS */
- u8 gre_tunnel_rss;
+ /* Override priority of output packets */
+ u8 sd_vlan_force_pri;
+ u8 sd_vlan_force_pri_val;
+
+ /* Replace vlan's ethertype */
+ u16 sd_vlan_eth_type;
+
+ /* Prevent inner vlans from being added by FW */
+ u8 no_added_tags;
};
struct bnx2x_func_switch_update_params {
- u8 suspend;
+ unsigned long changes; /* BNX2X_F_UPDATE_XX bits */
+ u16 vlan;
+ u16 vlan_eth_type;
+ u8 vlan_force_prio;
+ u8 tunnel_mode;
+ u8 gre_tunnel_type;
};
struct bnx2x_func_afex_update_params {
@@ -1172,6 +1221,7 @@ struct bnx2x_func_afex_viflists_params {
u8 afex_vif_list_command;
u8 func_to_clear;
};
+
struct bnx2x_func_tx_start_params {
struct priority_cos traffic_type_to_priority_cos[MAX_TRAFFIC_TYPES];
u8 dcb_enabled;
@@ -1179,6 +1229,24 @@ struct bnx2x_func_tx_start_params {
u8 dont_add_pri_0_en;
};
+struct bnx2x_func_set_timesync_params {
+ /* Reset, set or keep the current drift value */
+ u8 drift_adjust_cmd;
+
+ /* Dec, inc or keep the current offset */
+ u8 offset_cmd;
+
+ /* Drift value direction */
+ u8 add_sub_drift_adjust_value;
+
+ /* Drift, period and offset values to be used according to the commands
+ * above.
+ */
+ u8 drift_adjust_value;
+ u32 drift_adjust_period;
+ u64 offset_delta;
+};
+
struct bnx2x_func_state_params {
struct bnx2x_func_sp_obj *f_obj;
@@ -1197,6 +1265,7 @@ struct bnx2x_func_state_params {
struct bnx2x_func_afex_update_params afex_update;
struct bnx2x_func_afex_viflists_params afex_viflists;
struct bnx2x_func_tx_start_params tx_start;
+ struct bnx2x_func_set_timesync_params set_timesync;
} params;
};
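The hunks above add everything a caller needs to drive the new timesync command through the existing function state machine. A minimal caller sketch (not part of this patch) could look as follows; BNX2X_F_CMD_SET_TIMESYNC and the set_timesync parameter block are the additions above, while bnx2x_func_state_change(), RAMROD_COMP_WAIT and bp->func_obj are pre-existing driver interfaces, and the command encodings shown are placeholders for the values defined in the firmware HSI:

	static int example_set_timesync(struct bnx2x *bp, u64 delta_ns)
	{
		struct bnx2x_func_state_params func_params = {NULL};
		struct bnx2x_func_set_timesync_params *ts =
			&func_params.params.set_timesync;

		func_params.f_obj = &bp->func_obj;
		func_params.cmd = BNX2X_F_CMD_SET_TIMESYNC;
		__set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags);

		ts->offset_cmd = 1;		/* placeholder: add offset_delta */
		ts->drift_adjust_cmd = 0;	/* placeholder: keep current drift */
		ts->offset_delta = delta_ns;

		return bnx2x_func_state_change(bp, &func_params);
	}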
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 662310c..c88b20a 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -1125,7 +1125,7 @@ static int bnx2x_ari_enabled(struct pci_dev *dev)
return dev->bus->self && dev->bus->self->ari_enabled;
}
-static void
+static int
bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
{
int sb_id;
@@ -1150,6 +1150,7 @@ bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
GET_FIELD((val), IGU_REG_MAPPING_MEMORY_VECTOR));
}
DP(BNX2X_MSG_IOV, "vf_sbs_pool is %d\n", BP_VFDB(bp)->vf_sbs_pool);
+ return BP_VFDB(bp)->vf_sbs_pool;
}
static void __bnx2x_iov_free_vfdb(struct bnx2x *bp)
@@ -1314,15 +1315,17 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
}
/* re-read the IGU CAM for VFs - index and abs_vfid must be set */
- bnx2x_get_vf_igu_cam_info(bp);
+ if (!bnx2x_get_vf_igu_cam_info(bp)) {
+ BNX2X_ERR("No entries in IGU CAM for vfs\n");
+ err = -EINVAL;
+ goto failed;
+ }
/* allocate the queue arrays for all VFs */
bp->vfdb->vfqs = kzalloc(
BNX2X_MAX_NUM_VF_QUEUES * sizeof(struct bnx2x_vf_queue),
GFP_KERNEL);
- DP(BNX2X_MSG_IOV, "bp->vfdb->vfqs was %p\n", bp->vfdb->vfqs);
-
if (!bp->vfdb->vfqs) {
BNX2X_ERR("failed to allocate vf queue array\n");
err = -ENOMEM;
@@ -1349,9 +1352,7 @@ void bnx2x_iov_remove_one(struct bnx2x *bp)
if (!IS_SRIOV(bp))
return;
- DP(BNX2X_MSG_IOV, "about to call disable sriov\n");
- pci_disable_sriov(bp->pdev);
- DP(BNX2X_MSG_IOV, "sriov disabled\n");
+ bnx2x_disable_sriov(bp);
/* disable access to all VFs */
for (vf_idx = 0; vf_idx < bp->vfdb->sriov.total; vf_idx++) {
@@ -1985,21 +1986,6 @@ void bnx2x_iov_adjust_stats_req(struct bnx2x *bp)
bp->fw_stats_req->hdr.cmd_num = bp->fw_stats_num + stats_count;
}
-static inline
-struct bnx2x_virtf *__vf_from_stat_id(struct bnx2x *bp, u8 stat_id)
-{
- int i;
- struct bnx2x_virtf *vf = NULL;
-
- for_each_vf(bp, i) {
- vf = BP_VF(bp, i);
- if (stat_id >= vf->igu_base_id &&
- stat_id < vf->igu_base_id + vf_sb_count(vf))
- break;
- }
- return vf;
-}
-
/* VF API helpers */
static void bnx2x_vf_qtbl_set_q(struct bnx2x *bp, u8 abs_vfid, u8 qid,
u8 enable)
@@ -2362,12 +2348,6 @@ int bnx2x_vf_release(struct bnx2x *bp, struct bnx2x_virtf *vf)
return rc;
}
-static inline void bnx2x_vf_get_sbdf(struct bnx2x *bp,
- struct bnx2x_virtf *vf, u32 *sbdf)
-{
- *sbdf = vf->devfn | (vf->bus << 8);
-}
-
void bnx2x_lock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
enum channel_tlvs tlv)
{
@@ -2416,7 +2396,7 @@ void bnx2x_unlock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
/* log the unlock */
DP(BNX2X_MSG_IOV, "VF[%d]: vf pf channel unlocked by %d\n",
- vf->abs_vfid, vf->op_current);
+ vf->abs_vfid, current_tlv);
}
static int bnx2x_set_pf_tx_switching(struct bnx2x *bp, bool enable)
@@ -2501,7 +2481,7 @@ int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs_param)
bp->requested_nr_virtfn = num_vfs_param;
if (num_vfs_param == 0) {
bnx2x_set_pf_tx_switching(bp, false);
- pci_disable_sriov(dev);
+ bnx2x_disable_sriov(bp);
return 0;
} else {
return bnx2x_enable_sriov(bp);
@@ -2614,6 +2594,12 @@ void bnx2x_pf_set_vfs_vlan(struct bnx2x *bp)
void bnx2x_disable_sriov(struct bnx2x *bp)
{
+ if (pci_vfs_assigned(bp->pdev)) {
+ DP(BNX2X_MSG_IOV,
+ "Unloading driver while VFs are assigned - VFs will not be deallocated\n");
+ return;
+ }
+
pci_disable_sriov(bp->pdev);
}
@@ -2628,7 +2614,7 @@ static int bnx2x_vf_op_prep(struct bnx2x *bp, int vfidx,
}
if (!IS_SRIOV(bp)) {
- BNX2X_ERR("sriov is disabled - can't utilize iov-realted functionality\n");
+ BNX2X_ERR("sriov is disabled - can't utilize iov-related functionality\n");
return -EINVAL;
}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index ca1055f..01bafa4 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -299,7 +299,8 @@ struct bnx2x_vfdb {
#define BP_VFDB(bp) ((bp)->vfdb)
/* vf array */
struct bnx2x_virtf *vfs;
-#define BP_VF(bp, idx) (&((bp)->vfdb->vfs[idx]))
+#define BP_VF(bp, idx) ((BP_VFDB(bp) && (bp)->vfdb->vfs) ? \
+ &((bp)->vfdb->vfs[idx]) : NULL)
#define bnx2x_vf(bp, idx, var) ((bp)->vfdb->vfs[idx].var)
/* queue array - for all vfs */
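Since BP_VF() can now evaluate to NULL whenever the VF database or its vfs array has not been allocated, call sites are expected to check the result before use, roughly along these lines (illustrative sketch only):

	struct bnx2x_virtf *vf = BP_VF(bp, vfidx);

	if (!vf)
		return -EINVAL;	/* SR-IOV not initialized for this PF */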
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
index ca47665..d160829 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
@@ -137,7 +137,7 @@ static void bnx2x_storm_stats_post(struct bnx2x *bp)
cpu_to_le16(bp->stats_counter++);
DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",
- bp->fw_stats_req->hdr.drv_stats_counter);
+ le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter));
/* adjust the ramrod to include VF queues statistics */
bnx2x_iov_adjust_stats_req(bp);
@@ -200,7 +200,7 @@ static void bnx2x_hw_stats_post(struct bnx2x *bp)
}
}
-static int bnx2x_stats_comp(struct bnx2x *bp)
+static void bnx2x_stats_comp(struct bnx2x *bp)
{
u32 *stats_comp = bnx2x_sp(bp, stats_comp);
int cnt = 10;
@@ -214,7 +214,6 @@ static int bnx2x_stats_comp(struct bnx2x *bp)
cnt--;
usleep_range(1000, 2000);
}
- return 1;
}
/*
@@ -1630,6 +1629,11 @@ void bnx2x_stats_init(struct bnx2x *bp)
int /*abs*/port = BP_PORT(bp);
int mb_idx = BP_FW_MB_IDX(bp);
+ if (IS_VF(bp)) {
+ bnx2x_memset_stats(bp);
+ return;
+ }
+
bp->stats_pending = 0;
bp->executer_idx = 0;
bp->stats_counter = 0;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 54e0427..b1d9c44 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -583,7 +583,6 @@ int bnx2x_vfpf_setup_q(struct bnx2x *bp, struct bnx2x_fastpath *fp,
flags |= VFPF_QUEUE_FLG_STATS;
flags |= VFPF_QUEUE_FLG_CACHE_ALIGN;
flags |= VFPF_QUEUE_FLG_VLAN;
- DP(NETIF_MSG_IFUP, "vlan removal enabled\n");
/* Common */
req->vf_qid = fp_idx;
@@ -952,14 +951,6 @@ static void storm_memset_vf_mbx_valid(struct bnx2x *bp, u16 abs_fid)
REG_WR8(bp, addr, 1);
}
-static inline void bnx2x_set_vf_mbxs_valid(struct bnx2x *bp)
-{
- int i;
-
- for_each_vf(bp, i)
- storm_memset_vf_mbx_valid(bp, bnx2x_vf(bp, i, abs_vfid));
-}
-
/* enable vf_pf mailbox (aka vf-pf-channel) */
void bnx2x_vf_enable_mbx(struct bnx2x *bp, u8 abs_vfid)
{
diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index a6a9f28..23f23c9 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -383,7 +383,7 @@ static int cnic_iscsi_nl_msg_recv(struct cnic_dev *dev, u32 msg_type,
break;
rcu_read_lock();
- if (!rcu_dereference(cp->ulp_ops[CNIC_ULP_L4])) {
+ if (!rcu_access_pointer(cp->ulp_ops[CNIC_ULP_L4])) {
rc = -ENODEV;
rcu_read_unlock();
break;
@@ -527,7 +527,7 @@ int cnic_unregister_driver(int ulp_type)
list_for_each_entry(dev, &cnic_dev_list, list) {
struct cnic_local *cp = dev->cnic_priv;
- if (rcu_dereference(cp->ulp_ops[ulp_type])) {
+ if (rcu_access_pointer(cp->ulp_ops[ulp_type])) {
pr_err("%s: Type %d still has devices registered\n",
__func__, ulp_type);
read_unlock(&cnic_dev_lock);
@@ -575,7 +575,7 @@ static int cnic_register_device(struct cnic_dev *dev, int ulp_type,
mutex_unlock(&cnic_lock);
return -EAGAIN;
}
- if (rcu_dereference(cp->ulp_ops[ulp_type])) {
+ if (rcu_access_pointer(cp->ulp_ops[ulp_type])) {
pr_err("%s: Type %d has already been registered to this device\n",
__func__, ulp_type);
mutex_unlock(&cnic_lock);
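The three cnic hunks above follow the documented RCU convention: rcu_dereference() is meant for pointers that will actually be dereferenced inside a read-side critical section, while rcu_access_pointer() is the lighter primitive for merely inspecting the pointer value, which is all these checks do when testing whether a ULP is registered.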
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 5cc9cae..fff2634 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -191,8 +191,9 @@ enum dma_reg {
DMA_STATUS,
DMA_SCB_BURST_SIZE,
DMA_ARB_CTRL,
- DMA_PRIORITY,
- DMA_RING_PRIORITY,
+ DMA_PRIORITY_0,
+ DMA_PRIORITY_1,
+ DMA_PRIORITY_2,
};
static const u8 bcmgenet_dma_regs_v3plus[] = {
@@ -201,8 +202,9 @@ static const u8 bcmgenet_dma_regs_v3plus[] = {
[DMA_STATUS] = 0x08,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x2C,
- [DMA_PRIORITY] = 0x30,
- [DMA_RING_PRIORITY] = 0x38,
+ [DMA_PRIORITY_0] = 0x30,
+ [DMA_PRIORITY_1] = 0x34,
+ [DMA_PRIORITY_2] = 0x38,
};
static const u8 bcmgenet_dma_regs_v2[] = {
@@ -211,8 +213,9 @@ static const u8 bcmgenet_dma_regs_v2[] = {
[DMA_STATUS] = 0x08,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x30,
- [DMA_PRIORITY] = 0x34,
- [DMA_RING_PRIORITY] = 0x3C,
+ [DMA_PRIORITY_0] = 0x34,
+ [DMA_PRIORITY_1] = 0x38,
+ [DMA_PRIORITY_2] = 0x3C,
};
static const u8 bcmgenet_dma_regs_v1[] = {
@@ -220,8 +223,9 @@ static const u8 bcmgenet_dma_regs_v1[] = {
[DMA_STATUS] = 0x04,
[DMA_SCB_BURST_SIZE] = 0x0C,
[DMA_ARB_CTRL] = 0x30,
- [DMA_PRIORITY] = 0x34,
- [DMA_RING_PRIORITY] = 0x3C,
+ [DMA_PRIORITY_0] = 0x34,
+ [DMA_PRIORITY_1] = 0x38,
+ [DMA_PRIORITY_2] = 0x3C,
};
/* Set at runtime once bcmgenet version is known */
@@ -1054,7 +1058,8 @@ static int bcmgenet_xmit_frag(struct net_device *dev,
/* Reallocate the SKB to put enough headroom in front of it and insert
* the transmit checksum offsets in the descriptors
*/
-static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
+static struct sk_buff *bcmgenet_put_tx_csum(struct net_device *dev,
+ struct sk_buff *skb)
{
struct status_64 *status = NULL;
struct sk_buff *new_skb;
@@ -1072,7 +1077,7 @@ static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
if (!new_skb) {
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
- return -ENOMEM;
+ return NULL;
}
skb = new_skb;
}
@@ -1090,7 +1095,7 @@ static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
ip_proto = ipv6_hdr(skb)->nexthdr;
break;
default:
- return 0;
+ return skb;
}
offset = skb_checksum_start_offset(skb) - sizeof(*status);
@@ -1111,7 +1116,7 @@ static int bcmgenet_put_tx_csum(struct net_device *dev, struct sk_buff *skb)
status->tx_csum_info = tx_csum_info;
}
- return 0;
+ return skb;
}
static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1158,8 +1163,8 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
/* set the SKB transmit checksum */
if (priv->desc_64b_en) {
- ret = bcmgenet_put_tx_csum(dev, skb);
- if (ret) {
+ skb = bcmgenet_put_tx_csum(dev, skb);
+ if (!skb) {
ret = NETDEV_TX_OK;
goto out;
}
@@ -1695,7 +1700,8 @@ static void bcmgenet_init_multiq(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
unsigned int i, dma_enable;
- u32 reg, dma_ctrl, ring_cfg = 0, dma_priority = 0;
+ u32 reg, dma_ctrl, ring_cfg = 0;
+ u32 dma_priority[3] = {0, 0, 0};
if (!netif_is_multiqueue(dev)) {
netdev_warn(dev, "called with non multi queue aware HW\n");
@@ -1720,22 +1726,25 @@ static void bcmgenet_init_multiq(struct net_device *dev)
/* Configure ring as descriptor ring and setup priority */
ring_cfg |= 1 << i;
- dma_priority |= ((GENET_Q0_PRIORITY + i) <<
- (GENET_MAX_MQ_CNT + 1) * i);
dma_ctrl |= 1 << (i + DMA_RING_BUF_EN_SHIFT);
+
+ dma_priority[DMA_PRIO_REG_INDEX(i)] |=
+ ((GENET_Q0_PRIORITY + i) << DMA_PRIO_REG_SHIFT(i));
}
+ /* Set ring 16 priority and program the hardware registers */
+ dma_priority[DMA_PRIO_REG_INDEX(DESC_INDEX)] |=
+ ((GENET_Q0_PRIORITY + priv->hw_params->tx_queues) <<
+ DMA_PRIO_REG_SHIFT(DESC_INDEX));
+ bcmgenet_tdma_writel(priv, dma_priority[0], DMA_PRIORITY_0);
+ bcmgenet_tdma_writel(priv, dma_priority[1], DMA_PRIORITY_1);
+ bcmgenet_tdma_writel(priv, dma_priority[2], DMA_PRIORITY_2);
+
/* Enable rings */
reg = bcmgenet_tdma_readl(priv, DMA_RING_CFG);
reg |= ring_cfg;
bcmgenet_tdma_writel(priv, reg, DMA_RING_CFG);
- /* Use configured rings priority and set ring #16 priority */
- reg = bcmgenet_tdma_readl(priv, DMA_RING_PRIORITY);
- reg |= ((GENET_Q0_PRIORITY + priv->hw_params->tx_queues) << 20);
- reg |= dma_priority;
- bcmgenet_tdma_writel(priv, reg, DMA_PRIORITY);
-
/* Configure ring as descriptor ring and re-enable DMA if enabled */
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg |= dma_ctrl;
@@ -2017,19 +2026,6 @@ static void bcmgenet_set_hw_addr(struct bcmgenet_priv *priv,
bcmgenet_umac_writel(priv, (addr[4] << 8) | addr[5], UMAC_MAC1);
}
-static int bcmgenet_wol_resume(struct bcmgenet_priv *priv)
-{
- /* From WOL-enabled suspend, switch to regular clock */
- if (priv->wolopts)
- clk_disable_unprepare(priv->clk_wol);
-
- phy_init_hw(priv->phydev);
- /* Speed settings must be restored */
- bcmgenet_mii_config(priv->dev);
-
- return 0;
-}
-
/* Returns a reusable dma control register value */
static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
{
@@ -2174,9 +2170,10 @@ static void bcmgenet_netif_stop(struct net_device *dev)
*/
cancel_work_sync(&priv->bcmgenet_irq_work);
- priv->old_pause = -1;
priv->old_link = -1;
+ priv->old_speed = -1;
priv->old_duplex = -1;
+ priv->old_pause = -1;
}
static int bcmgenet_close(struct net_device *dev)
@@ -2439,6 +2436,13 @@ static void bcmgenet_set_hw_params(struct bcmgenet_priv *priv)
dev_info(&priv->pdev->dev, "GENET " GENET_VER_FMT,
major, (reg >> 16) & 0x0f, reg & 0xffff);
+ /* Store the integrated PHY revision for the MDIO probing function
+ * to pass this information to the PHY driver. The PHY driver expects
+ * to find the PHY major revision in bits 15:8 while the GENET register
+ * stores that information in bits 7:0, account for that.
+ */
+ priv->gphy_rev = (reg & 0xffff) << 8;
+
#ifdef CONFIG_PHYS_ADDR_T_64BIT
if (!(params->flags & GENET_HAS_40BITS))
pr_warn("GENET does not support 40-bits PA\n");
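As a concrete illustration of the shift described in the gphy_rev comment above: if the low byte of the GENET version register reads 0xa0 (an illustrative value), the driver stores priv->gphy_rev = 0xa000, placing the major revision in bits 15:8 where the PHY driver expects to find it.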
@@ -2676,9 +2680,13 @@ static int bcmgenet_resume(struct device *d)
if (ret)
goto out_clk_disable;
- ret = bcmgenet_wol_resume(priv);
- if (ret)
- goto out_clk_disable;
+ /* From WOL-enabled suspend, switch to regular clock */
+ if (priv->wolopts)
+ clk_disable_unprepare(priv->clk_wol);
+
+ phy_init_hw(priv->phydev);
+ /* Speed settings must be restored */
+ bcmgenet_mii_config(priv->dev);
/* disable ethernet MAC while updating its registers */
umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, false);
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index c862d06..dbf524e 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -401,6 +401,8 @@ struct bcmgenet_mib_counters {
#define DMA_ARBITER_MODE_MASK 0x03
#define DMA_RING_BUF_PRIORITY_MASK 0x1F
#define DMA_RING_BUF_PRIORITY_SHIFT 5
+#define DMA_PRIO_REG_INDEX(q) ((q) / 6)
+#define DMA_PRIO_REG_SHIFT(q) (((q) % 6) * DMA_RING_BUF_PRIORITY_SHIFT)
#define DMA_RATE_ADJ_MASK 0xFF
/* Tx/Rx Dma Descriptor common bits*/
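A quick way to see that the new indexing macros reproduce the old hard-coded layout (a sketch, assuming the default ring is DESC_INDEX == 16 as defined elsewhere in this header): six 5-bit priority fields fit per 32-bit DMA_PRIORITY_n register, so ring q lives in register q / 6 at bit offset (q % 6) * 5.

	BUILD_BUG_ON(DMA_PRIO_REG_INDEX(16) != 2);	/* lands in DMA_PRIORITY_2 */
	BUILD_BUG_ON(DMA_PRIO_REG_SHIFT(16) != 20);	/* matches the old "<< 20" */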
@@ -545,10 +547,12 @@ struct bcmgenet_priv {
struct phy_device *phydev;
struct device_node *phy_dn;
struct mii_bus *mii_bus;
+ u16 gphy_rev;
/* PHY device variables */
- int old_duplex;
int old_link;
+ int old_speed;
+ int old_duplex;
int old_pause;
phy_interface_t phy_interface;
int phy_addr;
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
index c88f7ae..9ff799a 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -82,24 +82,33 @@ static void bcmgenet_mii_setup(struct net_device *dev)
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev = priv->phydev;
u32 reg, cmd_bits = 0;
- unsigned int status_changed = 0;
+ bool status_changed = false;
if (priv->old_link != phydev->link) {
- status_changed = 1;
+ status_changed = true;
priv->old_link = phydev->link;
}
if (phydev->link) {
- /* program UMAC and RGMII block based on established link
- * speed, pause, and duplex.
- * the speed set in umac->cmd tell RGMII block which clock
- * 25MHz(100Mbps)/125MHz(1Gbps) to use for transmit.
- * receive clock is provided by PHY.
- */
- reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
- reg &= ~OOB_DISABLE;
- reg |= RGMII_LINK;
- bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+ /* check speed/duplex/pause changes */
+ if (priv->old_speed != phydev->speed) {
+ status_changed = true;
+ priv->old_speed = phydev->speed;
+ }
+
+ if (priv->old_duplex != phydev->duplex) {
+ status_changed = true;
+ priv->old_duplex = phydev->duplex;
+ }
+
+ if (priv->old_pause != phydev->pause) {
+ status_changed = true;
+ priv->old_pause = phydev->pause;
+ }
+
+ /* done if nothing has changed */
+ if (!status_changed)
+ return;
/* speed */
if (phydev->speed == SPEED_1000)
@@ -110,36 +119,39 @@ static void bcmgenet_mii_setup(struct net_device *dev)
cmd_bits = UMAC_SPEED_10;
cmd_bits <<= CMD_SPEED_SHIFT;
- if (priv->old_duplex != phydev->duplex) {
- status_changed = 1;
- priv->old_duplex = phydev->duplex;
- }
-
/* duplex */
if (phydev->duplex != DUPLEX_FULL)
cmd_bits |= CMD_HD_EN;
- if (priv->old_pause != phydev->pause) {
- status_changed = 1;
- priv->old_pause = phydev->pause;
- }
-
/* pause capability */
if (!phydev->pause)
cmd_bits |= CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE;
- }
- if (!status_changed)
- return;
+ /*
+ * Program UMAC and RGMII block based on established
+ * link speed, duplex, and pause. The speed set in
+ * umac->cmd tells the RGMII block which clock to use for
+ * transmit -- 25MHz(100Mbps) or 125MHz(1Gbps).
+ * Receive clock is provided by the PHY.
+ */
+ reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+ reg &= ~OOB_DISABLE;
+ reg |= RGMII_LINK;
+ bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
- if (phydev->link) {
reg = bcmgenet_umac_readl(priv, UMAC_CMD);
reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) |
CMD_HD_EN |
CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE);
reg |= cmd_bits;
bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+ } else {
+ /* done if nothing has changed */
+ if (!status_changed)
+ return;
+ /* needed for MoCA fixed PHY to reflect correct link status */
+ netif_carrier_off(dev);
}
phy_print_status(phydev);
@@ -296,7 +308,7 @@ static int bcmgenet_mii_probe(struct net_device *dev)
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device_node *dn = priv->pdev->dev.of_node;
struct phy_device *phydev;
- unsigned int phy_flags;
+ u32 phy_flags;
int ret;
if (priv->phydev) {
@@ -315,16 +327,22 @@ static int bcmgenet_mii_probe(struct net_device *dev)
priv->phy_dn = of_node_get(dn);
}
- phydev = of_phy_connect(dev, priv->phy_dn, bcmgenet_mii_setup, 0,
- priv->phy_interface);
+ /* Communicate the integrated PHY revision */
+ phy_flags = priv->gphy_rev;
+
+ /* Initialize link state variables that bcmgenet_mii_setup() uses */
+ priv->old_link = -1;
+ priv->old_speed = -1;
+ priv->old_duplex = -1;
+ priv->old_pause = -1;
+
+ phydev = of_phy_connect(dev, priv->phy_dn, bcmgenet_mii_setup,
+ phy_flags, priv->phy_interface);
if (!phydev) {
pr_err("could not attach to PHY\n");
return -ENODEV;
}
- priv->old_link = -1;
- priv->old_duplex = -1;
- priv->old_pause = -1;
priv->phydev = phydev;
/* Configure port multiplexer based on what the probed PHY device since
@@ -338,15 +356,6 @@ static int bcmgenet_mii_probe(struct net_device *dev)
return ret;
}
- phy_flags = PHY_BRCM_100MBPS_WAR;
-
- /* workarounds are only needed for 100Mpbs PHYs, and
- * never on GENET V1 hardware
- */
- if ((phydev->supported & PHY_GBIT_FEATURES) || GENET_IS_V1(priv))
- phy_flags = 0;
-
- phydev->dev_flags |= phy_flags;
phydev->advertising = phydev->supported;
/* The internal PHY has its link interrupts routed to the
diff --git a/drivers/net/ethernet/brocade/bna/bna_enet.c b/drivers/net/ethernet/brocade/bna/bna_enet.c
index 13f9636..903466e 100644
--- a/drivers/net/ethernet/brocade/bna/bna_enet.c
+++ b/drivers/net/ethernet/brocade/bna/bna_enet.c
@@ -107,7 +107,8 @@ bna_bfi_ethport_admin_rsp(struct bna_ethport *ethport,
{
struct bfi_enet_enable_req *admin_req =
&ethport->bfi_enet_cmd.admin_req;
- struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+ struct bfi_enet_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_rsp, mh);
switch (admin_req->enable) {
case BNA_STATUS_T_ENABLED:
@@ -133,7 +134,8 @@ bna_bfi_ethport_lpbk_rsp(struct bna_ethport *ethport,
{
struct bfi_enet_diag_lb_req *diag_lb_req =
&ethport->bfi_enet_cmd.lpbk_req;
- struct bfi_enet_rsp *rsp = (struct bfi_enet_rsp *)msghdr;
+ struct bfi_enet_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_rsp, mh);
switch (diag_lb_req->enable) {
case BNA_STATUS_T_ENABLED:
@@ -161,7 +163,8 @@ static void
bna_bfi_attr_get_rsp(struct bna_ioceth *ioceth,
struct bfi_msgq_mhdr *msghdr)
{
- struct bfi_enet_attr_rsp *rsp = (struct bfi_enet_attr_rsp *)msghdr;
+ struct bfi_enet_attr_rsp *rsp =
+ container_of(msghdr, struct bfi_enet_attr_rsp, mh);
/**
* Store only if not set earlier, since BNAD can override the HW
diff --git a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
index 85e6354..5fac411 100644
--- a/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
+++ b/drivers/net/ethernet/brocade/bna/bna_tx_rx.c
@@ -715,7 +715,7 @@ bna_bfi_rxf_ucast_set_rsp(struct bna_rxf *rxf,
struct bfi_msgq_mhdr *msghdr)
{
struct bfi_enet_rsp *rsp =
- (struct bfi_enet_rsp *)msghdr;
+ container_of(msghdr, struct bfi_enet_rsp, mh);
if (rsp->error) {
/* Clear ucast from cache */
@@ -732,7 +732,7 @@ bna_bfi_rxf_mcast_add_rsp(struct bna_rxf *rxf,
struct bfi_enet_mcast_add_req *req =
&rxf->bfi_enet_cmd.mcast_add_req;
struct bfi_enet_mcast_add_rsp *rsp =
- (struct bfi_enet_mcast_add_rsp *)msghdr;
+ container_of(msghdr, struct bfi_enet_mcast_add_rsp, mh);
bna_rxf_mchandle_attach(rxf, (u8 *)&req->mac_addr,
ntohs(rsp->handle));
@@ -3410,7 +3410,7 @@ bna_bfi_tx_enet_start(struct bna_tx *tx)
cfg_req->tx_cfg.vlan_mode = BFI_ENET_TX_VLAN_WI;
cfg_req->tx_cfg.vlan_id = htons((u16)tx->txf_vlan_id);
- cfg_req->tx_cfg.admit_tagged_frame = BNA_STATUS_T_DISABLED;
+ cfg_req->tx_cfg.admit_tagged_frame = BNA_STATUS_T_ENABLED;
cfg_req->tx_cfg.apply_vlan_filter = BNA_STATUS_T_DISABLED;
bfa_msgq_cmd_set(&tx->msgq_cmd, NULL, NULL,
diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c
index ffc92a4..153cafa 100644
--- a/drivers/net/ethernet/brocade/bna/bnad.c
+++ b/drivers/net/ethernet/brocade/bna/bnad.c
@@ -2864,7 +2864,7 @@ bnad_txq_wi_prepare(struct bnad *bnad, struct bna_tcb *tcb,
txqent->hdr.wi.opcode = htons(BNA_TXQ_WI_SEND);
txqent->hdr.wi.lso_mss = 0;
- if (unlikely(skb->len > (bnad->netdev->mtu + ETH_HLEN))) {
+ if (unlikely(skb->len > (bnad->netdev->mtu + VLAN_ETH_HLEN))) {
BNAD_UPDATE_CTR(bnad, tx_skb_non_tso_too_long);
return -EINVAL;
}
diff --git a/drivers/net/ethernet/cadence/at91_ether.c b/drivers/net/ethernet/cadence/at91_ether.c
index 4a79eda..4a24b9a 100644
--- a/drivers/net/ethernet/cadence/at91_ether.c
+++ b/drivers/net/ethernet/cadence/at91_ether.c
@@ -351,7 +351,6 @@ static int __init at91ether_probe(struct platform_device *pdev)
if (res)
goto err_disable_clock;
- ether_setup(dev);
dev->netdev_ops = &at91ether_netdev_ops;
dev->ethtool_ops = &macb_ethtool_ops;
platform_set_drvdata(pdev, dev);
diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
index e1e02fb..4d9fc05 100644
--- a/drivers/net/ethernet/cadence/macb.c
+++ b/drivers/net/ethernet/cadence/macb.c
@@ -2230,9 +2230,9 @@ static int __init macb_probe(struct platform_device *pdev)
netif_carrier_off(dev);
- netdev_info(dev, "Cadence %s at 0x%08lx irq %d (%pM)\n",
- macb_is_gem(bp) ? "GEM" : "MACB", dev->base_addr,
- dev->irq, dev->dev_addr);
+ netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
+ macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
+ dev->base_addr, dev->irq, dev->dev_addr);
phydev = bp->phy_dev;
netdev_info(dev, "attached PHY driver [%s] (mii_bus:phy_addr=%s, irq=%d)\n",
diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c
index 25d6b2a..47bfea2 100644
--- a/drivers/net/ethernet/calxeda/xgmac.c
+++ b/drivers/net/ethernet/calxeda/xgmac.c
@@ -1735,7 +1735,6 @@ static int xgmac_probe(struct platform_device *pdev)
SET_NETDEV_DEV(ndev, &pdev->dev);
priv = netdev_priv(ndev);
platform_set_drvdata(pdev, ndev);
- ether_setup(ndev);
ndev->netdev_ops = &xgmac_netdev_ops;
ndev->ethtool_ops = &xgmac_ethtool_ops;
spin_lock_init(&priv->stats_lock);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index c067b78..9b2c669 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -431,6 +431,7 @@ struct sge_fl { /* SGE free-buffer queue state */
struct rx_sw_desc *sdesc; /* address of SW Rx descriptor ring */
__be64 *desc; /* address of HW Rx descriptor ring */
dma_addr_t addr; /* bus address of HW ring start */
+ u64 udb; /* BAR2 offset of User Doorbell area */
};
/* A packet gather list */
@@ -451,6 +452,7 @@ struct sge_rspq { /* state for an SGE response queue */
u8 gen; /* current generation bit */
u8 intr_params; /* interrupt holdoff parameters */
u8 next_intr_params; /* holdoff params for next interrupt */
+ u8 adaptive_rx;
u8 pktcnt_idx; /* interrupt packet threshold */
u8 uld; /* ULD handling this queue */
u8 idx; /* queue index within its group */
@@ -459,6 +461,7 @@ struct sge_rspq { /* state for an SGE response queue */
u16 abs_id; /* absolute SGE id for the response q */
__be64 *desc; /* address of HW response ring */
dma_addr_t phys_addr; /* physical address of the ring */
+ u64 udb; /* BAR2 offset of User Doorbell area */
unsigned int iqe_len; /* entry size */
unsigned int size; /* capacity of response queue */
struct adapter *adap;
@@ -516,7 +519,7 @@ struct sge_txq {
int db_disabled;
unsigned short db_pidx;
unsigned short db_pidx_inc;
- u64 udb;
+ u64 udb; /* BAR2 offset of User Doorbell area */
};
struct sge_eth_txq { /* state for an SGE Ethernet Tx queue */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index e5be511..321f3d9 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -283,6 +283,9 @@ static const struct pci_device_id cxgb4_pci_tbl[] = {
CH_DEVICE(0x5083, 4),
CH_DEVICE(0x5084, 4),
CH_DEVICE(0x5085, 4),
+ CH_DEVICE(0x5086, 4),
+ CH_DEVICE(0x5087, 4),
+ CH_DEVICE(0x5088, 4),
CH_DEVICE(0x5401, 4),
CH_DEVICE(0x5402, 4),
CH_DEVICE(0x5403, 4),
@@ -310,6 +313,9 @@ static const struct pci_device_id cxgb4_pci_tbl[] = {
CH_DEVICE(0x5483, 4),
CH_DEVICE(0x5484, 4),
CH_DEVICE(0x5485, 4),
+ CH_DEVICE(0x5486, 4),
+ CH_DEVICE(0x5487, 4),
+ CH_DEVICE(0x5488, 4),
{ 0, }
};
@@ -2747,8 +2753,31 @@ static int set_rx_intr_params(struct net_device *dev,
return 0;
}
+static int set_adaptive_rx_setting(struct net_device *dev, int adaptive_rx)
+{
+ int i;
+ struct port_info *pi = netdev_priv(dev);
+ struct adapter *adap = pi->adapter;
+ struct sge_eth_rxq *q = &adap->sge.ethrxq[pi->first_qset];
+
+ for (i = 0; i < pi->nqsets; i++, q++)
+ q->rspq.adaptive_rx = adaptive_rx;
+
+ return 0;
+}
+
+static int get_adaptive_rx_setting(struct net_device *dev)
+{
+ struct port_info *pi = netdev_priv(dev);
+ struct adapter *adap = pi->adapter;
+ struct sge_eth_rxq *q = &adap->sge.ethrxq[pi->first_qset];
+
+ return q->rspq.adaptive_rx;
+}
+
static int set_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
{
+ set_adaptive_rx_setting(dev, c->use_adaptive_rx_coalesce);
return set_rx_intr_params(dev, c->rx_coalesce_usecs,
c->rx_max_coalesced_frames);
}
@@ -2762,6 +2791,7 @@ static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
c->rx_coalesce_usecs = qtimer_val(adap, rq);
c->rx_max_coalesced_frames = (rq->intr_params & QINTR_CNT_EN) ?
adap->sge.counter_val[rq->pktcnt_idx] : 0;
+ c->use_adaptive_rx_coalesce = get_adaptive_rx_setting(dev);
return 0;
}
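With these helpers wired into the standard coalescing ops, adaptive interrupt moderation can be toggled per interface through the usual ethtool interface, e.g. "ethtool -C ethX adaptive-rx on"; when enabled, napi_rx_handler() in sge.c (further down) steps the holdoff timer index up or down according to how much work each NAPI poll completed.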
@@ -4390,7 +4420,6 @@ static int cxgb4_inet6addr_handler(struct notifier_block *this,
* bond. We need to find such different adapters and add clip
* in all of them only once.
*/
- read_lock(&bond->lock);
bond_for_each_slave(bond, slave, iter) {
if (!first_pdev) {
ret = clip_add(slave->dev, ifa, event);
@@ -4404,7 +4433,6 @@ static int cxgb4_inet6addr_handler(struct notifier_block *this,
to_pci_dev(slave->dev->dev.parent))
ret = clip_add(slave->dev, ifa, event);
}
- read_unlock(&bond->lock);
} else
ret = clip_add(ifa->idev->dev, ifa, event);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index d22d728..fab4c84 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -203,6 +203,9 @@ enum {
RX_LARGE_MTU_BUF = 0x3, /* large MTU buffer */
};
+static int timer_pkt_quota[] = {1, 1, 2, 3, 4, 5};
+#define MIN_NAPI_WORK 1
+
static inline dma_addr_t get_buf_addr(const struct rx_sw_desc *d)
{
return d->dma_addr & ~(dma_addr_t)RX_BUF_FLAGS;
@@ -521,9 +524,23 @@ static inline void ring_fl_db(struct adapter *adap, struct sge_fl *q)
val = PIDX(q->pend_cred / 8);
if (!is_t4(adap->params.chip))
val |= DBTYPE(1);
+ val |= DBPRIO(1);
wmb();
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
- QID(q->cntxt_id) | val);
+
+ /* If we're on T4, use the old doorbell mechanism; otherwise
+ * use the new BAR2 mechanism.
+ */
+ if (is_t4(adap->params.chip)) {
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
+ val | QID(q->cntxt_id));
+ } else {
+ writel(val, adap->bar2 + q->udb + SGE_UDB_KDOORBELL);
+
+ /* This Write memory Barrier will force the write to
+ * the User Doorbell area to be flushed.
+ */
+ wmb();
+ }
q->pend_cred &= 7;
}
}
@@ -833,13 +850,14 @@ static void write_sgl(const struct sk_buff *skb, struct sge_txq *q,
*end = 0;
}
-/* This function copies 64 byte coalesced work request to
- * memory mapped BAR2 space(user space writes).
- * For coalesced WR SGE, fetches data from the FIFO instead of from Host.
+/* This function copies a tx_desc struct to memory mapped BAR2 space(user space
+ * writes). For coalesced WR SGE, fetches data from the FIFO instead of from
+ * Host.
*/
-static void cxgb_pio_copy(u64 __iomem *dst, u64 *src)
+static void cxgb_pio_copy(u64 __iomem *dst, struct tx_desc *desc)
{
- int count = 8;
+ int count = sizeof(*desc) / sizeof(u64);
+ u64 *src = (u64 *)desc;
while (count) {
writeq(*src, dst);
@@ -859,30 +877,63 @@ static void cxgb_pio_copy(u64 __iomem *dst, u64 *src)
*/
static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q, int n)
{
- unsigned int *wr, index;
- unsigned long flags;
-
wmb(); /* write descriptors before telling HW */
- spin_lock_irqsave(&q->db_lock, flags);
- if (!q->db_disabled) {
- if (is_t4(adap->params.chip)) {
+
+ if (is_t4(adap->params.chip)) {
+ u32 val = PIDX(n);
+ unsigned long flags;
+
+ /* For T4 we need to participate in the Doorbell Recovery
+ * mechanism.
+ */
+ spin_lock_irqsave(&q->db_lock, flags);
+ if (!q->db_disabled)
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(q->cntxt_id) | PIDX(n));
+ QID(q->cntxt_id) | val);
+ else
+ q->db_pidx_inc += n;
+ q->db_pidx = q->pidx;
+ spin_unlock_irqrestore(&q->db_lock, flags);
+ } else {
+ u32 val = PIDX_T5(n);
+
+ /* T4 and later chips share the same PIDX field offset within
+ * the doorbell, but T5 and later shrank the field in order to
+ * gain a bit for Doorbell Priority. The field was absurdly
+ * large in the first place (14 bits) so we just use the T5
+ * and later limits and warn if a Queue ID is too large.
+ */
+ WARN_ON(val & DBPRIO(1));
+
+ /* For T5 and later we use the Write-Combine mapped BAR2 User
+ * Doorbell mechanism. If we're only writing a single TX
+ * Descriptor and TX Write Combining hasn't been disabled, we
+ * can use the Write Combining Gather Buffer; otherwise we use
+ * the simple doorbell.
+ */
+ if (n == 1) {
+ int index = (q->pidx
+ ? (q->pidx - 1)
+ : (q->size - 1));
+
+ cxgb_pio_copy(adap->bar2 + q->udb + SGE_UDB_WCDOORBELL,
+ q->desc + index);
} else {
- if (n == 1) {
- index = q->pidx ? (q->pidx - 1) : (q->size - 1);
- wr = (unsigned int *)&q->desc[index];
- cxgb_pio_copy((u64 __iomem *)
- (adap->bar2 + q->udb + 64),
- (u64 *)wr);
- } else
- writel(n, adap->bar2 + q->udb + 8);
- wmb();
+ writel(val, adap->bar2 + q->udb + SGE_UDB_KDOORBELL);
}
- } else
- q->db_pidx_inc += n;
- q->db_pidx = q->pidx;
- spin_unlock_irqrestore(&q->db_lock, flags);
+
+ /* This Write Memory Barrier will force the write to the User
+ * Doorbell area to be flushed. This is needed to prevent
+ * writes on different CPUs for the same queue from hitting
+ * the adapter out of order. This is required when some Work
+ * Requests take the Write Combine Gather Buffer path (user
+ * doorbell area offset [SGE_UDB_WCDOORBELL..+63]) and some
+ * take the traditional path where we simply increment the
+ * PIDX (User Doorbell area SGE_UDB_KDOORBELL) and have the
+ * hardware DMA read the actual Work Request.
+ */
+ wmb();
+ }
}
/**
@@ -1916,16 +1967,40 @@ static int napi_rx_handler(struct napi_struct *napi, int budget)
unsigned int params;
struct sge_rspq *q = container_of(napi, struct sge_rspq, napi);
int work_done = process_responses(q, budget);
+ u32 val;
if (likely(work_done < budget)) {
+ int timer_index;
+
napi_complete(napi);
- params = q->next_intr_params;
- q->next_intr_params = q->intr_params;
+ timer_index = QINTR_TIMER_IDX_GET(q->next_intr_params);
+
+ if (q->adaptive_rx) {
+ if (work_done > max(timer_pkt_quota[timer_index],
+ MIN_NAPI_WORK))
+ timer_index = (timer_index + 1);
+ else
+ timer_index = timer_index - 1;
+
+ timer_index = clamp(timer_index, 0, SGE_TIMERREGS - 1);
+ q->next_intr_params = QINTR_TIMER_IDX(timer_index) |
+ V_QINTR_CNT_EN;
+ params = q->next_intr_params;
+ } else {
+ params = q->next_intr_params;
+ q->next_intr_params = q->intr_params;
+ }
} else
params = QINTR_TIMER_IDX(7);
- t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS), CIDXINC(work_done) |
- INGRESSQID((u32)q->cntxt_id) | SEINTARM(params));
+ val = CIDXINC(work_done) | SEINTARM(params);
+ if (is_t4(q->adap->params.chip)) {
+ t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS),
+ val | INGRESSQID((u32)q->cntxt_id));
+ } else {
+ writel(val, q->adap->bar2 + q->udb + SGE_UDB_GTS);
+ wmb();
+ }
return work_done;
}
@@ -1949,6 +2024,7 @@ static unsigned int process_intrq(struct adapter *adap)
unsigned int credits;
const struct rsp_ctrl *rc;
struct sge_rspq *q = &adap->sge.intrq;
+ u32 val;
spin_lock(&adap->sge.intrq_lock);
for (credits = 0; ; credits++) {
@@ -1967,8 +2043,14 @@ static unsigned int process_intrq(struct adapter *adap)
rspq_next(q);
}
- t4_write_reg(adap, MYPF_REG(SGE_PF_GTS), CIDXINC(credits) |
- INGRESSQID(q->cntxt_id) | SEINTARM(q->intr_params));
+ val = CIDXINC(credits) | SEINTARM(q->intr_params);
+ if (is_t4(adap->params.chip)) {
+ t4_write_reg(adap, MYPF_REG(SGE_PF_GTS),
+ val | INGRESSQID(q->cntxt_id));
+ } else {
+ writel(val, adap->bar2 + q->udb + SGE_UDB_GTS);
+ wmb();
+ }
spin_unlock(&adap->sge.intrq_lock);
return credits;
}
@@ -2149,6 +2231,51 @@ static void sge_tx_timer_cb(unsigned long data)
mod_timer(&s->tx_timer, jiffies + (budget ? TX_QCHECK_PERIOD : 2));
}
+/**
+ * udb_address - return the BAR2 User Doorbell address for a Queue
+ * @adap: the adapter
+ * @cntxt_id: the Queue Context ID
+ * @qpp: Queues Per Page (for all PFs)
+ *
+ * Returns the BAR2 address of the user Doorbell associated with the
+ * indicated Queue Context ID. Note that this is only applicable
+ * for T5 and later.
+ */
+static u64 udb_address(struct adapter *adap, unsigned int cntxt_id,
+ unsigned int qpp)
+{
+ u64 udb;
+ unsigned int s_qpp;
+ unsigned short udb_density;
+ unsigned long qpshift;
+ int page;
+
+ BUG_ON(is_t4(adap->params.chip));
+
+ s_qpp = (QUEUESPERPAGEPF0 +
+ (QUEUESPERPAGEPF1 - QUEUESPERPAGEPF0) * adap->fn);
+ udb_density = 1 << ((qpp >> s_qpp) & QUEUESPERPAGEPF0_MASK);
+ qpshift = PAGE_SHIFT - ilog2(udb_density);
+ udb = (u64)cntxt_id << qpshift;
+ udb &= PAGE_MASK;
+ page = udb / PAGE_SIZE;
+ udb += (cntxt_id - (page * udb_density)) * SGE_UDB_SIZE;
+
+ return udb;
+}
+
+static u64 udb_address_eq(struct adapter *adap, unsigned int cntxt_id)
+{
+ return udb_address(adap, cntxt_id,
+ t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF));
+}
+
+static u64 udb_address_iq(struct adapter *adap, unsigned int cntxt_id)
+{
+ return udb_address(adap, cntxt_id,
+ t4_read_reg(adap, SGE_INGRESS_QUEUES_PER_PAGE_PF));
+}
+
int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
struct net_device *dev, int intr_idx,
struct sge_fl *fl, rspq_handler_t hnd)
@@ -2214,6 +2341,8 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
iq->next_intr_params = iq->intr_params;
iq->cntxt_id = ntohs(c.iqid);
iq->abs_id = ntohs(c.physiqid);
+ if (!is_t4(adap->params.chip))
+ iq->udb = udb_address_iq(adap, iq->cntxt_id);
iq->size--; /* subtract status entry */
iq->netdev = dev;
iq->handler = hnd;
@@ -2229,6 +2358,12 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
fl->pidx = fl->cidx = 0;
fl->alloc_failed = fl->large_alloc_failed = fl->starving = 0;
adap->sge.egr_map[fl->cntxt_id - adap->sge.egr_start] = fl;
+
+ /* Note, we must initialize the Free List User Doorbell
+ * address before refilling the Free List!
+ */
+ if (!is_t4(adap->params.chip))
+ fl->udb = udb_address_eq(adap, fl->cntxt_id);
refill_fl(adap, fl, fl_cap(fl), GFP_KERNEL);
}
return 0;
@@ -2254,21 +2389,8 @@ err:
static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id)
{
q->cntxt_id = id;
- if (!is_t4(adap->params.chip)) {
- unsigned int s_qpp;
- unsigned short udb_density;
- unsigned long qpshift;
- int page;
-
- s_qpp = QUEUESPERPAGEPF1 * adap->fn;
- udb_density = 1 << QUEUESPERPAGEPF0_GET((t4_read_reg(adap,
- SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp));
- qpshift = PAGE_SHIFT - ilog2(udb_density);
- q->udb = q->cntxt_id << qpshift;
- q->udb &= PAGE_MASK;
- page = q->udb / PAGE_SIZE;
- q->udb += (q->cntxt_id - (page * udb_density)) * 128;
- }
+ if (!is_t4(adap->params.chip))
+ q->udb = udb_address_eq(adap, q->cntxt_id);
q->in_use = 0;
q->cidx = q->pidx = 0;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
index 41d0446..22d7581 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@ -1099,6 +1099,9 @@ static int t4_flash_erase_sectors(struct adapter *adapter, int start, int end)
{
int ret = 0;
+ if (end >= adapter->params.sf_nsec)
+ return -EINVAL;
+
while (start <= end) {
if ((ret = sf1_write(adapter, 1, 0, 1, SF_WR_ENABLE)) != 0 ||
(ret = sf1_write(adapter, 4, 0, 1,
@@ -3850,8 +3853,20 @@ int t4_wait_dev_ready(struct adapter *adap)
return t4_read_reg(adap, PL_WHOAMI) != 0xffffffff ? 0 : -EIO;
}
+struct flash_desc {
+ u32 vendor_and_model_id;
+ u32 size_mb;
+};
+
static int get_flash_params(struct adapter *adap)
{
+ /* Table for supported non-Numonix flash parts. Numonix parts are left
+ * to the preexisting code. All flash parts have 64KB sectors.
+ */
+ static struct flash_desc supported_flash[] = {
+ { 0x150201, 4 << 20 }, /* Spansion 4MB S25FL032P */
+ };
+
int ret;
u32 info;
@@ -3862,6 +3877,14 @@ static int get_flash_params(struct adapter *adap)
if (ret)
return ret;
+ for (ret = 0; ret < ARRAY_SIZE(supported_flash); ++ret)
+ if (supported_flash[ret].vendor_and_model_id == info) {
+ adap->params.sf_size = supported_flash[ret].size_mb;
+ adap->params.sf_nsec =
+ adap->params.sf_size / SF_SEC_SIZE;
+ return 0;
+ }
+
if ((info & 0xff) != 0x20) /* not a Numonix flash */
return -EINVAL;
info >>= 16; /* log2 of size */
@@ -3874,6 +3897,10 @@ static int get_flash_params(struct adapter *adap)
adap->params.sf_size = 1 << info;
adap->params.sf_fw_start =
t4_read_reg(adap, CIM_BOOT_CFG) & BOOTADDR_MASK;
+
+ if (adap->params.sf_size < FLASH_MIN_SIZE)
+ dev_warn(adap->pdev_dev, "WARNING!!! FLASH size %#x < %#x!!!\n",
+ adap->params.sf_size, FLASH_MIN_SIZE);
return 0;
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
index 35e3d8e..c19a90e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
@@ -135,6 +135,7 @@ struct rsp_ctrl {
#define RSPD_GEN(x) ((x) >> 7)
#define RSPD_TYPE(x) (((x) >> 4) & 3)
+#define V_QINTR_CNT_EN 0x0
#define QINTR_CNT_EN 0x1
#define QINTR_TIMER_IDX(x) ((x) << 1)
#define QINTR_TIMER_IDX_GET(x) (((x) >> 1) & 0x7)
@@ -175,7 +176,7 @@ enum {
* Location of firmware image in FLASH.
*/
FLASH_FW_START_SEC = 8,
- FLASH_FW_NSECS = 8,
+ FLASH_FW_NSECS = 16,
FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
@@ -206,6 +207,12 @@ enum {
FLASH_CFG_START = FLASH_START(FLASH_CFG_START_SEC),
FLASH_CFG_MAX_SIZE = FLASH_MAX_SIZE(FLASH_CFG_NSECS),
+ /* We don't support FLASH devices which can't support the full
+ * standard set of sections which we need for normal
+ * operations.
+ */
+ FLASH_MIN_SIZE = FLASH_CFG_START + FLASH_CFG_MAX_SIZE,
+
FLASH_FPGA_CFG_START_SEC = 15,
FLASH_FPGA_CFG_START = FLASH_START(FLASH_FPGA_CFG_START_SEC),
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
index 39fb325..eee2728 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
@@ -77,6 +77,7 @@
#define PIDX_T5(x) (((x) >> S_PIDX_T5) & M_PIDX_T5)
+#define SGE_TIMERREGS 6
#define SGE_PF_GTS 0x4
#define INGRESSQID_MASK 0xffff0000U
#define INGRESSQID_SHIFT 16
@@ -157,8 +158,27 @@
#define QUEUESPERPAGEPF0_MASK 0x0000000fU
#define QUEUESPERPAGEPF0_GET(x) ((x) & QUEUESPERPAGEPF0_MASK)
+#define QUEUESPERPAGEPF0 0
#define QUEUESPERPAGEPF1 4
+/* T5 and later support a new BAR2-based doorbell mechanism for Egress Queues.
+ * The User Doorbells are each 128 bytes in length with a Simple Doorbell at
+ * offsets 8x and a Write Combining single 64-byte Egress Queue Unit
+ * (X_IDXSIZE_UNIT) Gather Buffer interface at offset 64. For Ingress Queues,
+ * we have a Going To Sleep register at offsets 8x+4.
+ *
+ * As noted above, we have many instances of the Simple Doorbell and Going To
+ * Sleep registers at offsets 8x and 8x+4, respectively. We want to use a
+ * non-64-byte aligned offset for the Simple Doorbell in order to attempt to
+ * avoid buffering of the writes to the Simple Doorbell and we want to use a
+ * non-contiguous offset for the Going To Sleep writes in order to avoid
+ * possible combining between them.
+ */
+#define SGE_UDB_SIZE 128
+#define SGE_UDB_KDOORBELL 8
+#define SGE_UDB_GTS 20
+#define SGE_UDB_WCDOORBELL 64
+
#define SGE_INT_CAUSE1 0x1024
#define SGE_INT_CAUSE2 0x1030
#define SGE_INT_CAUSE3 0x103c
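Putting this layout together with the udb_address() helper added to sge.c, a worked example (assuming 4 KiB pages and a doorbell density of one egress queue per page, so qpshift == PAGE_SHIFT == 12) would be:

	/* Queue context ID 7 maps to BAR2 offset 7 << 12 = 0x7000, and the
	 * doorbell writes in this patch then target:
	 *
	 *   0x7000 + SGE_UDB_KDOORBELL  (8)  - simple PIDX doorbell
	 *   0x7000 + SGE_UDB_GTS        (20) - CIDX update / interrupt re-arm
	 *   0x7000 + SGE_UDB_WCDOORBELL (64) - write-combined single descriptor
	 */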
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
index 2102a4c..8498a64 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
@@ -2907,61 +2907,62 @@ static void cxgb4vf_pci_shutdown(struct pci_dev *pdev)
/*
* PCI Device registration data structures.
*/
-#define CH_DEVICE(devid, idx) \
- { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, PCI_ANY_ID, 0, 0, idx }
+#define CH_DEVICE(devid) \
+ { PCI_VENDOR_ID_CHELSIO, devid, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }
static const struct pci_device_id cxgb4vf_pci_tbl[] = {
- CH_DEVICE(0xb000, 0), /* PE10K FPGA */
- CH_DEVICE(0x4800, 0), /* T440-dbg */
- CH_DEVICE(0x4801, 0), /* T420-cr */
- CH_DEVICE(0x4802, 0), /* T422-cr */
- CH_DEVICE(0x4803, 0), /* T440-cr */
- CH_DEVICE(0x4804, 0), /* T420-bch */
- CH_DEVICE(0x4805, 0), /* T440-bch */
- CH_DEVICE(0x4806, 0), /* T460-ch */
- CH_DEVICE(0x4807, 0), /* T420-so */
- CH_DEVICE(0x4808, 0), /* T420-cx */
- CH_DEVICE(0x4809, 0), /* T420-bt */
- CH_DEVICE(0x480a, 0), /* T404-bt */
- CH_DEVICE(0x480d, 0), /* T480-cr */
- CH_DEVICE(0x480e, 0), /* T440-lp-cr */
- CH_DEVICE(0x4880, 0),
- CH_DEVICE(0x4880, 1),
- CH_DEVICE(0x4880, 2),
- CH_DEVICE(0x4880, 3),
- CH_DEVICE(0x4880, 4),
- CH_DEVICE(0x4880, 5),
- CH_DEVICE(0x4880, 6),
- CH_DEVICE(0x4880, 7),
- CH_DEVICE(0x4880, 8),
- CH_DEVICE(0x5800, 0), /* T580-dbg */
- CH_DEVICE(0x5801, 0), /* T520-cr */
- CH_DEVICE(0x5802, 0), /* T522-cr */
- CH_DEVICE(0x5803, 0), /* T540-cr */
- CH_DEVICE(0x5804, 0), /* T520-bch */
- CH_DEVICE(0x5805, 0), /* T540-bch */
- CH_DEVICE(0x5806, 0), /* T540-ch */
- CH_DEVICE(0x5807, 0), /* T520-so */
- CH_DEVICE(0x5808, 0), /* T520-cx */
- CH_DEVICE(0x5809, 0), /* T520-bt */
- CH_DEVICE(0x580a, 0), /* T504-bt */
- CH_DEVICE(0x580b, 0), /* T520-sr */
- CH_DEVICE(0x580c, 0), /* T504-bt */
- CH_DEVICE(0x580d, 0), /* T580-cr */
- CH_DEVICE(0x580e, 0), /* T540-lp-cr */
- CH_DEVICE(0x580f, 0), /* Amsterdam */
- CH_DEVICE(0x5810, 0), /* T580-lp-cr */
- CH_DEVICE(0x5811, 0), /* T520-lp-cr */
- CH_DEVICE(0x5812, 0), /* T560-cr */
- CH_DEVICE(0x5813, 0), /* T580-cr */
- CH_DEVICE(0x5814, 0), /* T580-so-cr */
- CH_DEVICE(0x5815, 0), /* T502-bt */
- CH_DEVICE(0x5880, 0),
- CH_DEVICE(0x5881, 0),
- CH_DEVICE(0x5882, 0),
- CH_DEVICE(0x5883, 0),
- CH_DEVICE(0x5884, 0),
- CH_DEVICE(0x5885, 0),
+ CH_DEVICE(0xb000), /* PE10K FPGA */
+ CH_DEVICE(0x4801), /* T420-cr */
+ CH_DEVICE(0x4802), /* T422-cr */
+ CH_DEVICE(0x4803), /* T440-cr */
+ CH_DEVICE(0x4804), /* T420-bch */
+ CH_DEVICE(0x4805), /* T440-bch */
+ CH_DEVICE(0x4806), /* T460-ch */
+ CH_DEVICE(0x4807), /* T420-so */
+ CH_DEVICE(0x4808), /* T420-cx */
+ CH_DEVICE(0x4809), /* T420-bt */
+ CH_DEVICE(0x480a), /* T404-bt */
+ CH_DEVICE(0x480d), /* T480-cr */
+ CH_DEVICE(0x480e), /* T440-lp-cr */
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x4880),
+ CH_DEVICE(0x5801), /* T520-cr */
+ CH_DEVICE(0x5802), /* T522-cr */
+ CH_DEVICE(0x5803), /* T540-cr */
+ CH_DEVICE(0x5804), /* T520-bch */
+ CH_DEVICE(0x5805), /* T540-bch */
+ CH_DEVICE(0x5806), /* T540-ch */
+ CH_DEVICE(0x5807), /* T520-so */
+ CH_DEVICE(0x5808), /* T520-cx */
+ CH_DEVICE(0x5809), /* T520-bt */
+ CH_DEVICE(0x580a), /* T504-bt */
+ CH_DEVICE(0x580b), /* T520-sr */
+ CH_DEVICE(0x580c), /* T504-bt */
+ CH_DEVICE(0x580d), /* T580-cr */
+ CH_DEVICE(0x580e), /* T540-lp-cr */
+ CH_DEVICE(0x580f), /* Amsterdam */
+ CH_DEVICE(0x5810), /* T580-lp-cr */
+ CH_DEVICE(0x5811), /* T520-lp-cr */
+ CH_DEVICE(0x5812), /* T560-cr */
+ CH_DEVICE(0x5813), /* T580-cr */
+ CH_DEVICE(0x5814), /* T580-so-cr */
+ CH_DEVICE(0x5815), /* T502-bt */
+ CH_DEVICE(0x5880),
+ CH_DEVICE(0x5881),
+ CH_DEVICE(0x5882),
+ CH_DEVICE(0x5883),
+ CH_DEVICE(0x5884),
+ CH_DEVICE(0x5885),
+ CH_DEVICE(0x5886),
+ CH_DEVICE(0x5887),
+ CH_DEVICE(0x5888),
{ 0, }
};
diff --git a/drivers/net/ethernet/cisco/enic/enic.h b/drivers/net/ethernet/cisco/enic/enic.h
index 962510f..5ba5ad0 100644
--- a/drivers/net/ethernet/cisco/enic/enic.h
+++ b/drivers/net/ethernet/cisco/enic/enic.h
@@ -186,6 +186,7 @@ struct enic {
____cacheline_aligned struct vnic_cq cq[ENIC_CQ_MAX];
unsigned int cq_count;
struct enic_rfs_flw_tbl rfs_h;
+ u32 rx_copybreak;
};
static inline struct device *enic_get_dev(struct enic *enic)
diff --git a/drivers/net/ethernet/cisco/enic/enic_ethtool.c b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
index 523c9ce..85173d6 100644
--- a/drivers/net/ethernet/cisco/enic/enic_ethtool.c
+++ b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
@@ -379,6 +379,43 @@ static int enic_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
return ret;
}
+static int enic_get_tunable(struct net_device *dev,
+ const struct ethtool_tunable *tuna, void *data)
+{
+ struct enic *enic = netdev_priv(dev);
+ int ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_RX_COPYBREAK:
+ *(u32 *)data = enic->rx_copybreak;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int enic_set_tunable(struct net_device *dev,
+ const struct ethtool_tunable *tuna,
+ const void *data)
+{
+ struct enic *enic = netdev_priv(dev);
+ int ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_RX_COPYBREAK:
+ enic->rx_copybreak = *(u32 *)data;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
static const struct ethtool_ops enic_ethtool_ops = {
.get_settings = enic_get_settings,
.get_drvinfo = enic_get_drvinfo,
@@ -391,6 +428,8 @@ static const struct ethtool_ops enic_ethtool_ops = {
.get_coalesce = enic_get_coalesce,
.set_coalesce = enic_set_coalesce,
.get_rxnfc = enic_get_rxnfc,
+ .get_tunable = enic_get_tunable,
+ .set_tunable = enic_set_tunable,
};
void enic_set_ethtool_ops(struct net_device *netdev)
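These hooks expose the threshold through the generic ethtool tunable interface, so with an ethtool recent enough to support tunables the value can be inspected and changed from userspace along the lines of "ethtool --get-tunable eth0 rx-copybreak" and "ethtool --set-tunable eth0 rx-copybreak 512" (device name and value are examples only).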
diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
index c8832bc..929bfe7 100644
--- a/drivers/net/ethernet/cisco/enic/enic_main.c
+++ b/drivers/net/ethernet/cisco/enic/enic_main.c
@@ -66,6 +66,8 @@
#define PCI_DEVICE_ID_CISCO_VIC_ENET_DYN 0x0044 /* enet dynamic vnic */
#define PCI_DEVICE_ID_CISCO_VIC_ENET_VF 0x0071 /* enet SRIOV VF */
+#define RX_COPYBREAK_DEFAULT 256
+
/* Supported devices */
static const struct pci_device_id enic_id_table[] = {
{ PCI_VDEVICE(CISCO, PCI_DEVICE_ID_CISCO_VIC_ENET) },
@@ -924,6 +926,7 @@ static void enic_free_rq_buf(struct vnic_rq *rq, struct vnic_rq_buf *buf)
pci_unmap_single(enic->pdev, buf->dma_addr,
buf->len, PCI_DMA_FROMDEVICE);
dev_kfree_skb_any(buf->os_buf);
+ buf->os_buf = NULL;
}
static int enic_rq_alloc_buf(struct vnic_rq *rq)
@@ -934,7 +937,24 @@ static int enic_rq_alloc_buf(struct vnic_rq *rq)
unsigned int len = netdev->mtu + VLAN_ETH_HLEN;
unsigned int os_buf_index = 0;
dma_addr_t dma_addr;
+ struct vnic_rq_buf *buf = rq->to_use;
+
+ if (buf->os_buf) {
+ buf = buf->next;
+ rq->to_use = buf;
+ rq->ring.desc_avail--;
+ if ((buf->index & VNIC_RQ_RETURN_RATE) == 0) {
+ /* Adding write memory barrier prevents compiler and/or
+ * CPU reordering, thus avoiding descriptor posting
+ * before descriptor is initialized. Otherwise, hardware
+ * can read stale descriptor fields.
+ */
+ wmb();
+ iowrite32(buf->index, &rq->ctrl->posted_index);
+ }
+ return 0;
+ }
skb = netdev_alloc_skb_ip_align(netdev, len);
if (!skb)
return -ENOMEM;
@@ -957,6 +977,25 @@ static void enic_intr_update_pkt_size(struct vnic_rx_bytes_counter *pkt_size,
pkt_size->small_pkt_bytes_cnt += pkt_len;
}
+static bool enic_rxcopybreak(struct net_device *netdev, struct sk_buff **skb,
+ struct vnic_rq_buf *buf, u16 len)
+{
+ struct enic *enic = netdev_priv(netdev);
+ struct sk_buff *new_skb;
+
+ if (len > enic->rx_copybreak)
+ return false;
+ new_skb = netdev_alloc_skb_ip_align(netdev, len);
+ if (!new_skb)
+ return false;
+ pci_dma_sync_single_for_cpu(enic->pdev, buf->dma_addr, len,
+ DMA_FROM_DEVICE);
+ memcpy(new_skb->data, (*skb)->data, len);
+ *skb = new_skb;
+
+ return true;
+}
+
static void enic_rq_indicate_buf(struct vnic_rq *rq,
struct cq_desc *cq_desc, struct vnic_rq_buf *buf,
int skipped, void *opaque)
@@ -978,9 +1017,6 @@ static void enic_rq_indicate_buf(struct vnic_rq *rq,
return;
skb = buf->os_buf;
- prefetch(skb->data - NET_IP_ALIGN);
- pci_unmap_single(enic->pdev, buf->dma_addr,
- buf->len, PCI_DMA_FROMDEVICE);
cq_enet_rq_desc_dec((struct cq_enet_rq_desc *)cq_desc,
&type, &color, &q_number, &completed_index,
@@ -1011,6 +1047,13 @@ static void enic_rq_indicate_buf(struct vnic_rq *rq,
/* Good receive
*/
+ if (!enic_rxcopybreak(netdev, &skb, buf, bytes_written)) {
+ buf->os_buf = NULL;
+ pci_unmap_single(enic->pdev, buf->dma_addr, buf->len,
+ PCI_DMA_FROMDEVICE);
+ }
+ prefetch(skb->data - NET_IP_ALIGN);
+
skb_put(skb, bytes_written);
skb->protocol = eth_type_trans(skb, netdev);
skb_record_rx_queue(skb, q_number);
@@ -2531,6 +2574,7 @@ static int enic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
dev_err(dev, "Cannot register net device, aborting\n");
goto err_out_dev_deinit;
}
+ enic->rx_copybreak = RX_COPYBREAK_DEFAULT;
return 0;
diff --git a/drivers/net/ethernet/cisco/enic/vnic_dev.c b/drivers/net/ethernet/cisco/enic/vnic_dev.c
index 37472ce..62f7b7b 100644
--- a/drivers/net/ethernet/cisco/enic/vnic_dev.c
+++ b/drivers/net/ethernet/cisco/enic/vnic_dev.c
@@ -847,8 +847,7 @@ int vnic_dev_intr_coal_timer_info(struct vnic_dev *vdev)
*/
if ((err == ERR_ECMDUNKNOWN) ||
(!err && !(vdev->args[0] && vdev->args[1] && vdev->args[2]))) {
- pr_warning("Using default conversion factor for "
- "interrupt coalesce timer\n");
+ pr_warn("Using default conversion factor for interrupt coalesce timer\n");
vnic_dev_intr_coal_timer_info_default(vdev);
return 0;
}
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 70089c2..f3ba840 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -1613,9 +1613,6 @@ dm9000_probe(struct platform_device *pdev)
/* from this point we assume that we have found a DM9000 */
- /* driver system function */
- ether_setup(ndev);
-
ndev->netdev_ops = &dm9000_netdev_ops;
ndev->watchdog_timeo = msecs_to_jiffies(watchdog);
ndev->ethtool_ops = &dm9000_ethtool_ops;
diff --git a/drivers/net/ethernet/dec/tulip/dmfe.c b/drivers/net/ethernet/dec/tulip/dmfe.c
index 322213d..c820560 100644
--- a/drivers/net/ethernet/dec/tulip/dmfe.c
+++ b/drivers/net/ethernet/dec/tulip/dmfe.c
@@ -328,10 +328,10 @@ static void allocate_rx_buffer(struct net_device *);
static void update_cr6(u32, void __iomem *);
static void send_filter_frame(struct DEVICE *);
static void dm9132_id_table(struct DEVICE *);
-static u16 phy_read(void __iomem *, u8, u8, u32);
-static void phy_write(void __iomem *, u8, u8, u16, u32);
-static void phy_write_1bit(void __iomem *, u32);
-static u16 phy_read_1bit(void __iomem *);
+static u16 dmfe_phy_read(void __iomem *, u8, u8, u32);
+static void dmfe_phy_write(void __iomem *, u8, u8, u16, u32);
+static void dmfe_phy_write_1bit(void __iomem *, u32);
+static u16 dmfe_phy_read_1bit(void __iomem *);
static u8 dmfe_sense_speed(struct dmfe_board_info *);
static void dmfe_process_mode(struct dmfe_board_info *);
static void dmfe_timer(unsigned long);
@@ -770,7 +770,7 @@ static int dmfe_stop(struct DEVICE *dev)
/* Reset & stop DM910X board */
dw32(DCR0, DM910X_RESET);
udelay(100);
- phy_write(ioaddr, db->phy_addr, 0, 0x8000, db->chip_id);
+ dmfe_phy_write(ioaddr, db->phy_addr, 0, 0x8000, db->chip_id);
/* free interrupt */
free_irq(db->pdev->irq, dev);
@@ -1154,7 +1154,7 @@ static void dmfe_timer(unsigned long data)
if (db->chip_type && (db->chip_id==PCI_DM9102_ID)) {
db->cr6_data &= ~0x40000;
update_cr6(db->cr6_data, ioaddr);
- phy_write(ioaddr, db->phy_addr, 0, 0x1000, db->chip_id);
+ dmfe_phy_write(ioaddr, db->phy_addr, 0, 0x1000, db->chip_id);
db->cr6_data |= 0x40000;
update_cr6(db->cr6_data, ioaddr);
db->timer.expires = DMFE_TIMER_WUT + HZ * 2;
@@ -1230,9 +1230,9 @@ static void dmfe_timer(unsigned long data)
*/
/* need a dummy read because of PHY's register latch */
- phy_read (db->ioaddr, db->phy_addr, 1, db->chip_id);
- link_ok_phy = (phy_read (db->ioaddr,
- db->phy_addr, 1, db->chip_id) & 0x4) ? 1 : 0;
+ dmfe_phy_read (db->ioaddr, db->phy_addr, 1, db->chip_id);
+ link_ok_phy = (dmfe_phy_read (db->ioaddr,
+ db->phy_addr, 1, db->chip_id) & 0x4) ? 1 : 0;
if (link_ok_phy != link_ok) {
DMFE_DBUG (0, "PHY and chip report different link status", 0);
@@ -1247,8 +1247,8 @@ static void dmfe_timer(unsigned long data)
/* For Force 10/100M Half/Full mode: Enable Auto-Nego mode */
/* AUTO or force 1M Homerun/Longrun don't need */
if ( !(db->media_mode & 0x38) )
- phy_write(db->ioaddr, db->phy_addr,
- 0, 0x1000, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr,
+ 0, 0x1000, db->chip_id);
/* AUTO mode, if INT phyxcer link failed, select EXT device */
if (db->media_mode & DMFE_AUTO) {
@@ -1649,16 +1649,16 @@ static u8 dmfe_sense_speed(struct dmfe_board_info *db)
/* CR6 bit18=0, select 10/100M */
update_cr6(db->cr6_data & ~0x40000, ioaddr);
- phy_mode = phy_read(db->ioaddr, db->phy_addr, 1, db->chip_id);
- phy_mode = phy_read(db->ioaddr, db->phy_addr, 1, db->chip_id);
+ phy_mode = dmfe_phy_read(db->ioaddr, db->phy_addr, 1, db->chip_id);
+ phy_mode = dmfe_phy_read(db->ioaddr, db->phy_addr, 1, db->chip_id);
if ( (phy_mode & 0x24) == 0x24 ) {
if (db->chip_id == PCI_DM9132_ID) /* DM9132 */
- phy_mode = phy_read(db->ioaddr,
- db->phy_addr, 7, db->chip_id) & 0xf000;
+ phy_mode = dmfe_phy_read(db->ioaddr,
+ db->phy_addr, 7, db->chip_id) & 0xf000;
else /* DM9102/DM9102A */
- phy_mode = phy_read(db->ioaddr,
- db->phy_addr, 17, db->chip_id) & 0xf000;
+ phy_mode = dmfe_phy_read(db->ioaddr,
+ db->phy_addr, 17, db->chip_id) & 0xf000;
switch (phy_mode) {
case 0x1000: db->op_mode = DMFE_10MHF; break;
case 0x2000: db->op_mode = DMFE_10MFD; break;
@@ -1695,15 +1695,15 @@ static void dmfe_set_phyxcer(struct dmfe_board_info *db)
/* DM9009 Chip: Phyxcer reg18 bit12=0 */
if (db->chip_id == PCI_DM9009_ID) {
- phy_reg = phy_read(db->ioaddr,
- db->phy_addr, 18, db->chip_id) & ~0x1000;
+ phy_reg = dmfe_phy_read(db->ioaddr,
+ db->phy_addr, 18, db->chip_id) & ~0x1000;
- phy_write(db->ioaddr,
- db->phy_addr, 18, phy_reg, db->chip_id);
+ dmfe_phy_write(db->ioaddr,
+ db->phy_addr, 18, phy_reg, db->chip_id);
}
/* Phyxcer capability setting */
- phy_reg = phy_read(db->ioaddr, db->phy_addr, 4, db->chip_id) & ~0x01e0;
+ phy_reg = dmfe_phy_read(db->ioaddr, db->phy_addr, 4, db->chip_id) & ~0x01e0;
if (db->media_mode & DMFE_AUTO) {
/* AUTO Mode */
@@ -1724,13 +1724,13 @@ static void dmfe_set_phyxcer(struct dmfe_board_info *db)
phy_reg|=db->PHY_reg4;
db->media_mode|=DMFE_AUTO;
}
- phy_write(db->ioaddr, db->phy_addr, 4, phy_reg, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 4, phy_reg, db->chip_id);
/* Restart Auto-Negotiation */
if ( db->chip_type && (db->chip_id == PCI_DM9102_ID) )
- phy_write(db->ioaddr, db->phy_addr, 0, 0x1800, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 0, 0x1800, db->chip_id);
if ( !db->chip_type )
- phy_write(db->ioaddr, db->phy_addr, 0, 0x1200, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 0, 0x1200, db->chip_id);
}
@@ -1762,7 +1762,7 @@ static void dmfe_process_mode(struct dmfe_board_info *db)
/* 10/100M phyxcer force mode need */
if ( !(db->media_mode & 0x18)) {
/* Force Mode */
- phy_reg = phy_read(db->ioaddr, db->phy_addr, 6, db->chip_id);
+ phy_reg = dmfe_phy_read(db->ioaddr, db->phy_addr, 6, db->chip_id);
if ( !(phy_reg & 0x1) ) {
/* partner without N-Way capability */
phy_reg = 0x0;
@@ -1772,12 +1772,12 @@ static void dmfe_process_mode(struct dmfe_board_info *db)
case DMFE_100MHF: phy_reg = 0x2000; break;
case DMFE_100MFD: phy_reg = 0x2100; break;
}
- phy_write(db->ioaddr,
- db->phy_addr, 0, phy_reg, db->chip_id);
+ dmfe_phy_write(db->ioaddr,
+ db->phy_addr, 0, phy_reg, db->chip_id);
if ( db->chip_type && (db->chip_id == PCI_DM9102_ID) )
mdelay(20);
- phy_write(db->ioaddr,
- db->phy_addr, 0, phy_reg, db->chip_id);
+ dmfe_phy_write(db->ioaddr,
+ db->phy_addr, 0, phy_reg, db->chip_id);
}
}
}
@@ -1787,8 +1787,8 @@ static void dmfe_process_mode(struct dmfe_board_info *db)
* Write a word to Phy register
*/
-static void phy_write(void __iomem *ioaddr, u8 phy_addr, u8 offset,
- u16 phy_data, u32 chip_id)
+static void dmfe_phy_write(void __iomem *ioaddr, u8 phy_addr, u8 offset,
+ u16 phy_data, u32 chip_id)
{
u16 i;
@@ -1799,34 +1799,34 @@ static void phy_write(void __iomem *ioaddr, u8 phy_addr, u8 offset,
/* Send 33 synchronization clock to Phy controller */
for (i = 0; i < 35; i++)
- phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
/* Send start command(01) to Phy */
- phy_write_1bit(ioaddr, PHY_DATA_0);
- phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
/* Send write command(01) to Phy */
- phy_write_1bit(ioaddr, PHY_DATA_0);
- phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
/* Send Phy address */
for (i = 0x10; i > 0; i = i >> 1)
- phy_write_1bit(ioaddr,
- phy_addr & i ? PHY_DATA_1 : PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr,
+ phy_addr & i ? PHY_DATA_1 : PHY_DATA_0);
/* Send register address */
for (i = 0x10; i > 0; i = i >> 1)
- phy_write_1bit(ioaddr,
- offset & i ? PHY_DATA_1 : PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr,
+ offset & i ? PHY_DATA_1 : PHY_DATA_0);
/* written transition */
- phy_write_1bit(ioaddr, PHY_DATA_1);
- phy_write_1bit(ioaddr, PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_0);
/* Write a word data to PHY controller */
for ( i = 0x8000; i > 0; i >>= 1)
- phy_write_1bit(ioaddr,
- phy_data & i ? PHY_DATA_1 : PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr,
+ phy_data & i ? PHY_DATA_1 : PHY_DATA_0);
}
}
@@ -1835,7 +1835,7 @@ static void phy_write(void __iomem *ioaddr, u8 phy_addr, u8 offset,
* Read a word data from phy register
*/
-static u16 phy_read(void __iomem *ioaddr, u8 phy_addr, u8 offset, u32 chip_id)
+static u16 dmfe_phy_read(void __iomem *ioaddr, u8 phy_addr, u8 offset, u32 chip_id)
{
int i;
u16 phy_data;
@@ -1848,33 +1848,33 @@ static u16 phy_read(void __iomem *ioaddr, u8 phy_addr, u8 offset, u32 chip_id)
/* Send 33 synchronization clock to Phy controller */
for (i = 0; i < 35; i++)
- phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
/* Send start command(01) to Phy */
- phy_write_1bit(ioaddr, PHY_DATA_0);
- phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
/* Send read command(10) to Phy */
- phy_write_1bit(ioaddr, PHY_DATA_1);
- phy_write_1bit(ioaddr, PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_1);
+ dmfe_phy_write_1bit(ioaddr, PHY_DATA_0);
/* Send Phy address */
for (i = 0x10; i > 0; i = i >> 1)
- phy_write_1bit(ioaddr,
- phy_addr & i ? PHY_DATA_1 : PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr,
+ phy_addr & i ? PHY_DATA_1 : PHY_DATA_0);
/* Send register address */
for (i = 0x10; i > 0; i = i >> 1)
- phy_write_1bit(ioaddr,
- offset & i ? PHY_DATA_1 : PHY_DATA_0);
+ dmfe_phy_write_1bit(ioaddr,
+ offset & i ? PHY_DATA_1 : PHY_DATA_0);
/* Skip transition state */
- phy_read_1bit(ioaddr);
+ dmfe_phy_read_1bit(ioaddr);
/* read 16bit data */
for (phy_data = 0, i = 0; i < 16; i++) {
phy_data <<= 1;
- phy_data |= phy_read_1bit(ioaddr);
+ phy_data |= dmfe_phy_read_1bit(ioaddr);
}
}
@@ -1886,7 +1886,7 @@ static u16 phy_read(void __iomem *ioaddr, u8 phy_addr, u8 offset, u32 chip_id)
* Write one bit data to Phy Controller
*/
-static void phy_write_1bit(void __iomem *ioaddr, u32 phy_data)
+static void dmfe_phy_write_1bit(void __iomem *ioaddr, u32 phy_data)
{
dw32(DCR9, phy_data); /* MII Clock Low */
udelay(1);
@@ -1901,7 +1901,7 @@ static void phy_write_1bit(void __iomem *ioaddr, u32 phy_data)
* Read one bit phy data from PHY controller
*/
-static u16 phy_read_1bit(void __iomem *ioaddr)
+static u16 dmfe_phy_read_1bit(void __iomem *ioaddr)
{
u16 phy_data;
@@ -1995,11 +1995,11 @@ static void dmfe_parse_srom(struct dmfe_board_info * db)
/* Check DM9801 or DM9802 present or not */
db->HPNA_present = 0;
update_cr6(db->cr6_data | 0x40000, db->ioaddr);
- tmp_reg = phy_read(db->ioaddr, db->phy_addr, 3, db->chip_id);
+ tmp_reg = dmfe_phy_read(db->ioaddr, db->phy_addr, 3, db->chip_id);
if ( ( tmp_reg & 0xfff0 ) == 0xb900 ) {
/* DM9801 or DM9802 present */
db->HPNA_timer = 8;
- if ( phy_read(db->ioaddr, db->phy_addr, 31, db->chip_id) == 0x4404) {
+ if ( dmfe_phy_read(db->ioaddr, db->phy_addr, 31, db->chip_id) == 0x4404) {
/* DM9801 HomeRun */
db->HPNA_present = 1;
dmfe_program_DM9801(db, tmp_reg);
@@ -2025,29 +2025,29 @@ static void dmfe_program_DM9801(struct dmfe_board_info * db, int HPNA_rev)
switch(HPNA_rev) {
case 0xb900: /* DM9801 E3 */
db->HPNA_command |= 0x1000;
- reg25 = phy_read(db->ioaddr, db->phy_addr, 24, db->chip_id);
+ reg25 = dmfe_phy_read(db->ioaddr, db->phy_addr, 24, db->chip_id);
reg25 = ( (reg25 + HPNA_NoiseFloor) & 0xff) | 0xf000;
- reg17 = phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
+ reg17 = dmfe_phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
break;
case 0xb901: /* DM9801 E4 */
- reg25 = phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
+ reg25 = dmfe_phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
reg25 = (reg25 & 0xff00) + HPNA_NoiseFloor;
- reg17 = phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
+ reg17 = dmfe_phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
reg17 = (reg17 & 0xfff0) + HPNA_NoiseFloor + 3;
break;
case 0xb902: /* DM9801 E5 */
case 0xb903: /* DM9801 E6 */
default:
db->HPNA_command |= 0x1000;
- reg25 = phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
+ reg25 = dmfe_phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
reg25 = (reg25 & 0xff00) + HPNA_NoiseFloor - 5;
- reg17 = phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
+ reg17 = dmfe_phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id);
reg17 = (reg17 & 0xfff0) + HPNA_NoiseFloor;
break;
}
- phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command, db->chip_id);
- phy_write(db->ioaddr, db->phy_addr, 17, reg17, db->chip_id);
- phy_write(db->ioaddr, db->phy_addr, 25, reg25, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 17, reg17, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 25, reg25, db->chip_id);
}
@@ -2060,10 +2060,10 @@ static void dmfe_program_DM9802(struct dmfe_board_info * db)
uint phy_reg;
if ( !HPNA_NoiseFloor ) HPNA_NoiseFloor = DM9802_NOISE_FLOOR;
- phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command, db->chip_id);
- phy_reg = phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command, db->chip_id);
+ phy_reg = dmfe_phy_read(db->ioaddr, db->phy_addr, 25, db->chip_id);
phy_reg = ( phy_reg & 0xff00) + HPNA_NoiseFloor;
- phy_write(db->ioaddr, db->phy_addr, 25, phy_reg, db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 25, phy_reg, db->chip_id);
}
@@ -2077,7 +2077,7 @@ static void dmfe_HPNA_remote_cmd_chk(struct dmfe_board_info * db)
uint phy_reg;
/* Got remote device status */
- phy_reg = phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id) & 0x60;
+ phy_reg = dmfe_phy_read(db->ioaddr, db->phy_addr, 17, db->chip_id) & 0x60;
switch(phy_reg) {
case 0x00: phy_reg = 0x0a00;break; /* LP/LS */
case 0x20: phy_reg = 0x0900;break; /* LP/HS */
@@ -2087,8 +2087,8 @@ static void dmfe_HPNA_remote_cmd_chk(struct dmfe_board_info * db)
/* Check whether remote device status matches our setting or not */
if ( phy_reg != (db->HPNA_command & 0x0f00) ) {
- phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command,
- db->chip_id);
+ dmfe_phy_write(db->ioaddr, db->phy_addr, 16, db->HPNA_command,
+ db->chip_id);
db->HPNA_timer=8;
} else
db->HPNA_timer=600; /* Match, every 10 minutes, check */
diff --git a/drivers/net/ethernet/ec_bhf.c b/drivers/net/ethernet/ec_bhf.c
index 056b44b..d101750 100644
--- a/drivers/net/ethernet/ec_bhf.c
+++ b/drivers/net/ethernet/ec_bhf.c
@@ -1,5 +1,5 @@
/*
- * drivers/net/ethernet/beckhoff/ec_bhf.c
+ * drivers/net/ethernet/ec_bhf.c
*
* Copyright (C) 2014 Darek Marcinkiewicz <reksio@newterm.pl>
*
@@ -18,9 +18,6 @@
* Those can be found on Beckhoff CX50xx industrial PCs.
*/
-#if 0
-#define DEBUG
-#endif
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -74,6 +71,8 @@
#define DMA_WINDOW_SIZE_MASK 0xfffffffc
+#define ETHERCAT_MASTER_ID 0x14
+
static struct pci_device_id ids[] = {
{ PCI_DEVICE(0x15ec, 0x5000), },
{ 0, }
@@ -131,7 +130,6 @@ struct bhf_dma {
struct ec_bhf_priv {
struct net_device *net_dev;
-
struct pci_dev *dev;
void __iomem *io;
@@ -162,32 +160,6 @@ struct ec_bhf_priv {
#define PRIV_TO_DEV(priv) (&(priv)->dev->dev)
-#define ETHERCAT_MASTER_ID 0x14
-
-static void ec_bhf_print_status(struct ec_bhf_priv *priv)
-{
- struct device *dev = PRIV_TO_DEV(priv);
-
- dev_dbg(dev, "Frame error counter: %d\n",
- ioread8(priv->mac_io + MAC_FRAME_ERR_CNT));
- dev_dbg(dev, "RX error counter: %d\n",
- ioread8(priv->mac_io + MAC_RX_ERR_CNT));
- dev_dbg(dev, "CRC error counter: %d\n",
- ioread8(priv->mac_io + MAC_CRC_ERR_CNT));
- dev_dbg(dev, "TX frame counter: %d\n",
- ioread32(priv->mac_io + MAC_TX_FRAME_CNT));
- dev_dbg(dev, "RX frame counter: %d\n",
- ioread32(priv->mac_io + MAC_RX_FRAME_CNT));
- dev_dbg(dev, "TX fifo level: %d\n",
- ioread8(priv->mac_io + MAC_TX_FIFO_LVL));
- dev_dbg(dev, "Dropped frames: %d\n",
- ioread8(priv->mac_io + MAC_DROPPED_FRMS));
- dev_dbg(dev, "Connected with CCAT slot: %d\n",
- ioread8(priv->mac_io + MAC_CONNECTED_CCAT_FLAG));
- dev_dbg(dev, "Link status: %d\n",
- ioread8(priv->mii_io + MII_LINK_STATUS));
-}
-
static void ec_bhf_reset(struct ec_bhf_priv *priv)
{
iowrite8(0, priv->mac_io + MAC_FRAME_ERR_CNT);
@@ -210,8 +182,6 @@ static void ec_bhf_send_packet(struct ec_bhf_priv *priv, struct tx_desc *desc)
u32 addr = (u8 *)desc - priv->tx_buf.buf;
iowrite32((ALIGN(len, 8) << 24) | addr, priv->fifo_io + FIFO_TX_REG);
-
- dev_dbg(PRIV_TO_DEV(priv), "Done sending packet\n");
}
static int ec_bhf_desc_sent(struct tx_desc *desc)
@@ -244,7 +214,6 @@ static void ec_bhf_add_rx_desc(struct ec_bhf_priv *priv, struct rx_desc *desc)
static void ec_bhf_process_rx(struct ec_bhf_priv *priv)
{
struct rx_desc *desc = &priv->rx_descs[priv->rx_dnext];
- struct device *dev = PRIV_TO_DEV(priv);
while (ec_bhf_pkt_received(desc)) {
int pkt_size = (le16_to_cpu(desc->header.len) &
@@ -253,20 +222,16 @@ static void ec_bhf_process_rx(struct ec_bhf_priv *priv)
struct sk_buff *skb;
skb = netdev_alloc_skb_ip_align(priv->net_dev, pkt_size);
- dev_dbg(dev, "Received packet, size: %d\n", pkt_size);
-
if (skb) {
memcpy(skb_put(skb, pkt_size), data, pkt_size);
skb->protocol = eth_type_trans(skb, priv->net_dev);
- dev_dbg(dev, "Protocol type: %x\n", skb->protocol);
-
priv->stat_rx_bytes += pkt_size;
netif_rx(skb);
} else {
- dev_err_ratelimited(dev,
- "Couldn't allocate a skb_buff for a packet of size %u\n",
- pkt_size);
+ dev_err_ratelimited(PRIV_TO_DEV(priv),
+ "Couldn't allocate a skb_buff for a packet of size %u\n",
+ pkt_size);
}
desc->header.recv = 0;
@@ -276,7 +241,6 @@ static void ec_bhf_process_rx(struct ec_bhf_priv *priv)
priv->rx_dnext = (priv->rx_dnext + 1) % priv->rx_dcount;
desc = &priv->rx_descs[priv->rx_dnext];
}
-
}
static enum hrtimer_restart ec_bhf_timer_fun(struct hrtimer *timer)
@@ -299,14 +263,7 @@ static int ec_bhf_setup_offsets(struct ec_bhf_priv *priv)
unsigned block_count, i;
void __iomem *ec_info;
- dev_dbg(dev, "Info block:\n");
- dev_dbg(dev, "Type of function: %x\n", (unsigned)ioread16(priv->io));
- dev_dbg(dev, "Revision of function: %x\n",
- (unsigned)ioread16(priv->io + INFO_BLOCK_REV));
-
block_count = ioread8(priv->io + INFO_BLOCK_BLK_CNT);
- dev_dbg(dev, "Number of function blocks: %x\n", block_count);
-
for (i = 0; i < block_count; i++) {
u16 type = ioread16(priv->io + i * INFO_BLOCK_SIZE +
INFO_BLOCK_TYPE);
@@ -317,29 +274,17 @@ static int ec_bhf_setup_offsets(struct ec_bhf_priv *priv)
dev_err(dev, "EtherCAT master with DMA block not found\n");
return -ENODEV;
}
- dev_dbg(dev, "EtherCAT master with DMA block found at pos: %d\n", i);
ec_info = priv->io + i * INFO_BLOCK_SIZE;
- dev_dbg(dev, "EtherCAT master revision: %d\n",
- ioread16(ec_info + INFO_BLOCK_REV));
priv->tx_dma_chan = ioread8(ec_info + INFO_BLOCK_TX_CHAN);
- dev_dbg(dev, "EtherCAT master tx dma channel: %d\n",
- priv->tx_dma_chan);
-
priv->rx_dma_chan = ioread8(ec_info + INFO_BLOCK_RX_CHAN);
- dev_dbg(dev, "EtherCAT master rx dma channel: %d\n",
- priv->rx_dma_chan);
priv->ec_io = priv->io + ioread32(ec_info + INFO_BLOCK_OFFSET);
priv->mii_io = priv->ec_io + ioread32(priv->ec_io + EC_MII_OFFSET);
priv->fifo_io = priv->ec_io + ioread32(priv->ec_io + EC_FIFO_OFFSET);
priv->mac_io = priv->ec_io + ioread32(priv->ec_io + EC_MAC_OFFSET);
- dev_dbg(dev,
- "EtherCAT block addres: %p, fifo address: %p, mii address: %p, mac address: %p\n",
- priv->ec_io, priv->fifo_io, priv->mii_io, priv->mac_io);
-
return 0;
}
@@ -350,8 +295,6 @@ static netdev_tx_t ec_bhf_start_xmit(struct sk_buff *skb,
struct tx_desc *desc;
unsigned len;
- dev_dbg(PRIV_TO_DEV(priv), "Starting xmit\n");
-
desc = &priv->tx_descs[priv->tx_dnext];
skb_copy_and_csum_dev(skb, desc->data);
@@ -366,15 +309,12 @@ static netdev_tx_t ec_bhf_start_xmit(struct sk_buff *skb,
priv->tx_dnext = (priv->tx_dnext + 1) % priv->tx_dcount;
if (!ec_bhf_desc_sent(&priv->tx_descs[priv->tx_dnext])) {
- /* Make sure that update updates to tx_dnext are perceived
+ /* Make sure that updates to tx_dnext are perceived
* by timer routine.
*/
smp_wmb();
netif_stop_queue(net_dev);
-
- dev_dbg(PRIV_TO_DEV(priv), "Stopping netif queue\n");
- ec_bhf_print_status(priv);
}
priv->stat_tx_bytes += len;
@@ -397,7 +337,6 @@ static int ec_bhf_alloc_dma_mem(struct ec_bhf_priv *priv,
mask = ioread32(priv->dma_io + offset);
mask &= DMA_WINDOW_SIZE_MASK;
- dev_dbg(dev, "Read mask %x for channel %d\n", mask, channel);
/* We want to allocate a chunk of memory that is:
* - aligned to the mask we just read
@@ -408,12 +347,10 @@ static int ec_bhf_alloc_dma_mem(struct ec_bhf_priv *priv,
buf->len = min_t(int, ~mask + 1, size);
buf->alloc_len = 2 * buf->len;
- dev_dbg(dev, "Allocating %d bytes for channel %d",
- (int)buf->alloc_len, channel);
buf->alloc = dma_alloc_coherent(dev, buf->alloc_len, &buf->alloc_phys,
GFP_KERNEL);
if (buf->alloc == NULL) {
- dev_info(dev, "Failed to allocate buffer\n");
+ dev_err(dev, "Failed to allocate buffer\n");
return -ENOMEM;
}
@@ -422,8 +359,6 @@ static int ec_bhf_alloc_dma_mem(struct ec_bhf_priv *priv,
iowrite32(0, priv->dma_io + offset + 4);
iowrite32(buf->buf_phys, priv->dma_io + offset);
- dev_dbg(dev, "Buffer: %x and read from dev: %x",
- (unsigned)buf->buf_phys, ioread32(priv->dma_io + offset));
return 0;
}
@@ -433,7 +368,7 @@ static void ec_bhf_setup_tx_descs(struct ec_bhf_priv *priv)
int i = 0;
priv->tx_dcount = priv->tx_buf.len / sizeof(struct tx_desc);
- priv->tx_descs = (struct tx_desc *) priv->tx_buf.buf;
+ priv->tx_descs = (struct tx_desc *)priv->tx_buf.buf;
priv->tx_dnext = 0;
for (i = 0; i < priv->tx_dcount; i++)
@@ -445,7 +380,7 @@ static void ec_bhf_setup_rx_descs(struct ec_bhf_priv *priv)
int i;
priv->rx_dcount = priv->rx_buf.len / sizeof(struct rx_desc);
- priv->rx_descs = (struct rx_desc *) priv->rx_buf.buf;
+ priv->rx_descs = (struct rx_desc *)priv->rx_buf.buf;
priv->rx_dnext = 0;
for (i = 0; i < priv->rx_dcount; i++) {
@@ -469,8 +404,6 @@ static int ec_bhf_open(struct net_device *net_dev)
struct device *dev = PRIV_TO_DEV(priv);
int err = 0;
- dev_info(dev, "Opening device\n");
-
ec_bhf_reset(priv);
err = ec_bhf_alloc_dma_mem(priv, &priv->rx_buf, priv->rx_dma_chan,
@@ -481,20 +414,13 @@ static int ec_bhf_open(struct net_device *net_dev)
}
ec_bhf_setup_rx_descs(priv);
- dev_info(dev, "RX buffer allocated, address: %x\n",
- (unsigned)priv->rx_buf.buf_phys);
-
err = ec_bhf_alloc_dma_mem(priv, &priv->tx_buf, priv->tx_dma_chan,
FIFO_SIZE * sizeof(struct tx_desc));
if (err) {
dev_err(dev, "Failed to allocate tx buffer\n");
goto error_rx_free;
}
- dev_dbg(dev, "TX buffer allocated, addres: %x\n",
- (unsigned)priv->tx_buf.buf_phys);
-
iowrite8(0, priv->mii_io + MII_MAC_FILT_FLAG);
-
ec_bhf_setup_tx_descs(priv);
netif_start_queue(net_dev);
@@ -504,10 +430,6 @@ static int ec_bhf_open(struct net_device *net_dev)
hrtimer_start(&priv->hrtimer, ktime_set(0, polling_frequency),
HRTIMER_MODE_REL);
- dev_info(PRIV_TO_DEV(priv), "Device open\n");
-
- ec_bhf_print_status(priv);
-
return 0;
error_rx_free:
@@ -640,9 +562,6 @@ static int ec_bhf_probe(struct pci_dev *dev, const struct pci_device_id *id)
memcpy_fromio(net_dev->dev_addr, priv->mii_io + MII_MAC_ADDR, 6);
- dev_dbg(&dev->dev, "CX5020 Ethercat master address: %pM\n",
- net_dev->dev_addr);
-
err = register_netdev(net_dev);
if (err < 0)
goto err_free_net_dev;
diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h
index 43e08d0..9a2d752 100644
--- a/drivers/net/ethernet/emulex/benet/be.h
+++ b/drivers/net/ethernet/emulex/benet/be.h
@@ -86,6 +86,8 @@ static inline char *nic_name(struct pci_dev *pdev)
#define BE_MAX_JUMBO_FRAME_SIZE 9018
#define BE_MIN_MTU 256
+#define BE_MAX_MTU (BE_MAX_JUMBO_FRAME_SIZE - \
+ (ETH_HLEN + ETH_FCS_LEN))
#define BE_NUM_VLANS_SUPPORTED 64
#define BE_MAX_EQD 128u
@@ -112,7 +114,6 @@ static inline char *nic_name(struct pci_dev *pdev)
#define MAX_ROCE_EQS 5
#define MAX_MSIX_VECTORS 32
#define MIN_MSIX_VECTORS 1
-#define BE_TX_BUDGET 256
#define BE_NAPI_WEIGHT 64
#define MAX_RX_POST BE_NAPI_WEIGHT /* Frags posted at a time */
#define RX_FRAGS_REFILL_WM (RX_Q_LEN - MAX_RX_POST)
@@ -198,7 +199,6 @@ struct be_eq_obj {
u8 idx; /* array index */
u8 msix_idx;
- u16 tx_budget;
u16 spurious_intr;
struct napi_struct napi;
struct be_adapter *adapter;
@@ -248,6 +248,13 @@ struct be_tx_stats {
ulong tx_jiffies;
u32 tx_stops;
u32 tx_drv_drops; /* pkts dropped by driver */
+ /* the error counters are described in be_ethtool.c */
+ u32 tx_hdr_parse_err;
+ u32 tx_dma_err;
+ u32 tx_tso_err;
+ u32 tx_spoof_check_err;
+ u32 tx_qinq_err;
+ u32 tx_internal_parity_err;
struct u64_stats_sync sync;
struct u64_stats_sync sync_compl;
};
@@ -316,6 +323,7 @@ struct be_rx_obj {
struct be_drv_stats {
u32 be_on_die_temperature;
u32 eth_red_drops;
+ u32 dma_map_errors;
u32 rx_drops_no_pbuf;
u32 rx_drops_no_txpb;
u32 rx_drops_no_erx_descr;
@@ -399,9 +407,9 @@ struct phy_info {
u16 auto_speeds_supported;
u16 fixed_speeds_supported;
int link_speed;
- u32 dac_cable_len;
u32 advertising;
u32 supported;
+ u8 cable_type;
};
struct be_resources {
@@ -613,6 +621,10 @@ extern const struct ethtool_ops be_ethtool_ops;
for (i = eqo->idx, rxo = &adapter->rx_obj[i]; i < adapter->num_rx_qs;\
i += adapter->num_evt_qs, rxo += adapter->num_evt_qs)
+#define for_all_tx_queues_on_eq(adapter, eqo, txo, i) \
+ for (i = eqo->idx, txo = &adapter->tx_obj[i]; i < adapter->num_tx_qs;\
+ i += adapter->num_evt_qs, txo += adapter->num_evt_qs)
+
#define is_mcc_eqo(eqo) (eqo->idx == 0)
#define mcc_eqo(adapter) (&adapter->eq_obj[0])
@@ -661,6 +673,18 @@ static inline u32 amap_get(void *ptr, u32 dw_offset, u32 mask, u32 offset)
amap_mask(sizeof(((_struct *)0)->field)), \
AMAP_BIT_OFFSET(_struct, field))
+#define GET_RX_COMPL_V0_BITS(field, ptr) \
+ AMAP_GET_BITS(struct amap_eth_rx_compl_v0, field, ptr)
+
+#define GET_RX_COMPL_V1_BITS(field, ptr) \
+ AMAP_GET_BITS(struct amap_eth_rx_compl_v1, field, ptr)
+
+#define GET_TX_COMPL_BITS(field, ptr) \
+ AMAP_GET_BITS(struct amap_eth_tx_compl, field, ptr)
+
+#define SET_TX_WRB_HDR_BITS(field, ptr, val) \
+ AMAP_SET_BITS(struct amap_eth_hdr_wrb, field, ptr, val)
+
#define be_dws_cpu_to_le(wrb, len) swap_dws(wrb, len)
#define be_dws_le_to_cpu(wrb, len) swap_dws(wrb, len)
static inline void swap_dws(void *wrb, int len)
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
index 4370ec1..fead5c6 100644
--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
+++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
@@ -209,7 +209,6 @@ static int be_mcc_compl_process(struct be_adapter *adapter,
if (base_status != MCC_STATUS_SUCCESS &&
!be_skip_err_log(opcode, base_status, addl_status)) {
-
if (base_status == MCC_STATUS_UNAUTHORIZED_REQUEST) {
dev_warn(&adapter->pdev->dev,
"VF is not privileged to issue opcode %d-%d\n",
@@ -309,8 +308,6 @@ static void be_async_grp5_evt_process(struct be_adapter *adapter,
be_async_grp5_pvid_state_process(adapter, compl);
break;
default:
- dev_warn(&adapter->pdev->dev, "Unknown grp5 event 0x%x!\n",
- event_type);
break;
}
}
@@ -319,7 +316,7 @@ static void be_async_dbg_evt_process(struct be_adapter *adapter,
struct be_mcc_compl *cmp)
{
u8 event_type = 0;
- struct be_async_event_qnq *evt = (struct be_async_event_qnq *) cmp;
+ struct be_async_event_qnq *evt = (struct be_async_event_qnq *)cmp;
event_type = (cmp->flags >> ASYNC_EVENT_TYPE_SHIFT) &
ASYNC_EVENT_TYPE_MASK;
@@ -595,6 +592,7 @@ static int lancer_wait_ready(struct be_adapter *adapter)
static bool lancer_provisioning_error(struct be_adapter *adapter)
{
u32 sliport_status = 0, sliport_err1 = 0, sliport_err2 = 0;
+
sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET);
if (sliport_status & SLIPORT_STATUS_ERR_MASK) {
sliport_err1 = ioread32(adapter->db + SLIPORT_ERROR1_OFFSET);
@@ -677,7 +675,6 @@ int be_fw_wait_ready(struct be_adapter *adapter)
return -1;
}
-
static inline struct be_sge *nonembedded_sgl(struct be_mcc_wrb *wrb)
{
return &wrb->payload.sgl[0];
@@ -924,6 +921,7 @@ int be_cmd_eq_create(struct be_adapter *adapter, struct be_eq_obj *eqo)
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_eq_create *resp = embedded_payload(wrb);
+
eqo->q.id = le16_to_cpu(resp->eq_id);
eqo->msix_idx =
(ver == 2) ? le16_to_cpu(resp->msix_idx) : eqo->idx;
@@ -958,7 +956,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr,
if (permanent) {
req->permanent = 1;
} else {
- req->if_id = cpu_to_le16((u16) if_handle);
+ req->if_id = cpu_to_le16((u16)if_handle);
req->pmac_id = cpu_to_le32(pmac_id);
req->permanent = 0;
}
@@ -966,6 +964,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr,
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mac_query *resp = embedded_payload(wrb);
+
memcpy(mac_addr, resp->mac.addr, ETH_ALEN);
}
@@ -1002,6 +1001,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, u8 *mac_addr,
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_pmac_add *resp = embedded_payload(wrb);
+
*pmac_id = le32_to_cpu(resp->pmac_id);
}
@@ -1034,7 +1034,8 @@ int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 dom)
req = embedded_payload(wrb);
be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
- OPCODE_COMMON_NTWK_PMAC_DEL, sizeof(*req), wrb, NULL);
+ OPCODE_COMMON_NTWK_PMAC_DEL, sizeof(*req),
+ wrb, NULL);
req->hdr.domain = dom;
req->if_id = cpu_to_le32(if_id);
@@ -1106,6 +1107,7 @@ int be_cmd_cq_create(struct be_adapter *adapter, struct be_queue_info *cq,
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_cq_create *resp = embedded_payload(wrb);
+
cq->id = le16_to_cpu(resp->cq_id);
cq->created = true;
}
@@ -1118,6 +1120,7 @@ int be_cmd_cq_create(struct be_adapter *adapter, struct be_queue_info *cq,
static u32 be_encoded_q_len(int q_len)
{
u32 len_encoded = fls(q_len); /* log2(len) + 1 */
+
if (len_encoded == 16)
len_encoded = 0;
return len_encoded;
@@ -1173,6 +1176,7 @@ static int be_cmd_mccq_ext_create(struct be_adapter *adapter,
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mcc_create *resp = embedded_payload(wrb);
+
mccq->id = le16_to_cpu(resp->id);
mccq->created = true;
}
@@ -1216,6 +1220,7 @@ static int be_cmd_mccq_org_create(struct be_adapter *adapter,
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_mcc_create *resp = embedded_payload(wrb);
+
mccq->id = le16_to_cpu(resp->id);
mccq->created = true;
}
@@ -1274,6 +1279,7 @@ int be_cmd_txq_create(struct be_adapter *adapter, struct be_tx_obj *txo)
status = be_cmd_notify_wait(adapter, &wrb);
if (!status) {
struct be_cmd_resp_eth_tx_create *resp = embedded_payload(&wrb);
+
txq->id = le16_to_cpu(resp->cid);
if (ver == 2)
txo->db_offset = le32_to_cpu(resp->db_offset);
@@ -1318,6 +1324,7 @@ int be_cmd_rxq_create(struct be_adapter *adapter,
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_eth_rx_create *resp = embedded_payload(wrb);
+
rxq->id = le16_to_cpu(resp->id);
rxq->created = true;
*rss_id = resp->rss_id;
@@ -1431,6 +1438,7 @@ int be_cmd_if_create(struct be_adapter *adapter, u32 cap_flags, u32 en_flags,
status = be_cmd_notify_wait(adapter, &wrb);
if (!status) {
struct be_cmd_resp_if_create *resp = embedded_payload(&wrb);
+
*if_handle = le32_to_cpu(resp->interface_id);
/* Hack to retrieve VF's pmac-id on BE3 */
@@ -1514,7 +1522,6 @@ err:
int lancer_cmd_get_pport_stats(struct be_adapter *adapter,
struct be_dma_mem *nonemb_cmd)
{
-
struct be_mcc_wrb *wrb;
struct lancer_cmd_req_pport_stats *req;
int status = 0;
@@ -1605,6 +1612,7 @@ int be_cmd_link_status_query(struct be_adapter *adapter, u16 *link_speed,
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_link_status *resp = embedded_payload(wrb);
+
if (link_speed) {
*link_speed = resp->link_speed ?
le16_to_cpu(resp->link_speed) * 10 :
@@ -1672,6 +1680,7 @@ int be_cmd_get_reg_len(struct be_adapter *adapter, u32 *log_size)
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_fat *resp = embedded_payload(wrb);
+
if (log_size && resp->log_size)
*log_size = le32_to_cpu(resp->log_size) -
sizeof(u32);
@@ -1681,17 +1690,17 @@ err:
return status;
}
-void be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
+int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
{
struct be_dma_mem get_fat_cmd;
struct be_mcc_wrb *wrb;
struct be_cmd_req_get_fat *req;
u32 offset = 0, total_size, buf_size,
log_offset = sizeof(u32), payload_len;
- int status;
+ int status = 0;
if (buf_len == 0)
- return;
+ return -EIO;
total_size = buf_len;
@@ -1700,10 +1709,9 @@ void be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
get_fat_cmd.size,
&get_fat_cmd.dma);
if (!get_fat_cmd.va) {
- status = -ENOMEM;
dev_err(&adapter->pdev->dev,
- "Memory allocation failure while retrieving FAT data\n");
- return;
+ "Memory allocation failure while reading FAT data\n");
+ return -ENOMEM;
}
spin_lock_bh(&adapter->mcc_lock);
@@ -1732,6 +1740,7 @@ void be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf)
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_fat *resp = get_fat_cmd.va;
+
memcpy(buf + offset,
resp->data_buffer,
le32_to_cpu(resp->read_log_length));
@@ -1746,6 +1755,7 @@ err:
pci_free_consistent(adapter->pdev, get_fat_cmd.size,
get_fat_cmd.va, get_fat_cmd.dma);
spin_unlock_bh(&adapter->mcc_lock);
+ return status;
}
/* Uses synchronous mcc */
@@ -1771,8 +1781,11 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter)
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_fw_version *resp = embedded_payload(wrb);
- strcpy(adapter->fw_ver, resp->firmware_version_string);
- strcpy(adapter->fw_on_flash, resp->fw_on_flash_version_string);
+
+ strlcpy(adapter->fw_ver, resp->firmware_version_string,
+ sizeof(adapter->fw_ver));
+ strlcpy(adapter->fw_on_flash, resp->fw_on_flash_version_string,
+ sizeof(adapter->fw_on_flash));
}
err:
spin_unlock_bh(&adapter->mcc_lock);
@@ -1782,8 +1795,8 @@ err:
/* set the EQ delay interval of an EQ to specified value
* Uses async mcc
*/
-int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *set_eqd,
- int num)
+static int __be_cmd_modify_eqd(struct be_adapter *adapter,
+ struct be_set_eqd *set_eqd, int num)
{
struct be_mcc_wrb *wrb;
struct be_cmd_req_modify_eq_delay *req;
@@ -1816,6 +1829,25 @@ err:
return status;
}
+int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *set_eqd,
+ int num)
+{
+ int num_eqs, i = 0;
+
+ if (lancer_chip(adapter) && num > 8) {
+ while (num) {
+ num_eqs = min(num, 8);
+ __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
+ i += num_eqs;
+ num -= num_eqs;
+ }
+ } else {
+ __be_cmd_modify_eqd(adapter, set_eqd, num);
+ }
+
+ return 0;
+}
+
+/* Uses synchronous mcc */
int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
u32 num)
@@ -1879,8 +1911,8 @@ int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
BE_IF_FLAGS_VLAN_PROMISCUOUS |
BE_IF_FLAGS_MCAST_PROMISCUOUS);
} else if (flags & IFF_ALLMULTI) {
- req->if_flags_mask = req->if_flags =
- cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+ req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+ req->if_flags = cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
} else if (flags & BE_FLAGS_VLAN_PROMISC) {
req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
@@ -1891,8 +1923,8 @@ int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
struct netdev_hw_addr *ha;
int i = 0;
- req->if_flags_mask = req->if_flags =
- cpu_to_le32(BE_IF_FLAGS_MULTICAST);
+ req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_MULTICAST);
+ req->if_flags = cpu_to_le32(BE_IF_FLAGS_MULTICAST);
/* Reset mcast promisc mode if already set by setting mask
* and not setting flags field
@@ -1947,6 +1979,7 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc)
OPCODE_COMMON_SET_FLOW_CONTROL, sizeof(*req),
wrb, NULL);
+ req->hdr.version = 1;
req->tx_flow_control = cpu_to_le16((u16)tx_fc);
req->rx_flow_control = cpu_to_le16((u16)rx_fc);
@@ -1954,6 +1987,10 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc)
err:
spin_unlock_bh(&adapter->mcc_lock);
+
+ if (base_status(status) == MCC_STATUS_FEATURE_NOT_SUPPORTED)
+ return -EOPNOTSUPP;
+
return status;
}
@@ -1985,6 +2022,7 @@ int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc)
if (!status) {
struct be_cmd_resp_get_flow_control *resp =
embedded_payload(wrb);
+
*tx_fc = le16_to_cpu(resp->tx_flow_control);
*rx_fc = le16_to_cpu(resp->rx_flow_control);
}
@@ -2014,10 +2052,14 @@ int be_cmd_query_fw_cfg(struct be_adapter *adapter)
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_query_fw_cfg *resp = embedded_payload(wrb);
+
adapter->port_num = le32_to_cpu(resp->phys_port);
adapter->function_mode = le32_to_cpu(resp->function_mode);
adapter->function_caps = le32_to_cpu(resp->function_caps);
adapter->asic_rev = le32_to_cpu(resp->asic_revision) & 0xFF;
+ dev_info(&adapter->pdev->dev,
+ "FW config: function_mode=0x%x, function_caps=0x%x\n",
+ adapter->function_mode, adapter->function_caps);
}
mutex_unlock(&adapter->mbox_lock);
@@ -2159,6 +2201,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state)
if (!status) {
struct be_cmd_resp_get_beacon_state *resp =
embedded_payload(wrb);
+
*state = resp->beacon_state;
}
@@ -2167,6 +2210,53 @@ err:
return status;
}
+/* Uses sync mcc */
+int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ u8 page_num, u8 *data)
+{
+ struct be_dma_mem cmd;
+ struct be_mcc_wrb *wrb;
+ struct be_cmd_req_port_type *req;
+ int status;
+
+ if (page_num > TR_PAGE_A2)
+ return -EINVAL;
+
+ cmd.size = sizeof(struct be_cmd_resp_port_type);
+ cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ if (!cmd.va) {
+ dev_err(&adapter->pdev->dev, "Memory allocation failed\n");
+ return -ENOMEM;
+ }
+ memset(cmd.va, 0, cmd.size);
+
+ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
+ req = cmd.va;
+
+ be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
+ OPCODE_COMMON_READ_TRANSRECV_DATA,
+ cmd.size, wrb, &cmd);
+
+ req->port = cpu_to_le32(adapter->hba_port_num);
+ req->page_num = cpu_to_le32(page_num);
+ status = be_mcc_notify_wait(adapter);
+ if (!status) {
+ struct be_cmd_resp_port_type *resp = cmd.va;
+
+ memcpy(data, resp->page_data, PAGE_DATA_LEN);
+ }
+err:
+ spin_unlock_bh(&adapter->mcc_lock);
+ pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ return status;
+}
+
int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
u32 data_size, u32 data_offset,
const char *obj_name, u32 *data_written,
@@ -2207,7 +2297,7 @@ int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
be_dws_cpu_to_le(ctxt, sizeof(req->context));
req->write_offset = cpu_to_le32(data_offset);
- strcpy(req->object_name, obj_name);
+ strlcpy(req->object_name, obj_name, sizeof(req->object_name));
req->descriptor_count = cpu_to_le32(1);
req->buf_len = cpu_to_le32(data_size);
req->addr_low = cpu_to_le32((cmd->dma +
@@ -2240,6 +2330,31 @@ err_unlock:
return status;
}
+int be_cmd_query_cable_type(struct be_adapter *adapter)
+{
+ u8 page_data[PAGE_DATA_LEN];
+ int status;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ page_data);
+ if (!status) {
+ switch (adapter->phy.interface_type) {
+ case PHY_TYPE_QSFP:
+ adapter->phy.cable_type =
+ page_data[QSFP_PLUS_CABLE_TYPE_OFFSET];
+ break;
+ case PHY_TYPE_SFP_PLUS_10GB:
+ adapter->phy.cable_type =
+ page_data[SFP_PLUS_CABLE_TYPE_OFFSET];
+ break;
+ default:
+ adapter->phy.cable_type = 0;
+ break;
+ }
+ }
+ return status;
+}
+
int lancer_cmd_delete_object(struct be_adapter *adapter, const char *obj_name)
{
struct lancer_cmd_req_delete_object *req;
@@ -2260,7 +2375,7 @@ int lancer_cmd_delete_object(struct be_adapter *adapter, const char *obj_name)
OPCODE_COMMON_DELETE_OBJECT,
sizeof(*req), wrb, NULL);
- strcpy(req->object_name, obj_name);
+ strlcpy(req->object_name, obj_name, sizeof(req->object_name));
status = be_mcc_notify_wait(adapter);
err:
@@ -2357,7 +2472,7 @@ err_unlock:
}
int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc,
- u16 optype, int offset)
+ u16 optype, int offset)
{
struct be_mcc_wrb *wrb;
struct be_cmd_read_flash_crc *req;
@@ -2528,9 +2643,10 @@ int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern,
if (!status) {
struct be_cmd_resp_ddrdma_test *resp;
+
resp = cmd->va;
if ((memcmp(resp->rcv_buff, req->snd_buff, byte_cnt) != 0) ||
- resp->snd_err) {
+ resp->snd_err) {
status = -1;
}
}
@@ -2603,6 +2719,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
if (!status) {
struct be_phy_info *resp_phy_info =
cmd.va + sizeof(struct be_cmd_req_hdr);
+
adapter->phy.phy_type = le16_to_cpu(resp_phy_info->phy_type);
adapter->phy.interface_type =
le16_to_cpu(resp_phy_info->interface_type);
@@ -2732,6 +2849,7 @@ int be_cmd_req_native_mode(struct be_adapter *adapter)
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_set_func_cap *resp = embedded_payload(wrb);
+
adapter->be3_native = le32_to_cpu(resp->cap_flags) &
CAPABILITY_BE3_NATIVE_ERX_API;
if (!adapter->be3_native)
@@ -2771,6 +2889,7 @@ int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege,
if (!status) {
struct be_cmd_resp_get_fn_privileges *resp =
embedded_payload(wrb);
+
*privilege = le32_to_cpu(resp->privilege_mask);
/* In UMC mode FW does not return right privileges.
@@ -2918,7 +3037,6 @@ out:
int be_cmd_get_active_mac(struct be_adapter *adapter, u32 curr_pmac_id,
u8 *mac, u32 if_handle, bool active, u32 domain)
{
-
if (!active)
be_cmd_get_mac_from_list(adapter, mac, &active, &curr_pmac_id,
if_handle, domain);
@@ -3102,6 +3220,7 @@ int be_cmd_get_hsw_config(struct be_adapter *adapter, u16 *pvid,
if (!status) {
struct be_cmd_resp_get_hsw_config *resp =
embedded_payload(wrb);
+
be_dws_le_to_cpu(&resp->context, sizeof(resp->context));
vid = AMAP_GET_BITS(struct amap_get_hsw_resp_context,
pvid, &resp->context);
@@ -3161,7 +3280,8 @@ int be_cmd_get_acpi_wol_cap(struct be_adapter *adapter)
status = be_mbox_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_acpi_wol_magic_config_v1 *resp;
- resp = (struct be_cmd_resp_acpi_wol_magic_config_v1 *) cmd.va;
+
+ resp = (struct be_cmd_resp_acpi_wol_magic_config_v1 *)cmd.va;
adapter->wol_cap = resp->wol_settings;
if (adapter->wol_cap & BE_WOL_CAP)
@@ -3197,6 +3317,7 @@ int be_cmd_set_fw_log_level(struct be_adapter *adapter, u32 level)
(extfat_cmd.va + sizeof(struct be_cmd_resp_hdr));
for (i = 0; i < le32_to_cpu(cfgs->num_modules); i++) {
u32 num_modes = le32_to_cpu(cfgs->module[i].num_modes);
+
for (j = 0; j < num_modes; j++) {
if (cfgs->module[i].trace_lvl[j].mode == MODE_UART)
cfgs->module[i].trace_lvl[j].dbg_lvl =
@@ -3233,6 +3354,7 @@ int be_cmd_get_fw_log_level(struct be_adapter *adapter)
if (!status) {
cfgs = (struct be_fat_conf_params *)(extfat_cmd.va +
sizeof(struct be_cmd_resp_hdr));
+
for (j = 0; j < le32_to_cpu(cfgs->module[0].num_modes); j++) {
if (cfgs->module[0].trace_lvl[j].mode == MODE_UART)
level = cfgs->module[0].trace_lvl[j].dbg_lvl;
@@ -3329,6 +3451,7 @@ int be_cmd_query_port_name(struct be_adapter *adapter, u8 *port_name)
status = be_mcc_notify_wait(adapter);
if (!status) {
struct be_cmd_resp_get_port_name *resp = embedded_payload(wrb);
+
*port_name = resp->port_name[adapter->hba_port_num];
} else {
*port_name = adapter->hba_port_num + '0';
@@ -3952,6 +4075,7 @@ int be_cmd_get_active_profile(struct be_adapter *adapter, u16 *profile_id)
if (!status) {
struct be_cmd_resp_get_active_profile *resp =
embedded_payload(wrb);
+
*profile_id = le16_to_cpu(resp->active_profile_id);
}
@@ -4004,7 +4128,7 @@ int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload,
{
struct be_adapter *adapter = netdev_priv(netdev_handle);
struct be_mcc_wrb *wrb;
- struct be_cmd_req_hdr *hdr = (struct be_cmd_req_hdr *) wrb_payload;
+ struct be_cmd_req_hdr *hdr = (struct be_cmd_req_hdr *)wrb_payload;
struct be_cmd_req_hdr *req;
struct be_cmd_resp_hdr *resp;
int status;
diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h
index 5284b82..eb5085d 100644
--- a/drivers/net/ethernet/emulex/benet/be_cmds.h
+++ b/drivers/net/ethernet/emulex/benet/be_cmds.h
@@ -57,7 +57,8 @@ enum mcc_base_status {
MCC_STATUS_ILLEGAL_FIELD = 3,
MCC_STATUS_INSUFFICIENT_BUFFER = 4,
MCC_STATUS_UNAUTHORIZED_REQUEST = 5,
- MCC_STATUS_NOT_SUPPORTED = 66
+ MCC_STATUS_NOT_SUPPORTED = 66,
+ MCC_STATUS_FEATURE_NOT_SUPPORTED = 68
};
/* Additional status */
@@ -1004,8 +1005,8 @@ struct be_cmd_resp_link_status {
/* Identifies the type of port attached to NIC */
struct be_cmd_req_port_type {
struct be_cmd_req_hdr hdr;
- u32 page_num;
- u32 port;
+ __le32 page_num;
+ __le32 port;
};
enum {
@@ -1013,28 +1014,23 @@ enum {
TR_PAGE_A2 = 0xa2
};
+/* From SFF-8436 QSFP+ spec */
+#define QSFP_PLUS_CABLE_TYPE_OFFSET 0x83
+#define QSFP_PLUS_CR4_CABLE 0x8
+#define QSFP_PLUS_SR4_CABLE 0x4
+#define QSFP_PLUS_LR4_CABLE 0x2
+
+/* From SFF-8472 spec */
+#define SFP_PLUS_SFF_8472_COMP 0x5E
+#define SFP_PLUS_CABLE_TYPE_OFFSET 0x8
+#define SFP_PLUS_COPPER_CABLE 0x4
+
+#define PAGE_DATA_LEN 256
struct be_cmd_resp_port_type {
struct be_cmd_resp_hdr hdr;
u32 page_num;
u32 port;
- struct data {
- u8 identifier;
- u8 identifier_ext;
- u8 connector;
- u8 transceiver[8];
- u8 rsvd0[3];
- u8 length_km;
- u8 length_hm;
- u8 length_om1;
- u8 length_om2;
- u8 length_cu;
- u8 length_cu_m;
- u8 vendor_name[16];
- u8 rsvd;
- u8 vendor_oui[3];
- u8 vendor_pn[16];
- u8 vendor_rev[4];
- } data;
+ u8 page_data[PAGE_DATA_LEN];
};
/******************** Get FW Version *******************/
@@ -1367,6 +1363,9 @@ enum {
PHY_TYPE_BASET_1GB,
PHY_TYPE_BASEX_1GB,
PHY_TYPE_SGMII,
+ PHY_TYPE_QSFP,
+ PHY_TYPE_KR4_40GB,
+ PHY_TYPE_KR2_20GB,
PHY_TYPE_DISABLED = 255
};
@@ -1375,6 +1374,8 @@ enum {
#define BE_SUPPORTED_SPEED_100MBPS 2
#define BE_SUPPORTED_SPEED_1GBPS 4
#define BE_SUPPORTED_SPEED_10GBPS 8
+#define BE_SUPPORTED_SPEED_20GBPS 0x10
+#define BE_SUPPORTED_SPEED_40GBPS 0x20
#define BE_AN_EN 0x2
#define BE_PAUSE_SYM_EN 0x80
@@ -2066,6 +2067,9 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, u8 beacon,
u8 status, u8 state);
int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num,
u32 *state);
+int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ u8 page_num, u8 *data);
+int be_cmd_query_cable_type(struct be_adapter *adapter);
int be_cmd_write_flashrom(struct be_adapter *adapter, struct be_dma_mem *cmd,
u32 flash_oper, u32 flash_opcode, u32 buf_size);
int lancer_cmd_write_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
@@ -2101,7 +2105,7 @@ int be_cmd_get_die_temperature(struct be_adapter *adapter);
int be_cmd_get_cntl_attributes(struct be_adapter *adapter);
int be_cmd_req_native_mode(struct be_adapter *adapter);
int be_cmd_get_reg_len(struct be_adapter *adapter, u32 *log_size);
-void be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf);
+int be_cmd_get_regs(struct be_adapter *adapter, u32 buf_len, void *buf);
int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege,
u32 domain);
int be_cmd_set_fn_privileges(struct be_adapter *adapter, u32 privileges,
diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
index 0cd3311..e42a791 100644
--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
+++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
@@ -78,6 +78,11 @@ static const struct be_ethtool_stat et_stats[] = {
* fifo must never overflow.
*/
{DRVSTAT_INFO(rxpp_fifo_overflow_drop)},
+ /* Received packets dropped when the RX block runs out of space in
+ * one of its input FIFOs. This could happen due a long burst of
+ * minimum-sized (64b) frames in the receive path.
+ * This counter may also be erroneously incremented rarely.
+ */
{DRVSTAT_INFO(rx_input_fifo_overflow_drop)},
{DRVSTAT_INFO(rx_ip_checksum_errs)},
{DRVSTAT_INFO(rx_tcp_checksum_errs)},
@@ -114,6 +119,8 @@ static const struct be_ethtool_stat et_stats[] = {
* is more than 9018 bytes
*/
{DRVSTAT_INFO(rx_drops_mtu)},
+ /* Number of dma mapping errors */
+ {DRVSTAT_INFO(dma_map_errors)},
/* Number of packets dropped due to random early drop function */
{DRVSTAT_INFO(eth_red_drops)},
{DRVSTAT_INFO(be_on_die_temperature)},
@@ -123,6 +130,7 @@ static const struct be_ethtool_stat et_stats[] = {
{DRVSTAT_INFO(roce_drops_payload_len)},
{DRVSTAT_INFO(roce_drops_crc)}
};
+
#define ETHTOOL_STATS_NUM ARRAY_SIZE(et_stats)
/* Stats related to multi RX queues: get_stats routine assumes bytes, pkts
@@ -145,6 +153,7 @@ static const struct be_ethtool_stat et_rx_stats[] = {
*/
{DRVSTAT_RX_INFO(rx_drops_no_frags)}
};
+
#define ETHTOOL_RXSTATS_NUM (ARRAY_SIZE(et_rx_stats))
/* Stats related to multi TX queues: get_stats routine assumes compl is the
@@ -152,6 +161,34 @@ static const struct be_ethtool_stat et_rx_stats[] = {
*/
static const struct be_ethtool_stat et_tx_stats[] = {
{DRVSTAT_TX_INFO(tx_compl)}, /* If moving this member see above note */
+ /* This counter is incremented when the HW encounters an error while
+ * parsing the packet header of an outgoing TX request. This counter is
+ * applicable only for BE2, BE3 and Skyhawk based adapters.
+ */
+ {DRVSTAT_TX_INFO(tx_hdr_parse_err)},
+ /* This counter is incremented when an error occurs in the DMA
+ * operation associated with the TX request from the host to the device.
+ */
+ {DRVSTAT_TX_INFO(tx_dma_err)},
+ /* This counter is incremented when MAC or VLAN spoof checking is
+ * enabled on the interface and the TX request fails the spoof check
+ * in HW.
+ */
+ {DRVSTAT_TX_INFO(tx_spoof_check_err)},
+ /* This counter is incremented when the HW encounters an error while
+ * performing TSO offload. This counter is applicable only for Lancer
+ * adapters.
+ */
+ {DRVSTAT_TX_INFO(tx_tso_err)},
+ /* This counter is incremented when the HW detects Q-in-Q style VLAN
+ * tagging in a packet and such tagging is not expected on the outgoing
+ * interface. This counter is applicable only for Lancer adapters.
+ */
+ {DRVSTAT_TX_INFO(tx_qinq_err)},
+ /* This counter is incremented when the HW detects parity errors in the
+ * packet data. This counter is applicable only for Lancer adapters.
+ */
+ {DRVSTAT_TX_INFO(tx_internal_parity_err)},
{DRVSTAT_TX_INFO(tx_bytes)},
{DRVSTAT_TX_INFO(tx_pkts)},
/* Number of skbs queued for transmission by the driver */
@@ -165,6 +202,7 @@ static const struct be_ethtool_stat et_tx_stats[] = {
/* Pkts dropped in the driver's transmit path */
{DRVSTAT_TX_INFO(tx_drv_drops)}
};
+
#define ETHTOOL_TXSTATS_NUM (ARRAY_SIZE(et_tx_stats))
static const char et_self_tests[][ETH_GSTRING_LEN] = {
@@ -239,7 +277,7 @@ static int lancer_cmd_read_file(struct be_adapter *adapter, u8 *file_name,
while ((total_read_len < buf_len) && !eof) {
chunk_size = min_t(u32, (buf_len - total_read_len),
- LANCER_READ_FILE_CHUNK);
+ LANCER_READ_FILE_CHUNK);
chunk_size = ALIGN(chunk_size, 4);
status = lancer_cmd_read_object(adapter, &read_cmd, chunk_size,
total_read_len, file_name,
@@ -298,7 +336,6 @@ static int be_get_coalesce(struct net_device *netdev,
struct be_adapter *adapter = netdev_priv(netdev);
struct be_aic_obj *aic = &adapter->aic_obj[0];
-
et->rx_coalesce_usecs = aic->prev_eqd;
et->rx_coalesce_usecs_high = aic->max_eqd;
et->rx_coalesce_usecs_low = aic->min_eqd;
@@ -440,18 +477,27 @@ static int be_get_sset_count(struct net_device *netdev, int stringset)
}
}
-static u32 be_get_port_type(u32 phy_type, u32 dac_cable_len)
+static u32 be_get_port_type(struct be_adapter *adapter)
{
u32 port;
- switch (phy_type) {
+ switch (adapter->phy.interface_type) {
case PHY_TYPE_BASET_1GB:
case PHY_TYPE_BASEX_1GB:
case PHY_TYPE_SGMII:
port = PORT_TP;
break;
case PHY_TYPE_SFP_PLUS_10GB:
- port = dac_cable_len ? PORT_DA : PORT_FIBRE;
+ if (adapter->phy.cable_type & SFP_PLUS_COPPER_CABLE)
+ port = PORT_DA;
+ else
+ port = PORT_FIBRE;
+ break;
+ case PHY_TYPE_QSFP:
+ if (adapter->phy.cable_type & QSFP_PLUS_CR4_CABLE)
+ port = PORT_DA;
+ else
+ port = PORT_FIBRE;
break;
case PHY_TYPE_XFP_10GB:
case PHY_TYPE_SFP_1GB:
@@ -467,11 +513,11 @@ static u32 be_get_port_type(u32 phy_type, u32 dac_cable_len)
return port;
}
-static u32 convert_to_et_setting(u32 if_type, u32 if_speeds)
+static u32 convert_to_et_setting(struct be_adapter *adapter, u32 if_speeds)
{
u32 val = 0;
- switch (if_type) {
+ switch (adapter->phy.interface_type) {
case PHY_TYPE_BASET_1GB:
case PHY_TYPE_BASEX_1GB:
case PHY_TYPE_SGMII:
@@ -490,10 +536,38 @@ static u32 convert_to_et_setting(u32 if_type, u32 if_speeds)
if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
val |= SUPPORTED_10000baseKX4_Full;
break;
+ case PHY_TYPE_KR2_20GB:
+ val |= SUPPORTED_Backplane;
+ if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
+ val |= SUPPORTED_10000baseKR_Full;
+ if (if_speeds & BE_SUPPORTED_SPEED_20GBPS)
+ val |= SUPPORTED_20000baseKR2_Full;
+ break;
case PHY_TYPE_KR_10GB:
val |= SUPPORTED_Backplane |
SUPPORTED_10000baseKR_Full;
break;
+ case PHY_TYPE_KR4_40GB:
+ val |= SUPPORTED_Backplane;
+ if (if_speeds & BE_SUPPORTED_SPEED_10GBPS)
+ val |= SUPPORTED_10000baseKR_Full;
+ if (if_speeds & BE_SUPPORTED_SPEED_40GBPS)
+ val |= SUPPORTED_40000baseKR4_Full;
+ break;
+ case PHY_TYPE_QSFP:
+ if (if_speeds & BE_SUPPORTED_SPEED_40GBPS) {
+ switch (adapter->phy.cable_type) {
+ case QSFP_PLUS_CR4_CABLE:
+ val |= SUPPORTED_40000baseCR4_Full;
+ break;
+ case QSFP_PLUS_LR4_CABLE:
+ val |= SUPPORTED_40000baseLR4_Full;
+ break;
+ default:
+ val |= SUPPORTED_40000baseSR4_Full;
+ break;
+ }
+ }
case PHY_TYPE_SFP_PLUS_10GB:
case PHY_TYPE_XFP_10GB:
case PHY_TYPE_SFP_1GB:
@@ -534,8 +608,6 @@ static int be_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
int status;
u32 auto_speeds;
u32 fixed_speeds;
- u32 dac_cable_len;
- u16 interface_type;
if (adapter->phy.link_speed < 0) {
status = be_cmd_link_status_query(adapter, &link_speed,
@@ -546,21 +618,19 @@ static int be_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
status = be_cmd_get_phy_info(adapter);
if (!status) {
- interface_type = adapter->phy.interface_type;
auto_speeds = adapter->phy.auto_speeds_supported;
fixed_speeds = adapter->phy.fixed_speeds_supported;
- dac_cable_len = adapter->phy.dac_cable_len;
+
+ be_cmd_query_cable_type(adapter);
ecmd->supported =
- convert_to_et_setting(interface_type,
+ convert_to_et_setting(adapter,
auto_speeds |
fixed_speeds);
ecmd->advertising =
- convert_to_et_setting(interface_type,
- auto_speeds);
+ convert_to_et_setting(adapter, auto_speeds);
- ecmd->port = be_get_port_type(interface_type,
- dac_cable_len);
+ ecmd->port = be_get_port_type(adapter);
if (adapter->phy.auto_speeds_supported) {
ecmd->supported |= SUPPORTED_Autoneg;
@@ -614,8 +684,10 @@ static void be_get_ringparam(struct net_device *netdev,
{
struct be_adapter *adapter = netdev_priv(netdev);
- ring->rx_max_pending = ring->rx_pending = adapter->rx_obj[0].q.len;
- ring->tx_max_pending = ring->tx_pending = adapter->tx_obj[0].q.len;
+ ring->rx_max_pending = adapter->rx_obj[0].q.len;
+ ring->rx_pending = adapter->rx_obj[0].q.len;
+ ring->tx_max_pending = adapter->tx_obj[0].q.len;
+ ring->tx_pending = adapter->tx_obj[0].q.len;
}
static void
@@ -641,7 +713,7 @@ be_set_pauseparam(struct net_device *netdev, struct ethtool_pauseparam *ecmd)
status = be_cmd_set_flow_control(adapter,
adapter->tx_fc, adapter->rx_fc);
if (status)
- dev_warn(&adapter->pdev->dev, "Pause param set failed.\n");
+ dev_warn(&adapter->pdev->dev, "Pause param set failed\n");
return be_cmd_status(status);
}
@@ -907,8 +979,6 @@ static void be_set_msg_level(struct net_device *netdev, u32 level)
FW_LOG_LEVEL_DEFAULT :
FW_LOG_LEVEL_FATAL);
adapter->msg_enable = level;
-
- return;
}
static u64 be_get_rss_hash_opts(struct be_adapter *adapter, u64 flow_type)
@@ -1127,6 +1197,7 @@ static int be_set_rxfh(struct net_device *netdev, const u32 *indir,
if (indir) {
struct be_rx_obj *rxo;
+
for (i = 0; i < RSS_INDIR_TABLE_LEN; i++) {
j = indir[i];
rxo = &adapter->rx_obj[j];
@@ -1142,8 +1213,8 @@ static int be_set_rxfh(struct net_device *netdev, const u32 *indir,
hkey = adapter->rss_info.rss_hkey;
rc = be_cmd_rss_config(adapter, rsstable,
- adapter->rss_info.rss_flags,
- RSS_INDIR_TABLE_LEN, hkey);
+ adapter->rss_info.rss_flags,
+ RSS_INDIR_TABLE_LEN, hkey);
if (rc) {
adapter->rss_info.rss_flags = RSS_ENABLE_NONE;
return -EIO;
@@ -1154,6 +1225,58 @@ static int be_set_rxfh(struct net_device *netdev, const u32 *indir,
return 0;
}
+static int be_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct be_adapter *adapter = netdev_priv(netdev);
+ u8 page_data[PAGE_DATA_LEN];
+ int status;
+
+ if (!check_privilege(adapter, MAX_PRIVILEGES))
+ return -EOPNOTSUPP;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ page_data);
+ if (!status) {
+ if (!page_data[SFP_PLUS_SFF_8472_COMP]) {
+ modinfo->type = ETH_MODULE_SFF_8079;
+ modinfo->eeprom_len = PAGE_DATA_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = 2 * PAGE_DATA_LEN;
+ }
+ }
+ return be_cmd_status(status);
+}
+
+static int be_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct be_adapter *adapter = netdev_priv(netdev);
+ int status;
+
+ if (!check_privilege(adapter, MAX_PRIVILEGES))
+ return -EOPNOTSUPP;
+
+ status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0,
+ data);
+ if (status)
+ goto err;
+
+ if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) {
+ status = be_cmd_read_port_transceiver_data(adapter,
+ TR_PAGE_A2,
+ data +
+ PAGE_DATA_LEN);
+ if (status)
+ goto err;
+ }
+ if (eeprom->offset)
+ memcpy(data, data + eeprom->offset, eeprom->len);
+err:
+ return be_cmd_status(status);
+}
+
const struct ethtool_ops be_ethtool_ops = {
.get_settings = be_get_settings,
.get_drvinfo = be_get_drvinfo,
@@ -1185,5 +1308,7 @@ const struct ethtool_ops be_ethtool_ops = {
.get_rxfh = be_get_rxfh,
.set_rxfh = be_set_rxfh,
.get_channels = be_get_channels,
- .set_channels = be_set_channels
+ .set_channels = be_set_channels,
+ .get_module_info = be_get_module_info,
+ .get_module_eeprom = be_get_module_eeprom
};
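/* A note on the two new callbacks registered above (a reading of the patch,
 * not part of it): be_get_module_info() reports whether the plugged
 * transceiver exposes only the SFF-8079 page (eeprom_len = PAGE_DATA_LEN,
 * presumably 256 bytes) or the full SFF-8472 data (two pages), and
 * be_get_module_eeprom() copies the requested window out of the A0/A2 pages
 * read from the firmware. These are the hooks behind the standard
 * "ethtool -m <ifname>" module-EEPROM dump; <ifname> is only a placeholder
 * for whatever name the benet port is given.
 */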
diff --git a/drivers/net/ethernet/emulex/benet/be_hw.h b/drivers/net/ethernet/emulex/benet/be_hw.h
index 8840c64..295ee08 100644
--- a/drivers/net/ethernet/emulex/benet/be_hw.h
+++ b/drivers/net/ethernet/emulex/benet/be_hw.h
@@ -315,6 +315,18 @@ struct be_eth_hdr_wrb {
u32 dw[4];
};
+/********* Tx Compl Status Encoding *********/
+#define BE_TX_COMP_HDR_PARSE_ERR 0x2
+#define BE_TX_COMP_NDMA_ERR 0x3
+#define BE_TX_COMP_ACL_ERR 0x5
+
+#define LANCER_TX_COMP_LSO_ERR 0x1
+#define LANCER_TX_COMP_HSW_DROP_MAC_ERR 0x3
+#define LANCER_TX_COMP_HSW_DROP_VLAN_ERR 0x5
+#define LANCER_TX_COMP_QINQ_ERR 0x7
+#define LANCER_TX_COMP_PARITY_ERR 0xb
+#define LANCER_TX_COMP_DMA_ERR 0xd
+
/* TX Compl Queue Descriptor */
/* Pseudo amap definition for eth_tx_compl in which each bit of the
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 93ff8ef..9a18e79 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -86,6 +86,7 @@ static const char * const ue_status_low_desc[] = {
"JTAG ",
"MPU_INTPEND "
};
+
/* UE Status High CSR */
static const char * const ue_status_hi_desc[] = {
"LPCMEMHOST",
@@ -122,10 +123,10 @@ static const char * const ue_status_hi_desc[] = {
"Unknown"
};
-
static void be_queue_free(struct be_adapter *adapter, struct be_queue_info *q)
{
struct be_dma_mem *mem = &q->dma_mem;
+
if (mem->va) {
dma_free_coherent(&adapter->pdev->dev, mem->size, mem->va,
mem->dma);
@@ -187,6 +188,7 @@ static void be_intr_set(struct be_adapter *adapter, bool enable)
static void be_rxq_notify(struct be_adapter *adapter, u16 qid, u16 posted)
{
u32 val = 0;
+
val |= qid & DB_RQ_RING_ID_MASK;
val |= posted << DB_RQ_NUM_POSTED_SHIFT;
@@ -198,6 +200,7 @@ static void be_txq_notify(struct be_adapter *adapter, struct be_tx_obj *txo,
u16 posted)
{
u32 val = 0;
+
val |= txo->q.id & DB_TXULP_RING_ID_MASK;
val |= (posted & DB_TXULP_NUM_POSTED_MASK) << DB_TXULP_NUM_POSTED_SHIFT;
@@ -209,6 +212,7 @@ static void be_eq_notify(struct be_adapter *adapter, u16 qid,
bool arm, bool clear_int, u16 num_popped)
{
u32 val = 0;
+
val |= qid & DB_EQ_RING_ID_MASK;
val |= ((qid & DB_EQ_RING_ID_EXT_MASK) << DB_EQ_RING_ID_EXT_MASK_SHIFT);
@@ -227,6 +231,7 @@ static void be_eq_notify(struct be_adapter *adapter, u16 qid,
void be_cq_notify(struct be_adapter *adapter, u16 qid, bool arm, u16 num_popped)
{
u32 val = 0;
+
val |= qid & DB_CQ_RING_ID_MASK;
val |= ((qid & DB_CQ_RING_ID_EXT_MASK) <<
DB_CQ_RING_ID_EXT_MASK_SHIFT);
@@ -488,7 +493,6 @@ static void populate_be_v2_stats(struct be_adapter *adapter)
static void populate_lancer_stats(struct be_adapter *adapter)
{
-
struct be_drv_stats *drvs = &adapter->drv_stats;
struct lancer_pport_stats *pport_stats = pport_stats_from_cmd(adapter);
@@ -588,6 +592,7 @@ static struct rtnl_link_stats64 *be_get_stats64(struct net_device *netdev,
for_all_rx_queues(adapter, rxo, i) {
const struct be_rx_stats *rx_stats = rx_stats(rxo);
+
do {
start = u64_stats_fetch_begin_irq(&rx_stats->sync);
pkts = rx_stats(rxo)->rx_pkts;
@@ -602,6 +607,7 @@ static struct rtnl_link_stats64 *be_get_stats64(struct net_device *netdev,
for_all_tx_queues(adapter, txo, i) {
const struct be_tx_stats *tx_stats = tx_stats(txo);
+
do {
start = u64_stats_fetch_begin_irq(&tx_stats->sync);
pkts = tx_stats(txo)->tx_pkts;
@@ -738,38 +744,37 @@ static void wrb_fill_hdr(struct be_adapter *adapter, struct be_eth_hdr_wrb *hdr,
memset(hdr, 0, sizeof(*hdr));
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, crc, hdr, 1);
+ SET_TX_WRB_HDR_BITS(crc, hdr, 1);
if (skb_is_gso(skb)) {
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, lso, hdr, 1);
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, lso_mss,
- hdr, skb_shinfo(skb)->gso_size);
+ SET_TX_WRB_HDR_BITS(lso, hdr, 1);
+ SET_TX_WRB_HDR_BITS(lso_mss, hdr, skb_shinfo(skb)->gso_size);
if (skb_is_gso_v6(skb) && !lancer_chip(adapter))
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, lso6, hdr, 1);
+ SET_TX_WRB_HDR_BITS(lso6, hdr, 1);
} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
if (skb->encapsulation) {
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, ipcs, hdr, 1);
+ SET_TX_WRB_HDR_BITS(ipcs, hdr, 1);
proto = skb_inner_ip_proto(skb);
} else {
proto = skb_ip_proto(skb);
}
if (proto == IPPROTO_TCP)
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, tcpcs, hdr, 1);
+ SET_TX_WRB_HDR_BITS(tcpcs, hdr, 1);
else if (proto == IPPROTO_UDP)
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, udpcs, hdr, 1);
+ SET_TX_WRB_HDR_BITS(udpcs, hdr, 1);
}
if (vlan_tx_tag_present(skb)) {
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, vlan, hdr, 1);
+ SET_TX_WRB_HDR_BITS(vlan, hdr, 1);
vlan_tag = be_get_tx_vlan_tag(adapter, skb);
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, vlan_tag, hdr, vlan_tag);
+ SET_TX_WRB_HDR_BITS(vlan_tag, hdr, vlan_tag);
}
/* To skip HW VLAN tagging: evt = 1, compl = 0 */
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, complete, hdr, !skip_hw_vlan);
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, event, hdr, 1);
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, num_wrb, hdr, wrb_cnt);
- AMAP_SET_BITS(struct amap_eth_hdr_wrb, len, hdr, len);
+ SET_TX_WRB_HDR_BITS(complete, hdr, !skip_hw_vlan);
+ SET_TX_WRB_HDR_BITS(event, hdr, 1);
+ SET_TX_WRB_HDR_BITS(num_wrb, hdr, wrb_cnt);
+ SET_TX_WRB_HDR_BITS(len, hdr, len);
}
static void unmap_tx_frag(struct device *dev, struct be_eth_wrb *wrb,
@@ -808,6 +813,7 @@ static int make_tx_wrbs(struct be_adapter *adapter, struct be_queue_info *txq,
if (skb->len > skb->data_len) {
int len = skb_headlen(skb);
+
busaddr = dma_map_single(dev, skb->data, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, busaddr))
goto dma_err;
@@ -821,6 +827,7 @@ static int make_tx_wrbs(struct be_adapter *adapter, struct be_queue_info *txq,
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
busaddr = skb_frag_dma_map(dev, frag, 0,
skb_frag_size(frag), DMA_TO_DEVICE);
if (dma_mapping_error(dev, busaddr))
@@ -850,6 +857,7 @@ dma_err:
unmap_tx_frag(dev, wrb, map_single);
map_single = false;
copied -= wrb->frag_len;
+ adapter->drv_stats.dma_map_errors++;
queue_head_inc(txq);
}
return 0;
@@ -910,7 +918,7 @@ static bool be_ipv6_exthdr_check(struct sk_buff *skb)
if (ip6h->nexthdr != NEXTHDR_TCP &&
ip6h->nexthdr != NEXTHDR_UDP) {
struct ipv6_opt_hdr *ehdr =
- (struct ipv6_opt_hdr *) (skb->data + offset);
+ (struct ipv6_opt_hdr *)(skb->data + offset);
/* offending pkt: 2nd byte following IPv6 hdr is 0xff */
if (ehdr->hdrlen == 0xff)
@@ -974,8 +982,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
* skip HW tagging is not enabled by FW.
*/
if (unlikely(be_ipv6_tx_stall_chk(adapter, skb) &&
- (adapter->pvid || adapter->qnq_vid) &&
- !qnq_async_evt_rcvd(adapter)))
+ (adapter->pvid || adapter->qnq_vid) &&
+ !qnq_async_evt_rcvd(adapter)))
goto tx_drop;
/* Manual VLAN tag insertion to prevent:
@@ -1073,15 +1081,15 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
static int be_change_mtu(struct net_device *netdev, int new_mtu)
{
struct be_adapter *adapter = netdev_priv(netdev);
- if (new_mtu < BE_MIN_MTU ||
- new_mtu > (BE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN))) {
- dev_info(&adapter->pdev->dev,
- "MTU must be between %d and %d bytes\n",
- BE_MIN_MTU,
- (BE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN)));
+ struct device *dev = &adapter->pdev->dev;
+
+ if (new_mtu < BE_MIN_MTU || new_mtu > BE_MAX_MTU) {
+ dev_info(dev, "MTU must be between %d and %d bytes\n",
+ BE_MIN_MTU, BE_MAX_MTU);
return -EINVAL;
}
- dev_info(&adapter->pdev->dev, "MTU changed from %d to %d bytes\n",
+
+ dev_info(dev, "MTU changed from %d to %d bytes\n",
netdev->mtu, new_mtu);
netdev->mtu = new_mtu;
return 0;
@@ -1093,6 +1101,7 @@ static int be_change_mtu(struct net_device *netdev, int new_mtu)
*/
static int be_vid_config(struct be_adapter *adapter)
{
+ struct device *dev = &adapter->pdev->dev;
u16 vids[BE_NUM_VLANS_SUPPORTED];
u16 num = 0, i = 0;
int status = 0;
@@ -1114,16 +1123,15 @@ static int be_vid_config(struct be_adapter *adapter)
if (addl_status(status) ==
MCC_ADDL_STATUS_INSUFFICIENT_RESOURCES)
goto set_vlan_promisc;
- dev_err(&adapter->pdev->dev,
- "Setting HW VLAN filtering failed.\n");
+ dev_err(dev, "Setting HW VLAN filtering failed\n");
} else {
if (adapter->flags & BE_FLAGS_VLAN_PROMISC) {
/* hw VLAN filtering re-enabled. */
status = be_cmd_rx_filter(adapter,
BE_FLAGS_VLAN_PROMISC, OFF);
if (!status) {
- dev_info(&adapter->pdev->dev,
- "Disabling VLAN Promiscuous mode.\n");
+ dev_info(dev,
+ "Disabling VLAN Promiscuous mode\n");
adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
}
}
@@ -1137,11 +1145,10 @@ set_vlan_promisc:
status = be_cmd_rx_filter(adapter, BE_FLAGS_VLAN_PROMISC, ON);
if (!status) {
- dev_info(&adapter->pdev->dev, "Enable VLAN Promiscuous mode\n");
+ dev_info(dev, "Enable VLAN Promiscuous mode\n");
adapter->flags |= BE_FLAGS_VLAN_PROMISC;
} else
- dev_err(&adapter->pdev->dev,
- "Failed to enable VLAN Promiscuous mode.\n");
+ dev_err(dev, "Failed to enable VLAN Promiscuous mode\n");
return status;
}
@@ -1417,6 +1424,7 @@ err:
max_tx_rate, vf);
return be_cmd_status(status);
}
+
static int be_set_vf_link_state(struct net_device *netdev, int vf,
int link_state)
{
@@ -1482,7 +1490,6 @@ static void be_eqd_update(struct be_adapter *adapter)
tx_pkts = txo->stats.tx_reqs;
} while (u64_stats_fetch_retry_irq(&txo->stats.sync, start));
-
/* Skip, if wrapped around or first calculation */
now = jiffies;
if (!aic->jiffies || time_before(now, aic->jiffies) ||
@@ -1683,7 +1690,7 @@ static void be_rx_compl_process(struct be_rx_obj *rxo, struct napi_struct *napi,
if (netdev->features & NETIF_F_RXHASH)
skb_set_hash(skb, rxcp->rss_hash, PKT_HASH_TYPE_L3);
- skb->encapsulation = rxcp->tunneled;
+ skb->csum_level = rxcp->tunneled;
skb_mark_napi_id(skb, napi);
if (rxcp->vlanf)
@@ -1741,7 +1748,7 @@ static void be_rx_compl_process_gro(struct be_rx_obj *rxo,
if (adapter->netdev->features & NETIF_F_RXHASH)
skb_set_hash(skb, rxcp->rss_hash, PKT_HASH_TYPE_L3);
- skb->encapsulation = rxcp->tunneled;
+ skb->csum_level = rxcp->tunneled;
skb_mark_napi_id(skb, napi);
if (rxcp->vlanf)
@@ -1753,65 +1760,46 @@ static void be_rx_compl_process_gro(struct be_rx_obj *rxo,
static void be_parse_rx_compl_v1(struct be_eth_rx_compl *compl,
struct be_rx_compl_info *rxcp)
{
- rxcp->pkt_size =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, pktsize, compl);
- rxcp->vlanf = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, vtp, compl);
- rxcp->err = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, err, compl);
- rxcp->tcpf = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, tcpf, compl);
- rxcp->udpf = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, udpf, compl);
- rxcp->ip_csum =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, ipcksm, compl);
- rxcp->l4_csum =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, l4_cksm, compl);
- rxcp->ipv6 =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, ip_version, compl);
- rxcp->num_rcvd =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, numfrags, compl);
- rxcp->pkt_type =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, cast_enc, compl);
- rxcp->rss_hash =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, rsshash, compl);
+ rxcp->pkt_size = GET_RX_COMPL_V1_BITS(pktsize, compl);
+ rxcp->vlanf = GET_RX_COMPL_V1_BITS(vtp, compl);
+ rxcp->err = GET_RX_COMPL_V1_BITS(err, compl);
+ rxcp->tcpf = GET_RX_COMPL_V1_BITS(tcpf, compl);
+ rxcp->udpf = GET_RX_COMPL_V1_BITS(udpf, compl);
+ rxcp->ip_csum = GET_RX_COMPL_V1_BITS(ipcksm, compl);
+ rxcp->l4_csum = GET_RX_COMPL_V1_BITS(l4_cksm, compl);
+ rxcp->ipv6 = GET_RX_COMPL_V1_BITS(ip_version, compl);
+ rxcp->num_rcvd = GET_RX_COMPL_V1_BITS(numfrags, compl);
+ rxcp->pkt_type = GET_RX_COMPL_V1_BITS(cast_enc, compl);
+ rxcp->rss_hash = GET_RX_COMPL_V1_BITS(rsshash, compl);
if (rxcp->vlanf) {
- rxcp->qnq = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, qnq,
- compl);
- rxcp->vlan_tag = AMAP_GET_BITS(struct amap_eth_rx_compl_v1,
- vlan_tag, compl);
+ rxcp->qnq = GET_RX_COMPL_V1_BITS(qnq, compl);
+ rxcp->vlan_tag = GET_RX_COMPL_V1_BITS(vlan_tag, compl);
}
- rxcp->port = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, port, compl);
+ rxcp->port = GET_RX_COMPL_V1_BITS(port, compl);
rxcp->tunneled =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v1, tunneled, compl);
+ GET_RX_COMPL_V1_BITS(tunneled, compl);
}
static void be_parse_rx_compl_v0(struct be_eth_rx_compl *compl,
struct be_rx_compl_info *rxcp)
{
- rxcp->pkt_size =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, pktsize, compl);
- rxcp->vlanf = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, vtp, compl);
- rxcp->err = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, err, compl);
- rxcp->tcpf = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, tcpf, compl);
- rxcp->udpf = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, udpf, compl);
- rxcp->ip_csum =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, ipcksm, compl);
- rxcp->l4_csum =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, l4_cksm, compl);
- rxcp->ipv6 =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, ip_version, compl);
- rxcp->num_rcvd =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, numfrags, compl);
- rxcp->pkt_type =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, cast_enc, compl);
- rxcp->rss_hash =
- AMAP_GET_BITS(struct amap_eth_rx_compl_v0, rsshash, compl);
+ rxcp->pkt_size = GET_RX_COMPL_V0_BITS(pktsize, compl);
+ rxcp->vlanf = GET_RX_COMPL_V0_BITS(vtp, compl);
+ rxcp->err = GET_RX_COMPL_V0_BITS(err, compl);
+ rxcp->tcpf = GET_RX_COMPL_V0_BITS(tcpf, compl);
+ rxcp->udpf = GET_RX_COMPL_V0_BITS(udpf, compl);
+ rxcp->ip_csum = GET_RX_COMPL_V0_BITS(ipcksm, compl);
+ rxcp->l4_csum = GET_RX_COMPL_V0_BITS(l4_cksm, compl);
+ rxcp->ipv6 = GET_RX_COMPL_V0_BITS(ip_version, compl);
+ rxcp->num_rcvd = GET_RX_COMPL_V0_BITS(numfrags, compl);
+ rxcp->pkt_type = GET_RX_COMPL_V0_BITS(cast_enc, compl);
+ rxcp->rss_hash = GET_RX_COMPL_V0_BITS(rsshash, compl);
if (rxcp->vlanf) {
- rxcp->qnq = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, qnq,
- compl);
- rxcp->vlan_tag = AMAP_GET_BITS(struct amap_eth_rx_compl_v0,
- vlan_tag, compl);
+ rxcp->qnq = GET_RX_COMPL_V0_BITS(qnq, compl);
+ rxcp->vlan_tag = GET_RX_COMPL_V0_BITS(vlan_tag, compl);
}
- rxcp->port = AMAP_GET_BITS(struct amap_eth_rx_compl_v0, port, compl);
- rxcp->ip_frag = AMAP_GET_BITS(struct amap_eth_rx_compl_v0,
- ip_frag, compl);
+ rxcp->port = GET_RX_COMPL_V0_BITS(port, compl);
+ rxcp->ip_frag = GET_RX_COMPL_V0_BITS(ip_frag, compl);
}
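/* Going by the one-for-one substitution in the two functions above, the new
 * GET_RX_COMPL_V0_BITS()/GET_RX_COMPL_V1_BITS() helpers (presumably added to
 * be_hw.h) are shorthands for the old AMAP accessors; e.g. the new
 *	rxcp->pkt_size = GET_RX_COMPL_V1_BITS(pktsize, compl);
 * stands in for the removed
 *	rxcp->pkt_size = AMAP_GET_BITS(struct amap_eth_rx_compl_v1, pktsize, compl);
 * The same pattern applies to SET_TX_WRB_HDR_BITS() and GET_TX_COMPL_BITS()
 * elsewhere in this patch.
 */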
static struct be_rx_compl_info *be_rx_compl_get(struct be_rx_obj *rxo)
@@ -1872,7 +1860,7 @@ static inline struct page *be_alloc_pages(u32 size, gfp_t gfp)
* Allocate a page, split it into fragments of size rx_frag_size and post as
* receive buffers to BE
*/
-static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp)
+static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp, u32 frags_needed)
{
struct be_adapter *adapter = rxo->adapter;
struct be_rx_page_info *page_info = NULL, *prev_page_info = NULL;
@@ -1881,10 +1869,10 @@ static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp)
struct device *dev = &adapter->pdev->dev;
struct be_eth_rx_d *rxd;
u64 page_dmaaddr = 0, frag_dmaaddr;
- u32 posted, page_offset = 0;
+ u32 posted, page_offset = 0, notify = 0;
page_info = &rxo->page_info_tbl[rxq->head];
- for (posted = 0; posted < MAX_RX_POST && !page_info->page; posted++) {
+ for (posted = 0; posted < frags_needed && !page_info->page; posted++) {
if (!pagep) {
pagep = be_alloc_pages(adapter->big_page_size, gfp);
if (unlikely(!pagep)) {
@@ -1897,7 +1885,7 @@ static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp)
if (dma_mapping_error(dev, page_dmaaddr)) {
put_page(pagep);
pagep = NULL;
- rx_stats(rxo)->rx_post_fail++;
+ adapter->drv_stats.dma_map_errors++;
break;
}
page_offset = 0;
@@ -1940,7 +1928,11 @@ static void be_post_rx_frags(struct be_rx_obj *rxo, gfp_t gfp)
atomic_add(posted, &rxq->used);
if (rxo->rx_post_starved)
rxo->rx_post_starved = false;
- be_rxq_notify(adapter, rxq->id, posted);
+ do {
+ notify = min(256u, posted);
+ be_rxq_notify(adapter, rxq->id, notify);
+ posted -= notify;
+ } while (posted);
} else if (atomic_read(&rxq->used) == 0) {
/* Let be_worker replenish when memory is available */
rxo->rx_post_starved = true;
@@ -1991,7 +1983,7 @@ static u16 be_tx_compl_process(struct be_adapter *adapter,
queue_tail_inc(txq);
} while (cur_index != last_index);
- dev_kfree_skb_any(sent_skb);
+ dev_consume_skb_any(sent_skb);
return num_wrbs;
}
@@ -2069,7 +2061,8 @@ static void be_rx_cq_clean(struct be_rx_obj *rxo)
memset(page_info, 0, sizeof(*page_info));
}
BUG_ON(atomic_read(&rxq->used));
- rxq->tail = rxq->head = 0;
+ rxq->tail = 0;
+ rxq->head = 0;
}
static void be_tx_compl_clean(struct be_adapter *adapter)
@@ -2091,9 +2084,7 @@ static void be_tx_compl_clean(struct be_adapter *adapter)
num_wrbs = 0;
txq = &txo->q;
while ((txcp = be_tx_compl_get(&txo->cq))) {
- end_idx =
- AMAP_GET_BITS(struct amap_eth_tx_compl,
- wrb_index, txcp);
+ end_idx = GET_TX_COMPL_BITS(wrb_index, txcp);
num_wrbs += be_tx_compl_process(adapter, txo,
end_idx);
cmpl++;
@@ -2164,7 +2155,6 @@ static int be_evt_queues_create(struct be_adapter *adapter)
napi_hash_add(&eqo->napi);
aic = &adapter->aic_obj[i];
eqo->adapter = adapter;
- eqo->tx_budget = BE_TX_BUDGET;
eqo->idx = i;
aic->max_eqd = BE_MAX_EQD;
aic->enable = true;
@@ -2394,6 +2384,7 @@ static int be_process_rx(struct be_rx_obj *rxo, struct napi_struct *napi,
struct be_queue_info *rx_cq = &rxo->cq;
struct be_rx_compl_info *rxcp;
u32 work_done;
+ u32 frags_consumed = 0;
for (work_done = 0; work_done < budget; work_done++) {
rxcp = be_rx_compl_get(rxo);
@@ -2426,6 +2417,7 @@ static int be_process_rx(struct be_rx_obj *rxo, struct napi_struct *napi,
be_rx_compl_process(rxo, napi, rxcp);
loop_continue:
+ frags_consumed += rxcp->num_rcvd;
be_rx_stats_update(rxo, rxcp);
}
@@ -2437,26 +2429,71 @@ loop_continue:
*/
if (atomic_read(&rxo->q.used) < RX_FRAGS_REFILL_WM &&
!rxo->rx_post_starved)
- be_post_rx_frags(rxo, GFP_ATOMIC);
+ be_post_rx_frags(rxo, GFP_ATOMIC,
+ max_t(u32, MAX_RX_POST,
+ frags_consumed));
}
return work_done;
}
-static bool be_process_tx(struct be_adapter *adapter, struct be_tx_obj *txo,
- int budget, int idx)
+static inline void be_update_tx_err(struct be_tx_obj *txo, u32 status)
+{
+ switch (status) {
+ case BE_TX_COMP_HDR_PARSE_ERR:
+ tx_stats(txo)->tx_hdr_parse_err++;
+ break;
+ case BE_TX_COMP_NDMA_ERR:
+ tx_stats(txo)->tx_dma_err++;
+ break;
+ case BE_TX_COMP_ACL_ERR:
+ tx_stats(txo)->tx_spoof_check_err++;
+ break;
+ }
+}
+
+static inline void lancer_update_tx_err(struct be_tx_obj *txo, u32 status)
+{
+ switch (status) {
+ case LANCER_TX_COMP_LSO_ERR:
+ tx_stats(txo)->tx_tso_err++;
+ break;
+ case LANCER_TX_COMP_HSW_DROP_MAC_ERR:
+ case LANCER_TX_COMP_HSW_DROP_VLAN_ERR:
+ tx_stats(txo)->tx_spoof_check_err++;
+ break;
+ case LANCER_TX_COMP_QINQ_ERR:
+ tx_stats(txo)->tx_qinq_err++;
+ break;
+ case LANCER_TX_COMP_PARITY_ERR:
+ tx_stats(txo)->tx_internal_parity_err++;
+ break;
+ case LANCER_TX_COMP_DMA_ERR:
+ tx_stats(txo)->tx_dma_err++;
+ break;
+ }
+}
+
+static void be_process_tx(struct be_adapter *adapter, struct be_tx_obj *txo,
+ int idx)
{
struct be_eth_tx_compl *txcp;
- int num_wrbs = 0, work_done;
+ int num_wrbs = 0, work_done = 0;
+ u32 compl_status;
+ u16 last_idx;
- for (work_done = 0; work_done < budget; work_done++) {
- txcp = be_tx_compl_get(&txo->cq);
- if (!txcp)
- break;
- num_wrbs += be_tx_compl_process(adapter, txo,
- AMAP_GET_BITS(struct
- amap_eth_tx_compl,
- wrb_index, txcp));
+ while ((txcp = be_tx_compl_get(&txo->cq))) {
+ last_idx = GET_TX_COMPL_BITS(wrb_index, txcp);
+ num_wrbs += be_tx_compl_process(adapter, txo, last_idx);
+ work_done++;
+
+ compl_status = GET_TX_COMPL_BITS(status, txcp);
+ if (compl_status) {
+ if (lancer_chip(adapter))
+ lancer_update_tx_err(txo, compl_status);
+ else
+ be_update_tx_err(txo, compl_status);
+ }
}
if (work_done) {
@@ -2474,7 +2511,6 @@ static bool be_process_tx(struct be_adapter *adapter, struct be_tx_obj *txo,
tx_stats(txo)->tx_compl += work_done;
u64_stats_update_end(&tx_stats(txo)->sync_compl);
}
- return (work_done < budget); /* Done */
}
int be_poll(struct napi_struct *napi, int budget)
@@ -2483,17 +2519,12 @@ int be_poll(struct napi_struct *napi, int budget)
struct be_adapter *adapter = eqo->adapter;
int max_work = 0, work, i, num_evts;
struct be_rx_obj *rxo;
- bool tx_done;
+ struct be_tx_obj *txo;
num_evts = events_get(eqo);
- /* Process all TXQs serviced by this EQ */
- for (i = eqo->idx; i < adapter->num_tx_qs; i += adapter->num_evt_qs) {
- tx_done = be_process_tx(adapter, &adapter->tx_obj[i],
- eqo->tx_budget, i);
- if (!tx_done)
- max_work = budget;
- }
+ for_all_tx_queues_on_eq(adapter, eqo, txo, i)
+ be_process_tx(adapter, txo, i);
if (be_lock_napi(eqo)) {
/* This loop will iterate twice for EQ0 in which
@@ -2882,7 +2913,7 @@ static int be_rx_qs_create(struct be_adapter *adapter)
/* First time posting */
for_all_rx_queues(adapter, rxo, i)
- be_post_rx_frags(rxo, GFP_KERNEL);
+ be_post_rx_frags(rxo, GFP_KERNEL, MAX_RX_POST);
return 0;
}
@@ -3309,10 +3340,20 @@ static void BEx_get_resources(struct be_adapter *adapter,
*/
if (BE2_chip(adapter) || use_sriov || (adapter->port_num > 1) ||
!be_physfn(adapter) || (be_is_mc(adapter) &&
- !(adapter->function_caps & BE_FUNCTION_CAPS_RSS)))
+ !(adapter->function_caps & BE_FUNCTION_CAPS_RSS))) {
res->max_tx_qs = 1;
- else
+ } else if (adapter->function_caps & BE_FUNCTION_CAPS_SUPER_NIC) {
+ struct be_resources super_nic_res = {0};
+
+ /* On a SuperNIC profile, the driver needs to use the
+ * GET_PROFILE_CONFIG cmd to query the per-function TXQ limits
+ */
+ be_cmd_get_profile_config(adapter, &super_nic_res, 0);
+ /* Some old versions of BE3 FW don't report max_tx_qs value */
+ res->max_tx_qs = super_nic_res.max_tx_qs ? : BE3_MAX_TX_QS;
+ } else {
res->max_tx_qs = BE3_MAX_TX_QS;
+ }
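/* Note on the SuperNIC branch above: "super_nic_res.max_tx_qs ? : BE3_MAX_TX_QS"
 * uses the GCC conditional-expression extension a ?: b, which evaluates to a
 * when a is non-zero and to b otherwise, i.e. fall back to BE3_MAX_TX_QS when
 * old BE3 firmware reports max_tx_qs as 0.
 */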
if ((adapter->function_caps & BE_FUNCTION_CAPS_RSS) &&
!use_sriov && be_physfn(adapter))
@@ -3362,7 +3403,7 @@ static int be_get_sriov_config(struct be_adapter *adapter)
if (!be_max_vfs(adapter)) {
if (num_vfs)
- dev_warn(dev, "device doesn't support SRIOV\n");
+ dev_warn(dev, "SRIOV is disabled. Ignoring num_vfs\n");
adapter->num_vfs = 0;
return 0;
}
@@ -3413,16 +3454,16 @@ static int be_get_resources(struct be_adapter *adapter)
if (be_roce_supported(adapter))
res.max_evt_qs /= 2;
adapter->res = res;
-
- dev_info(dev, "Max: txqs %d, rxqs %d, rss %d, eqs %d, vfs %d\n",
- be_max_txqs(adapter), be_max_rxqs(adapter),
- be_max_rss(adapter), be_max_eqs(adapter),
- be_max_vfs(adapter));
- dev_info(dev, "Max: uc-macs %d, mc-macs %d, vlans %d\n",
- be_max_uc(adapter), be_max_mc(adapter),
- be_max_vlans(adapter));
}
+ dev_info(dev, "Max: txqs %d, rxqs %d, rss %d, eqs %d, vfs %d\n",
+ be_max_txqs(adapter), be_max_rxqs(adapter),
+ be_max_rss(adapter), be_max_eqs(adapter),
+ be_max_vfs(adapter));
+ dev_info(dev, "Max: uc-macs %d, mc-macs %d, vlans %d\n",
+ be_max_uc(adapter), be_max_mc(adapter),
+ be_max_vlans(adapter));
+
return 0;
}
@@ -3633,9 +3674,10 @@ static int be_setup(struct be_adapter *adapter)
goto err;
be_cmd_get_fw_ver(adapter);
+ dev_info(dev, "FW version is %s\n", adapter->fw_ver);
if (BE2_chip(adapter) && fw_major_num(adapter->fw_ver) < 4) {
- dev_err(dev, "Firmware on card is old(%s), IRQs may not work.",
+ dev_err(dev, "Firmware on card is old(%s), IRQs may not work",
adapter->fw_ver);
dev_err(dev, "Please upgrade firmware to version >= 4.0\n");
}
@@ -3683,8 +3725,6 @@ static void be_netpoll(struct net_device *netdev)
be_eq_notify(eqo->adapter, eqo->q.id, false, true, 0);
napi_schedule(&eqo->napi);
}
-
- return;
}
#endif
@@ -4052,6 +4092,7 @@ static int lancer_fw_download(struct be_adapter *adapter,
{
#define LANCER_FW_DOWNLOAD_CHUNK (32 * 1024)
#define LANCER_FW_DOWNLOAD_LOCATION "/prg"
+ struct device *dev = &adapter->pdev->dev;
struct be_dma_mem flash_cmd;
const u8 *data_ptr = NULL;
u8 *dest_image_ptr = NULL;
@@ -4064,21 +4105,16 @@ static int lancer_fw_download(struct be_adapter *adapter,
u8 change_status;
if (!IS_ALIGNED(fw->size, sizeof(u32))) {
- dev_err(&adapter->pdev->dev,
- "FW Image not properly aligned. "
- "Length must be 4 byte aligned.\n");
- status = -EINVAL;
- goto lancer_fw_exit;
+ dev_err(dev, "FW image size should be multiple of 4\n");
+ return -EINVAL;
}
flash_cmd.size = sizeof(struct lancer_cmd_req_write_object)
+ LANCER_FW_DOWNLOAD_CHUNK;
- flash_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, flash_cmd.size,
+ flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size,
&flash_cmd.dma, GFP_KERNEL);
- if (!flash_cmd.va) {
- status = -ENOMEM;
- goto lancer_fw_exit;
- }
+ if (!flash_cmd.va)
+ return -ENOMEM;
dest_image_ptr = flash_cmd.va +
sizeof(struct lancer_cmd_req_write_object);
@@ -4113,35 +4149,27 @@ static int lancer_fw_download(struct be_adapter *adapter,
&add_status);
}
- dma_free_coherent(&adapter->pdev->dev, flash_cmd.size, flash_cmd.va,
- flash_cmd.dma);
+ dma_free_coherent(dev, flash_cmd.size, flash_cmd.va, flash_cmd.dma);
if (status) {
- dev_err(&adapter->pdev->dev,
- "Firmware load error. "
- "Status code: 0x%x Additional Status: 0x%x\n",
- status, add_status);
- goto lancer_fw_exit;
+ dev_err(dev, "Firmware load error\n");
+ return be_cmd_status(status);
}
+ dev_info(dev, "Firmware flashed successfully\n");
+
if (change_status == LANCER_FW_RESET_NEEDED) {
- dev_info(&adapter->pdev->dev,
- "Resetting adapter to activate new FW\n");
+ dev_info(dev, "Resetting adapter to activate new FW\n");
status = lancer_physdev_ctrl(adapter,
PHYSDEV_CONTROL_FW_RESET_MASK);
if (status) {
- dev_err(&adapter->pdev->dev,
- "Adapter busy for FW reset.\n"
- "New FW will not be active.\n");
- goto lancer_fw_exit;
+ dev_err(dev, "Adapter busy, could not reset FW\n");
+ dev_err(dev, "Reboot server to activate new FW\n");
}
} else if (change_status != LANCER_NO_RESET_NEEDED) {
- dev_err(&adapter->pdev->dev,
- "System reboot required for new FW to be active\n");
+ dev_info(dev, "Reboot server to activate new FW\n");
}
- dev_info(&adapter->pdev->dev, "Firmware flashed successfully\n");
-lancer_fw_exit:
- return status;
+ return 0;
}
#define UFI_TYPE2 2
@@ -4374,7 +4402,6 @@ static void be_add_vxlan_port(struct net_device *netdev, sa_family_t sa_family,
return;
err:
be_disable_vxlan_offloads(adapter);
- return;
}
static void be_del_vxlan_port(struct net_device *netdev, sa_family_t sa_family,
@@ -4506,6 +4533,7 @@ static int be_map_pci_bars(struct be_adapter *adapter)
return 0;
pci_map_err:
+ dev_err(&adapter->pdev->dev, "Error in mapping PCI BARs\n");
be_unmap_pci_bars(adapter);
return -ENOMEM;
}
@@ -4713,7 +4741,6 @@ static void be_func_recovery_task(struct work_struct *work)
be_detect_error(adapter);
if (adapter->hw_error && lancer_chip(adapter)) {
-
rtnl_lock();
netif_device_detach(adapter->netdev);
rtnl_unlock();
@@ -4750,7 +4777,7 @@ static void be_worker(struct work_struct *work)
if (!adapter->stats_cmd_sent) {
if (lancer_chip(adapter))
lancer_cmd_get_pport_stats(adapter,
- &adapter->stats_cmd);
+ &adapter->stats_cmd);
else
be_cmd_get_stats(adapter, &adapter->stats_cmd);
}
@@ -4764,7 +4791,7 @@ static void be_worker(struct work_struct *work)
* allocation failures.
*/
if (rxo->rx_post_starved)
- be_post_rx_frags(rxo, GFP_KERNEL);
+ be_post_rx_frags(rxo, GFP_KERNEL, MAX_RX_POST);
}
be_eqd_update(adapter);
@@ -4822,6 +4849,8 @@ static int be_probe(struct pci_dev *pdev, const struct pci_device_id *pdev_id)
struct net_device *netdev;
char port_name;
+ dev_info(&pdev->dev, "%s version is %s\n", DRV_NAME, DRV_VER);
+
status = pci_enable_device(pdev);
if (status)
goto do_none;
@@ -4853,11 +4882,9 @@ static int be_probe(struct pci_dev *pdev, const struct pci_device_id *pdev_id)
}
}
- if (be_physfn(adapter)) {
- status = pci_enable_pcie_error_reporting(pdev);
- if (!status)
- dev_info(&pdev->dev, "PCIe error reporting enabled\n");
- }
+ status = pci_enable_pcie_error_reporting(pdev);
+ if (!status)
+ dev_info(&pdev->dev, "PCIe error reporting enabled\n");
status = be_ctrl_init(adapter);
if (status)
@@ -4897,7 +4924,8 @@ static int be_probe(struct pci_dev *pdev, const struct pci_device_id *pdev_id)
INIT_DELAYED_WORK(&adapter->work, be_worker);
INIT_DELAYED_WORK(&adapter->func_recovery_work, be_func_recovery_task);
- adapter->rx_fc = adapter->tx_fc = true;
+ adapter->rx_fc = true;
+ adapter->tx_fc = true;
status = be_setup(adapter);
if (status)
diff --git a/drivers/net/ethernet/emulex/benet/be_roce.c b/drivers/net/ethernet/emulex/benet/be_roce.c
index ef4672d..1328664 100644
--- a/drivers/net/ethernet/emulex/benet/be_roce.c
+++ b/drivers/net/ethernet/emulex/benet/be_roce.c
@@ -174,6 +174,7 @@ int be_roce_register_driver(struct ocrdma_driver *drv)
ocrdma_drv = drv;
list_for_each_entry(dev, &be_adapter_list, entry) {
struct net_device *netdev;
+
_be_roce_dev_add(dev);
netdev = dev->netdev;
if (netif_running(netdev) && netif_oper_up(netdev))
diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c
index f3658bd..0bc6c10 100644
--- a/drivers/net/ethernet/ethoc.c
+++ b/drivers/net/ethernet/ethoc.c
@@ -1222,8 +1222,6 @@ static int ethoc_probe(struct platform_device *pdev)
goto error;
}
- ether_setup(netdev);
-
/* setup the net_device structure */
netdev->netdev_ops = &ethoc_netdev_ops;
netdev->watchdog_timeo = ETHOC_TIMEOUT;
diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
index ee41d98..1d5e182 100644
--- a/drivers/net/ethernet/freescale/fec.h
+++ b/drivers/net/ethernet/freescale/fec.h
@@ -27,8 +27,8 @@
*/
#define FEC_IEVENT 0x004 /* Interrupt event reg */
#define FEC_IMASK 0x008 /* Interrupt mask reg */
-#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
-#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_R_DES_ACTIVE_0 0x010 /* Receive descriptor reg */
+#define FEC_X_DES_ACTIVE_0 0x014 /* Transmit descriptor reg */
#define FEC_ECNTRL 0x024 /* Ethernet control reg */
#define FEC_MII_DATA 0x040 /* MII manage frame reg */
#define FEC_MII_SPEED 0x044 /* MII speed control reg */
@@ -38,6 +38,12 @@
#define FEC_ADDR_LOW 0x0e4 /* Low 32bits MAC address */
#define FEC_ADDR_HIGH 0x0e8 /* High 16bits MAC address */
#define FEC_OPD 0x0ec /* Opcode + Pause duration */
+#define FEC_TXIC0 0xF0 /* Tx Interrupt Coalescing for ring 0 */
+#define FEC_TXIC1 0xF4 /* Tx Interrupt Coalescing for ring 1 */
+#define FEC_TXIC2 0xF8 /* Tx Interrupt Coalescing for ring 2 */
+#define FEC_RXIC0 0x100 /* Rx Interrupt Coalescing for ring 0 */
+#define FEC_RXIC1 0x104 /* Rx Interrupt Coalescing for ring 1 */
+#define FEC_RXIC2 0x108 /* Rx Interrupt Coalescing for ring 2 */
#define FEC_HASH_TABLE_HIGH 0x118 /* High 32bits hash table */
#define FEC_HASH_TABLE_LOW 0x11c /* Low 32bits hash table */
#define FEC_GRP_HASH_TABLE_HIGH 0x120 /* High 32bits hash table */
@@ -45,14 +51,27 @@
#define FEC_X_WMRK 0x144 /* FIFO transmit water mark */
#define FEC_R_BOUND 0x14c /* FIFO receive bound reg */
#define FEC_R_FSTART 0x150 /* FIFO receive start reg */
-#define FEC_R_DES_START 0x180 /* Receive descriptor ring */
-#define FEC_X_DES_START 0x184 /* Transmit descriptor ring */
+#define FEC_R_DES_START_1 0x160 /* Receive descriptor ring 1 */
+#define FEC_X_DES_START_1 0x164 /* Transmit descriptor ring 1 */
+#define FEC_R_DES_START_2 0x16c /* Receive descriptor ring 2 */
+#define FEC_X_DES_START_2 0x170 /* Transmit descriptor ring 2 */
+#define FEC_R_DES_START_0 0x180 /* Receive descriptor ring */
+#define FEC_X_DES_START_0 0x184 /* Transmit descriptor ring */
#define FEC_R_BUFF_SIZE 0x188 /* Maximum receive buff size */
#define FEC_R_FIFO_RSFL 0x190 /* Receive FIFO section full threshold */
#define FEC_R_FIFO_RSEM 0x194 /* Receive FIFO section empty threshold */
#define FEC_R_FIFO_RAEM 0x198 /* Receive FIFO almost empty threshold */
#define FEC_R_FIFO_RAFL 0x19c /* Receive FIFO almost full threshold */
#define FEC_RACC 0x1C4 /* Receive Accelerator function */
+#define FEC_RCMR_1 0x1c8 /* Receive classification match ring 1 */
+#define FEC_RCMR_2 0x1cc /* Receive classification match ring 2 */
+#define FEC_DMA_CFG_1 0x1d8 /* DMA class configuration for ring 1 */
+#define FEC_DMA_CFG_2 0x1dc /* DMA class Configuration for ring 2 */
+#define FEC_R_DES_ACTIVE_1 0x1e0 /* Rx descriptor active for ring 1 */
+#define FEC_X_DES_ACTIVE_1 0x1e4 /* Tx descriptor active for ring 1 */
+#define FEC_R_DES_ACTIVE_2 0x1e8 /* Rx descriptor active for ring 2 */
+#define FEC_X_DES_ACTIVE_2 0x1ec /* Tx descriptor active for ring 2 */
+#define FEC_QOS_SCHEME 0x1f0 /* Set multi-queue QoS scheme */
#define FEC_MIIGSK_CFGR 0x300 /* MIIGSK Configuration reg */
#define FEC_MIIGSK_ENR 0x308 /* MIIGSK Enable reg */
@@ -121,8 +140,12 @@
#define FEC_IEVENT 0x004 /* Interrupt event reg */
#define FEC_IMASK 0x008 /* Interrupt mask reg */
#define FEC_IVEC 0x00c /* Interrupt vec status reg */
-#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
-#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_R_DES_ACTIVE_0 0x010 /* Receive descriptor reg */
+#define FEC_R_DES_ACTIVE_1 FEC_R_DES_ACTIVE_0
+#define FEC_R_DES_ACTIVE_2 FEC_R_DES_ACTIVE_0
+#define FEC_X_DES_ACTIVE_0 0x014 /* Transmit descriptor reg */
+#define FEC_X_DES_ACTIVE_1 FEC_X_DES_ACTIVE_0
+#define FEC_X_DES_ACTIVE_2 FEC_X_DES_ACTIVE_0
#define FEC_MII_DATA 0x040 /* MII manage frame reg */
#define FEC_MII_SPEED 0x044 /* MII speed control reg */
#define FEC_R_BOUND 0x08c /* FIFO receive bound reg */
@@ -136,11 +159,27 @@
#define FEC_ADDR_HIGH 0x3c4 /* High 16bits MAC address */
#define FEC_GRP_HASH_TABLE_HIGH 0x3c8 /* High 32bits hash table */
#define FEC_GRP_HASH_TABLE_LOW 0x3cc /* Low 32bits hash table */
-#define FEC_R_DES_START 0x3d0 /* Receive descriptor ring */
-#define FEC_X_DES_START 0x3d4 /* Transmit descriptor ring */
+#define FEC_R_DES_START_0 0x3d0 /* Receive descriptor ring */
+#define FEC_R_DES_START_1 FEC_R_DES_START_0
+#define FEC_R_DES_START_2 FEC_R_DES_START_0
+#define FEC_X_DES_START_0 0x3d4 /* Transmit descriptor ring */
+#define FEC_X_DES_START_1 FEC_X_DES_START_0
+#define FEC_X_DES_START_2 FEC_X_DES_START_0
#define FEC_R_BUFF_SIZE 0x3d8 /* Maximum receive buff size */
#define FEC_FIFO_RAM 0x400 /* FIFO RAM buffer */
-
+/* These registers do not exist on the real chip;
+ * they are defined only so the code builds.
+ */
+#define FEC_RCMR_1 0xFFF
+#define FEC_RCMR_2 0xFFF
+#define FEC_DMA_CFG_1 0xFFF
+#define FEC_DMA_CFG_2 0xFFF
+#define FEC_TXIC0 0xFFF
+#define FEC_TXIC1 0xFFF
+#define FEC_TXIC2 0xFFF
+#define FEC_RXIC0 0xFFF
+#define FEC_RXIC1 0xFFF
+#define FEC_RXIC2 0xFFF
#endif /* CONFIG_M5272 */
@@ -233,6 +272,44 @@ struct bufdesc_ex {
/* This device has up to three irqs on some platforms */
#define FEC_IRQ_NUM 3
+/* Maximum number of queues supported.
+ * ENET with the AVB IP can support up to 3 independent tx queues and rx queues.
+ * The user may select any queue count less than or equal to 3.
+ */
+#define FEC_ENET_MAX_TX_QS 3
+#define FEC_ENET_MAX_RX_QS 3
+
+#define FEC_R_DES_START(X) ((X == 1) ? FEC_R_DES_START_1 : \
+ ((X == 2) ? \
+ FEC_R_DES_START_2 : FEC_R_DES_START_0))
+#define FEC_X_DES_START(X) ((X == 1) ? FEC_X_DES_START_1 : \
+ ((X == 2) ? \
+ FEC_X_DES_START_2 : FEC_X_DES_START_0))
+#define FEC_R_DES_ACTIVE(X) ((X == 1) ? FEC_R_DES_ACTIVE_1 : \
+ ((X == 2) ? \
+ FEC_R_DES_ACTIVE_2 : FEC_R_DES_ACTIVE_0))
+#define FEC_X_DES_ACTIVE(X) ((X == 1) ? FEC_X_DES_ACTIVE_1 : \
+ ((X == 2) ? \
+ FEC_X_DES_ACTIVE_2 : FEC_X_DES_ACTIVE_0))
+
+#define FEC_DMA_CFG(X) ((X == 2) ? FEC_DMA_CFG_2 : FEC_DMA_CFG_1)
+
+#define DMA_CLASS_EN (1 << 16)
+#define FEC_RCMR(X) ((X == 2) ? FEC_RCMR_2 : FEC_RCMR_1)
+#define IDLE_SLOPE_MASK 0xFFFF
+#define IDLE_SLOPE_1 0x200 /* BW fraction: 0.5 */
+#define IDLE_SLOPE_2 0x200 /* BW fraction: 0.5 */
+#define IDLE_SLOPE(X) ((X == 1) ? (IDLE_SLOPE_1 & IDLE_SLOPE_MASK) : \
+ (IDLE_SLOPE_2 & IDLE_SLOPE_MASK))
+#define RCMR_MATCHEN (0x1 << 16)
+#define RCMR_CMP_CFG(v, n) ((v & 0x7) << (n << 2))
+#define RCMR_CMP_1 (RCMR_CMP_CFG(0, 0) | RCMR_CMP_CFG(1, 1) | \
+ RCMR_CMP_CFG(2, 2) | RCMR_CMP_CFG(3, 3))
+#define RCMR_CMP_2 (RCMR_CMP_CFG(4, 0) | RCMR_CMP_CFG(5, 1) | \
+ RCMR_CMP_CFG(6, 2) | RCMR_CMP_CFG(7, 3))
+#define RCMR_CMP(X) ((X == 1) ? RCMR_CMP_1 : RCMR_CMP_2)
+#define FEC_TX_BD_FTYPE(X) ((X & 0xF) << 20)
+
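+/* For reference (a reading of the two register maps above): the ring-selector
+ * macros resolve a queue index to its per-ring register. On the AVB-capable
+ * layout the rings are distinct, e.g.
+ *	FEC_R_DES_ACTIVE(0) -> FEC_R_DES_ACTIVE_0 (0x010)
+ *	FEC_R_DES_ACTIVE(1) -> FEC_R_DES_ACTIVE_1 (0x1e0)
+ *	FEC_R_DES_ACTIVE(2) -> FEC_R_DES_ACTIVE_2 (0x1e8)
+ * while the CONFIG_M5272 layout aliases rings 1 and 2 back to ring 0 and stubs
+ * out the coalescing/classification registers, so queue-indexed code still
+ * builds and behaves like a single-queue device there.
+ */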
/* The number of Tx and Rx buffers. These are allocated from the page
* pool. The code may assume these are power of two, so it is best
* to keep them that size.
@@ -240,7 +317,7 @@ struct bufdesc_ex {
* the skbuffer directly.
*/
-#define FEC_ENET_RX_PAGES 8
+#define FEC_ENET_RX_PAGES 256
#define FEC_ENET_RX_FRSIZE 2048
#define FEC_ENET_RX_FRPPG (PAGE_SIZE / FEC_ENET_RX_FRSIZE)
#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)
@@ -256,6 +333,69 @@ struct bufdesc_ex {
#define FLAG_RX_CSUM_ENABLED (BD_ENET_RX_ICE | BD_ENET_RX_PCR)
#define FLAG_RX_CSUM_ERROR (BD_ENET_RX_ICE | BD_ENET_RX_PCR)
+/* Interrupt events/masks. */
+#define FEC_ENET_HBERR ((uint)0x80000000) /* Heartbeat error */
+#define FEC_ENET_BABR ((uint)0x40000000) /* Babbling receiver */
+#define FEC_ENET_BABT ((uint)0x20000000) /* Babbling transmitter */
+#define FEC_ENET_GRA ((uint)0x10000000) /* Graceful stop complete */
+#define FEC_ENET_TXF_0 ((uint)0x08000000) /* Full frame transmitted */
+#define FEC_ENET_TXF_1 ((uint)0x00000008) /* Full frame transmitted */
+#define FEC_ENET_TXF_2 ((uint)0x00000080) /* Full frame transmitted */
+#define FEC_ENET_TXB ((uint)0x04000000) /* A buffer was transmitted */
+#define FEC_ENET_RXF_0 ((uint)0x02000000) /* Full frame received */
+#define FEC_ENET_RXF_1 ((uint)0x00000002) /* Full frame received */
+#define FEC_ENET_RXF_2 ((uint)0x00000020) /* Full frame received */
+#define FEC_ENET_RXB ((uint)0x01000000) /* A buffer was received */
+#define FEC_ENET_MII ((uint)0x00800000) /* MII interrupt */
+#define FEC_ENET_EBERR ((uint)0x00400000) /* SDMA bus error */
+#define FEC_ENET_TXF (FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
+#define FEC_ENET_RXF (FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
+#define FEC_ENET_TS_AVAIL ((uint)0x00010000)
+#define FEC_ENET_TS_TIMER ((uint)0x00008000)
+
+#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII | FEC_ENET_TS_TIMER)
+#define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF))
+
+/* ENET interrupt coalescing macro define */
+#define FEC_ITR_CLK_SEL (0x1 << 30)
+#define FEC_ITR_EN (0x1 << 31)
+#define FEC_ITR_ICFT(X) ((X & 0xFF) << 20)
+#define FEC_ITR_ICTT(X) ((X) & 0xFFFF)
+#define FEC_ITR_ICFT_DEFAULT 200 /* Set 200 frame count threshold */
+#define FEC_ITR_ICTT_DEFAULT 1000 /* Set 1000us timer threshold */
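+/* A rough sketch of how these bits might compose when programming, say,
+ * FEC_TXIC0 (illustration only; the exact driver code and the
+ * microsecond-to-tick conversion may differ):
+ *	val = FEC_ITR_EN | FEC_ITR_CLK_SEL |
+ *	      FEC_ITR_ICFT(FEC_ITR_ICFT_DEFAULT) | FEC_ITR_ICTT(ticks);
+ *	writel(val, fep->hwp + FEC_TXIC0);
+ * where "ticks" would be the 1000us default converted into coalescing-clock
+ * ticks (the itr_clk_rate field added to struct fec_enet_private below
+ * presumably feeds that conversion). FEC_ITR_CLK_SEL picks the clock source
+ * and FEC_ITR_EN enables coalescing for that ring.
+ */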
+
+#define FEC_VLAN_TAG_LEN 0x04
+#define FEC_ETHTYPE_LEN 0x02
+
+struct fec_enet_priv_tx_q {
+ int index;
+ unsigned char *tx_bounce[TX_RING_SIZE];
+ struct sk_buff *tx_skbuff[TX_RING_SIZE];
+
+ dma_addr_t bd_dma;
+ struct bufdesc *tx_bd_base;
+ uint tx_ring_size;
+
+ unsigned short tx_stop_threshold;
+ unsigned short tx_wake_threshold;
+
+ struct bufdesc *cur_tx;
+ struct bufdesc *dirty_tx;
+ char *tso_hdrs;
+ dma_addr_t tso_hdrs_dma;
+};
+
+struct fec_enet_priv_rx_q {
+ int index;
+ struct sk_buff *rx_skbuff[RX_RING_SIZE];
+
+ dma_addr_t bd_dma;
+ struct bufdesc *rx_bd_base;
+ uint rx_ring_size;
+
+ struct bufdesc *cur_rx;
+};
+
/* The FEC buffer descriptors track the ring buffers. The rx_bd_base and
* tx_bd_base always point to the base of the buffer descriptors. The
* cur_rx and cur_tx point to the currently available buffer.
@@ -272,36 +412,28 @@ struct fec_enet_private {
struct clk *clk_ipg;
struct clk *clk_ahb;
+ struct clk *clk_ref;
struct clk *clk_enet_out;
struct clk *clk_ptp;
bool ptp_clk_on;
struct mutex ptp_clk_mutex;
+ unsigned int num_tx_queues;
+ unsigned int num_rx_queues;
/* The saved address of a sent-in-place packet/buffer, for skfree(). */
- unsigned char *tx_bounce[TX_RING_SIZE];
- struct sk_buff *tx_skbuff[TX_RING_SIZE];
- struct sk_buff *rx_skbuff[RX_RING_SIZE];
+ struct fec_enet_priv_tx_q *tx_queue[FEC_ENET_MAX_TX_QS];
+ struct fec_enet_priv_rx_q *rx_queue[FEC_ENET_MAX_RX_QS];
- /* CPM dual port RAM relative addresses */
- dma_addr_t bd_dma;
- /* Address of Rx and Tx buffers */
- struct bufdesc *rx_bd_base;
- struct bufdesc *tx_bd_base;
- /* The next free ring entry */
- struct bufdesc *cur_rx, *cur_tx;
- /* The ring entries to be free()ed */
- struct bufdesc *dirty_tx;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
- unsigned short bufdesc_size;
- unsigned short tx_ring_size;
- unsigned short rx_ring_size;
- unsigned short tx_stop_threshold;
- unsigned short tx_wake_threshold;
+ unsigned long work_tx;
+ unsigned long work_rx;
+ unsigned long work_ts;
+ unsigned long work_mdio;
- /* Software TSO */
- char *tso_hdrs;
- dma_addr_t tso_hdrs_dma;
+ unsigned short bufdesc_size;
struct platform_device *pdev;
@@ -340,6 +472,18 @@ struct fec_enet_private {
int hwts_tx_en;
struct delayed_work time_keep;
struct regulator *reg_phy;
+
+ unsigned int tx_align;
+ unsigned int rx_align;
+
+ /* hw interrupt coalesce */
+ unsigned int rx_pkts_itr;
+ unsigned int rx_time_itr;
+ unsigned int tx_pkts_itr;
+ unsigned int tx_time_itr;
+ unsigned int itr_clk_rate;
+
+ u32 rx_copybreak;
};
void fec_ptp_init(struct platform_device *pdev);
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 89355a7..87975b5 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -57,21 +57,19 @@
#include <linux/regulator/consumer.h>
#include <linux/if_vlan.h>
#include <linux/pinctrl/consumer.h>
+#include <linux/prefetch.h>
#include <asm/cacheflush.h>
#include "fec.h"
static void set_multicast_list(struct net_device *ndev);
-
-#if defined(CONFIG_ARM)
-#define FEC_ALIGNMENT 0xf
-#else
-#define FEC_ALIGNMENT 0x3
-#endif
+static void fec_enet_itr_coal_init(struct net_device *ndev);
#define DRIVER_NAME "fec"
+#define FEC_ENET_GET_QUQUE(_x) ((_x == 0) ? 1 : ((_x == 1) ? 2 : 0))
+
/* Pause frame field and FIFO threshold */
#define FEC_ENET_FCE (1 << 5)
#define FEC_ENET_RSEM_V 0x84
@@ -104,6 +102,22 @@ static void set_multicast_list(struct net_device *ndev);
* ENET_TDAR[TDAR].
*/
#define FEC_QUIRK_ERR006358 (1 << 7)
+/* ENET IP hw AVB
+ *
+ * The i.MX6SX ENET IP adds Audio Video Bridging (AVB) feature support.
+ * - Two class indicators on receive with configurable priority
+ * - Two class indicators and a line speed timer on transmit, allowing
+ * class credit-based shapers to be implemented externally
+ * - Additional DMA registers provisioned to allow managing up to 3
+ * independent rings
+ */
+#define FEC_QUIRK_HAS_AVB (1 << 8)
+/* There is a TDAR race condition for multi-queue when the software sets TDAR
+ * and the UDMA clears TDAR simultaneously or in a small window (2-4 cycles).
+ * This will cause the udma_tx and udma_tx_arbiter state machines to hang.
+ * The issue exists in the i.MX6SX ENET IP.
+ */
+#define FEC_QUIRK_ERR007885 (1 << 9)
static struct platform_device_id fec_devtype[] = {
{
@@ -128,6 +142,12 @@ static struct platform_device_id fec_devtype[] = {
.name = "mvf600-fec",
.driver_data = FEC_QUIRK_ENET_MAC,
}, {
+ .name = "imx6sx-fec",
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+ FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+ FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+ FEC_QUIRK_ERR007885,
+ }, {
/* sentinel */
}
};
@@ -139,6 +159,7 @@ enum imx_fec_type {
IMX28_FEC,
IMX6Q_FEC,
MVF600_FEC,
+ IMX6SX_FEC,
};
static const struct of_device_id fec_dt_ids[] = {
@@ -147,6 +168,7 @@ static const struct of_device_id fec_dt_ids[] = {
{ .compatible = "fsl,imx28-fec", .data = &fec_devtype[IMX28_FEC], },
{ .compatible = "fsl,imx6q-fec", .data = &fec_devtype[IMX6Q_FEC], },
{ .compatible = "fsl,mvf600-fec", .data = &fec_devtype[MVF600_FEC], },
+ { .compatible = "fsl,imx6sx-fec", .data = &fec_devtype[IMX6SX_FEC], },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fec_dt_ids);
@@ -175,21 +197,6 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
#endif
#endif /* CONFIG_M5272 */
-/* Interrupt events/masks. */
-#define FEC_ENET_HBERR ((uint)0x80000000) /* Heartbeat error */
-#define FEC_ENET_BABR ((uint)0x40000000) /* Babbling receiver */
-#define FEC_ENET_BABT ((uint)0x20000000) /* Babbling transmitter */
-#define FEC_ENET_GRA ((uint)0x10000000) /* Graceful stop complete */
-#define FEC_ENET_TXF ((uint)0x08000000) /* Full frame transmitted */
-#define FEC_ENET_TXB ((uint)0x04000000) /* A buffer was transmitted */
-#define FEC_ENET_RXF ((uint)0x02000000) /* Full frame received */
-#define FEC_ENET_RXB ((uint)0x01000000) /* A buffer was received */
-#define FEC_ENET_MII ((uint)0x00800000) /* MII interrupt */
-#define FEC_ENET_EBERR ((uint)0x00400000) /* SDMA bus error */
-
-#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII)
-#define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF))
-
/* The FEC stores dest/src/type/vlan, data, and checksum for receive packets.
*/
#define PKT_MAXBUF_SIZE 1522
@@ -230,6 +237,8 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
#define FEC_PAUSE_FLAG_AUTONEG 0x1
#define FEC_PAUSE_FLAG_ENABLE 0x2
+#define COPYBREAK_DEFAULT 256
+
#define TSO_HEADER_SIZE 128
/* Max number of allowed TCP segments for software TSO */
#define FEC_MAX_TSO_SEGS 100
@@ -242,22 +251,26 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
static int mii_cnt;
static inline
-struct bufdesc *fec_enet_get_nextdesc(struct bufdesc *bdp, struct fec_enet_private *fep)
+struct bufdesc *fec_enet_get_nextdesc(struct bufdesc *bdp,
+ struct fec_enet_private *fep,
+ int queue_id)
{
struct bufdesc *new_bd = bdp + 1;
struct bufdesc_ex *ex_new_bd = (struct bufdesc_ex *)bdp + 1;
+ struct fec_enet_priv_tx_q *txq = fep->tx_queue[queue_id];
+ struct fec_enet_priv_rx_q *rxq = fep->rx_queue[queue_id];
struct bufdesc_ex *ex_base;
struct bufdesc *base;
int ring_size;
- if (bdp >= fep->tx_bd_base) {
- base = fep->tx_bd_base;
- ring_size = fep->tx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->tx_bd_base;
+ if (bdp >= txq->tx_bd_base) {
+ base = txq->tx_bd_base;
+ ring_size = txq->tx_ring_size;
+ ex_base = (struct bufdesc_ex *)txq->tx_bd_base;
} else {
- base = fep->rx_bd_base;
- ring_size = fep->rx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->rx_bd_base;
+ base = rxq->rx_bd_base;
+ ring_size = rxq->rx_ring_size;
+ ex_base = (struct bufdesc_ex *)rxq->rx_bd_base;
}
if (fep->bufdesc_ex)
@@ -269,22 +282,26 @@ struct bufdesc *fec_enet_get_nextdesc(struct bufdesc *bdp, struct fec_enet_priva
}
static inline
-struct bufdesc *fec_enet_get_prevdesc(struct bufdesc *bdp, struct fec_enet_private *fep)
+struct bufdesc *fec_enet_get_prevdesc(struct bufdesc *bdp,
+ struct fec_enet_private *fep,
+ int queue_id)
{
struct bufdesc *new_bd = bdp - 1;
struct bufdesc_ex *ex_new_bd = (struct bufdesc_ex *)bdp - 1;
+ struct fec_enet_priv_tx_q *txq = fep->tx_queue[queue_id];
+ struct fec_enet_priv_rx_q *rxq = fep->rx_queue[queue_id];
struct bufdesc_ex *ex_base;
struct bufdesc *base;
int ring_size;
- if (bdp >= fep->tx_bd_base) {
- base = fep->tx_bd_base;
- ring_size = fep->tx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->tx_bd_base;
+ if (bdp >= txq->tx_bd_base) {
+ base = txq->tx_bd_base;
+ ring_size = txq->tx_ring_size;
+ ex_base = (struct bufdesc_ex *)txq->tx_bd_base;
} else {
- base = fep->rx_bd_base;
- ring_size = fep->rx_ring_size;
- ex_base = (struct bufdesc_ex *)fep->rx_bd_base;
+ base = rxq->rx_bd_base;
+ ring_size = rxq->rx_ring_size;
+ ex_base = (struct bufdesc_ex *)rxq->rx_bd_base;
}
if (fep->bufdesc_ex)
@@ -300,14 +317,15 @@ static int fec_enet_get_bd_index(struct bufdesc *base, struct bufdesc *bdp,
return ((const char *)bdp - (const char *)base) / fep->bufdesc_size;
}
-static int fec_enet_get_free_txdesc_num(struct fec_enet_private *fep)
+static int fec_enet_get_free_txdesc_num(struct fec_enet_private *fep,
+ struct fec_enet_priv_tx_q *txq)
{
int entries;
- entries = ((const char *)fep->dirty_tx -
- (const char *)fep->cur_tx) / fep->bufdesc_size - 1;
+ entries = ((const char *)txq->dirty_tx -
+ (const char *)txq->cur_tx) / fep->bufdesc_size - 1;
- return entries > 0 ? entries : entries + fep->tx_ring_size;
+ return entries > 0 ? entries : entries + txq->tx_ring_size;
}
static void *swap_buffer(void *bufaddr, int len)
@@ -324,22 +342,26 @@ static void *swap_buffer(void *bufaddr, int len)
static void fec_dump(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- struct bufdesc *bdp = fep->tx_bd_base;
- unsigned int index = 0;
+ struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+ int index = 0;
netdev_info(ndev, "TX ring dump\n");
pr_info("Nr SC addr len SKB\n");
+ txq = fep->tx_queue[0];
+ bdp = txq->tx_bd_base;
+
do {
pr_info("%3u %c%c 0x%04x 0x%08lx %4u %p\n",
index,
- bdp == fep->cur_tx ? 'S' : ' ',
- bdp == fep->dirty_tx ? 'H' : ' ',
+ bdp == txq->cur_tx ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
bdp->cbd_sc, bdp->cbd_bufaddr, bdp->cbd_datlen,
- fep->tx_skbuff[index]);
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ txq->tx_skbuff[index]);
+ bdp = fec_enet_get_nextdesc(bdp, fep, 0);
index++;
- } while (bdp != fep->tx_bd_base);
+ } while (bdp != txq->tx_bd_base);
}
static inline bool is_ipv4_pkt(struct sk_buff *skb)
@@ -365,14 +387,17 @@ fec_enet_clear_csum(struct sk_buff *skb, struct net_device *ndev)
}
static int
-fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
+fec_enet_txq_submit_frag_skb(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb,
+ struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- struct bufdesc *bdp = fep->cur_tx;
+ struct bufdesc *bdp = txq->cur_tx;
struct bufdesc_ex *ebdp;
int nr_frags = skb_shinfo(skb)->nr_frags;
+ unsigned short queue = skb_get_queue_mapping(skb);
int frag, frag_len;
unsigned short status;
unsigned int estatus = 0;
@@ -384,7 +409,7 @@ fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
for (frag = 0; frag < nr_frags; frag++) {
this_frag = &skb_shinfo(skb)->frags[frag];
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
ebdp = (struct bufdesc_ex *)bdp;
status = bdp->cbd_sc;
@@ -404,6 +429,8 @@ fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
}
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -412,11 +439,11 @@ fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
bufaddr = page_address(this_frag->page.p) + this_frag->page_offset;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
+ if (((unsigned long) bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], bufaddr, frag_len);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], bufaddr, frag_len);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, frag_len);
@@ -436,21 +463,22 @@ fec_enet_txq_submit_frag_skb(struct sk_buff *skb, struct net_device *ndev)
bdp->cbd_sc = status;
}
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
return 0;
dma_mapping_error:
- bdp = fep->cur_tx;
+ bdp = txq->cur_tx;
for (i = 0; i < frag; i++) {
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
bdp->cbd_datlen, DMA_TO_DEVICE);
}
return NETDEV_TX_OK;
}
-static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
+static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb, struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
@@ -461,12 +489,13 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
dma_addr_t addr;
unsigned short status;
unsigned short buflen;
+ unsigned short queue;
unsigned int estatus = 0;
unsigned int index;
int entries_free;
int ret;
- entries_free = fec_enet_get_free_txdesc_num(fep);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
if (entries_free < MAX_SKB_FRAGS + 1) {
dev_kfree_skb_any(skb);
if (net_ratelimit())
@@ -481,7 +510,7 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
}
/* Fill in a Tx ring entry */
- bdp = fep->cur_tx;
+ bdp = txq->cur_tx;
status = bdp->cbd_sc;
status &= ~BD_ENET_TX_STATS;
@@ -489,11 +518,12 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
bufaddr = skb->data;
buflen = skb_headlen(skb);
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ queue = skb_get_queue_mapping(skb);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
+ if (((unsigned long) bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], skb->data, buflen);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], skb->data, buflen);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, buflen);
@@ -509,7 +539,7 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
}
if (nr_frags) {
- ret = fec_enet_txq_submit_frag_skb(skb, ndev);
+ ret = fec_enet_txq_submit_frag_skb(txq, skb, ndev);
if (ret)
return ret;
} else {
@@ -530,6 +560,9 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
fep->hwts_tx_en))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
+
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
@@ -537,10 +570,10 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
ebdp->cbd_esc = estatus;
}
- last_bdp = fep->cur_tx;
- index = fec_enet_get_bd_index(fep->tx_bd_base, last_bdp, fep);
+ last_bdp = txq->cur_tx;
+ index = fec_enet_get_bd_index(txq->tx_bd_base, last_bdp, fep);
/* Save skb pointer */
- fep->tx_skbuff[index] = skb;
+ txq->tx_skbuff[index] = skb;
bdp->cbd_datlen = buflen;
bdp->cbd_bufaddr = addr;
@@ -552,27 +585,29 @@ static int fec_enet_txq_submit_skb(struct sk_buff *skb, struct net_device *ndev)
bdp->cbd_sc = status;
/* If this was the last BD in the ring, start at the beginning again. */
- bdp = fec_enet_get_nextdesc(last_bdp, fep);
+ bdp = fec_enet_get_nextdesc(last_bdp, fep, queue);
skb_tx_timestamp(skb);
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
/* Trigger transmission start */
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue));
return 0;
}
static int
-fec_enet_txq_put_data_tso(struct sk_buff *skb, struct net_device *ndev,
- struct bufdesc *bdp, int index, char *data,
- int size, bool last_tcp, bool is_last)
+fec_enet_txq_put_data_tso(struct fec_enet_priv_tx_q *txq, struct sk_buff *skb,
+ struct net_device *ndev,
+ struct bufdesc *bdp, int index, char *data,
+ int size, bool last_tcp, bool is_last)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);
+ unsigned short queue = skb_get_queue_mapping(skb);
unsigned short status;
unsigned int estatus = 0;
dma_addr_t addr;
@@ -582,10 +617,10 @@ fec_enet_txq_put_data_tso(struct sk_buff *skb, struct net_device *ndev,
status |= (BD_ENET_TX_TC | BD_ENET_TX_READY);
- if (((unsigned long) data) & FEC_ALIGNMENT ||
+ if (((unsigned long) data) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], data, size);
- data = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], data, size);
+ data = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(data, size);
@@ -603,6 +638,8 @@ fec_enet_txq_put_data_tso(struct sk_buff *skb, struct net_device *ndev,
bdp->cbd_bufaddr = addr;
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -624,14 +661,16 @@ fec_enet_txq_put_data_tso(struct sk_buff *skb, struct net_device *ndev,
}
static int
-fec_enet_txq_put_hdr_tso(struct sk_buff *skb, struct net_device *ndev,
- struct bufdesc *bdp, int index)
+fec_enet_txq_put_hdr_tso(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb, struct net_device *ndev,
+ struct bufdesc *bdp, int index)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
- struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);
+ unsigned short queue = skb_get_queue_mapping(skb);
void *bufaddr;
unsigned long dmabuf;
unsigned short status;
@@ -641,12 +680,12 @@ fec_enet_txq_put_hdr_tso(struct sk_buff *skb, struct net_device *ndev,
status &= ~BD_ENET_TX_STATS;
status |= (BD_ENET_TX_TC | BD_ENET_TX_READY);
- bufaddr = fep->tso_hdrs + index * TSO_HEADER_SIZE;
- dmabuf = fep->tso_hdrs_dma + index * TSO_HEADER_SIZE;
- if (((unsigned long) bufaddr) & FEC_ALIGNMENT ||
+ bufaddr = txq->tso_hdrs + index * TSO_HEADER_SIZE;
+ dmabuf = txq->tso_hdrs_dma + index * TSO_HEADER_SIZE;
+ if (((unsigned long)bufaddr) & fep->tx_align ||
id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) {
- memcpy(fep->tx_bounce[index], skb->data, hdr_len);
- bufaddr = fep->tx_bounce[index];
+ memcpy(txq->tx_bounce[index], skb->data, hdr_len);
+ bufaddr = txq->tx_bounce[index];
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(bufaddr, hdr_len);
@@ -665,6 +704,8 @@ fec_enet_txq_put_hdr_tso(struct sk_buff *skb, struct net_device *ndev,
bdp->cbd_datlen = hdr_len;
if (fep->bufdesc_ex) {
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB)
+ estatus |= FEC_TX_BD_FTYPE(queue);
if (skb->ip_summed == CHECKSUM_PARTIAL)
estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
ebdp->cbd_bdu = 0;
@@ -676,17 +717,22 @@ fec_enet_txq_put_hdr_tso(struct sk_buff *skb, struct net_device *ndev,
return 0;
}
-static int fec_enet_txq_submit_tso(struct sk_buff *skb, struct net_device *ndev)
+static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ struct sk_buff *skb,
+ struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
int total_len, data_left;
- struct bufdesc *bdp = fep->cur_tx;
+ struct bufdesc *bdp = txq->cur_tx;
+ unsigned short queue = skb_get_queue_mapping(skb);
struct tso_t tso;
unsigned int index = 0;
int ret;
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
- if (tso_count_descs(skb) >= fec_enet_get_free_txdesc_num(fep)) {
+ if (tso_count_descs(skb) >= fec_enet_get_free_txdesc_num(fep, txq)) {
dev_kfree_skb_any(skb);
if (net_ratelimit())
netdev_err(ndev, "NOT enough BD for TSO!\n");
@@ -706,14 +752,14 @@ static int fec_enet_txq_submit_tso(struct sk_buff *skb, struct net_device *ndev)
while (total_len > 0) {
char *hdr;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
total_len -= data_left;
/* prepare packet headers: MAC + IP + TCP */
- hdr = fep->tso_hdrs + index * TSO_HEADER_SIZE;
+ hdr = txq->tso_hdrs + index * TSO_HEADER_SIZE;
tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);
- ret = fec_enet_txq_put_hdr_tso(skb, ndev, bdp, index);
+ ret = fec_enet_txq_put_hdr_tso(txq, skb, ndev, bdp, index);
if (ret)
goto err_release;
@@ -721,10 +767,13 @@ static int fec_enet_txq_submit_tso(struct sk_buff *skb, struct net_device *ndev)
int size;
size = min_t(int, tso.size, data_left);
- bdp = fec_enet_get_nextdesc(bdp, fep);
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
- ret = fec_enet_txq_put_data_tso(skb, ndev, bdp, index, tso.data,
- size, size == data_left,
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
+ index = fec_enet_get_bd_index(txq->tx_bd_base,
+ bdp, fep);
+ ret = fec_enet_txq_put_data_tso(txq, skb, ndev,
+ bdp, index,
+ tso.data, size,
+ size == data_left,
total_len == 0);
if (ret)
goto err_release;
@@ -733,17 +782,22 @@ static int fec_enet_txq_submit_tso(struct sk_buff *skb, struct net_device *ndev)
tso_build_data(skb, &tso, size);
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Save skb pointer */
- fep->tx_skbuff[index] = skb;
+ txq->tx_skbuff[index] = skb;
skb_tx_timestamp(skb);
- fep->cur_tx = bdp;
+ txq->cur_tx = bdp;
/* Trigger transmission start */
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ if (!(id_entry->driver_data & FEC_QUIRK_ERR007885) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)) ||
+ !readl(fep->hwp + FEC_X_DES_ACTIVE(queue)))
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue));
return 0;
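
The chained readl() calls above are the FEC_QUIRK_ERR007885 path written as one short-circuit expression: when the quirk is set, the TDAR doorbell is only written if FEC_X_DES_ACTIVE(queue) reads back as zero within four polls, otherwise the kick is skipped. A minimal sketch of the same pattern, assuming the usual readl()/writel() accessors from <linux/io.h>; fec_kick_tx() is a hypothetical name used only for illustration, not part of the driver:

	static inline void fec_kick_tx(void __iomem *tdar, bool err007885_quirk)
	{
		int i;

		if (!err007885_quirk) {
			/* any write restarts descriptor processing */
			writel(0, tdar);
			return;
		}

		/* quirk path: only ring the doorbell once the register reads
		 * back as zero, giving up after a handful of polls
		 */
		for (i = 0; i < 4; i++) {
			if (!readl(tdar)) {
				writel(0, tdar);
				return;
			}
		}
	}
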
@@ -757,18 +811,25 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
int entries_free;
+ unsigned short queue;
+ struct fec_enet_priv_tx_q *txq;
+ struct netdev_queue *nq;
int ret;
+ queue = skb_get_queue_mapping(skb);
+ txq = fep->tx_queue[queue];
+ nq = netdev_get_tx_queue(ndev, queue);
+
if (skb_is_gso(skb))
- ret = fec_enet_txq_submit_tso(skb, ndev);
+ ret = fec_enet_txq_submit_tso(txq, skb, ndev);
else
- ret = fec_enet_txq_submit_skb(skb, ndev);
+ ret = fec_enet_txq_submit_skb(txq, skb, ndev);
if (ret)
return ret;
- entries_free = fec_enet_get_free_txdesc_num(fep);
- if (entries_free <= fep->tx_stop_threshold)
- netif_stop_queue(ndev);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
+ if (entries_free <= txq->tx_stop_threshold)
+ netif_tx_stop_queue(nq);
return NETDEV_TX_OK;
}
@@ -778,46 +839,111 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
static void fec_enet_bd_init(struct net_device *dev)
{
struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *bdp;
unsigned int i;
+ unsigned int q;
- /* Initialize the receive buffer descriptors. */
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
+ for (q = 0; q < fep->num_rx_queues; q++) {
+ /* Initialize the receive buffer descriptors. */
+ rxq = fep->rx_queue[q];
+ bdp = rxq->rx_bd_base;
- /* Initialize the BD for every fragment in the page. */
- if (bdp->cbd_bufaddr)
- bdp->cbd_sc = BD_ENET_RX_EMPTY;
- else
+ for (i = 0; i < rxq->rx_ring_size; i++) {
+
+ /* Initialize the BD for every fragment in the page. */
+ if (bdp->cbd_bufaddr)
+ bdp->cbd_sc = BD_ENET_RX_EMPTY;
+ else
+ bdp->cbd_sc = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep, q);
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ rxq->cur_rx = rxq->rx_bd_base;
+ }
+
+ for (q = 0; q < fep->num_tx_queues; q++) {
+ /* ...and the same for transmit */
+ txq = fep->tx_queue[q];
+ bdp = txq->tx_bd_base;
+ txq->cur_tx = bdp;
+
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
bdp->cbd_sc = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ if (txq->tx_skbuff[i]) {
+ dev_kfree_skb_any(txq->tx_skbuff[i]);
+ txq->tx_skbuff[i] = NULL;
+ }
+ bdp->cbd_bufaddr = 0;
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = fec_enet_get_prevdesc(bdp, fep, q);
+ bdp->cbd_sc |= BD_SC_WRAP;
+ txq->dirty_tx = bdp;
}
+}
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep);
- bdp->cbd_sc |= BD_SC_WRAP;
+static void fec_enet_active_rxring(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
- fep->cur_rx = fep->rx_bd_base;
+ for (i = 0; i < fep->num_rx_queues; i++)
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE(i));
+}
- /* ...and the same for transmit */
- bdp = fep->tx_bd_base;
- fep->cur_tx = bdp;
- for (i = 0; i < fep->tx_ring_size; i++) {
+static void fec_enet_enable_ring(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
+ int i;
- /* Initialize the BD for every fragment in the page. */
- bdp->cbd_sc = 0;
- if (fep->tx_skbuff[i]) {
- dev_kfree_skb_any(fep->tx_skbuff[i]);
- fep->tx_skbuff[i] = NULL;
- }
- bdp->cbd_bufaddr = 0;
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ rxq = fep->rx_queue[i];
+ writel(rxq->bd_dma, fep->hwp + FEC_R_DES_START(i));
+
+ /* enable DMA1/2 */
+ if (i)
+ writel(RCMR_MATCHEN | RCMR_CMP(i),
+ fep->hwp + FEC_RCMR(i));
}
- /* Set the last buffer to wrap */
- bdp = fec_enet_get_prevdesc(bdp, fep);
- bdp->cbd_sc |= BD_SC_WRAP;
- fep->dirty_tx = bdp;
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+ writel(txq->bd_dma, fep->hwp + FEC_X_DES_START(i));
+
+ /* enable DMA1/2 */
+ if (i)
+ writel(DMA_CLASS_EN | IDLE_SLOPE(i),
+ fep->hwp + FEC_DMA_CFG(i));
+ }
+}
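
	/* Inferred from the register names above rather than the reference
	 * manual: ring 0 is the legacy/best-effort queue, while the extra
	 * rings on AVB-capable parts are brought up separately -- the receive
	 * side through the RCMR classification-match registers and the
	 * transmit side through FEC_DMA_CFG with an idle-slope value, which
	 * looks like the credit-based shaper setup for the Class A/B queues.
	 */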
+
+static void fec_enet_reset_skb(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct fec_enet_priv_tx_q *txq;
+ int i, j;
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+
+ for (j = 0; j < txq->tx_ring_size; j++) {
+ if (txq->tx_skbuff[j]) {
+ dev_kfree_skb_any(txq->tx_skbuff[j]);
+ txq->tx_skbuff[j] = NULL;
+ }
+ }
+ }
}
/*
@@ -831,15 +957,21 @@ fec_restart(struct net_device *ndev)
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
- int i;
u32 val;
u32 temp_mac[2];
u32 rcntl = OPT_FRAME_SIZE | 0x04;
u32 ecntl = 0x2; /* ETHEREN */
- /* Whack a reset. We should wait for this. */
- writel(1, fep->hwp + FEC_ECNTRL);
- udelay(10);
+ /* Whack a reset. We should wait for this.
+ * For the i.MX6SX SoC, the ENET block sits on the AXI bus; disable
+ * the MAC instead of resetting it.
+ */
+ if (id_entry && id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ writel(0, fep->hwp + FEC_ECNTRL);
+ } else {
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+ }
/*
* enet-mac reset will reset mac address registers too,
@@ -859,22 +991,10 @@ fec_restart(struct net_device *ndev)
fec_enet_bd_init(ndev);
- /* Set receive and transmit descriptor base. */
- writel(fep->bd_dma, fep->hwp + FEC_R_DES_START);
- if (fep->bufdesc_ex)
- writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc_ex)
- * fep->rx_ring_size, fep->hwp + FEC_X_DES_START);
- else
- writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc)
- * fep->rx_ring_size, fep->hwp + FEC_X_DES_START);
-
+ fec_enet_enable_ring(ndev);
- for (i = 0; i <= TX_RING_MOD_MASK; i++) {
- if (fep->tx_skbuff[i]) {
- dev_kfree_skb_any(fep->tx_skbuff[i]);
- fep->tx_skbuff[i] = NULL;
- }
- }
+ /* Reset tx SKB buffers. */
+ fec_enet_reset_skb(ndev);
/* Enable MII mode */
if (fep->full_duplex == DUPLEX_FULL) {
@@ -996,13 +1116,17 @@ fec_restart(struct net_device *ndev)
/* And last, enable the transmit and receive processing */
writel(ecntl, fep->hwp + FEC_ECNTRL);
- writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+ fec_enet_active_rxring(ndev);
if (fep->bufdesc_ex)
fec_ptp_start_cyclecounter(ndev);
/* Enable interrupts we wish to service */
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+
+ /* Init the interrupt coalescing */
+ fec_enet_itr_coal_init(ndev);
+
}
static void
@@ -1021,9 +1145,16 @@ fec_stop(struct net_device *ndev)
netdev_err(ndev, "Graceful transmit stop did not complete!\n");
}
- /* Whack a reset. We should wait for this. */
- writel(1, fep->hwp + FEC_ECNTRL);
- udelay(10);
+ /* Whack a reset. We should wait for this.
+ * For the i.MX6SX SoC, the ENET block sits on the AXI bus; disable
+ * the MAC instead of resetting it.
+ */
+ if (id_entry && id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ writel(0, fep->hwp + FEC_ECNTRL);
+ } else {
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+ }
writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
@@ -1081,37 +1212,45 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts,
}
static void
-fec_enet_tx(struct net_device *ndev)
+fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{
struct fec_enet_private *fep;
struct bufdesc *bdp;
unsigned short status;
struct sk_buff *skb;
+ struct fec_enet_priv_tx_q *txq;
+ struct netdev_queue *nq;
int index = 0;
int entries_free;
fep = netdev_priv(ndev);
- bdp = fep->dirty_tx;
+
+ queue_id = FEC_ENET_GET_QUQUE(queue_id);
+
+ txq = fep->tx_queue[queue_id];
+ /* get next bdp of dirty_tx */
+ nq = netdev_get_tx_queue(ndev, queue_id);
+ bdp = txq->dirty_tx;
/* get next bdp of dirty_tx */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
/* current queue is empty */
- if (bdp == fep->cur_tx)
+ if (bdp == txq->cur_tx)
break;
- index = fec_enet_get_bd_index(fep->tx_bd_base, bdp, fep);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
- skb = fep->tx_skbuff[index];
- fep->tx_skbuff[index] = NULL;
- if (!IS_TSO_HEADER(fep, bdp->cbd_bufaddr))
+ skb = txq->tx_skbuff[index];
+ txq->tx_skbuff[index] = NULL;
+ if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
bdp->cbd_datlen, DMA_TO_DEVICE);
bdp->cbd_bufaddr = 0;
if (!skb) {
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
continue;
}
@@ -1153,23 +1292,81 @@ fec_enet_tx(struct net_device *ndev)
/* Free the sk buffer associated with this last transmit */
dev_kfree_skb_any(skb);
- fep->dirty_tx = bdp;
+ txq->dirty_tx = bdp;
/* Update pointer to next buffer descriptor to be transmitted */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
/* Since we have freed up a buffer, the ring is no longer full
*/
if (netif_queue_stopped(ndev)) {
- entries_free = fec_enet_get_free_txdesc_num(fep);
- if (entries_free >= fep->tx_wake_threshold)
- netif_wake_queue(ndev);
+ entries_free = fec_enet_get_free_txdesc_num(fep, txq);
+ if (entries_free >= txq->tx_wake_threshold)
+ netif_tx_wake_queue(nq);
}
}
/* ERR006538: Keep the transmitter going */
- if (bdp != fep->cur_tx && readl(fep->hwp + FEC_X_DES_ACTIVE) == 0)
- writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+ if (bdp != txq->cur_tx &&
+ readl(fep->hwp + FEC_X_DES_ACTIVE(queue_id)) == 0)
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE(queue_id));
+}
+
+static void
+fec_enet_tx(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ u16 queue_id;
+ /* Process the Class A queue first, then Class B, then the Best Effort queue */
+ for_each_set_bit(queue_id, &fep->work_tx, FEC_ENET_MAX_TX_QS) {
+ clear_bit(queue_id, &fep->work_tx);
+ fec_enet_tx_queue(ndev, queue_id);
+ }
+ return;
+}
+
+static int
+fec_enet_new_rxbdp(struct net_device *ndev, struct bufdesc *bdp, struct sk_buff *skb)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int off;
+
+ off = ((unsigned long)skb->data) & fep->rx_align;
+ if (off)
+ skb_reserve(skb, fep->rx_align + 1 - off);
+
+ bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, skb->data,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) {
+ if (net_ratelimit())
+ netdev_err(ndev, "Rx DMA memory map failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static bool fec_enet_copybreak(struct net_device *ndev, struct sk_buff **skb,
+ struct bufdesc *bdp, u32 length)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ struct sk_buff *new_skb;
+
+ if (length > fep->rx_copybreak)
+ return false;
+
+ new_skb = netdev_alloc_skb(ndev, length);
+ if (!new_skb)
+ return false;
+
+ dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ memcpy(new_skb->data, (*skb)->data, length);
+ *skb = new_skb;
+
+ return true;
}
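
	/* This is the usual copybreak scheme: frames no longer than
	 * rx_copybreak are copied into a freshly allocated small skb so the
	 * original receive buffer can be handed straight back to the ring with
	 * only a dma_sync_single_for_device(); larger frames take the
	 * unmap-and-replace path in fec_enet_rx_queue() below, where a new
	 * full-size skb is attached to the descriptor via fec_enet_new_rxbdp().
	 */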
/* During a receive, the cur_rx points to the current incoming buffer.
@@ -1178,14 +1375,16 @@ fec_enet_tx(struct net_device *ndev)
* effectively tossing the packet.
*/
static int
-fec_enet_rx(struct net_device *ndev, int budget)
+fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
{
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *bdp;
unsigned short status;
- struct sk_buff *skb;
+ struct sk_buff *skb_new = NULL;
+ struct sk_buff *skb;
ushort pkt_len;
__u8 *data;
int pkt_received = 0;
@@ -1193,15 +1392,18 @@ fec_enet_rx(struct net_device *ndev, int budget)
bool vlan_packet_rcvd = false;
u16 vlan_tag;
int index = 0;
+ bool is_copybreak;
#ifdef CONFIG_M532x
flush_cache_all();
#endif
+ queue_id = FEC_ENET_GET_QUQUE(queue_id);
+ rxq = fep->rx_queue[queue_id];
/* First, grab all of the stats for the incoming packet.
* These get messed up if we get called due to a busy condition.
*/
- bdp = fep->cur_rx;
+ bdp = rxq->cur_rx;
while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
@@ -1215,7 +1417,6 @@ fec_enet_rx(struct net_device *ndev, int budget)
if ((status & BD_ENET_RX_LAST) == 0)
netdev_err(ndev, "rcv is not +last\n");
- writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
/* Check for errors. */
if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_NO |
@@ -1248,11 +1449,28 @@ fec_enet_rx(struct net_device *ndev, int budget)
pkt_len = bdp->cbd_datlen;
ndev->stats.rx_bytes += pkt_len;
- index = fec_enet_get_bd_index(fep->rx_bd_base, bdp, fep);
- data = fep->rx_skbuff[index]->data;
- dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ index = fec_enet_get_bd_index(rxq->rx_bd_base, bdp, fep);
+ skb = rxq->rx_skbuff[index];
+
+ /* The packet length includes FCS, but we don't want to
+ * include that when passing upstream as it messes up
+ * bridging applications.
+ */
+ is_copybreak = fec_enet_copybreak(ndev, &skb, bdp, pkt_len - 4);
+ if (!is_copybreak) {
+ skb_new = netdev_alloc_skb(ndev, FEC_ENET_RX_FRSIZE);
+ if (unlikely(!skb_new)) {
+ ndev->stats.rx_dropped++;
+ goto rx_processing_done;
+ }
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ }
+ prefetch(skb->data - NET_IP_ALIGN);
+ skb_put(skb, pkt_len - 4);
+ data = skb->data;
if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
swap_buffer(data, pkt_len);
@@ -1264,66 +1482,53 @@ fec_enet_rx(struct net_device *ndev, int budget)
/* If this is a VLAN packet remove the VLAN Tag */
vlan_packet_rcvd = false;
if ((ndev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
- fep->bufdesc_ex && (ebdp->cbd_esc & BD_ENET_RX_VLAN)) {
+ fep->bufdesc_ex && (ebdp->cbd_esc & BD_ENET_RX_VLAN)) {
/* Push and remove the vlan tag */
struct vlan_hdr *vlan_header =
(struct vlan_hdr *) (data + ETH_HLEN);
vlan_tag = ntohs(vlan_header->h_vlan_TCI);
- pkt_len -= VLAN_HLEN;
vlan_packet_rcvd = true;
+
+ skb_copy_to_linear_data_offset(skb, VLAN_HLEN,
+ data, (2 * ETH_ALEN));
+ skb_pull(skb, VLAN_HLEN);
}
- /* This does 16 byte alignment, exactly what we need.
- * The packet length includes FCS, but we don't want to
- * include that when passing upstream as it messes up
- * bridging applications.
- */
- skb = netdev_alloc_skb(ndev, pkt_len - 4 + NET_IP_ALIGN);
+ skb->protocol = eth_type_trans(skb, ndev);
- if (unlikely(!skb)) {
- ndev->stats.rx_dropped++;
- } else {
- int payload_offset = (2 * ETH_ALEN);
- skb_reserve(skb, NET_IP_ALIGN);
- skb_put(skb, pkt_len - 4); /* Make room */
-
- /* Extract the frame data without the VLAN header. */
- skb_copy_to_linear_data(skb, data, (2 * ETH_ALEN));
- if (vlan_packet_rcvd)
- payload_offset = (2 * ETH_ALEN) + VLAN_HLEN;
- skb_copy_to_linear_data_offset(skb, (2 * ETH_ALEN),
- data + payload_offset,
- pkt_len - 4 - (2 * ETH_ALEN));
-
- skb->protocol = eth_type_trans(skb, ndev);
-
- /* Get receive timestamp from the skb */
- if (fep->hwts_rx_en && fep->bufdesc_ex)
- fec_enet_hwtstamp(fep, ebdp->ts,
- skb_hwtstamps(skb));
-
- if (fep->bufdesc_ex &&
- (fep->csum_flags & FLAG_RX_CSUM_ENABLED)) {
- if (!(ebdp->cbd_esc & FLAG_RX_CSUM_ERROR)) {
- /* don't check it */
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- } else {
- skb_checksum_none_assert(skb);
- }
+ /* Get receive timestamp from the skb */
+ if (fep->hwts_rx_en && fep->bufdesc_ex)
+ fec_enet_hwtstamp(fep, ebdp->ts,
+ skb_hwtstamps(skb));
+
+ if (fep->bufdesc_ex &&
+ (fep->csum_flags & FLAG_RX_CSUM_ENABLED)) {
+ if (!(ebdp->cbd_esc & FLAG_RX_CSUM_ERROR)) {
+ /* don't check it */
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ } else {
+ skb_checksum_none_assert(skb);
}
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd)
+ __vlan_hwaccel_put_tag(skb,
+ htons(ETH_P_8021Q),
+ vlan_tag);
- /* Handle received VLAN packets */
- if (vlan_packet_rcvd)
- __vlan_hwaccel_put_tag(skb,
- htons(ETH_P_8021Q),
- vlan_tag);
+ napi_gro_receive(&fep->napi, skb);
- napi_gro_receive(&fep->napi, skb);
+ if (is_copybreak) {
+ dma_sync_single_for_device(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ } else {
+ rxq->rx_skbuff[index] = skb_new;
+ fec_enet_new_rxbdp(ndev, bdp, skb_new);
}
- dma_sync_single_for_device(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
rx_processing_done:
/* Clear the status flags for this buffer */
status &= ~BD_ENET_RX_STATS;
@@ -1341,19 +1546,56 @@ rx_processing_done:
}
/* Update BD pointer to next entry */
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
/* Doing this here will keep the FEC running while we process
* incoming frames. On a heavily loaded network, we should be
* able to keep up at the expense of system resources.
*/
- writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE(queue_id));
}
- fep->cur_rx = bdp;
+ rxq->cur_rx = bdp;
+ return pkt_received;
+}
+static int
+fec_enet_rx(struct net_device *ndev, int budget)
+{
+ int pkt_received = 0;
+ u16 queue_id;
+ struct fec_enet_private *fep = netdev_priv(ndev);
+
+ for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) {
+ clear_bit(queue_id, &fep->work_rx);
+ pkt_received += fec_enet_rx_queue(ndev,
+ budget - pkt_received, queue_id);
+ }
return pkt_received;
}
+static bool
+fec_enet_collect_events(struct fec_enet_private *fep, uint int_events)
+{
+ if (int_events == 0)
+ return false;
+
+ if (int_events & FEC_ENET_RXF)
+ fep->work_rx |= (1 << 2);
+ if (int_events & FEC_ENET_RXF_1)
+ fep->work_rx |= (1 << 0);
+ if (int_events & FEC_ENET_RXF_2)
+ fep->work_rx |= (1 << 1);
+
+ if (int_events & FEC_ENET_TXF)
+ fep->work_tx |= (1 << 2);
+ if (int_events & FEC_ENET_TXF_1)
+ fep->work_tx |= (1 << 0);
+ if (int_events & FEC_ENET_TXF_2)
+ fep->work_tx |= (1 << 1);
+
+ return true;
+}
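
	/* The work_rx/work_tx bit layout above is deliberate: queue 0 maps to
	 * bit 2 while queues 1 and 2 map to bits 0 and 1, so for_each_set_bit()
	 * in fec_enet_rx()/fec_enet_tx() services the AVB class queues before
	 * the best-effort queue. FEC_ENET_GET_QUQUE() (defined in fec.h, not
	 * shown in this hunk) presumably performs the reverse mapping from bit
	 * position back to hardware queue index.
	 */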
+
static irqreturn_t
fec_enet_interrupt(int irq, void *dev_id)
{
@@ -1365,6 +1607,7 @@ fec_enet_interrupt(int irq, void *dev_id)
int_events = readl(fep->hwp + FEC_IEVENT);
writel(int_events & ~napi_mask, fep->hwp + FEC_IEVENT);
+ fec_enet_collect_events(fep, int_events);
if (int_events & napi_mask) {
ret = IRQ_HANDLED;
@@ -1621,6 +1864,11 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
}
mutex_unlock(&fep->ptp_clk_mutex);
}
+ if (fep->clk_ref) {
+ ret = clk_prepare_enable(fep->clk_ref);
+ if (ret)
+ goto failed_clk_ref;
+ }
} else {
clk_disable_unprepare(fep->clk_ahb);
clk_disable_unprepare(fep->clk_ipg);
@@ -1632,9 +1880,15 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
fep->ptp_clk_on = false;
mutex_unlock(&fep->ptp_clk_mutex);
}
+ if (fep->clk_ref)
+ clk_disable_unprepare(fep->clk_ref);
}
return 0;
+
+failed_clk_ref:
+ if (fep->clk_ref)
+ clk_disable_unprepare(fep->clk_ref);
failed_clk_ptp:
if (fep->clk_enet_out)
clk_disable_unprepare(fep->clk_enet_out);
@@ -1674,13 +1928,13 @@ static int fec_enet_mii_probe(struct net_device *ndev)
continue;
if (dev_id--)
continue;
- strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
+ strlcpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
break;
}
if (phy_id >= PHY_MAX_ADDR) {
netdev_info(ndev, "no PHY, assuming direct connection to switch\n");
- strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE);
+ strlcpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE);
phy_id = 0;
}
@@ -2062,12 +2316,179 @@ static int fec_enet_nway_reset(struct net_device *dev)
return genphy_restart_aneg(phydev);
}
+/* ITR clock source is enet system clock (clk_ahb).
+ * TCTT unit is cycle_ns * 64 cycle
+ * So, the ICTT value = X us / (cycle_ns * 64)
+ */
+static int fec_enet_us_to_itr_clock(struct net_device *ndev, int us)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+
+ return us * (fep->itr_clk_rate / 64000) / 1000;
+}
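
	/* Worked example of the conversion above, assuming itr_clk_rate is
	 * 66 MHz (a typical i.MX6 AHB rate, used here purely for illustration):
	 *
	 *   itr_clk_rate / 64000                  = 66000000 / 64000 = 1031
	 *   fec_enet_us_to_itr_clock(ndev, 1000)  = 1000 * 1031 / 1000 = 1031
	 *
	 * so a 1000 us setting programs about 1031 ICTT units of 64 clock
	 * cycles each, roughly 999.8 us at 66 MHz.
	 */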
+
+/* Set threshold for interrupt coalescing */
+static void fec_enet_itr_coal_set(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ int rx_itr, tx_itr;
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return;
+
+ /* Must be greater than zero to avoid unpredictable behavior */
+ if (!fep->rx_time_itr || !fep->rx_pkts_itr ||
+ !fep->tx_time_itr || !fep->tx_pkts_itr)
+ return;
+
+ /* Select enet system clock as Interrupt Coalescing
+ * timer Clock Source
+ */
+ rx_itr = FEC_ITR_CLK_SEL;
+ tx_itr = FEC_ITR_CLK_SEL;
+
+ /* set ICFT and ICTT */
+ rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
+ rx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr));
+ tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
+ tx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr));
+
+ rx_itr |= FEC_ITR_EN;
+ tx_itr |= FEC_ITR_EN;
+
+ writel(tx_itr, fep->hwp + FEC_TXIC0);
+ writel(rx_itr, fep->hwp + FEC_RXIC0);
+ writel(tx_itr, fep->hwp + FEC_TXIC1);
+ writel(rx_itr, fep->hwp + FEC_RXIC1);
+ writel(tx_itr, fep->hwp + FEC_TXIC2);
+ writel(rx_itr, fep->hwp + FEC_RXIC2);
+}
+
+static int
+fec_enet_get_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return -EOPNOTSUPP;
+
+ ec->rx_coalesce_usecs = fep->rx_time_itr;
+ ec->rx_max_coalesced_frames = fep->rx_pkts_itr;
+
+ ec->tx_coalesce_usecs = fep->tx_time_itr;
+ ec->tx_max_coalesced_frames = fep->tx_pkts_itr;
+
+ return 0;
+}
+
+static int
+fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+
+ unsigned int cycle;
+
+ if (!(id_entry->driver_data & FEC_QUIRK_HAS_AVB))
+ return -EOPNOTSUPP;
+
+ if (ec->rx_max_coalesced_frames > 255) {
+ pr_err("Rx coalesced frames exceed hardware limiation");
+ return -EINVAL;
+ }
+
+ if (ec->tx_max_coalesced_frames > 255) {
+ pr_err("Tx coalesced frame exceed hardware limiation");
+ return -EINVAL;
+ }
+
+ cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+ pr_err("Rx coalesced usecs exceed hardware limitation");
+ return -EINVAL;
+ }
+
+ cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+ pr_err("Tx coalesced usecs exceed hardware limitation");
+ return -EINVAL;
+ }
+
+ fep->rx_time_itr = ec->rx_coalesce_usecs;
+ fep->rx_pkts_itr = ec->rx_max_coalesced_frames;
+
+ fep->tx_time_itr = ec->tx_coalesce_usecs;
+ fep->tx_pkts_itr = ec->tx_max_coalesced_frames;
+
+ fec_enet_itr_coal_set(ndev);
+
+ return 0;
+}
+
+static void fec_enet_itr_coal_init(struct net_device *ndev)
+{
+ struct ethtool_coalesce ec;
+
+ ec.rx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
+ ec.rx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
+
+ ec.tx_coalesce_usecs = FEC_ITR_ICTT_DEFAULT;
+ ec.tx_max_coalesced_frames = FEC_ITR_ICFT_DEFAULT;
+
+ fec_enet_set_coalesce(ndev, &ec);
+}
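
	/* With this wired up, the coalescing knobs should be reachable through
	 * the standard ethtool interface, e.g. (interface name assumed):
	 *
	 *   ethtool -C eth0 rx-usecs 1000 rx-frames 200 tx-usecs 1000 tx-frames 200
	 *   ethtool -c eth0
	 */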
+
+static int fec_enet_get_tunable(struct net_device *netdev,
+ const struct ethtool_tunable *tuna,
+ void *data)
+{
+ struct fec_enet_private *fep = netdev_priv(netdev);
+ int ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_RX_COPYBREAK:
+ *(u32 *)data = fep->rx_copybreak;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int fec_enet_set_tunable(struct net_device *netdev,
+ const struct ethtool_tunable *tuna,
+ const void *data)
+{
+ struct fec_enet_private *fep = netdev_priv(netdev);
+ int ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_RX_COPYBREAK:
+ fep->rx_copybreak = *(u32 *)data;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
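
	/* Assuming an ethtool build recent enough to know about tunables, the
	 * new rx_copybreak knob should then be adjustable from user space:
	 *
	 *   ethtool --set-tunable eth0 rx-copybreak 256
	 *   ethtool --get-tunable eth0 rx-copybreak
	 */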
+
static const struct ethtool_ops fec_enet_ethtool_ops = {
.get_settings = fec_enet_get_settings,
.set_settings = fec_enet_set_settings,
.get_drvinfo = fec_enet_get_drvinfo,
.nway_reset = fec_enet_nway_reset,
.get_link = ethtool_op_get_link,
+ .get_coalesce = fec_enet_get_coalesce,
+ .set_coalesce = fec_enet_set_coalesce,
#ifndef CONFIG_M5272
.get_pauseparam = fec_enet_get_pauseparam,
.set_pauseparam = fec_enet_set_pauseparam,
@@ -2076,6 +2497,8 @@ static const struct ethtool_ops fec_enet_ethtool_ops = {
.get_sset_count = fec_enet_get_sset_count,
#endif
.get_ts_info = fec_enet_get_ts_info,
+ .get_tunable = fec_enet_get_tunable,
+ .set_tunable = fec_enet_set_tunable,
};
static int fec_enet_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
@@ -2105,55 +2528,136 @@ static void fec_enet_free_buffers(struct net_device *ndev)
unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
+ unsigned int q;
+
+ for (q = 0; q < fep->num_rx_queues; q++) {
+ rxq = fep->rx_queue[q];
+ bdp = rxq->rx_bd_base;
+ for (i = 0; i < rxq->rx_ring_size; i++) {
+ skb = rxq->rx_skbuff[i];
+ rxq->rx_skbuff[i] = NULL;
+ if (skb) {
+ dma_unmap_single(&fep->pdev->dev,
+ bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE - fep->rx_align,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ bdp = fec_enet_get_nextdesc(bdp, fep, q);
+ }
+ }
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
- skb = fep->rx_skbuff[i];
- fep->rx_skbuff[i] = NULL;
- if (skb) {
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ for (q = 0; q < fep->num_tx_queues; q++) {
+ txq = fep->tx_queue[q];
+ bdp = txq->tx_bd_base;
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ kfree(txq->tx_bounce[i]);
+ txq->tx_bounce[i] = NULL;
+ skb = txq->tx_skbuff[i];
+ txq->tx_skbuff[i] = NULL;
dev_kfree_skb(skb);
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
}
+}
- bdp = fep->tx_bd_base;
- for (i = 0; i < fep->tx_ring_size; i++) {
- kfree(fep->tx_bounce[i]);
- fep->tx_bounce[i] = NULL;
- skb = fep->tx_skbuff[i];
- fep->tx_skbuff[i] = NULL;
- dev_kfree_skb(skb);
+static void fec_enet_free_queue(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
+ struct fec_enet_priv_tx_q *txq;
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ if (fep->tx_queue[i] && fep->tx_queue[i]->tso_hdrs) {
+ txq = fep->tx_queue[i];
+ dma_free_coherent(NULL,
+ txq->tx_ring_size * TSO_HEADER_SIZE,
+ txq->tso_hdrs,
+ txq->tso_hdrs_dma);
+ }
+
+ for (i = 0; i < fep->num_rx_queues; i++)
+ if (fep->rx_queue[i])
+ kfree(fep->rx_queue[i]);
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ if (fep->tx_queue[i])
+ kfree(fep->tx_queue[i]);
+}
+
+static int fec_enet_alloc_queue(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int i;
+ int ret = 0;
+ struct fec_enet_priv_tx_q *txq;
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = kzalloc(sizeof(*txq), GFP_KERNEL);
+ if (!txq) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+
+ fep->tx_queue[i] = txq;
+ txq->tx_ring_size = TX_RING_SIZE;
+ fep->total_tx_ring_size += fep->tx_queue[i]->tx_ring_size;
+
+ txq->tx_stop_threshold = FEC_MAX_SKB_DESCS;
+ txq->tx_wake_threshold =
+ (txq->tx_ring_size - txq->tx_stop_threshold) / 2;
+
+ txq->tso_hdrs = dma_alloc_coherent(NULL,
+ txq->tx_ring_size * TSO_HEADER_SIZE,
+ &txq->tso_hdrs_dma,
+ GFP_KERNEL);
+ if (!txq->tso_hdrs) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+ }
+
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ fep->rx_queue[i] = kzalloc(sizeof(*fep->rx_queue[i]),
+ GFP_KERNEL);
+ if (!fep->rx_queue[i]) {
+ ret = -ENOMEM;
+ goto alloc_failed;
+ }
+
+ fep->rx_queue[i]->rx_ring_size = RX_RING_SIZE;
+ fep->total_rx_ring_size += fep->rx_queue[i]->rx_ring_size;
}
+ return ret;
+
+alloc_failed:
+ fec_enet_free_queue(ndev);
+ return ret;
}
-static int fec_enet_alloc_buffers(struct net_device *ndev)
+static int
+fec_enet_alloc_rxq_buffers(struct net_device *ndev, unsigned int queue)
{
struct fec_enet_private *fep = netdev_priv(ndev);
unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
+ struct fec_enet_priv_rx_q *rxq;
- bdp = fep->rx_bd_base;
- for (i = 0; i < fep->rx_ring_size; i++) {
- dma_addr_t addr;
-
+ rxq = fep->rx_queue[queue];
+ bdp = rxq->rx_bd_base;
+ for (i = 0; i < rxq->rx_ring_size; i++) {
skb = netdev_alloc_skb(ndev, FEC_ENET_RX_FRSIZE);
if (!skb)
goto err_alloc;
- addr = dma_map_single(&fep->pdev->dev, skb->data,
- FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
- if (dma_mapping_error(&fep->pdev->dev, addr)) {
+ if (fec_enet_new_rxbdp(ndev, bdp, skb)) {
dev_kfree_skb(skb);
- if (net_ratelimit())
- netdev_err(ndev, "Rx DMA memory map failed\n");
goto err_alloc;
}
- fep->rx_skbuff[i] = skb;
- bdp->cbd_bufaddr = addr;
+ rxq->rx_skbuff[i] = skb;
bdp->cbd_sc = BD_ENET_RX_EMPTY;
if (fep->bufdesc_ex) {
@@ -2161,17 +2665,32 @@ static int fec_enet_alloc_buffers(struct net_device *ndev)
ebdp->cbd_esc = BD_ENET_RX_INT;
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Set the last buffer to wrap. */
- bdp = fec_enet_get_prevdesc(bdp, fep);
+ bdp = fec_enet_get_prevdesc(bdp, fep, queue);
bdp->cbd_sc |= BD_SC_WRAP;
+ return 0;
- bdp = fep->tx_bd_base;
- for (i = 0; i < fep->tx_ring_size; i++) {
- fep->tx_bounce[i] = kmalloc(FEC_ENET_TX_FRSIZE, GFP_KERNEL);
- if (!fep->tx_bounce[i])
+ err_alloc:
+ fec_enet_free_buffers(ndev);
+ return -ENOMEM;
+}
+
+static int
+fec_enet_alloc_txq_buffers(struct net_device *ndev, unsigned int queue)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ unsigned int i;
+ struct bufdesc *bdp;
+ struct fec_enet_priv_tx_q *txq;
+
+ txq = fep->tx_queue[queue];
+ bdp = txq->tx_bd_base;
+ for (i = 0; i < txq->tx_ring_size; i++) {
+ txq->tx_bounce[i] = kmalloc(FEC_ENET_TX_FRSIZE, GFP_KERNEL);
+ if (!txq->tx_bounce[i])
goto err_alloc;
bdp->cbd_sc = 0;
@@ -2182,11 +2701,11 @@ static int fec_enet_alloc_buffers(struct net_device *ndev)
ebdp->cbd_esc = BD_ENET_TX_INT;
}
- bdp = fec_enet_get_nextdesc(bdp, fep);
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue);
}
/* Set the last buffer to wrap. */
- bdp = fec_enet_get_prevdesc(bdp, fep);
+ bdp = fec_enet_get_prevdesc(bdp, fep, queue);
bdp->cbd_sc |= BD_SC_WRAP;
return 0;
@@ -2196,6 +2715,21 @@ static int fec_enet_alloc_buffers(struct net_device *ndev)
return -ENOMEM;
}
+static int fec_enet_alloc_buffers(struct net_device *ndev)
+{
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ unsigned int i;
+
+ for (i = 0; i < fep->num_rx_queues; i++)
+ if (fec_enet_alloc_rxq_buffers(ndev, i))
+ return -ENOMEM;
+
+ for (i = 0; i < fep->num_tx_queues; i++)
+ if (fec_enet_alloc_txq_buffers(ndev, i))
+ return -ENOMEM;
+ return 0;
+}
+
static int
fec_enet_open(struct net_device *ndev)
{
@@ -2213,20 +2747,26 @@ fec_enet_open(struct net_device *ndev)
ret = fec_enet_alloc_buffers(ndev);
if (ret)
- return ret;
+ goto err_enet_alloc;
/* Probe and connect to PHY when open the interface */
ret = fec_enet_mii_probe(ndev);
- if (ret) {
- fec_enet_free_buffers(ndev);
- return ret;
- }
+ if (ret)
+ goto err_enet_mii_probe;
fec_restart(ndev);
napi_enable(&fep->napi);
phy_start(fep->phy_dev);
- netif_start_queue(ndev);
+ netif_tx_start_all_queues(ndev);
+
return 0;
+
+err_enet_mii_probe:
+ fec_enet_free_buffers(ndev);
+err_enet_alloc:
+ fec_enet_clk_enable(ndev, false);
+ pinctrl_pm_select_sleep_state(&fep->pdev->dev);
+ return ret;
}
static int
@@ -2399,7 +2939,7 @@ static int fec_set_features(struct net_device *netdev,
/* Resume the device after updates */
if (netif_running(netdev) && changed & FEATURES_NEED_QUIESCE) {
fec_restart(netdev);
- netif_wake_queue(netdev);
+ netif_tx_wake_all_queues(netdev);
netif_tx_unlock_bh(netdev);
napi_enable(&fep->napi);
}
@@ -2432,39 +2972,38 @@ static int fec_enet_init(struct net_device *ndev)
struct fec_enet_private *fep = netdev_priv(ndev);
const struct platform_device_id *id_entry =
platform_get_device_id(fep->pdev);
+ struct fec_enet_priv_tx_q *txq;
+ struct fec_enet_priv_rx_q *rxq;
struct bufdesc *cbd_base;
+ dma_addr_t bd_dma;
int bd_size;
+ unsigned int i;
- /* init the tx & rx ring size */
- fep->tx_ring_size = TX_RING_SIZE;
- fep->rx_ring_size = RX_RING_SIZE;
+#if defined(CONFIG_ARM)
+ fep->rx_align = 0xf;
+ fep->tx_align = 0xf;
+#else
+ fep->rx_align = 0x3;
+ fep->tx_align = 0x3;
+#endif
- fep->tx_stop_threshold = FEC_MAX_SKB_DESCS;
- fep->tx_wake_threshold = (fep->tx_ring_size - fep->tx_stop_threshold) / 2;
+ fec_enet_alloc_queue(ndev);
if (fep->bufdesc_ex)
fep->bufdesc_size = sizeof(struct bufdesc_ex);
else
fep->bufdesc_size = sizeof(struct bufdesc);
- bd_size = (fep->tx_ring_size + fep->rx_ring_size) *
+ bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) *
fep->bufdesc_size;
/* Allocate memory for buffer descriptors. */
- cbd_base = dma_alloc_coherent(NULL, bd_size, &fep->bd_dma,
+ cbd_base = dma_alloc_coherent(NULL, bd_size, &bd_dma,
GFP_KERNEL);
- if (!cbd_base)
- return -ENOMEM;
-
- fep->tso_hdrs = dma_alloc_coherent(NULL, fep->tx_ring_size * TSO_HEADER_SIZE,
- &fep->tso_hdrs_dma, GFP_KERNEL);
- if (!fep->tso_hdrs) {
- dma_free_coherent(NULL, bd_size, cbd_base, fep->bd_dma);
+ if (!cbd_base) {
return -ENOMEM;
}
- memset(cbd_base, 0, PAGE_SIZE);
-
- fep->netdev = ndev;
+ memset(cbd_base, 0, bd_size);
/* Get the Ethernet address */
fec_get_mac(ndev);
@@ -2472,12 +3011,36 @@ static int fec_enet_init(struct net_device *ndev)
fec_set_mac_address(ndev, NULL);
/* Set receive and transmit descriptor base. */
- fep->rx_bd_base = cbd_base;
- if (fep->bufdesc_ex)
- fep->tx_bd_base = (struct bufdesc *)
- (((struct bufdesc_ex *)cbd_base) + fep->rx_ring_size);
- else
- fep->tx_bd_base = cbd_base + fep->rx_ring_size;
+ for (i = 0; i < fep->num_rx_queues; i++) {
+ rxq = fep->rx_queue[i];
+ rxq->index = i;
+ rxq->rx_bd_base = (struct bufdesc *)cbd_base;
+ rxq->bd_dma = bd_dma;
+ if (fep->bufdesc_ex) {
+ bd_dma += sizeof(struct bufdesc_ex) * rxq->rx_ring_size;
+ cbd_base = (struct bufdesc *)
+ (((struct bufdesc_ex *)cbd_base) + rxq->rx_ring_size);
+ } else {
+ bd_dma += sizeof(struct bufdesc) * rxq->rx_ring_size;
+ cbd_base += rxq->rx_ring_size;
+ }
+ }
+
+ for (i = 0; i < fep->num_tx_queues; i++) {
+ txq = fep->tx_queue[i];
+ txq->index = i;
+ txq->tx_bd_base = (struct bufdesc *)cbd_base;
+ txq->bd_dma = bd_dma;
+ if (fep->bufdesc_ex) {
+ bd_dma += sizeof(struct bufdesc_ex) * txq->tx_ring_size;
+ cbd_base = (struct bufdesc *)
+ (((struct bufdesc_ex *)cbd_base) + txq->tx_ring_size);
+ } else {
+ bd_dma += sizeof(struct bufdesc) * txq->tx_ring_size;
+ cbd_base += txq->tx_ring_size;
+ }
+ }
+
/* The FEC Ethernet specific entries in the device structure */
ndev->watchdog_timeo = TX_TIMEOUT;
@@ -2500,6 +3063,11 @@ static int fec_enet_init(struct net_device *ndev)
fep->csum_flags |= FLAG_RX_CSUM_ENABLED;
}
+ if (id_entry->driver_data & FEC_QUIRK_HAS_AVB) {
+ fep->tx_align = 0;
+ fep->rx_align = 0x3f;
+ }
+
ndev->hw_features = ndev->features;
fec_restart(ndev);
@@ -2545,6 +3113,42 @@ static void fec_reset_phy(struct platform_device *pdev)
}
#endif /* CONFIG_OF */
+static void
+fec_enet_get_queue_num(struct platform_device *pdev, int *num_tx, int *num_rx)
+{
+ struct device_node *np = pdev->dev.of_node;
+ int err;
+
+ *num_tx = *num_rx = 1;
+
+ if (!np || !of_device_is_available(np))
+ return;
+
+ /* parse the num of tx and rx queues */
+ err = of_property_read_u32(np, "fsl,num-tx-queues", num_tx);
+ if (err)
+ *num_tx = 1;
+
+ err = of_property_read_u32(np, "fsl,num-rx-queues", num_rx);
+ if (err)
+ *num_rx = 1;
+
+ if (*num_tx < 1 || *num_tx > FEC_ENET_MAX_TX_QS) {
+ dev_warn(&pdev->dev, "Invalid num_tx(=%d), fall back to 1\n",
+ *num_tx);
+ *num_tx = 1;
+ return;
+ }
+
+ if (*num_rx < 1 || *num_rx > FEC_ENET_MAX_RX_QS) {
+ dev_warn(&pdev->dev, "Invalid num_rx(=%d), fall back to 1\n",
+ *num_rx);
+ *num_rx = 1;
+ return;
+ }
+
+}
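
	/* A hypothetical device-tree fragment exercising the new properties;
	 * the node label and values are illustrative only, and any out-of-range
	 * count falls back to 1 with a warning, as the parser above shows:
	 *
	 *	&fec1 {
	 *		fsl,num-tx-queues = <3>;
	 *		fsl,num-rx-queues = <3>;
	 *	};
	 */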
+
static int
fec_probe(struct platform_device *pdev)
{
@@ -2556,13 +3160,18 @@ fec_probe(struct platform_device *pdev)
const struct of_device_id *of_id;
static int dev_id;
struct device_node *np = pdev->dev.of_node, *phy_node;
+ int num_tx_qs;
+ int num_rx_qs;
of_id = of_match_device(fec_dt_ids, &pdev->dev);
if (of_id)
pdev->id_entry = of_id->data;
+ fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
+
/* Init network device */
- ndev = alloc_etherdev(sizeof(struct fec_enet_private));
+ ndev = alloc_etherdev_mqs(sizeof(struct fec_enet_private),
+ num_tx_qs, num_rx_qs);
if (!ndev)
return -ENOMEM;
@@ -2571,6 +3180,9 @@ fec_probe(struct platform_device *pdev)
/* setup board info structure */
fep = netdev_priv(ndev);
+ fep->num_rx_queues = num_rx_qs;
+ fep->num_tx_queues = num_tx_qs;
+
#if !defined(CONFIG_M5272)
/* default enable pause frame auto negotiation */
if (pdev->id_entry &&
@@ -2630,6 +3242,8 @@ fec_probe(struct platform_device *pdev)
goto failed_clk;
}
+ fep->itr_clk_rate = clk_get_rate(fep->clk_ahb);
+
/* enet_out is optional, depends on board */
fep->clk_enet_out = devm_clk_get(&pdev->dev, "enet_out");
if (IS_ERR(fep->clk_enet_out))
@@ -2637,6 +3251,12 @@ fec_probe(struct platform_device *pdev)
fep->ptp_clk_on = false;
mutex_init(&fep->ptp_clk_mutex);
+
+ /* clk_ref is optional, depends on board */
+ fep->clk_ref = devm_clk_get(&pdev->dev, "enet_clk_ref");
+ if (IS_ERR(fep->clk_ref))
+ fep->clk_ref = NULL;
+
fep->clk_ptp = devm_clk_get(&pdev->dev, "ptp");
fep->bufdesc_ex =
pdev->id_entry->driver_data & FEC_QUIRK_HAS_BUFDESC_EX;
@@ -2684,6 +3304,7 @@ fec_probe(struct platform_device *pdev)
goto failed_irq;
}
+ init_completion(&fep->mdio_done);
ret = fec_enet_mii_init(pdev);
if (ret)
goto failed_mii_init;
@@ -2700,6 +3321,7 @@ fec_probe(struct platform_device *pdev)
if (fep->bufdesc_ex && fep->ptp_clock)
netdev_info(ndev, "registered PHC device %d\n", fep->dev_id);
+ fep->rx_copybreak = COPYBREAK_DEFAULT;
INIT_WORK(&fep->tx_timeout_work, fec_enet_timeout_work);
return 0;
diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
index 748fd24..c92c3b7 100644
--- a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
+++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
@@ -215,139 +215,23 @@ static int fs_enet_rx_napi(struct napi_struct *napi, int budget)
return received;
}
-/* non NAPI receive function */
-static int fs_enet_rx_non_napi(struct net_device *dev)
+static int fs_enet_tx_napi(struct napi_struct *napi, int budget)
{
- struct fs_enet_private *fep = netdev_priv(dev);
- const struct fs_platform_info *fpi = fep->fpi;
- cbd_t __iomem *bdp;
- struct sk_buff *skb, *skbn, *skbt;
- int received = 0;
- u16 pkt_len, sc;
- int curidx;
- /*
- * First, grab all of the stats for the incoming packet.
- * These get messed up if we get called due to a busy condition.
- */
- bdp = fep->cur_rx;
-
- while (((sc = CBDR_SC(bdp)) & BD_ENET_RX_EMPTY) == 0) {
-
- curidx = bdp - fep->rx_bd_base;
-
- /*
- * Since we have allocated space to hold a complete frame,
- * the last indicator should be set.
- */
- if ((sc & BD_ENET_RX_LAST) == 0)
- dev_warn(fep->dev, "rcv is not +last\n");
-
- /*
- * Check for errors.
- */
- if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_CL |
- BD_ENET_RX_NO | BD_ENET_RX_CR | BD_ENET_RX_OV)) {
- fep->stats.rx_errors++;
- /* Frame too long or too short. */
- if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH))
- fep->stats.rx_length_errors++;
- /* Frame alignment */
- if (sc & (BD_ENET_RX_NO | BD_ENET_RX_CL))
- fep->stats.rx_frame_errors++;
- /* CRC Error */
- if (sc & BD_ENET_RX_CR)
- fep->stats.rx_crc_errors++;
- /* FIFO overrun */
- if (sc & BD_ENET_RX_OV)
- fep->stats.rx_crc_errors++;
-
- skb = fep->rx_skbuff[curidx];
-
- dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
- L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
- DMA_FROM_DEVICE);
-
- skbn = skb;
-
- } else {
-
- skb = fep->rx_skbuff[curidx];
-
- dma_unmap_single(fep->dev, CBDR_BUFADDR(bdp),
- L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
- DMA_FROM_DEVICE);
-
- /*
- * Process the incoming frame.
- */
- fep->stats.rx_packets++;
- pkt_len = CBDR_DATLEN(bdp) - 4; /* remove CRC */
- fep->stats.rx_bytes += pkt_len + 4;
-
- if (pkt_len <= fpi->rx_copybreak) {
- /* +2 to make IP header L1 cache aligned */
- skbn = netdev_alloc_skb(dev, pkt_len + 2);
- if (skbn != NULL) {
- skb_reserve(skbn, 2); /* align IP header */
- skb_copy_from_linear_data(skb,
- skbn->data, pkt_len);
- /* swap */
- skbt = skb;
- skb = skbn;
- skbn = skbt;
- }
- } else {
- skbn = netdev_alloc_skb(dev, ENET_RX_FRSIZE);
-
- if (skbn)
- skb_align(skbn, ENET_RX_ALIGN);
- }
-
- if (skbn != NULL) {
- skb_put(skb, pkt_len); /* Make room */
- skb->protocol = eth_type_trans(skb, dev);
- received++;
- netif_rx(skb);
- } else {
- fep->stats.rx_dropped++;
- skbn = skb;
- }
- }
-
- fep->rx_skbuff[curidx] = skbn;
- CBDW_BUFADDR(bdp, dma_map_single(fep->dev, skbn->data,
- L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
- DMA_FROM_DEVICE));
- CBDW_DATLEN(bdp, 0);
- CBDW_SC(bdp, (sc & ~BD_ENET_RX_STATS) | BD_ENET_RX_EMPTY);
-
- /*
- * Update BD pointer to next entry.
- */
- if ((sc & BD_ENET_RX_WRAP) == 0)
- bdp++;
- else
- bdp = fep->rx_bd_base;
-
- (*fep->ops->rx_bd_done)(dev);
- }
-
- fep->cur_rx = bdp;
-
- return 0;
-}
-
-static void fs_enet_tx(struct net_device *dev)
-{
- struct fs_enet_private *fep = netdev_priv(dev);
+ struct fs_enet_private *fep = container_of(napi, struct fs_enet_private,
+ napi_tx);
+ struct net_device *dev = fep->ndev;
cbd_t __iomem *bdp;
struct sk_buff *skb;
int dirtyidx, do_wake, do_restart;
u16 sc;
+ int has_tx_work = 0;
spin_lock(&fep->tx_lock);
bdp = fep->dirty_tx;
+ /* clear TX status bits for napi*/
+ (*fep->ops->napi_clear_tx_event)(dev);
+
do_wake = do_restart = 0;
while (((sc = CBDR_SC(bdp)) & BD_ENET_TX_READY) == 0) {
dirtyidx = bdp - fep->tx_bd_base;
@@ -400,7 +284,7 @@ static void fs_enet_tx(struct net_device *dev)
/*
* Free the sk buffer associated with this last transmit.
*/
- dev_kfree_skb_irq(skb);
+ dev_kfree_skb(skb);
fep->tx_skbuff[dirtyidx] = NULL;
/*
@@ -417,6 +301,7 @@ static void fs_enet_tx(struct net_device *dev)
*/
if (!fep->tx_free++)
do_wake = 1;
+ has_tx_work = 1;
}
fep->dirty_tx = bdp;
@@ -424,10 +309,19 @@ static void fs_enet_tx(struct net_device *dev)
if (do_restart)
(*fep->ops->tx_restart)(dev);
+ if (!has_tx_work) {
+ napi_complete(napi);
+ (*fep->ops->napi_enable_tx)(dev);
+ }
+
spin_unlock(&fep->tx_lock);
if (do_wake)
netif_wake_queue(dev);
+
+ if (has_tx_work)
+ return budget;
+ return 0;
}
/*
@@ -453,8 +347,7 @@ fs_enet_interrupt(int irq, void *dev_id)
nr++;
int_clr_events = int_events;
- if (fpi->use_napi)
- int_clr_events &= ~fep->ev_napi_rx;
+ int_clr_events &= ~fep->ev_napi_rx;
(*fep->ops->clear_int_events)(dev, int_clr_events);
@@ -462,23 +355,28 @@ fs_enet_interrupt(int irq, void *dev_id)
(*fep->ops->ev_error)(dev, int_events);
if (int_events & fep->ev_rx) {
- if (!fpi->use_napi)
- fs_enet_rx_non_napi(dev);
- else {
- napi_ok = napi_schedule_prep(&fep->napi);
-
- (*fep->ops->napi_disable_rx)(dev);
- (*fep->ops->clear_int_events)(dev, fep->ev_napi_rx);
-
- /* NOTE: it is possible for FCCs in NAPI mode */
- /* to submit a spurious interrupt while in poll */
- if (napi_ok)
- __napi_schedule(&fep->napi);
- }
+ napi_ok = napi_schedule_prep(&fep->napi);
+
+ (*fep->ops->napi_disable_rx)(dev);
+ (*fep->ops->clear_int_events)(dev, fep->ev_napi_rx);
+
+ /* NOTE: it is possible for FCCs in NAPI mode */
+ /* to submit a spurious interrupt while in poll */
+ if (napi_ok)
+ __napi_schedule(&fep->napi);
}
- if (int_events & fep->ev_tx)
- fs_enet_tx(dev);
+ if (int_events & fep->ev_tx) {
+ napi_ok = napi_schedule_prep(&fep->napi_tx);
+
+ (*fep->ops->napi_disable_tx)(dev);
+ (*fep->ops->clear_int_events)(dev, fep->ev_napi_tx);
+
+ /* NOTE: it is possible for FCCs in NAPI mode */
+ /* to submit a spurious interrupt while in poll */
+ if (napi_ok)
+ __napi_schedule(&fep->napi_tx);
+ }
}
handled = nr > 0;
@@ -611,7 +509,6 @@ static int fs_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
cbd_t __iomem *bdp;
int curidx;
u16 sc;
- unsigned long flags;
#ifdef CONFIG_FS_ENET_MPC5121_FEC
if (((unsigned long)skb->data) & 0x3) {
@@ -626,7 +523,7 @@ static int fs_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
}
#endif
- spin_lock_irqsave(&fep->tx_lock, flags);
+ spin_lock(&fep->tx_lock);
/*
* Fill in a Tx ring entry
@@ -635,7 +532,7 @@ static int fs_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (!fep->tx_free || (CBDR_SC(bdp) & BD_ENET_TX_READY)) {
netif_stop_queue(dev);
- spin_unlock_irqrestore(&fep->tx_lock, flags);
+ spin_unlock(&fep->tx_lock);
/*
* Ooops. All transmit buffers are full. Bail out.
@@ -691,7 +588,7 @@ static int fs_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
(*fep->ops->tx_kickstart)(dev);
- spin_unlock_irqrestore(&fep->tx_lock, flags);
+ spin_unlock(&fep->tx_lock);
return NETDEV_TX_OK;
}
@@ -811,24 +708,24 @@ static int fs_enet_open(struct net_device *dev)
/* not doing this, will cause a crash in fs_enet_rx_napi */
fs_init_bds(fep->ndev);
- if (fep->fpi->use_napi)
- napi_enable(&fep->napi);
+ napi_enable(&fep->napi);
+ napi_enable(&fep->napi_tx);
/* Install our interrupt handler. */
r = request_irq(fep->interrupt, fs_enet_interrupt, IRQF_SHARED,
"fs_enet-mac", dev);
if (r != 0) {
dev_err(fep->dev, "Could not allocate FS_ENET IRQ!");
- if (fep->fpi->use_napi)
- napi_disable(&fep->napi);
+ napi_disable(&fep->napi);
+ napi_disable(&fep->napi_tx);
return -EINVAL;
}
err = fs_init_phy(dev);
if (err) {
free_irq(fep->interrupt, dev);
- if (fep->fpi->use_napi)
- napi_disable(&fep->napi);
+ napi_disable(&fep->napi);
+ napi_disable(&fep->napi_tx);
return err;
}
phy_start(fep->phydev);
@@ -845,8 +742,8 @@ static int fs_enet_close(struct net_device *dev)
netif_stop_queue(dev);
netif_carrier_off(dev);
- if (fep->fpi->use_napi)
- napi_disable(&fep->napi);
+ napi_disable(&fep->napi);
+ napi_disable(&fep->napi_tx);
phy_stop(fep->phydev);
spin_lock_irqsave(&fep->lock, flags);
@@ -1022,7 +919,6 @@ static int fs_enet_probe(struct platform_device *ofdev)
fpi->rx_ring = 32;
fpi->tx_ring = 32;
fpi->rx_copybreak = 240;
- fpi->use_napi = 1;
fpi->napi_weight = 17;
fpi->phy_node = of_parse_phandle(ofdev->dev.of_node, "phy-handle", 0);
if (!fpi->phy_node && of_phy_is_fixed_link(ofdev->dev.of_node)) {
@@ -1102,9 +998,8 @@ static int fs_enet_probe(struct platform_device *ofdev)
ndev->netdev_ops = &fs_enet_netdev_ops;
ndev->watchdog_timeo = 2 * HZ;
- if (fpi->use_napi)
- netif_napi_add(ndev, &fep->napi, fs_enet_rx_napi,
- fpi->napi_weight);
+ netif_napi_add(ndev, &fep->napi, fs_enet_rx_napi, fpi->napi_weight);
+ netif_napi_add(ndev, &fep->napi_tx, fs_enet_tx_napi, 2);
ndev->ethtool_ops = &fs_ethtool_ops;
diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet.h b/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
index 1ece4b1..3a4b49e 100644
--- a/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
+++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
@@ -84,6 +84,9 @@ struct fs_ops {
void (*napi_clear_rx_event)(struct net_device *dev);
void (*napi_enable_rx)(struct net_device *dev);
void (*napi_disable_rx)(struct net_device *dev);
+ void (*napi_clear_tx_event)(struct net_device *dev);
+ void (*napi_enable_tx)(struct net_device *dev);
+ void (*napi_disable_tx)(struct net_device *dev);
void (*rx_bd_done)(struct net_device *dev);
void (*tx_kickstart)(struct net_device *dev);
u32 (*get_int_events)(struct net_device *dev);
@@ -119,6 +122,7 @@ struct phy_info {
struct fs_enet_private {
struct napi_struct napi;
+ struct napi_struct napi_tx;
struct device *dev; /* pointer back to the device (must be initialized first) */
struct net_device *ndev;
spinlock_t lock; /* during all ops except TX pckt processing */
@@ -149,6 +153,7 @@ struct fs_enet_private {
/* event masks */
u32 ev_napi_rx; /* mask of NAPI rx events */
+ u32 ev_napi_tx; /* mask of NAPI rx events */
u32 ev_rx; /* rx event mask */
u32 ev_tx; /* tx event mask */
u32 ev_err; /* error event mask */
@@ -191,8 +196,8 @@ void fs_cleanup_bds(struct net_device *dev);
#define DRV_MODULE_NAME "fs_enet"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "1.0"
-#define DRV_MODULE_RELDATE "Aug 8, 2005"
+#define DRV_MODULE_VERSION "1.1"
+#define DRV_MODULE_RELDATE "Sep 22, 2014"
/***************************************************************************/
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c b/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
index f5383ab..2c578db 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
@@ -125,6 +125,7 @@ out:
}
#define FCC_NAPI_RX_EVENT_MSK (FCC_ENET_RXF | FCC_ENET_RXB)
+#define FCC_NAPI_TX_EVENT_MSK (FCC_ENET_TXF | FCC_ENET_TXB)
#define FCC_RX_EVENT (FCC_ENET_RXF)
#define FCC_TX_EVENT (FCC_ENET_TXB)
#define FCC_ERR_EVENT_MSK (FCC_ENET_TXE)
@@ -137,6 +138,7 @@ static int setup_data(struct net_device *dev)
return -EINVAL;
fep->ev_napi_rx = FCC_NAPI_RX_EVENT_MSK;
+ fep->ev_napi_tx = FCC_NAPI_TX_EVENT_MSK;
fep->ev_rx = FCC_RX_EVENT;
fep->ev_tx = FCC_TX_EVENT;
fep->ev_err = FCC_ERR_EVENT_MSK;
@@ -446,6 +448,30 @@ static void napi_disable_rx(struct net_device *dev)
C16(fccp, fcc_fccm, FCC_NAPI_RX_EVENT_MSK);
}
+static void napi_clear_tx_event(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ fcc_t __iomem *fccp = fep->fcc.fccp;
+
+ W16(fccp, fcc_fcce, FCC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_enable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ fcc_t __iomem *fccp = fep->fcc.fccp;
+
+ S16(fccp, fcc_fccm, FCC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_disable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ fcc_t __iomem *fccp = fep->fcc.fccp;
+
+ C16(fccp, fcc_fccm, FCC_NAPI_TX_EVENT_MSK);
+}
+
static void rx_bd_done(struct net_device *dev)
{
/* nothing */
@@ -572,6 +598,9 @@ const struct fs_ops fs_fcc_ops = {
.napi_clear_rx_event = napi_clear_rx_event,
.napi_enable_rx = napi_enable_rx,
.napi_disable_rx = napi_disable_rx,
+ .napi_clear_tx_event = napi_clear_tx_event,
+ .napi_enable_tx = napi_enable_tx,
+ .napi_disable_tx = napi_disable_tx,
.rx_bd_done = rx_bd_done,
.tx_kickstart = tx_kickstart,
.get_int_events = get_int_events,
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
index 1eedfba..3d4e08b 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
@@ -110,6 +110,7 @@ static int do_pd_setup(struct fs_enet_private *fep)
}
#define FEC_NAPI_RX_EVENT_MSK (FEC_ENET_RXF | FEC_ENET_RXB)
+#define FEC_NAPI_TX_EVENT_MSK (FEC_ENET_TXF | FEC_ENET_TXB)
#define FEC_RX_EVENT (FEC_ENET_RXF)
#define FEC_TX_EVENT (FEC_ENET_TXF)
#define FEC_ERR_EVENT_MSK (FEC_ENET_HBERR | FEC_ENET_BABR | \
@@ -126,6 +127,7 @@ static int setup_data(struct net_device *dev)
fep->fec.htlo = 0;
fep->ev_napi_rx = FEC_NAPI_RX_EVENT_MSK;
+ fep->ev_napi_tx = FEC_NAPI_TX_EVENT_MSK;
fep->ev_rx = FEC_RX_EVENT;
fep->ev_tx = FEC_TX_EVENT;
fep->ev_err = FEC_ERR_EVENT_MSK;
@@ -415,6 +417,30 @@ static void napi_disable_rx(struct net_device *dev)
FC(fecp, imask, FEC_NAPI_RX_EVENT_MSK);
}
+static void napi_clear_tx_event(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ struct fec __iomem *fecp = fep->fec.fecp;
+
+ FW(fecp, ievent, FEC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_enable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ struct fec __iomem *fecp = fep->fec.fecp;
+
+ FS(fecp, imask, FEC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_disable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ struct fec __iomem *fecp = fep->fec.fecp;
+
+ FC(fecp, imask, FEC_NAPI_TX_EVENT_MSK);
+}
+
static void rx_bd_done(struct net_device *dev)
{
struct fs_enet_private *fep = netdev_priv(dev);
@@ -487,6 +513,9 @@ const struct fs_ops fs_fec_ops = {
.napi_clear_rx_event = napi_clear_rx_event,
.napi_enable_rx = napi_enable_rx,
.napi_disable_rx = napi_disable_rx,
+ .napi_clear_tx_event = napi_clear_tx_event,
+ .napi_enable_tx = napi_enable_tx,
+ .napi_disable_tx = napi_disable_tx,
.rx_bd_done = rx_bd_done,
.tx_kickstart = tx_kickstart,
.get_int_events = get_int_events,
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-scc.c b/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
index 90b3b19..41aa0b4 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
@@ -116,6 +116,7 @@ static int do_pd_setup(struct fs_enet_private *fep)
}
#define SCC_NAPI_RX_EVENT_MSK (SCCE_ENET_RXF | SCCE_ENET_RXB)
+#define SCC_NAPI_TX_EVENT_MSK (SCCE_ENET_TXF | SCCE_ENET_TXB)
#define SCC_RX_EVENT (SCCE_ENET_RXF)
#define SCC_TX_EVENT (SCCE_ENET_TXB)
#define SCC_ERR_EVENT_MSK (SCCE_ENET_TXE | SCCE_ENET_BSY)
@@ -130,6 +131,7 @@ static int setup_data(struct net_device *dev)
fep->scc.htlo = 0;
fep->ev_napi_rx = SCC_NAPI_RX_EVENT_MSK;
+ fep->ev_napi_tx = SCC_NAPI_TX_EVENT_MSK;
fep->ev_rx = SCC_RX_EVENT;
fep->ev_tx = SCC_TX_EVENT | SCCE_ENET_TXE;
fep->ev_err = SCC_ERR_EVENT_MSK;
@@ -398,6 +400,30 @@ static void napi_disable_rx(struct net_device *dev)
C16(sccp, scc_sccm, SCC_NAPI_RX_EVENT_MSK);
}
+static void napi_clear_tx_event(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ scc_t __iomem *sccp = fep->scc.sccp;
+
+ W16(sccp, scc_scce, SCC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_enable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ scc_t __iomem *sccp = fep->scc.sccp;
+
+ S16(sccp, scc_sccm, SCC_NAPI_TX_EVENT_MSK);
+}
+
+static void napi_disable_tx(struct net_device *dev)
+{
+ struct fs_enet_private *fep = netdev_priv(dev);
+ scc_t __iomem *sccp = fep->scc.sccp;
+
+ C16(sccp, scc_sccm, SCC_NAPI_TX_EVENT_MSK);
+}
+
static void rx_bd_done(struct net_device *dev)
{
/* nothing */
@@ -471,6 +497,9 @@ const struct fs_ops fs_scc_ops = {
.napi_clear_rx_event = napi_clear_rx_event,
.napi_enable_rx = napi_enable_rx,
.napi_disable_rx = napi_disable_rx,
+ .napi_clear_tx_event = napi_clear_tx_event,
+ .napi_enable_tx = napi_enable_tx,
+ .napi_disable_tx = napi_disable_tx,
.rx_bd_done = rx_bd_done,
.tx_kickstart = tx_kickstart,
.get_int_events = get_int_events,
diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c
index ed7916f..76a6e0c 100644
--- a/drivers/net/ethernet/hp/hp100.c
+++ b/drivers/net/ethernet/hp/hp100.c
@@ -1627,7 +1627,7 @@ static void hp100_clean_txring(struct net_device *dev)
#endif
/* Conversion to new PCI API : NOP */
pci_unmap_single(lp->pci_dev, (dma_addr_t) lp->txrhead->pdl[1], lp->txrhead->pdl[2], PCI_DMA_TODEVICE);
- dev_kfree_skb_any(lp->txrhead->skb);
+ dev_consume_skb_any(lp->txrhead->skb);
lp->txrhead->skb = NULL;
lp->txrhead = lp->txrhead->next;
lp->txrcommit--;
@@ -1745,7 +1745,7 @@ static netdev_tx_t hp100_start_xmit(struct sk_buff *skb,
hp100_ints_on();
spin_unlock_irqrestore(&lp->lock, flags);
- dev_kfree_skb_any(skb);
+ dev_consume_skb_any(skb);
#ifdef HP100_DEBUG_TX
printk("hp100: %s: start_xmit: end\n", dev->name);
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index bb9f0ba..6a6d5ee 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -300,4 +300,23 @@ config I40EVF
will be called i40evf. MSI-X interrupt support is required
for this driver to work correctly.
+config FM10K
+ tristate "Intel(R) FM10000 Ethernet Switch Host Interface Support"
+ default n
+ depends on PCI_MSI
+ ---help---
+ This driver supports Intel(R) FM10000 Ethernet Switch Host
+ Interface. For more information on how to identify your adapter,
+ go to the Adapter & Driver ID Guide at:
+
+ <http://support.intel.com/support/network/sb/CS-008441.htm>
+
+ For general information and support, go to the Intel support
+ website at:
+
+ <http://support.intel.com>
+
+ To compile this driver as a module, choose M here. The module
+ will be called fm10k. MSI-X interrupt support is required
+ for this driver to work correctly.
+
endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index cdbbca8..5ea764d 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_IXGBEVF) += ixgbevf/
obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IXGB) += ixgb/
obj-$(CONFIG_I40EVF) += i40evf/
+obj-$(CONFIG_FM10K) += fm10k/
diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h
index 10a0f22..6970710 100644
--- a/drivers/net/ethernet/intel/e1000/e1000.h
+++ b/drivers/net/ethernet/intel/e1000/e1000.h
@@ -148,16 +148,23 @@ struct e1000_adapter;
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer
*/
-struct e1000_buffer {
+struct e1000_tx_buffer {
struct sk_buff *skb;
dma_addr_t dma;
- struct page *page;
unsigned long time_stamp;
u16 length;
u16 next_to_watch;
- unsigned int segs;
+ bool mapped_as_page;
+ unsigned short segs;
unsigned int bytecount;
- u16 mapped_as_page;
+};
+
+struct e1000_rx_buffer {
+ union {
+ struct page *page; /* jumbo: alloc_page */
+ u8 *data; /* else, netdev_alloc_frag */
+ } rxbuf;
+ dma_addr_t dma;
};
struct e1000_tx_ring {
@@ -174,7 +181,7 @@ struct e1000_tx_ring {
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
u16 tdh;
u16 tdt;
@@ -195,7 +202,7 @@ struct e1000_rx_ring {
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
- struct e1000_buffer *buffer_info;
+ struct e1000_rx_buffer *buffer_info;
struct sk_buff *rx_skb_top;
/* cpu for rx queue */
diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
index cca5bca..b691eb4 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
@@ -1,35 +1,30 @@
/*******************************************************************************
-
- Intel PRO/1000 Linux driver
- Copyright(c) 1999 - 2006 Intel Corporation.
-
- This program is free software; you can redistribute it and/or modify it
- under the terms and conditions of the GNU General Public License,
- version 2, as published by the Free Software Foundation.
-
- This program is distributed in the hope it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
-
- You should have received a copy of the GNU General Public License along with
- this program; if not, write to the Free Software Foundation, Inc.,
- 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
-
- The full GNU General Public License is included in this distribution in
- the file called "COPYING".
-
- Contact Information:
- Linux NICS <linux.nics@intel.com>
- e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
- Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
-
-*******************************************************************************/
+ * Intel PRO/1000 Linux driver
+ * Copyright(c) 1999 - 2006 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * Linux NICS <linux.nics@intel.com>
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ *
+ ******************************************************************************/
/* ethtool support for e1000 */
#include "e1000.h"
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
enum {NETDEV_STATS, E1000_STATS};
@@ -42,7 +37,7 @@ struct e1000_stats {
#define E1000_STAT(m) E1000_STATS, \
sizeof(((struct e1000_adapter *)0)->m), \
- offsetof(struct e1000_adapter, m)
+ offsetof(struct e1000_adapter, m)
#define E1000_NETDEV_STAT(m) NETDEV_STATS, \
sizeof(((struct net_device *)0)->m), \
offsetof(struct net_device, m)
@@ -104,6 +99,7 @@ static const char e1000_gstrings_test[][ETH_GSTRING_LEN] = {
"Interrupt test (offline)", "Loopback test (offline)",
"Link test (on/offline)"
};
+
#define E1000_TEST_LEN ARRAY_SIZE(e1000_gstrings_test)
static int e1000_get_settings(struct net_device *netdev,
@@ -113,7 +109,6 @@ static int e1000_get_settings(struct net_device *netdev,
struct e1000_hw *hw = &adapter->hw;
if (hw->media_type == e1000_media_type_copper) {
-
ecmd->supported = (SUPPORTED_10baseT_Half |
SUPPORTED_10baseT_Full |
SUPPORTED_100baseT_Half |
@@ -155,9 +150,8 @@ static int e1000_get_settings(struct net_device *netdev,
}
if (er32(STATUS) & E1000_STATUS_LU) {
-
e1000_get_speed_and_duplex(hw, &adapter->link_speed,
- &adapter->link_duplex);
+ &adapter->link_duplex);
ethtool_cmd_speed_set(ecmd, adapter->link_speed);
/* unfortunately FULL_DUPLEX != DUPLEX_FULL
@@ -247,9 +241,9 @@ static int e1000_set_settings(struct net_device *netdev,
if (netif_running(adapter->netdev)) {
e1000_down(adapter);
e1000_up(adapter);
- } else
+ } else {
e1000_reset(adapter);
-
+ }
clear_bit(__E1000_RESETTING, &adapter->flags);
return 0;
}
@@ -279,11 +273,11 @@ static void e1000_get_pauseparam(struct net_device *netdev,
pause->autoneg =
(adapter->fc_autoneg ? AUTONEG_ENABLE : AUTONEG_DISABLE);
- if (hw->fc == E1000_FC_RX_PAUSE)
+ if (hw->fc == E1000_FC_RX_PAUSE) {
pause->rx_pause = 1;
- else if (hw->fc == E1000_FC_TX_PAUSE)
+ } else if (hw->fc == E1000_FC_TX_PAUSE) {
pause->tx_pause = 1;
- else if (hw->fc == E1000_FC_FULL) {
+ } else if (hw->fc == E1000_FC_FULL) {
pause->rx_pause = 1;
pause->tx_pause = 1;
}
@@ -316,8 +310,9 @@ static int e1000_set_pauseparam(struct net_device *netdev,
if (netif_running(adapter->netdev)) {
e1000_down(adapter);
e1000_up(adapter);
- } else
+ } else {
e1000_reset(adapter);
+ }
} else
retval = ((hw->media_type == e1000_media_type_fiber) ?
e1000_setup_link(hw) : e1000_force_mac_fc(hw));
@@ -329,12 +324,14 @@ static int e1000_set_pauseparam(struct net_device *netdev,
static u32 e1000_get_msglevel(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
+
return adapter->msg_enable;
}
static void e1000_set_msglevel(struct net_device *netdev, u32 data)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
+
adapter->msg_enable = data;
}
@@ -526,7 +523,7 @@ static int e1000_set_eeprom(struct net_device *netdev,
* only the first byte of the word is being modified
*/
ret_val = e1000_read_eeprom(hw, last_word, 1,
- &eeprom_buff[last_word - first_word]);
+ &eeprom_buff[last_word - first_word]);
}
/* Device's eeprom is always little-endian, word addressable */
@@ -618,13 +615,12 @@ static int e1000_set_ringparam(struct net_device *netdev,
adapter->tx_ring = txdr;
adapter->rx_ring = rxdr;
- rxdr->count = max(ring->rx_pending,(u32)E1000_MIN_RXD);
- rxdr->count = min(rxdr->count,(u32)(mac_type < e1000_82544 ?
+ rxdr->count = max(ring->rx_pending, (u32)E1000_MIN_RXD);
+ rxdr->count = min(rxdr->count, (u32)(mac_type < e1000_82544 ?
E1000_MAX_RXD : E1000_MAX_82544_RXD));
rxdr->count = ALIGN(rxdr->count, REQ_RX_DESCRIPTOR_MULTIPLE);
-
- txdr->count = max(ring->tx_pending,(u32)E1000_MIN_TXD);
- txdr->count = min(txdr->count,(u32)(mac_type < e1000_82544 ?
+ txdr->count = max(ring->tx_pending, (u32)E1000_MIN_TXD);
+ txdr->count = min(txdr->count, (u32)(mac_type < e1000_82544 ?
E1000_MAX_TXD : E1000_MAX_82544_TXD));
txdr->count = ALIGN(txdr->count, REQ_TX_DESCRIPTOR_MULTIPLE);
@@ -680,8 +676,9 @@ static bool reg_pattern_test(struct e1000_adapter *adapter, u64 *data, int reg,
u32 mask, u32 write)
{
struct e1000_hw *hw = &adapter->hw;
- static const u32 test[] =
- {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF};
+ static const u32 test[] = {
+ 0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
+ };
u8 __iomem *address = hw->hw_addr + reg;
u32 read;
int i;
@@ -793,8 +790,8 @@ static int e1000_reg_test(struct e1000_adapter *adapter, u64 *data)
REG_PATTERN_TEST(TIDV, 0x0000FFFF, 0x0000FFFF);
value = E1000_RAR_ENTRIES;
for (i = 0; i < value; i++) {
- REG_PATTERN_TEST(RA + (((i << 1) + 1) << 2), 0x8003FFFF,
- 0xFFFFFFFF);
+ REG_PATTERN_TEST(RA + (((i << 1) + 1) << 2),
+ 0x8003FFFF, 0xFFFFFFFF);
}
} else {
REG_SET_AND_CHECK(RCTL, 0xFFFFFFFF, 0x01FFFFFF);
@@ -877,7 +874,6 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
/* Test each interrupt */
for (; i < 10; i++) {
-
/* Interrupt to test */
mask = 1 << i;
@@ -972,10 +968,9 @@ static void e1000_free_desc_rings(struct e1000_adapter *adapter)
if (rxdr->buffer_info[i].dma)
dma_unmap_single(&pdev->dev,
rxdr->buffer_info[i].dma,
- rxdr->buffer_info[i].length,
+ E1000_RXBUFFER_2048,
DMA_FROM_DEVICE);
- if (rxdr->buffer_info[i].skb)
- dev_kfree_skb(rxdr->buffer_info[i].skb);
+ kfree(rxdr->buffer_info[i].rxbuf.data);
}
}
@@ -1010,7 +1005,7 @@ static int e1000_setup_desc_rings(struct e1000_adapter *adapter)
if (!txdr->count)
txdr->count = E1000_DEFAULT_TXD;
- txdr->buffer_info = kcalloc(txdr->count, sizeof(struct e1000_buffer),
+ txdr->buffer_info = kcalloc(txdr->count, sizeof(struct e1000_tx_buffer),
GFP_KERNEL);
if (!txdr->buffer_info) {
ret_val = 1;
@@ -1069,7 +1064,7 @@ static int e1000_setup_desc_rings(struct e1000_adapter *adapter)
if (!rxdr->count)
rxdr->count = E1000_DEFAULT_RXD;
- rxdr->buffer_info = kcalloc(rxdr->count, sizeof(struct e1000_buffer),
+ rxdr->buffer_info = kcalloc(rxdr->count, sizeof(struct e1000_rx_buffer),
GFP_KERNEL);
if (!rxdr->buffer_info) {
ret_val = 5;
@@ -1099,25 +1094,25 @@ static int e1000_setup_desc_rings(struct e1000_adapter *adapter)
for (i = 0; i < rxdr->count; i++) {
struct e1000_rx_desc *rx_desc = E1000_RX_DESC(*rxdr, i);
- struct sk_buff *skb;
+ u8 *buf;
- skb = alloc_skb(E1000_RXBUFFER_2048 + NET_IP_ALIGN, GFP_KERNEL);
- if (!skb) {
+ buf = kzalloc(E1000_RXBUFFER_2048 + NET_SKB_PAD + NET_IP_ALIGN,
+ GFP_KERNEL);
+ if (!buf) {
ret_val = 7;
goto err_nomem;
}
- skb_reserve(skb, NET_IP_ALIGN);
- rxdr->buffer_info[i].skb = skb;
- rxdr->buffer_info[i].length = E1000_RXBUFFER_2048;
+ rxdr->buffer_info[i].rxbuf.data = buf;
+
rxdr->buffer_info[i].dma =
- dma_map_single(&pdev->dev, skb->data,
+ dma_map_single(&pdev->dev,
+ buf + NET_SKB_PAD + NET_IP_ALIGN,
E1000_RXBUFFER_2048, DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, rxdr->buffer_info[i].dma)) {
ret_val = 8;
goto err_nomem;
}
rx_desc->buffer_addr = cpu_to_le64(rxdr->buffer_info[i].dma);
- memset(skb->data, 0x00, skb->len);
}
return 0;
@@ -1149,8 +1144,7 @@ static void e1000_phy_reset_clk_and_crs(struct e1000_adapter *adapter)
*/
e1000_read_phy_reg(hw, M88E1000_EXT_PHY_SPEC_CTRL, &phy_reg);
phy_reg |= M88E1000_EPSCR_TX_CLK_25;
- e1000_write_phy_reg(hw,
- M88E1000_EXT_PHY_SPEC_CTRL, phy_reg);
+ e1000_write_phy_reg(hw, M88E1000_EXT_PHY_SPEC_CTRL, phy_reg);
/* In addition, because of the s/w reset above, we need to enable
* CRS on TX. This must be set for both full and half duplex
@@ -1158,8 +1152,7 @@ static void e1000_phy_reset_clk_and_crs(struct e1000_adapter *adapter)
*/
e1000_read_phy_reg(hw, M88E1000_PHY_SPEC_CTRL, &phy_reg);
phy_reg |= M88E1000_PSCR_ASSERT_CRS_ON_TX;
- e1000_write_phy_reg(hw,
- M88E1000_PHY_SPEC_CTRL, phy_reg);
+ e1000_write_phy_reg(hw, M88E1000_PHY_SPEC_CTRL, phy_reg);
}
static int e1000_nonintegrated_phy_loopback(struct e1000_adapter *adapter)
@@ -1216,7 +1209,7 @@ static int e1000_nonintegrated_phy_loopback(struct e1000_adapter *adapter)
/* Check Phy Configuration */
e1000_read_phy_reg(hw, PHY_CTRL, &phy_reg);
if (phy_reg != 0x4100)
- return 9;
+ return 9;
e1000_read_phy_reg(hw, M88E1000_EXT_PHY_SPEC_CTRL, &phy_reg);
if (phy_reg != 0x0070)
@@ -1261,7 +1254,7 @@ static int e1000_integrated_phy_loopback(struct e1000_adapter *adapter)
E1000_CTRL_FD); /* Force Duplex to FULL */
if (hw->media_type == e1000_media_type_copper &&
- hw->phy_type == e1000_phy_m88)
+ hw->phy_type == e1000_phy_m88)
ctrl_reg |= E1000_CTRL_ILOS; /* Invert Loss of Signal */
else {
/* Set the ILOS bit on the fiber Nic is half
@@ -1299,7 +1292,7 @@ static int e1000_set_phy_loopback(struct e1000_adapter *adapter)
* attempt this 10 times.
*/
while (e1000_nonintegrated_phy_loopback(adapter) &&
- count++ < 10);
+ count++ < 10);
if (count < 11)
return 0;
}
@@ -1348,8 +1341,9 @@ static int e1000_setup_loopback_test(struct e1000_adapter *adapter)
ew32(RCTL, rctl);
return 0;
}
- } else if (hw->media_type == e1000_media_type_copper)
+ } else if (hw->media_type == e1000_media_type_copper) {
return e1000_set_phy_loopback(adapter);
+ }
return 7;
}
@@ -1391,13 +1385,13 @@ static void e1000_create_lbtest_frame(struct sk_buff *skb,
memset(&skb->data[frame_size / 2 + 12], 0xAF, 1);
}
-static int e1000_check_lbtest_frame(struct sk_buff *skb,
+static int e1000_check_lbtest_frame(const unsigned char *data,
unsigned int frame_size)
{
frame_size &= ~1;
- if (*(skb->data + 3) == 0xFF) {
- if ((*(skb->data + frame_size / 2 + 10) == 0xBE) &&
- (*(skb->data + frame_size / 2 + 12) == 0xAF)) {
+ if (*(data + 3) == 0xFF) {
+ if ((*(data + frame_size / 2 + 10) == 0xBE) &&
+ (*(data + frame_size / 2 + 12) == 0xAF)) {
return 0;
}
}
@@ -1410,7 +1404,7 @@ static int e1000_run_loopback_test(struct e1000_adapter *adapter)
struct e1000_tx_ring *txdr = &adapter->test_tx_ring;
struct e1000_rx_ring *rxdr = &adapter->test_rx_ring;
struct pci_dev *pdev = adapter->pdev;
- int i, j, k, l, lc, good_cnt, ret_val=0;
+ int i, j, k, l, lc, good_cnt, ret_val = 0;
unsigned long time;
ew32(RDT, rxdr->count - 1);
@@ -1429,12 +1423,13 @@ static int e1000_run_loopback_test(struct e1000_adapter *adapter)
for (j = 0; j <= lc; j++) { /* loop count loop */
for (i = 0; i < 64; i++) { /* send the packets */
e1000_create_lbtest_frame(txdr->buffer_info[i].skb,
- 1024);
+ 1024);
dma_sync_single_for_device(&pdev->dev,
txdr->buffer_info[k].dma,
txdr->buffer_info[k].length,
DMA_TO_DEVICE);
- if (unlikely(++k == txdr->count)) k = 0;
+ if (unlikely(++k == txdr->count))
+ k = 0;
}
ew32(TDT, k);
E1000_WRITE_FLUSH();
@@ -1444,15 +1439,17 @@ static int e1000_run_loopback_test(struct e1000_adapter *adapter)
do { /* receive the sent packets */
dma_sync_single_for_cpu(&pdev->dev,
rxdr->buffer_info[l].dma,
- rxdr->buffer_info[l].length,
+ E1000_RXBUFFER_2048,
DMA_FROM_DEVICE);
ret_val = e1000_check_lbtest_frame(
- rxdr->buffer_info[l].skb,
+ rxdr->buffer_info[l].rxbuf.data +
+ NET_SKB_PAD + NET_IP_ALIGN,
1024);
if (!ret_val)
good_cnt++;
- if (unlikely(++l == rxdr->count)) l = 0;
+ if (unlikely(++l == rxdr->count))
+ l = 0;
/* time + 20 msecs (200 msecs on 2.4) is more than
* enough time to complete the receives, if it's
* exceeded, break and error off
@@ -1494,6 +1491,7 @@ static int e1000_link_test(struct e1000_adapter *adapter, u64 *data)
*data = 0;
if (hw->media_type == e1000_media_type_internal_serdes) {
int i = 0;
+
hw->serdes_has_link = false;
/* On some blade server designs, link establishment
@@ -1512,9 +1510,8 @@ static int e1000_link_test(struct e1000_adapter *adapter, u64 *data)
if (hw->autoneg) /* if auto_neg is set wait for it */
msleep(4000);
- if (!(er32(STATUS) & E1000_STATUS_LU)) {
+ if (!(er32(STATUS) & E1000_STATUS_LU))
*data = 1;
- }
}
return *data;
}
@@ -1665,8 +1662,7 @@ static void e1000_get_wol(struct net_device *netdev,
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
- wol->supported = WAKE_UCAST | WAKE_MCAST |
- WAKE_BCAST | WAKE_MAGIC;
+ wol->supported = WAKE_UCAST | WAKE_MCAST | WAKE_BCAST | WAKE_MAGIC;
wol->wolopts = 0;
/* this function will set ->supported = 0 and return 1 if wol is not
@@ -1819,6 +1815,7 @@ static int e1000_set_coalesce(struct net_device *netdev,
static int e1000_nway_reset(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
+
if (netif_running(netdev))
e1000_reinit_locked(adapter);
return 0;
@@ -1830,22 +1827,29 @@ static void e1000_get_ethtool_stats(struct net_device *netdev,
struct e1000_adapter *adapter = netdev_priv(netdev);
int i;
char *p = NULL;
+ const struct e1000_stats *stat = e1000_gstrings_stats;
e1000_update_stats(adapter);
for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++) {
- switch (e1000_gstrings_stats[i].type) {
+ switch (stat->type) {
case NETDEV_STATS:
- p = (char *) netdev +
- e1000_gstrings_stats[i].stat_offset;
+ p = (char *)netdev + stat->stat_offset;
break;
case E1000_STATS:
- p = (char *) adapter +
- e1000_gstrings_stats[i].stat_offset;
+ p = (char *)adapter + stat->stat_offset;
+ break;
+ default:
+ WARN_ONCE(1, "Invalid E1000 stat type: %u index %d\n",
+ stat->type, i);
break;
}
- data[i] = (e1000_gstrings_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ if (stat->sizeof_stat == sizeof(u64))
+ data[i] = *(u64 *)p;
+ else
+ data[i] = *(u32 *)p;
+
+ stat++;
}
/* BUG_ON(i != E1000_STATS_LEN); */
}
@@ -1858,8 +1862,7 @@ static void e1000_get_strings(struct net_device *netdev, u32 stringset,
switch (stringset) {
case ETH_SS_TEST:
- memcpy(data, *e1000_gstrings_test,
- sizeof(e1000_gstrings_test));
+ memcpy(data, e1000_gstrings_test, sizeof(e1000_gstrings_test));
break;
case ETH_SS_STATS:
for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++) {
diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c
index 1acf503..45c8c864 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_hw.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c
@@ -4837,84 +4837,6 @@ void e1000_update_adaptive(struct e1000_hw *hw)
}
/**
- * e1000_tbi_adjust_stats
- * @hw: Struct containing variables accessed by shared code
- * @frame_len: The length of the frame in question
- * @mac_addr: The Ethernet destination address of the frame in question
- *
- * Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
- */
-void e1000_tbi_adjust_stats(struct e1000_hw *hw, struct e1000_hw_stats *stats,
- u32 frame_len, u8 *mac_addr)
-{
- u64 carry_bit;
-
- /* First adjust the frame length. */
- frame_len--;
- /* We need to adjust the statistics counters, since the hardware
- * counters overcount this packet as a CRC error and undercount
- * the packet as a good packet
- */
- /* This packet should not be counted as a CRC error. */
- stats->crcerrs--;
- /* This packet does count as a Good Packet Received. */
- stats->gprc++;
-
- /* Adjust the Good Octets received counters */
- carry_bit = 0x80000000 & stats->gorcl;
- stats->gorcl += frame_len;
- /* If the high bit of Gorcl (the low 32 bits of the Good Octets
- * Received Count) was one before the addition,
- * AND it is zero after, then we lost the carry out,
- * need to add one to Gorch (Good Octets Received Count High).
- * This could be simplified if all environments supported
- * 64-bit integers.
- */
- if (carry_bit && ((stats->gorcl & 0x80000000) == 0))
- stats->gorch++;
- /* Is this a broadcast or multicast? Check broadcast first,
- * since the test for a multicast frame will test positive on
- * a broadcast frame.
- */
- if (is_broadcast_ether_addr(mac_addr))
- /* Broadcast packet */
- stats->bprc++;
- else if (is_multicast_ether_addr(mac_addr))
- /* Multicast packet */
- stats->mprc++;
-
- if (frame_len == hw->max_frame_size) {
- /* In this case, the hardware has overcounted the number of
- * oversize frames.
- */
- if (stats->roc > 0)
- stats->roc--;
- }
-
- /* Adjust the bin counters when the extra byte put the frame in the
- * wrong bin. Remember that the frame_len was adjusted above.
- */
- if (frame_len == 64) {
- stats->prc64++;
- stats->prc127--;
- } else if (frame_len == 127) {
- stats->prc127++;
- stats->prc255--;
- } else if (frame_len == 255) {
- stats->prc255++;
- stats->prc511--;
- } else if (frame_len == 511) {
- stats->prc511++;
- stats->prc1023--;
- } else if (frame_len == 1023) {
- stats->prc1023++;
- stats->prc1522--;
- } else if (frame_len == 1522) {
- stats->prc1522++;
- }
-}
-
-/**
* e1000_get_bus_info
* @hw: Struct containing variables accessed by shared code
*
diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.h b/drivers/net/ethernet/intel/e1000/e1000_hw.h
index 11578c8..5cf7268c 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_hw.h
+++ b/drivers/net/ethernet/intel/e1000/e1000_hw.h
@@ -393,8 +393,6 @@ s32 e1000_blink_led_start(struct e1000_hw *hw);
/* Everything else */
void e1000_reset_adaptive(struct e1000_hw *hw);
void e1000_update_adaptive(struct e1000_hw *hw);
-void e1000_tbi_adjust_stats(struct e1000_hw *hw, struct e1000_hw_stats *stats,
- u32 frame_len, u8 * mac_addr);
void e1000_get_bus_info(struct e1000_hw *hw);
void e1000_pci_set_mwi(struct e1000_hw *hw);
void e1000_pci_clear_mwi(struct e1000_hw *hw);
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index ad3d5d1..5f6aded 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -1497,7 +1497,7 @@ static int e1000_setup_tx_resources(struct e1000_adapter *adapter,
struct pci_dev *pdev = adapter->pdev;
int size;
- size = sizeof(struct e1000_buffer) * txdr->count;
+ size = sizeof(struct e1000_tx_buffer) * txdr->count;
txdr->buffer_info = vzalloc(size);
if (!txdr->buffer_info)
return -ENOMEM;
@@ -1687,7 +1687,7 @@ static int e1000_setup_rx_resources(struct e1000_adapter *adapter,
struct pci_dev *pdev = adapter->pdev;
int size, desc_len;
- size = sizeof(struct e1000_buffer) * rxdr->count;
+ size = sizeof(struct e1000_rx_buffer) * rxdr->count;
rxdr->buffer_info = vzalloc(size);
if (!rxdr->buffer_info)
return -ENOMEM;
@@ -1947,8 +1947,9 @@ void e1000_free_all_tx_resources(struct e1000_adapter *adapter)
e1000_free_tx_resources(adapter, &adapter->tx_ring[i]);
}
-static void e1000_unmap_and_free_tx_resource(struct e1000_adapter *adapter,
- struct e1000_buffer *buffer_info)
+static void
+e1000_unmap_and_free_tx_resource(struct e1000_adapter *adapter,
+ struct e1000_tx_buffer *buffer_info)
{
if (buffer_info->dma) {
if (buffer_info->mapped_as_page)
@@ -1977,7 +1978,7 @@ static void e1000_clean_tx_ring(struct e1000_adapter *adapter,
struct e1000_tx_ring *tx_ring)
{
struct e1000_hw *hw = &adapter->hw;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned long size;
unsigned int i;
@@ -1989,7 +1990,7 @@ static void e1000_clean_tx_ring(struct e1000_adapter *adapter,
}
netdev_reset_queue(adapter->netdev);
- size = sizeof(struct e1000_buffer) * tx_ring->count;
+ size = sizeof(struct e1000_tx_buffer) * tx_ring->count;
memset(tx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
@@ -2053,6 +2054,28 @@ void e1000_free_all_rx_resources(struct e1000_adapter *adapter)
e1000_free_rx_resources(adapter, &adapter->rx_ring[i]);
}
+#define E1000_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN)
+static unsigned int e1000_frag_len(const struct e1000_adapter *a)
+{
+ return SKB_DATA_ALIGN(a->rx_buffer_len + E1000_HEADROOM) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+}
+
+static void *e1000_alloc_frag(const struct e1000_adapter *a)
+{
+ unsigned int len = e1000_frag_len(a);
+ u8 *data = netdev_alloc_frag(len);
+
+ if (likely(data))
+ data += E1000_HEADROOM;
+ return data;
+}
+
+static void e1000_free_frag(const void *data)
+{
+ put_page(virt_to_head_page(data));
+}
+
/**
* e1000_clean_rx_ring - Free Rx Buffers per Queue
* @adapter: board private structure
@@ -2062,44 +2085,42 @@ static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring)
{
struct e1000_hw *hw = &adapter->hw;
- struct e1000_buffer *buffer_info;
+ struct e1000_rx_buffer *buffer_info;
struct pci_dev *pdev = adapter->pdev;
unsigned long size;
unsigned int i;
- /* Free all the Rx ring sk_buffs */
+ /* Free all the Rx netfrags */
for (i = 0; i < rx_ring->count; i++) {
buffer_info = &rx_ring->buffer_info[i];
- if (buffer_info->dma &&
- adapter->clean_rx == e1000_clean_rx_irq) {
- dma_unmap_single(&pdev->dev, buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
- } else if (buffer_info->dma &&
- adapter->clean_rx == e1000_clean_jumbo_rx_irq) {
- dma_unmap_page(&pdev->dev, buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
+ if (adapter->clean_rx == e1000_clean_rx_irq) {
+ if (buffer_info->dma)
+ dma_unmap_single(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ if (buffer_info->rxbuf.data) {
+ e1000_free_frag(buffer_info->rxbuf.data);
+ buffer_info->rxbuf.data = NULL;
+ }
+ } else if (adapter->clean_rx == e1000_clean_jumbo_rx_irq) {
+ if (buffer_info->dma)
+ dma_unmap_page(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ if (buffer_info->rxbuf.page) {
+ put_page(buffer_info->rxbuf.page);
+ buffer_info->rxbuf.page = NULL;
+ }
}
buffer_info->dma = 0;
- if (buffer_info->page) {
- put_page(buffer_info->page);
- buffer_info->page = NULL;
- }
- if (buffer_info->skb) {
- dev_kfree_skb(buffer_info->skb);
- buffer_info->skb = NULL;
- }
}
/* there also may be some cached data from a chained receive */
- if (rx_ring->rx_skb_top) {
- dev_kfree_skb(rx_ring->rx_skb_top);
- rx_ring->rx_skb_top = NULL;
- }
+ napi_free_frags(&adapter->napi);
+ rx_ring->rx_skb_top = NULL;
- size = sizeof(struct e1000_buffer) * rx_ring->count;
+ size = sizeof(struct e1000_rx_buffer) * rx_ring->count;
memset(rx_ring->buffer_info, 0, size);
/* Zero out the descriptor ring */
@@ -2678,7 +2699,7 @@ static int e1000_tso(struct e1000_adapter *adapter,
__be16 protocol)
{
struct e1000_context_desc *context_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i;
u32 cmd_length = 0;
u16 ipcse = 0, tucse, mss;
@@ -2750,7 +2771,7 @@ static bool e1000_tx_csum(struct e1000_adapter *adapter,
__be16 protocol)
{
struct e1000_context_desc *context_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i;
u8 css;
u32 cmd_len = E1000_TXD_CMD_DEXT;
@@ -2809,7 +2830,7 @@ static int e1000_tx_map(struct e1000_adapter *adapter,
{
struct e1000_hw *hw = &adapter->hw;
struct pci_dev *pdev = adapter->pdev;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int len = skb_headlen(skb);
unsigned int offset = 0, size, count = 0, i;
unsigned int f, bytecount, segs;
@@ -2955,7 +2976,7 @@ static void e1000_tx_queue(struct e1000_adapter *adapter,
{
struct e1000_hw *hw = &adapter->hw;
struct e1000_tx_desc *tx_desc = NULL;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
u32 txd_upper = 0, txd_lower = E1000_TXD_CMD_IFCS;
unsigned int i;
@@ -3373,7 +3394,7 @@ static void e1000_dump(struct e1000_adapter *adapter)
for (i = 0; tx_ring->desc && (i < tx_ring->count); i++) {
struct e1000_tx_desc *tx_desc = E1000_TX_DESC(*tx_ring, i);
- struct e1000_buffer *buffer_info = &tx_ring->buffer_info[i];
+ struct e1000_tx_buffer *buffer_info = &tx_ring->buffer_info[i];
struct my_u { __le64 a; __le64 b; };
struct my_u *u = (struct my_u *)tx_desc;
const char *type;
@@ -3415,7 +3436,7 @@ rx_ring_summary:
for (i = 0; rx_ring->desc && (i < rx_ring->count); i++) {
struct e1000_rx_desc *rx_desc = E1000_RX_DESC(*rx_ring, i);
- struct e1000_buffer *buffer_info = &rx_ring->buffer_info[i];
+ struct e1000_rx_buffer *buffer_info = &rx_ring->buffer_info[i];
struct my_u { __le64 a; __le64 b; };
struct my_u *u = (struct my_u *)rx_desc;
const char *type;
@@ -3429,7 +3450,7 @@ rx_ring_summary:
pr_info("R[0x%03X] %016llX %016llX %016llX %p %s\n",
i, le64_to_cpu(u->a), le64_to_cpu(u->b),
- (u64)buffer_info->dma, buffer_info->skb, type);
+ (u64)buffer_info->dma, buffer_info->rxbuf.data, type);
} /* for */
/* dump the descriptor caches */
@@ -3811,7 +3832,7 @@ static bool e1000_clean_tx_irq(struct e1000_adapter *adapter,
struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct e1000_tx_desc *tx_desc, *eop_desc;
- struct e1000_buffer *buffer_info;
+ struct e1000_tx_buffer *buffer_info;
unsigned int i, eop;
unsigned int count = 0;
unsigned int total_tx_bytes=0, total_tx_packets=0;
@@ -3949,12 +3970,12 @@ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
}
/**
- * e1000_consume_page - helper function
+ * e1000_consume_page - helper function for jumbo Rx path
**/
-static void e1000_consume_page(struct e1000_buffer *bi, struct sk_buff *skb,
+static void e1000_consume_page(struct e1000_rx_buffer *bi, struct sk_buff *skb,
u16 length)
{
- bi->page = NULL;
+ bi->rxbuf.page = NULL;
skb->len += length;
skb->data_len += length;
skb->truesize += PAGE_SIZE;
@@ -3981,6 +4002,113 @@ static void e1000_receive_skb(struct e1000_adapter *adapter, u8 status,
}
/**
+ * e1000_tbi_adjust_stats
+ * @hw: Struct containing variables accessed by shared code
+ * @frame_len: The length of the frame in question
+ * @mac_addr: The Ethernet destination address of the frame in question
+ *
+ * Adjusts the statistic counters when a frame is accepted by TBI_ACCEPT
+ */
+static void e1000_tbi_adjust_stats(struct e1000_hw *hw,
+ struct e1000_hw_stats *stats,
+ u32 frame_len, const u8 *mac_addr)
+{
+ u64 carry_bit;
+
+ /* First adjust the frame length. */
+ frame_len--;
+ /* We need to adjust the statistics counters, since the hardware
+ * counters overcount this packet as a CRC error and undercount
+ * the packet as a good packet
+ */
+ /* This packet should not be counted as a CRC error. */
+ stats->crcerrs--;
+ /* This packet does count as a Good Packet Received. */
+ stats->gprc++;
+
+ /* Adjust the Good Octets received counters */
+ carry_bit = 0x80000000 & stats->gorcl;
+ stats->gorcl += frame_len;
+ /* If the high bit of Gorcl (the low 32 bits of the Good Octets
+ * Received Count) was one before the addition,
+ * AND it is zero after, then we lost the carry out,
+ * need to add one to Gorch (Good Octets Received Count High).
+ * This could be simplified if all environments supported
+ * 64-bit integers.
+ */
+ if (carry_bit && ((stats->gorcl & 0x80000000) == 0))
+ stats->gorch++;
+ /* Is this a broadcast or multicast? Check broadcast first,
+ * since the test for a multicast frame will test positive on
+ * a broadcast frame.
+ */
+ if (is_broadcast_ether_addr(mac_addr))
+ stats->bprc++;
+ else if (is_multicast_ether_addr(mac_addr))
+ stats->mprc++;
+
+ if (frame_len == hw->max_frame_size) {
+ /* In this case, the hardware has overcounted the number of
+ * oversize frames.
+ */
+ if (stats->roc > 0)
+ stats->roc--;
+ }
+
+ /* Adjust the bin counters when the extra byte put the frame in the
+ * wrong bin. Remember that the frame_len was adjusted above.
+ */
+ if (frame_len == 64) {
+ stats->prc64++;
+ stats->prc127--;
+ } else if (frame_len == 127) {
+ stats->prc127++;
+ stats->prc255--;
+ } else if (frame_len == 255) {
+ stats->prc255++;
+ stats->prc511--;
+ } else if (frame_len == 511) {
+ stats->prc511++;
+ stats->prc1023--;
+ } else if (frame_len == 1023) {
+ stats->prc1023++;
+ stats->prc1522--;
+ } else if (frame_len == 1522) {
+ stats->prc1522++;
+ }
+}
+
+static bool e1000_tbi_should_accept(struct e1000_adapter *adapter,
+ u8 status, u8 errors,
+ u32 length, const u8 *data)
+{
+ struct e1000_hw *hw = &adapter->hw;
+ u8 last_byte = *(data + length - 1);
+
+ if (TBI_ACCEPT(hw, status, errors, length, last_byte)) {
+ unsigned long irq_flags;
+
+ spin_lock_irqsave(&adapter->stats_lock, irq_flags);
+ e1000_tbi_adjust_stats(hw, &adapter->stats, length, data);
+ spin_unlock_irqrestore(&adapter->stats_lock, irq_flags);
+
+ return true;
+ }
+
+ return false;
+}
+
+static struct sk_buff *e1000_alloc_rx_skb(struct e1000_adapter *adapter,
+ unsigned int bufsz)
+{
+ struct sk_buff *skb = netdev_alloc_skb_ip_align(adapter->netdev, bufsz);
+
+ if (unlikely(!skb))
+ adapter->alloc_rx_buff_failed++;
+ return skb;
+}
+
+/**
* e1000_clean_jumbo_rx_irq - Send received data up the network stack; legacy
* @adapter: board private structure
* @rx_ring: ring to clean
@@ -3994,12 +4122,10 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
- struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc, *next_rxd;
- struct e1000_buffer *buffer_info, *next_buffer;
- unsigned long irq_flags;
+ struct e1000_rx_buffer *buffer_info, *next_buffer;
u32 length;
unsigned int i;
int cleaned_count = 0;
@@ -4020,8 +4146,6 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
rmb(); /* read descriptor and rx_buffer_info after status DD */
status = rx_desc->status;
- skb = buffer_info->skb;
- buffer_info->skb = NULL;
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC(*rx_ring, i);
@@ -4032,7 +4156,7 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
cleaned = true;
cleaned_count++;
dma_unmap_page(&pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_FROM_DEVICE);
+ adapter->rx_buffer_len, DMA_FROM_DEVICE);
buffer_info->dma = 0;
length = le16_to_cpu(rx_desc->length);
@@ -4040,25 +4164,15 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
/* errors is only valid for DD + EOP descriptors */
if (unlikely((status & E1000_RXD_STAT_EOP) &&
(rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK))) {
- u8 *mapped;
- u8 last_byte;
-
- mapped = page_address(buffer_info->page);
- last_byte = *(mapped + length - 1);
- if (TBI_ACCEPT(hw, status, rx_desc->errors, length,
- last_byte)) {
- spin_lock_irqsave(&adapter->stats_lock,
- irq_flags);
- e1000_tbi_adjust_stats(hw, &adapter->stats,
- length, mapped);
- spin_unlock_irqrestore(&adapter->stats_lock,
- irq_flags);
+ u8 *mapped = page_address(buffer_info->rxbuf.page);
+
+ if (e1000_tbi_should_accept(adapter, status,
+ rx_desc->errors,
+ length, mapped)) {
length--;
+ } else if (netdev->features & NETIF_F_RXALL) {
+ goto process_skb;
} else {
- if (netdev->features & NETIF_F_RXALL)
- goto process_skb;
- /* recycle both page and skb */
- buffer_info->skb = skb;
/* an error means any chain goes out the window
* too
*/
@@ -4075,16 +4189,18 @@ process_skb:
/* this descriptor is only the beginning (or middle) */
if (!rxtop) {
/* this is the beginning of a chain */
- rxtop = skb;
- skb_fill_page_desc(rxtop, 0, buffer_info->page,
+ rxtop = napi_get_frags(&adapter->napi);
+ if (!rxtop)
+ break;
+
+ skb_fill_page_desc(rxtop, 0,
+ buffer_info->rxbuf.page,
0, length);
} else {
/* this is the middle of a chain */
skb_fill_page_desc(rxtop,
skb_shinfo(rxtop)->nr_frags,
- buffer_info->page, 0, length);
- /* re-use the skb, only consumed the page */
- buffer_info->skb = skb;
+ buffer_info->rxbuf.page, 0, length);
}
e1000_consume_page(buffer_info, rxtop, length);
goto next_desc;
@@ -4093,32 +4209,51 @@ process_skb:
/* end of the chain */
skb_fill_page_desc(rxtop,
skb_shinfo(rxtop)->nr_frags,
- buffer_info->page, 0, length);
- /* re-use the current skb, we only consumed the
- * page
- */
- buffer_info->skb = skb;
+ buffer_info->rxbuf.page, 0, length);
skb = rxtop;
rxtop = NULL;
e1000_consume_page(buffer_info, skb, length);
} else {
+ struct page *p;
/* no chain, got EOP, this buf is the packet
* copybreak to save the put_page/alloc_page
*/
- if (length <= copybreak &&
- skb_tailroom(skb) >= length) {
+ p = buffer_info->rxbuf.page;
+ if (length <= copybreak) {
u8 *vaddr;
- vaddr = kmap_atomic(buffer_info->page);
+
+ if (likely(!(netdev->features & NETIF_F_RXFCS)))
+ length -= 4;
+ skb = e1000_alloc_rx_skb(adapter,
+ length);
+ if (!skb)
+ break;
+
+ vaddr = kmap_atomic(p);
memcpy(skb_tail_pointer(skb), vaddr,
length);
kunmap_atomic(vaddr);
/* re-use the page, so don't erase
- * buffer_info->page
+ * buffer_info->rxbuf.page
*/
skb_put(skb, length);
+ e1000_rx_checksum(adapter,
+ status | rx_desc->errors << 24,
+ le16_to_cpu(rx_desc->csum), skb);
+
+ total_rx_bytes += skb->len;
+ total_rx_packets++;
+
+ e1000_receive_skb(adapter, status,
+ rx_desc->special, skb);
+ goto next_desc;
} else {
- skb_fill_page_desc(skb, 0,
- buffer_info->page, 0,
+ skb = napi_get_frags(&adapter->napi);
+ if (!skb) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+ skb_fill_page_desc(skb, 0, p, 0,
length);
e1000_consume_page(buffer_info, skb,
length);
@@ -4137,14 +4272,14 @@ process_skb:
pskb_trim(skb, skb->len - 4);
total_rx_packets++;
- /* eth type trans needs skb->data to point to something */
- if (!pskb_may_pull(skb, ETH_HLEN)) {
- e_err(drv, "pskb_may_pull failed.\n");
- dev_kfree_skb(skb);
- goto next_desc;
+ if (status & E1000_RXD_STAT_VP) {
+ __le16 vlan = rx_desc->special;
+ u16 vid = le16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
+
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
}
- e1000_receive_skb(adapter, status, rx_desc->special, skb);
+ napi_gro_frags(&adapter->napi);
next_desc:
rx_desc->status = 0;
@@ -4175,25 +4310,25 @@ next_desc:
/* this should improve performance for small packets with large amounts
* of reassembly being done in the stack
*/
-static void e1000_check_copybreak(struct net_device *netdev,
- struct e1000_buffer *buffer_info,
- u32 length, struct sk_buff **skb)
+static struct sk_buff *e1000_copybreak(struct e1000_adapter *adapter,
+ struct e1000_rx_buffer *buffer_info,
+ u32 length, const void *data)
{
- struct sk_buff *new_skb;
+ struct sk_buff *skb;
if (length > copybreak)
- return;
+ return NULL;
- new_skb = netdev_alloc_skb_ip_align(netdev, length);
- if (!new_skb)
- return;
+ skb = e1000_alloc_rx_skb(adapter, length);
+ if (!skb)
+ return NULL;
+
+ dma_sync_single_for_cpu(&adapter->pdev->dev, buffer_info->dma,
+ length, DMA_FROM_DEVICE);
- skb_copy_to_linear_data_offset(new_skb, -NET_IP_ALIGN,
- (*skb)->data - NET_IP_ALIGN,
- length + NET_IP_ALIGN);
- /* save the skb in buffer_info as good */
- buffer_info->skb = *skb;
- *skb = new_skb;
+ memcpy(skb_put(skb, length), data, length);
+
+ return skb;
}
/**
@@ -4207,12 +4342,10 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
- struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc, *next_rxd;
- struct e1000_buffer *buffer_info, *next_buffer;
- unsigned long flags;
+ struct e1000_rx_buffer *buffer_info, *next_buffer;
u32 length;
unsigned int i;
int cleaned_count = 0;
@@ -4225,6 +4358,7 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
while (rx_desc->status & E1000_RXD_STAT_DD) {
struct sk_buff *skb;
+ u8 *data;
u8 status;
if (*work_done >= work_to_do)
@@ -4233,10 +4367,27 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
rmb(); /* read descriptor and rx_buffer_info after status DD */
status = rx_desc->status;
- skb = buffer_info->skb;
- buffer_info->skb = NULL;
+ length = le16_to_cpu(rx_desc->length);
+
+ data = buffer_info->rxbuf.data;
+ prefetch(data);
+ skb = e1000_copybreak(adapter, buffer_info, length, data);
+ if (!skb) {
+ unsigned int frag_len = e1000_frag_len(adapter);
+
+ skb = build_skb(data - E1000_HEADROOM, frag_len);
+ if (!skb) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
- prefetch(skb->data - NET_IP_ALIGN);
+ skb_reserve(skb, E1000_HEADROOM);
+ dma_unmap_single(&pdev->dev, buffer_info->dma,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
+ buffer_info->dma = 0;
+ buffer_info->rxbuf.data = NULL;
+ }
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC(*rx_ring, i);
@@ -4246,11 +4397,7 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
cleaned = true;
cleaned_count++;
- dma_unmap_single(&pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_FROM_DEVICE);
- buffer_info->dma = 0;
- length = le16_to_cpu(rx_desc->length);
/* !EOP means multiple descriptors were used to store a single
* packet, if that's the case we need to toss it. In fact, we need
* to toss every packet with the EOP bit clear and the next
@@ -4262,29 +4409,22 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
if (adapter->discarding) {
/* All receives must fit into a single buffer */
- e_dbg("Receive packet consumed multiple buffers\n");
- /* recycle */
- buffer_info->skb = skb;
+ netdev_dbg(netdev, "Receive packet consumed multiple buffers\n");
+ dev_kfree_skb(skb);
if (status & E1000_RXD_STAT_EOP)
adapter->discarding = false;
goto next_desc;
}
if (unlikely(rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK)) {
- u8 last_byte = *(skb->data + length - 1);
- if (TBI_ACCEPT(hw, status, rx_desc->errors, length,
- last_byte)) {
- spin_lock_irqsave(&adapter->stats_lock, flags);
- e1000_tbi_adjust_stats(hw, &adapter->stats,
- length, skb->data);
- spin_unlock_irqrestore(&adapter->stats_lock,
- flags);
+ if (e1000_tbi_should_accept(adapter, status,
+ rx_desc->errors,
+ length, data)) {
length--;
+ } else if (netdev->features & NETIF_F_RXALL) {
+ goto process_skb;
} else {
- if (netdev->features & NETIF_F_RXALL)
- goto process_skb;
- /* recycle */
- buffer_info->skb = skb;
+ dev_kfree_skb(skb);
goto next_desc;
}
}
@@ -4299,9 +4439,10 @@ process_skb:
*/
length -= 4;
- e1000_check_copybreak(netdev, buffer_info, length, &skb);
-
- skb_put(skb, length);
+ if (buffer_info->rxbuf.data == NULL)
+ skb_put(skb, length);
+ else /* copybreak skb */
+ skb_trim(skb, length);
/* Receive Checksum Offload */
e1000_rx_checksum(adapter,
@@ -4347,38 +4488,19 @@ static void
e1000_alloc_jumbo_rx_buffers(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring, int cleaned_count)
{
- struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc;
- struct e1000_buffer *buffer_info;
- struct sk_buff *skb;
+ struct e1000_rx_buffer *buffer_info;
unsigned int i;
- unsigned int bufsz = 256 - 16 /*for skb_reserve */ ;
i = rx_ring->next_to_use;
buffer_info = &rx_ring->buffer_info[i];
while (cleaned_count--) {
- skb = buffer_info->skb;
- if (skb) {
- skb_trim(skb, 0);
- goto check_page;
- }
-
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
- if (unlikely(!skb)) {
- /* Better luck next round */
- adapter->alloc_rx_buff_failed++;
- break;
- }
-
- buffer_info->skb = skb;
- buffer_info->length = adapter->rx_buffer_len;
-check_page:
/* allocate a new page if necessary */
- if (!buffer_info->page) {
- buffer_info->page = alloc_page(GFP_ATOMIC);
- if (unlikely(!buffer_info->page)) {
+ if (!buffer_info->rxbuf.page) {
+ buffer_info->rxbuf.page = alloc_page(GFP_ATOMIC);
+ if (unlikely(!buffer_info->rxbuf.page)) {
adapter->alloc_rx_buff_failed++;
break;
}
@@ -4386,17 +4508,15 @@ check_page:
if (!buffer_info->dma) {
buffer_info->dma = dma_map_page(&pdev->dev,
- buffer_info->page, 0,
- buffer_info->length,
+ buffer_info->rxbuf.page, 0,
+ adapter->rx_buffer_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
- put_page(buffer_info->page);
- dev_kfree_skb(skb);
- buffer_info->page = NULL;
- buffer_info->skb = NULL;
+ put_page(buffer_info->rxbuf.page);
+ buffer_info->rxbuf.page = NULL;
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
}
@@ -4432,11 +4552,9 @@ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
int cleaned_count)
{
struct e1000_hw *hw = &adapter->hw;
- struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_rx_desc *rx_desc;
- struct e1000_buffer *buffer_info;
- struct sk_buff *skb;
+ struct e1000_rx_buffer *buffer_info;
unsigned int i;
unsigned int bufsz = adapter->rx_buffer_len;
@@ -4444,57 +4562,52 @@ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
buffer_info = &rx_ring->buffer_info[i];
while (cleaned_count--) {
- skb = buffer_info->skb;
- if (skb) {
- skb_trim(skb, 0);
- goto map_skb;
- }
+ void *data;
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
- if (unlikely(!skb)) {
+ if (buffer_info->rxbuf.data)
+ goto skip;
+
+ data = e1000_alloc_frag(adapter);
+ if (!data) {
/* Better luck next round */
adapter->alloc_rx_buff_failed++;
break;
}
/* Fix for errata 23, can't cross 64kB boundary */
- if (!e1000_check_64k_bound(adapter, skb->data, bufsz)) {
- struct sk_buff *oldskb = skb;
+ if (!e1000_check_64k_bound(adapter, data, bufsz)) {
+ void *olddata = data;
e_err(rx_err, "skb align check failed: %u bytes at "
- "%p\n", bufsz, skb->data);
+ "%p\n", bufsz, data);
/* Try again, without freeing the previous */
- skb = netdev_alloc_skb_ip_align(netdev, bufsz);
+ data = e1000_alloc_frag(adapter);
/* Failed allocation, critical failure */
- if (!skb) {
- dev_kfree_skb(oldskb);
+ if (!data) {
+ e1000_free_frag(olddata);
adapter->alloc_rx_buff_failed++;
break;
}
- if (!e1000_check_64k_bound(adapter, skb->data, bufsz)) {
+ if (!e1000_check_64k_bound(adapter, data, bufsz)) {
/* give up */
- dev_kfree_skb(skb);
- dev_kfree_skb(oldskb);
+ e1000_free_frag(data);
+ e1000_free_frag(olddata);
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
/* Use new allocation */
- dev_kfree_skb(oldskb);
+ e1000_free_frag(olddata);
}
- buffer_info->skb = skb;
- buffer_info->length = adapter->rx_buffer_len;
-map_skb:
buffer_info->dma = dma_map_single(&pdev->dev,
- skb->data,
- buffer_info->length,
+ data,
+ adapter->rx_buffer_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
- dev_kfree_skb(skb);
- buffer_info->skb = NULL;
+ e1000_free_frag(data);
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
/* XXX if it was allocated cleanly it will never map to a
@@ -4508,17 +4621,20 @@ map_skb:
e_err(rx_err, "dma align check failed: %u bytes at "
"%p\n", adapter->rx_buffer_len,
(void *)(unsigned long)buffer_info->dma);
- dev_kfree_skb(skb);
- buffer_info->skb = NULL;
dma_unmap_single(&pdev->dev, buffer_info->dma,
adapter->rx_buffer_len,
DMA_FROM_DEVICE);
+
+ e1000_free_frag(data);
+ buffer_info->rxbuf.data = NULL;
buffer_info->dma = 0;
adapter->alloc_rx_buff_failed++;
- break; /* while !buffer_info->skb */
+ break;
}
+ buffer_info->rxbuf.data = data;
+ skip:
rx_desc = E1000_RX_DESC(*rx_ring, i);
rx_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
diff --git a/drivers/net/ethernet/intel/fm10k/Makefile b/drivers/net/ethernet/intel/fm10k/Makefile
new file mode 100644
index 0000000..08859dd
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/Makefile
@@ -0,0 +1,33 @@
+################################################################################
+#
+# Intel Ethernet Switch Host Interface Driver
+# Copyright(c) 2013 - 2014 Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+# The full GNU General Public License is included in this distribution in
+# the file called "COPYING".
+#
+# Contact Information:
+# e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+# Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+#
+################################################################################
+
+#
+# Makefile for the Intel(R) FM10000 Ethernet Switch Host Interface driver
+#
+
+obj-$(CONFIG_FM10K) += fm10k.o
+
+fm10k-objs := fm10k_main.o fm10k_common.o fm10k_pci.o \
+ fm10k_netdev.o fm10k_ethtool.o fm10k_pf.o fm10k_vf.o \
+ fm10k_mbx.o fm10k_iov.o fm10k_tlv.o \
+ fm10k_debugfs.o fm10k_ptp.o fm10k_dcbnl.o
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
new file mode 100644
index 0000000..42eb434
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
@@ -0,0 +1,530 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_H_
+#define _FM10K_H_
+
+#include <linux/types.h>
+#include <linux/etherdevice.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_vlan.h>
+#include <linux/pci.h>
+#include <linux/net_tstamp.h>
+#include <linux/clocksource.h>
+#include <linux/ptp_clock_kernel.h>
+
+#include "fm10k_pf.h"
+#include "fm10k_vf.h"
+
+#define FM10K_MAX_JUMBO_FRAME_SIZE 15358 /* Maximum supported size 15K */
+
+#define MAX_QUEUES FM10K_MAX_QUEUES_PF
+
+#define FM10K_MIN_RXD 128
+#define FM10K_MAX_RXD 4096
+#define FM10K_DEFAULT_RXD 256
+
+#define FM10K_MIN_TXD 128
+#define FM10K_MAX_TXD 4096
+#define FM10K_DEFAULT_TXD 256
+#define FM10K_DEFAULT_TX_WORK 256
+
+#define FM10K_RXBUFFER_256 256
+#define FM10K_RX_HDR_LEN FM10K_RXBUFFER_256
+#define FM10K_RXBUFFER_2048 2048
+#define FM10K_RX_BUFSZ FM10K_RXBUFFER_2048
+
+/* How many Rx Buffers do we bundle into one write to the hardware? */
+#define FM10K_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+
+#define FM10K_MAX_STATIONS 63
+struct fm10k_l2_accel {
+ int size;
+ u16 count;
+ u16 dglort;
+ struct rcu_head rcu;
+ struct net_device *macvlan[0];
+};
+
+enum fm10k_ring_state_t {
+ __FM10K_TX_DETECT_HANG,
+ __FM10K_HANG_CHECK_ARMED,
+};
+
+#define check_for_tx_hang(ring) \
+ test_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+#define set_check_for_tx_hang(ring) \
+ set_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+#define clear_check_for_tx_hang(ring) \
+ clear_bit(__FM10K_TX_DETECT_HANG, &(ring)->state)
+
+struct fm10k_tx_buffer {
+ struct fm10k_tx_desc *next_to_watch;
+ struct sk_buff *skb;
+ unsigned int bytecount;
+ u16 gso_segs;
+ u16 tx_flags;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+};
+
+struct fm10k_rx_buffer {
+ dma_addr_t dma;
+ struct page *page;
+ u32 page_offset;
+};
+
+struct fm10k_queue_stats {
+ u64 packets;
+ u64 bytes;
+};
+
+struct fm10k_tx_queue_stats {
+ u64 restart_queue;
+ u64 csum_err;
+ u64 tx_busy;
+ u64 tx_done_old;
+};
+
+struct fm10k_rx_queue_stats {
+ u64 alloc_failed;
+ u64 csum_err;
+ u64 errors;
+};
+
+struct fm10k_ring {
+ struct fm10k_q_vector *q_vector;/* backpointer to host q_vector */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct device *dev; /* device for DMA mapping */
+ struct fm10k_l2_accel __rcu *l2_accel; /* L2 acceleration list */
+ void *desc; /* descriptor ring memory */
+ union {
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_rx_buffer *rx_buffer;
+ };
+ u32 __iomem *tail;
+ unsigned long state;
+ dma_addr_t dma; /* phys. address of descriptor ring */
+ unsigned int size; /* length in bytes */
+
+ u8 queue_index; /* needed for queue management */
+ u8 reg_idx; /* holds the special value that gets
+ * the hardware register offset
+ * associated with this ring, which is
+ * different for DCB and RSS modes
+ */
+ u8 qos_pc; /* priority class of queue */
+ u16 vid; /* default vlan ID of queue */
+ u16 count; /* number of descriptors */
+
+ u16 next_to_alloc;
+ u16 next_to_use;
+ u16 next_to_clean;
+
+ struct fm10k_queue_stats stats;
+ struct u64_stats_sync syncp;
+ union {
+ /* Tx */
+ struct fm10k_tx_queue_stats tx_stats;
+ /* Rx */
+ struct {
+ struct fm10k_rx_queue_stats rx_stats;
+ struct sk_buff *skb;
+ };
+ };
+} ____cacheline_internodealigned_in_smp;
+
+struct fm10k_ring_container {
+ struct fm10k_ring *ring; /* pointer to linked list of rings */
+ unsigned int total_bytes; /* total bytes processed this int */
+ unsigned int total_packets; /* total packets processed this int */
+ u16 work_limit; /* total work allowed per interrupt */
+ u16 itr; /* interrupt throttle rate value */
+ u8 count; /* total number of rings in vector */
+};
+
+#define FM10K_ITR_MAX 0x0FFF /* maximum value for ITR */
+#define FM10K_ITR_10K 100 /* 100us */
+#define FM10K_ITR_20K 50 /* 50us */
+#define FM10K_ITR_ADAPTIVE 0x8000 /* adaptive interrupt moderation flag */
+
+#define FM10K_ITR_ENABLE (FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR)
+
+static inline struct netdev_queue *txring_txq(const struct fm10k_ring *ring)
+{
+ return &ring->netdev->_tx[ring->queue_index];
+}
+
+/* iterator for handling rings in ring container */
+#define fm10k_for_each_ring(pos, head) \
+ for (pos = &(head).ring[(head).count]; (--pos) >= (head).ring;)
+
+#define MAX_Q_VECTORS 256
+#define MIN_Q_VECTORS 1
+enum fm10k_non_q_vectors {
+ FM10K_MBX_VECTOR,
+#define NON_Q_VECTORS_VF NON_Q_VECTORS_PF
+ NON_Q_VECTORS_PF
+};
+
+#define NON_Q_VECTORS(hw) (((hw)->mac.type == fm10k_mac_pf) ? \
+ NON_Q_VECTORS_PF : \
+ NON_Q_VECTORS_VF)
+#define MIN_MSIX_COUNT(hw) (MIN_Q_VECTORS + NON_Q_VECTORS(hw))
+
+struct fm10k_q_vector {
+ struct fm10k_intfc *interface;
+ u32 __iomem *itr; /* pointer to ITR register for this vector */
+ u16 v_idx; /* index of q_vector within interface array */
+ struct fm10k_ring_container rx, tx;
+
+ struct napi_struct napi;
+ char name[IFNAMSIZ + 9];
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_q_vector;
+#endif /* CONFIG_DEBUG_FS */
+ struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct fm10k_ring ring[0] ____cacheline_internodealigned_in_smp;
+};
+
+enum fm10k_ring_f_enum {
+ RING_F_RSS,
+ RING_F_QOS,
+ RING_F_ARRAY_SIZE /* must be last in enum set */
+};
+
+struct fm10k_ring_feature {
+ u16 limit; /* upper limit on feature indices */
+ u16 indices; /* current value of indices */
+ u16 mask; /* Mask used for feature to ring mapping */
+ u16 offset; /* offset to start of feature */
+};
+
+struct fm10k_iov_data {
+ unsigned int num_vfs;
+ unsigned int next_vf_mbx;
+ struct rcu_head rcu;
+ struct fm10k_vf_info vf_info[0];
+};
+
+#define fm10k_vxlan_port_for_each(vp, intfc) \
+ list_for_each_entry(vp, &(intfc)->vxlan_port, list)
+struct fm10k_vxlan_port {
+ struct list_head list;
+ sa_family_t sa_family;
+ __be16 port;
+};
+
+struct fm10k_intfc {
+ unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+ struct net_device *netdev;
+ struct fm10k_l2_accel *l2_accel; /* pointer to L2 acceleration list */
+ struct pci_dev *pdev;
+ unsigned long state;
+
+ u32 flags;
+#define FM10K_FLAG_RESET_REQUESTED (u32)(1 << 0)
+#define FM10K_FLAG_RSS_FIELD_IPV4_UDP (u32)(1 << 1)
+#define FM10K_FLAG_RSS_FIELD_IPV6_UDP (u32)(1 << 2)
+#define FM10K_FLAG_RX_TS_ENABLED (u32)(1 << 3)
+#define FM10K_FLAG_SWPRI_CONFIG (u32)(1 << 4)
+ int xcast_mode;
+
+ /* Tx fast path data */
+ int num_tx_queues;
+ u16 tx_itr;
+
+ /* Rx fast path data */
+ int num_rx_queues;
+ u16 rx_itr;
+
+ /* TX */
+ struct fm10k_ring *tx_ring[MAX_QUEUES] ____cacheline_aligned_in_smp;
+
+ u64 restart_queue;
+ u64 tx_busy;
+ u64 tx_csum_errors;
+ u64 alloc_failed;
+ u64 rx_csum_errors;
+ u64 rx_errors;
+
+ u64 tx_bytes_nic;
+ u64 tx_packets_nic;
+ u64 rx_bytes_nic;
+ u64 rx_packets_nic;
+ u64 rx_drops_nic;
+ u64 rx_overrun_pf;
+ u64 rx_overrun_vf;
+ u32 tx_timeout_count;
+
+ /* RX */
+ struct fm10k_ring *rx_ring[MAX_QUEUES];
+
+ /* Queueing vectors */
+ struct fm10k_q_vector *q_vector[MAX_Q_VECTORS];
+ struct msix_entry *msix_entries;
+ int num_q_vectors; /* current number of q_vectors for device */
+ struct fm10k_ring_feature ring_feature[RING_F_ARRAY_SIZE];
+
+ /* SR-IOV information management structure */
+ struct fm10k_iov_data *iov_data;
+
+ struct fm10k_hw_stats stats;
+ struct fm10k_hw hw;
+ u32 __iomem *uc_addr;
+ u32 __iomem *sw_addr;
+ u16 msg_enable;
+ u16 tx_ring_count;
+ u16 rx_ring_count;
+ struct timer_list service_timer;
+ struct work_struct service_task;
+ unsigned long next_stats_update;
+ unsigned long next_tx_hang_check;
+ unsigned long last_reset;
+ unsigned long link_down_event;
+ bool host_ready;
+
+ u32 reta[FM10K_RETA_SIZE];
+ u32 rssrk[FM10K_RSSRK_SIZE];
+
+ /* VXLAN port tracking information */
+ struct list_head vxlan_port;
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_intfc;
+
+#endif /* CONFIG_DEBUG_FS */
+ struct ptp_clock_info ptp_caps;
+ struct ptp_clock *ptp_clock;
+
+ struct sk_buff_head ts_tx_skb_queue;
+ u32 tx_hwtstamp_timeouts;
+
+ struct hwtstamp_config ts_config;
+ /* We are unable to actually adjust the clock beyond the frequency
+ * value. Once the clock is started there is no resetting it. As
+ * such we maintain a separate offset from the actual hardware clock
+ * to allow for offset adjustment.
+ */
+ s64 ptp_adjust;
+ rwlock_t systime_lock;
+#ifdef CONFIG_DCB
+ u8 pfc_en;
+#endif
+ u8 rx_pause;
+
+ /* GLORT resources in use by PF */
+ u16 glort;
+ u16 glort_count;
+
+ /* VLAN ID for updating multicast/unicast lists */
+ u16 vid;
+};
+
+enum fm10k_state_t {
+ __FM10K_RESETTING,
+ __FM10K_DOWN,
+ __FM10K_SERVICE_SCHED,
+ __FM10K_SERVICE_DISABLE,
+ __FM10K_MBX_LOCK,
+ __FM10K_LINK_DOWN,
+};
+
+static inline void fm10k_mbx_lock(struct fm10k_intfc *interface)
+{
+ /* busy loop if we cannot obtain the lock as some calls
+ * such as ndo_set_rx_mode may be made in atomic context
+ */
+ while (test_and_set_bit(__FM10K_MBX_LOCK, &interface->state))
+ udelay(20);
+}
+
+static inline void fm10k_mbx_unlock(struct fm10k_intfc *interface)
+{
+ /* flush memory to make sure state is correct */
+ smp_mb__before_atomic();
+ clear_bit(__FM10K_MBX_LOCK, &interface->state);
+}
+
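+/* fm10k_mbx_trylock - take the mailbox lock without busy-waiting, non-zero on success */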
+static inline int fm10k_mbx_trylock(struct fm10k_intfc *interface)
+{
+ return !test_and_set_bit(__FM10K_MBX_LOCK, &interface->state);
+}
+
+/* fm10k_test_staterr - test bits in Rx descriptor status and error fields */
+static inline __le32 fm10k_test_staterr(union fm10k_rx_desc *rx_desc,
+ const u32 stat_err_bits)
+{
+ return rx_desc->d.staterr & cpu_to_le32(stat_err_bits);
+}
+
+/* fm10k_desc_unused - calculate how many descriptors are unused */
+static inline u16 fm10k_desc_unused(struct fm10k_ring *ring)
+{
+ s16 unused = ring->next_to_clean - ring->next_to_use - 1;
+
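+ /* the ring is circular, so a negative result means the indices have
+ * wrapped; add the ring size back to get the true free descriptor count
+ */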
+ return likely(unused < 0) ? unused + ring->count : unused;
+}
+
+#define FM10K_TX_DESC(R, i) \
+ (&(((struct fm10k_tx_desc *)((R)->desc))[i]))
+#define FM10K_RX_DESC(R, i) \
+ (&(((union fm10k_rx_desc *)((R)->desc))[i]))
+
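+/* a single Tx descriptor can carry at most 1 << 14 = 16KB of data */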
+#define FM10K_MAX_TXD_PWR 14
+#define FM10K_MAX_DATA_PER_TXD (1 << FM10K_MAX_TXD_PWR)
+
+/* Tx Descriptors needed, worst case */
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), FM10K_MAX_DATA_PER_TXD)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+
+enum fm10k_tx_flags {
+ /* Tx offload flags */
+ FM10K_TX_FLAGS_CSUM = 0x01,
+};
+
+/* This structure is stored as little endian values as that is the native
+ * format of the Rx descriptor. The ordering of these fields is reversed
+ * from the actual ftag header to allow for a single bswap to take care
+ * of placing all of the values in network order
+ */
+union fm10k_ftag_info {
+ __le64 ftag;
+ struct {
+ /* dglort and sglort combined into a single 32bit desc read */
+ __le32 glort;
+ /* upper 16 bits of vlan are reserved 0 for swpri_type_user */
+ __le32 vlan;
+ } d;
+ struct {
+ __le16 dglort;
+ __le16 sglort;
+ __le16 vlan;
+ __le16 swpri_type_user;
+ } w;
+};
+
+struct fm10k_cb {
+ union {
+ __le64 tstamp;
+ unsigned long ts_tx_timeout;
+ };
+ union fm10k_ftag_info fi;
+};
+
+#define FM10K_CB(skb) ((struct fm10k_cb *)(skb)->cb)
+
+/* main */
+extern char fm10k_driver_name[];
+extern const char fm10k_driver_version[];
+int fm10k_init_queueing_scheme(struct fm10k_intfc *interface);
+void fm10k_clear_queueing_scheme(struct fm10k_intfc *interface);
+netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
+ struct fm10k_ring *tx_ring);
+void fm10k_tx_timeout_reset(struct fm10k_intfc *interface);
+bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring);
+void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count);
+
+/* PCI */
+void fm10k_mbx_free_irq(struct fm10k_intfc *);
+int fm10k_mbx_request_irq(struct fm10k_intfc *);
+void fm10k_qv_free_irq(struct fm10k_intfc *interface);
+int fm10k_qv_request_irq(struct fm10k_intfc *interface);
+int fm10k_register_pci_driver(void);
+void fm10k_unregister_pci_driver(void);
+void fm10k_up(struct fm10k_intfc *interface);
+void fm10k_down(struct fm10k_intfc *interface);
+void fm10k_update_stats(struct fm10k_intfc *interface);
+void fm10k_service_event_schedule(struct fm10k_intfc *interface);
+void fm10k_update_rx_drop_en(struct fm10k_intfc *interface);
+
+/* Netdev */
+struct net_device *fm10k_alloc_netdev(void);
+int fm10k_setup_rx_resources(struct fm10k_ring *);
+int fm10k_setup_tx_resources(struct fm10k_ring *);
+void fm10k_free_rx_resources(struct fm10k_ring *);
+void fm10k_free_tx_resources(struct fm10k_ring *);
+void fm10k_clean_all_rx_rings(struct fm10k_intfc *);
+void fm10k_clean_all_tx_rings(struct fm10k_intfc *);
+void fm10k_unmap_and_free_tx_resource(struct fm10k_ring *,
+ struct fm10k_tx_buffer *);
+void fm10k_restore_rx_state(struct fm10k_intfc *);
+void fm10k_reset_rx_state(struct fm10k_intfc *);
+int fm10k_setup_tc(struct net_device *dev, u8 tc);
+int fm10k_open(struct net_device *netdev);
+int fm10k_close(struct net_device *netdev);
+
+/* Ethtool */
+void fm10k_set_ethtool_ops(struct net_device *dev);
+
+/* IOV */
+s32 fm10k_iov_event(struct fm10k_intfc *interface);
+s32 fm10k_iov_mbx(struct fm10k_intfc *interface);
+void fm10k_iov_suspend(struct pci_dev *pdev);
+int fm10k_iov_resume(struct pci_dev *pdev);
+void fm10k_iov_disable(struct pci_dev *pdev);
+int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs);
+s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid);
+int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac);
+int fm10k_ndo_set_vf_vlan(struct net_device *netdev,
+ int vf_idx, u16 vid, u8 qos);
+int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int rate,
+ int unused);
+int fm10k_ndo_get_vf_config(struct net_device *netdev,
+ int vf_idx, struct ifla_vf_info *ivi);
+
+/* DebugFS */
+#ifdef CONFIG_DEBUG_FS
+void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector);
+void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector);
+void fm10k_dbg_intfc_init(struct fm10k_intfc *interface);
+void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface);
+void fm10k_dbg_init(void);
+void fm10k_dbg_exit(void);
+#else
+static inline void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector) {}
+static inline void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector) {}
+static inline void fm10k_dbg_intfc_init(struct fm10k_intfc *interface) {}
+static inline void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface) {}
+static inline void fm10k_dbg_init(void) {}
+static inline void fm10k_dbg_exit(void) {}
+#endif /* CONFIG_DEBUG_FS */
+
+/* Time Stamping */
+void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
+ struct skb_shared_hwtstamps *hwtstamp,
+ u64 systime);
+void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb);
+void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
+ u64 systime);
+void fm10k_ts_reset(struct fm10k_intfc *interface);
+void fm10k_ts_init(struct fm10k_intfc *interface);
+void fm10k_ts_tx_subtask(struct fm10k_intfc *interface);
+void fm10k_ptp_register(struct fm10k_intfc *interface);
+void fm10k_ptp_unregister(struct fm10k_intfc *interface);
+int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
+
+/* DCB */
+void fm10k_dcbnl_set_ops(struct net_device *dev);
+#endif /* _FM10K_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.c b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
new file mode 100644
index 0000000..bf19dcc
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
@@ -0,0 +1,534 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_common.h"
+
+/**
+ * fm10k_get_bus_info_generic - Generic set PCI bus info
+ * @hw: pointer to hardware structure
+ *
+ * Gets the PCI bus info (speed, width, type) then calls helper function to
+ * store this data within the fm10k_hw structure.
+ **/
+s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw)
+{
+ u16 link_cap, link_status, device_cap, device_control;
+
+ /* Get the maximum link width and speed from PCIe config space */
+ link_cap = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_LINK_CAP);
+
+ switch (link_cap & FM10K_PCIE_LINK_WIDTH) {
+ case FM10K_PCIE_LINK_WIDTH_1:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x1;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_2:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x2;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_4:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x4;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_8:
+ hw->bus_caps.width = fm10k_bus_width_pcie_x8;
+ break;
+ default:
+ hw->bus_caps.width = fm10k_bus_width_unknown;
+ break;
+ }
+
+ switch (link_cap & FM10K_PCIE_LINK_SPEED) {
+ case FM10K_PCIE_LINK_SPEED_2500:
+ hw->bus_caps.speed = fm10k_bus_speed_2500;
+ break;
+ case FM10K_PCIE_LINK_SPEED_5000:
+ hw->bus_caps.speed = fm10k_bus_speed_5000;
+ break;
+ case FM10K_PCIE_LINK_SPEED_8000:
+ hw->bus_caps.speed = fm10k_bus_speed_8000;
+ break;
+ default:
+ hw->bus_caps.speed = fm10k_bus_speed_unknown;
+ break;
+ }
+
+ /* Get the PCIe maximum payload size for the PCIe function */
+ device_cap = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_DEV_CAP);
+
+ switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) {
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_128:
+ hw->bus_caps.payload = fm10k_bus_payload_128;
+ break;
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_256:
+ hw->bus_caps.payload = fm10k_bus_payload_256;
+ break;
+ case FM10K_PCIE_DEV_CAP_PAYLOAD_512:
+ hw->bus_caps.payload = fm10k_bus_payload_512;
+ break;
+ default:
+ hw->bus_caps.payload = fm10k_bus_payload_unknown;
+ break;
+ }
+
+ /* Get the negotiated link width and speed from PCIe config space */
+ link_status = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_LINK_STATUS);
+
+ switch (link_status & FM10K_PCIE_LINK_WIDTH) {
+ case FM10K_PCIE_LINK_WIDTH_1:
+ hw->bus.width = fm10k_bus_width_pcie_x1;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_2:
+ hw->bus.width = fm10k_bus_width_pcie_x2;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_4:
+ hw->bus.width = fm10k_bus_width_pcie_x4;
+ break;
+ case FM10K_PCIE_LINK_WIDTH_8:
+ hw->bus.width = fm10k_bus_width_pcie_x8;
+ break;
+ default:
+ hw->bus.width = fm10k_bus_width_unknown;
+ break;
+ }
+
+ switch (link_status & FM10K_PCIE_LINK_SPEED) {
+ case FM10K_PCIE_LINK_SPEED_2500:
+ hw->bus.speed = fm10k_bus_speed_2500;
+ break;
+ case FM10K_PCIE_LINK_SPEED_5000:
+ hw->bus.speed = fm10k_bus_speed_5000;
+ break;
+ case FM10K_PCIE_LINK_SPEED_8000:
+ hw->bus.speed = fm10k_bus_speed_8000;
+ break;
+ default:
+ hw->bus.speed = fm10k_bus_speed_unknown;
+ break;
+ }
+
+ /* Get the negotiated PCIe maximum payload size for the PCIe function */
+ device_control = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_DEV_CTRL);
+
+ switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) {
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_128:
+ hw->bus.payload = fm10k_bus_payload_128;
+ break;
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_256:
+ hw->bus.payload = fm10k_bus_payload_256;
+ break;
+ case FM10K_PCIE_DEV_CTRL_PAYLOAD_512:
+ hw->bus.payload = fm10k_bus_payload_512;
+ break;
+ default:
+ hw->bus.payload = fm10k_bus_payload_unknown;
+ break;
+ }
+
+ return 0;
+}
+
+static u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw)
+{
+ u16 msix_count;
+
+ /* read in value from MSI-X capability register */
+ msix_count = fm10k_read_pci_cfg_word(hw, FM10K_PCI_MSIX_MSG_CTRL);
+ msix_count &= FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK;
+
+ /* MSI-X count is zero-based in HW */
+ msix_count++;
+
+ if (msix_count > FM10K_MAX_MSIX_VECTORS)
+ msix_count = FM10K_MAX_MSIX_VECTORS;
+
+ return msix_count;
+}
+
+/**
+ * fm10k_get_invariants_generic - Inits constant values
+ * @hw: pointer to the hardware structure
+ *
+ * Initialize the common invariants for the device.
+ **/
+s32 fm10k_get_invariants_generic(struct fm10k_hw *hw)
+{
+ struct fm10k_mac_info *mac = &hw->mac;
+
+ /* initialize GLORT state to avoid any false hits */
+ mac->dglort_map = FM10K_DGLORTMAP_NONE;
+
+ /* record maximum number of MSI-X vectors */
+ mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw);
+
+ return 0;
+}
+
+/**
+ * fm10k_start_hw_generic - Prepare hardware for Tx/Rx
+ * @hw: pointer to hardware structure
+ *
+ * This function sets the Tx ready flag to indicate that the Tx path has
+ * been initialized.
+ **/
+s32 fm10k_start_hw_generic(struct fm10k_hw *hw)
+{
+ /* set flag indicating we are beginning Tx */
+ hw->mac.tx_ready = true;
+
+ return 0;
+}
+
+/**
+ * fm10k_disable_queues_generic - Stop Tx/Rx queues
+ * @hw: pointer to hardware structure
+ * @q_cnt: number of queues to be disabled
+ *
+ * This function clears the Tx/Rx queue enable bits and waits for the
+ * hardware to report every queue as disabled.
+ **/
+s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt)
+{
+ u32 reg;
+ u16 i, time;
+
+ /* clear tx_ready to prevent any false hits for reset */
+ hw->mac.tx_ready = false;
+
+ /* clear the enable bit for all rings */
+ for (i = 0; i < q_cnt; i++) {
+ reg = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ fm10k_write_reg(hw, FM10K_TXDCTL(i),
+ reg & ~FM10K_TXDCTL_ENABLE);
+ reg = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ fm10k_write_reg(hw, FM10K_RXQCTL(i),
+ reg & ~FM10K_RXQCTL_ENABLE);
+ }
+
+ fm10k_write_flush(hw);
+ udelay(1);
+
+ /* loop through all queues to verify that they are all disabled */
+ for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) {
+ /* if we are at end of rings all rings are disabled */
+ if (i == q_cnt)
+ return 0;
+
+ /* if the queue enables cleared, or the registers read all ones
+ * because the device is gone, move on to the next ring pair
+ */
+ reg = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) {
+ reg = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) {
+ i++;
+ continue;
+ }
+ }
+
+ /* decrement time and wait 1 usec */
+ time--;
+ if (time)
+ udelay(1);
+ }
+
+ return FM10K_ERR_REQUESTS_PENDING;
+}
+
+/**
+ * fm10k_stop_hw_generic - Stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ * This function disables all of the Tx/Rx queues on the device.
+ **/
+s32 fm10k_stop_hw_generic(struct fm10k_hw *hw)
+{
+ return fm10k_disable_queues_generic(hw, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_read_hw_stats_32b - Reads value of 32-bit registers
+ * @hw: pointer to the hardware structure
+ * @addr: address of register containing a 32-bit value
+ * @stat: pointer to the hardware statistic structure tracking this counter
+ *
+ * Function reads the content of the register and returns the delta
+ * between the base and the current value.
+ **/
+u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat)
+{
+ u32 delta = fm10k_read_reg(hw, addr) - stat->base_l;
+
+ if (FM10K_REMOVED(hw->hw_addr))
+ stat->base_h = 0;
+
+ return delta;
+}
+
+/**
+ * fm10k_read_hw_stats_48b - Reads value of 48-bit registers
+ * @hw: pointer to the hardware structure
+ * @addr: address of register containing the lower 32-bit value
+ * @stat: pointer to the hardware statistic structure tracking this counter
+ *
+ * Function reads the content of 2 registers, combined to represent a 48-bit
+ * statistical value. Extra processing is required to handle overflowing.
+ * Finally, a delta value is returned representing the difference between the
+ * values stored in the registers and the values stored in the statistic counters.
+ **/
+static u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat)
+{
+ u32 count_l;
+ u32 count_h;
+ u32 count_tmp;
+ u64 delta;
+
+ count_h = fm10k_read_reg(hw, addr + 1);
+
+ /* Check for overflow: re-read until the upper dword is stable */
+ do {
+ count_tmp = count_h;
+ count_l = fm10k_read_reg(hw, addr);
+ count_h = fm10k_read_reg(hw, addr + 1);
+ } while (count_h != count_tmp);
+
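+ /* combine the high and low dwords and subtract the recorded base to
+ * produce the 48-bit delta since the last read
+ */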
+ delta = ((u64)(count_h - stat->base_h) << 32) + count_l;
+ delta -= stat->base_l;
+
+ return delta & FM10K_48_BIT_MASK;
+}
+
+/**
+ * fm10k_update_hw_base_48b - Updates 48-bit statistic base value
+ * @stat: pointer to the hardware statistic structure
+ * @delta: value to be updated into the hardware statistic structure
+ *
+ * Function receives a value and determines if an update is required based on
+ * a delta calculation. Only the base value will be updated.
+ **/
+static void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta)
+{
+ if (!delta)
+ return;
+
+ /* update lower 32 bits */
+ delta += stat->base_l;
+ stat->base_l = (u32)delta;
+
+ /* update upper 32 bits */
+ stat->base_h += (u32)(delta >> 32);
+}
+
+/**
+ * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ *
+ * Function updates the TX queue statistics counters that are related to the
+ * hardware.
+ **/
+static void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u32 idx)
+{
+ u32 id_tx, id_tx_prev, tx_packets;
+ u64 tx_bytes = 0;
+
+ /* Retrieve TX Owner Data */
+ id_tx = fm10k_read_reg(hw, FM10K_TXQCTL(idx));
+
+ /* Process TX Ring; re-sample if the owner ID in TXQCTL changes mid-read */
+ do {
+ tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx),
+ &q->tx_packets);
+
+ if (tx_packets)
+ tx_bytes = fm10k_read_hw_stats_48b(hw,
+ FM10K_QBTC_L(idx),
+ &q->tx_bytes);
+
+ /* Re-Check Owner Data */
+ id_tx_prev = id_tx;
+ id_tx = fm10k_read_reg(hw, FM10K_TXQCTL(idx));
+ } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id_tx &= FM10K_TXQCTL_ID_MASK;
+ id_tx |= FM10K_STAT_VALID;
+
+ /* update packet counts */
+ if (q->tx_stats_idx == id_tx) {
+ q->tx_packets.count += tx_packets;
+ q->tx_bytes.count += tx_bytes;
+ }
+
+ /* update bases and record ID */
+ fm10k_update_hw_base_32b(&q->tx_packets, tx_packets);
+ fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes);
+
+ q->tx_stats_idx = id_tx;
+}
+
+/**
+ * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ *
+ * Function updates the RX queue statistics counters that are related to the
+ * hardware.
+ **/
+static void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u32 idx)
+{
+ u32 id_rx, id_rx_prev, rx_packets, rx_drops;
+ u64 rx_bytes = 0;
+
+ /* Retrieve RX Owner Data */
+ id_rx = fm10k_read_reg(hw, FM10K_RXQCTL(idx));
+
+ /* Process RX Ring; re-sample if the owner ID in RXQCTL changes mid-read */
+ do {
+ rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx),
+ &q->rx_drops);
+
+ rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx),
+ &q->rx_packets);
+
+ if (rx_packets)
+ rx_bytes = fm10k_read_hw_stats_48b(hw,
+ FM10K_QBRC_L(idx),
+ &q->rx_bytes);
+
+ /* Re-Check Owner Data */
+ id_rx_prev = id_rx;
+ id_rx = fm10k_read_reg(hw, FM10K_RXQCTL(idx));
+ } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id_rx &= FM10K_RXQCTL_ID_MASK;
+ id_rx |= FM10K_STAT_VALID;
+
+ /* update packet counts */
+ if (q->rx_stats_idx == id_rx) {
+ q->rx_drops.count += rx_drops;
+ q->rx_packets.count += rx_packets;
+ q->rx_bytes.count += rx_bytes;
+ }
+
+ /* update bases and record ID */
+ fm10k_update_hw_base_32b(&q->rx_drops, rx_drops);
+ fm10k_update_hw_base_32b(&q->rx_packets, rx_packets);
+ fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes);
+
+ q->rx_stats_idx = id_rx;
+}
+
+/**
+ * fm10k_update_hw_stats_q - Updates queue statistics counters
+ * @hw: pointer to the hardware structure
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ * @count: number of queues to iterate over
+ *
+ * Function updates the queue statistics counters that are related to the
+ * hardware.
+ **/
+void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q,
+ u32 idx, u32 count)
+{
+ u32 i;
+
+ for (i = 0; i < count; i++, idx++, q++) {
+ fm10k_update_hw_stats_tx_q(hw, q, idx);
+ fm10k_update_hw_stats_rx_q(hw, q, idx);
+ }
+}
+
+/**
+ * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues
+ * @q: pointer to the ring of hardware statistics queue
+ * @idx: index pointing to the start of the ring iteration
+ * @count: number of queues to iterate over
+ *
+ * Function invalidates the index values for the queues so any updates that
+ * may have happened are ignored and the base for the queue stats is reset.
+ **/
+void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count)
+{
+ u32 i;
+
+ for (i = 0; i < count; i++, idx++, q++) {
+ q->rx_stats_idx = 0;
+ q->tx_stats_idx = 0;
+ }
+}
+
+/**
+ * fm10k_get_host_state_generic - Returns the state of the host
+ * @hw: pointer to hardware structure
+ * @host_ready: pointer to boolean value that will record host state
+ *
+ * This function will check the health of the mailbox and Tx queue 0
+ * in order to determine if we should report that the link is up or not.
+ **/
+s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ struct fm10k_mac_info *mac = &hw->mac;
+ s32 ret_val = 0;
+ u32 txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(0));
+
+ /* process upstream mailbox in case interrupts were disabled */
+ mbx->ops.process(hw, mbx);
+
+ /* If Tx is no longer enabled link should come down */
+ if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE))
+ mac->get_host_state = true;
+
+ /* exit if not checking for link, or link cannot be changed */
+ if (!mac->get_host_state || !(~txdctl))
+ goto out;
+
+ /* if we somehow dropped the Tx enable we should reset */
+ if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) {
+ ret_val = FM10K_ERR_RESET_REQUESTED;
+ goto out;
+ }
+
+ /* if Mailbox timed out we should request reset */
+ if (!mbx->timeout) {
+ ret_val = FM10K_ERR_RESET_REQUESTED;
+ goto out;
+ }
+
+ /* verify Mailbox is still valid */
+ if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU))
+ goto out;
+
+ /* interface cannot receive traffic without logical ports */
+ if (mac->dglort_map == FM10K_DGLORTMAP_NONE)
+ goto out;
+
+ /* if we passed all the tests above then the switch is ready and we no
+ * longer need to check for link
+ */
+ mac->get_host_state = false;
+
+out:
+ *host_ready = !mac->get_host_state;
+ return ret_val;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.h b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
new file mode 100644
index 0000000..45e4e5b
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
@@ -0,0 +1,65 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_COMMON_H_
+#define _FM10K_COMMON_H_
+
+#include "fm10k_type.h"
+
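+/* a NULL hw_addr means the device has gone away, so register access must be skipped */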
+#define FM10K_REMOVED(hw_addr) unlikely(!(hw_addr))
+
+/* PCI configuration read */
+u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg);
+
+/* read operations, indexed using DWORDS */
+u32 fm10k_read_reg(struct fm10k_hw *hw, int reg);
+
+/* write operations, indexed using DWORDS */
+#define fm10k_write_reg(hw, reg, val) \
+do { \
+ u32 __iomem *hw_addr = ACCESS_ONCE((hw)->hw_addr); \
+ if (!FM10K_REMOVED(hw_addr)) \
+ writel((val), &hw_addr[(reg)]); \
+} while (0)
+
+/* Switch register write operations, indexed using DWORDS */
+#define fm10k_write_sw_reg(hw, reg, val) \
+do { \
+ u32 __iomem *sw_addr = ACCESS_ONCE((hw)->sw_addr); \
+ if (!FM10K_REMOVED(sw_addr)) \
+ writel((val), &sw_addr[(reg)]); \
+} while (0)
+
+/* read the ctrl register, which has no clear-on-read fields, to act as a PCIe flush */
+#define fm10k_write_flush(hw) fm10k_read_reg((hw), FM10K_CTRL)
+s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw);
+s32 fm10k_get_invariants_generic(struct fm10k_hw *hw);
+s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt);
+s32 fm10k_start_hw_generic(struct fm10k_hw *hw);
+s32 fm10k_stop_hw_generic(struct fm10k_hw *hw);
+u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr,
+ struct fm10k_hw_stat *stat);
+#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta))
+void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q,
+ u32 idx, u32 count);
+#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0)
+void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count);
+s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready);
+#endif /* _FM10K_COMMON_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
new file mode 100644
index 0000000..212a92d
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
@@ -0,0 +1,174 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+
+#ifdef CONFIG_DCB
+/**
+ * fm10k_dcbnl_ieee_getets - get the ETS configuration for the device
+ * @dev: netdev interface for the device
+ * @ets: ETS structure to push configuration to
+ **/
+static int fm10k_dcbnl_ieee_getets(struct net_device *dev, struct ieee_ets *ets)
+{
+ int i;
+
+ /* we support 8 TCs in all modes */
+ ets->ets_cap = IEEE_8021QAZ_MAX_TCS;
+ ets->cbs = 0;
+
+ /* we only support strict priority and cannot do traffic shaping */
+ memset(ets->tc_tx_bw, 0, sizeof(ets->tc_tx_bw));
+ memset(ets->tc_rx_bw, 0, sizeof(ets->tc_rx_bw));
+ memset(ets->tc_tsa, IEEE_8021QAZ_TSA_STRICT, sizeof(ets->tc_tsa));
+
+ /* populate the prio map based on the netdev */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ ets->prio_tc[i] = netdev_get_prio_tc_map(dev, i);
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_ieee_setets - set the ETS configuration for the device
+ * @dev: netdev interface for the device
+ * @ets: ETS structure to pull configuration from
+ **/
+static int fm10k_dcbnl_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
+{
+ u8 num_tc = 0;
+ int i, err;
+
+ /* verify type and determine num_tcs needed */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ if (ets->tc_tx_bw[i] || ets->tc_rx_bw[i])
+ return -EINVAL;
+ if (ets->tc_tsa[i] != IEEE_8021QAZ_TSA_STRICT)
+ return -EINVAL;
+ if (ets->prio_tc[i] > num_tc)
+ num_tc = ets->prio_tc[i];
+ }
+
+ /* if requested TC is greater than 0 then num_tcs is max + 1 */
+ if (num_tc)
+ num_tc++;
+
+ if (num_tc > IEEE_8021QAZ_MAX_TCS)
+ return -EINVAL;
+
+ /* update TC hardware mapping if necessary */
+ if (num_tc != netdev_get_num_tc(dev)) {
+ err = fm10k_setup_tc(dev, num_tc);
+ if (err)
+ return err;
+ }
+
+ /* update priority mapping */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ netdev_set_prio_tc_map(dev, i, ets->prio_tc[i]);
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_ieee_getpfc - get the PFC configuration for the device
+ * @dev: netdev interface for the device
+ * @pfc: PFC structure to push configuration to
+ **/
+static int fm10k_dcbnl_ieee_getpfc(struct net_device *dev, struct ieee_pfc *pfc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record flow control max count and state of TCs */
+ pfc->pfc_cap = IEEE_8021QAZ_MAX_TCS;
+ pfc->pfc_en = interface->pfc_en;
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_ieee_setpfc - set the PFC configuration for the device
+ * @dev: netdev interface for the device
+ * @pfc: PFC structure to pull configuration from
+ **/
+static int fm10k_dcbnl_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record PFC configuration to interface */
+ interface->pfc_en = pfc->pfc_en;
+
+ /* if we are running update the drop_en state for all queues */
+ if (netif_running(dev))
+ fm10k_update_rx_drop_en(interface);
+
+ return 0;
+}
+
+/**
+ * fm10k_dcbnl_getdcbx - get the DCBX configuration for the device
+ * @dev: netdev interface for the device
+ *
+ * Returns that we support only IEEE DCB for this interface
+ **/
+static u8 fm10k_dcbnl_getdcbx(struct net_device *dev)
+{
+ return DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
+}
+
+/**
+ * fm10k_dcbnl_setdcbx - set the DCBX configuration for the device
+ * @dev: netdev interface for the device
+ * @mode: new mode for this device
+ *
+ * Returns error on attempt to enable anything but IEEE DCB for this interface
+ **/
+static u8 fm10k_dcbnl_setdcbx(struct net_device *dev, u8 mode)
+{
+ return (mode != (DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE)) ? 1 : 0;
+}
+
+static const struct dcbnl_rtnl_ops fm10k_dcbnl_ops = {
+ .ieee_getets = fm10k_dcbnl_ieee_getets,
+ .ieee_setets = fm10k_dcbnl_ieee_setets,
+ .ieee_getpfc = fm10k_dcbnl_ieee_getpfc,
+ .ieee_setpfc = fm10k_dcbnl_ieee_setpfc,
+
+ .getdcbx = fm10k_dcbnl_getdcbx,
+ .setdcbx = fm10k_dcbnl_setdcbx,
+};
+
+#endif /* CONFIG_DCB */
+/**
+ * fm10k_dcbnl_set_ops - Configures dcbnl ops pointer for netdev
+ * @dev: netdev interface for the device
+ *
+ * Enables PF for DCB by assigning DCBNL ops pointer.
+ **/
+void fm10k_dcbnl_set_ops(struct net_device *dev)
+{
+#ifdef CONFIG_DCB
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (hw->mac.type == fm10k_mac_pf)
+ dev->dcbnl_ops = &fm10k_dcbnl_ops;
+#endif /* CONFIG_DCB */
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
new file mode 100644
index 0000000..4327f86
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
@@ -0,0 +1,259 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifdef CONFIG_DEBUG_FS
+
+#include "fm10k.h"
+
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+static struct dentry *dbg_root;
+
+/* Descriptor Seq Functions */
+
+static void *fm10k_dbg_desc_seq_start(struct seq_file *s, loff_t *pos)
+{
+ struct fm10k_ring *ring = s->private;
+
+ return (*pos < ring->count) ? pos : NULL;
+}
+
+static void *fm10k_dbg_desc_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+ struct fm10k_ring *ring = s->private;
+
+ return (++(*pos) < ring->count) ? pos : NULL;
+}
+
+static void fm10k_dbg_desc_seq_stop(struct seq_file *s, void *v)
+{
+ /* Do nothing. */
+}
+
+static void fm10k_dbg_desc_break(struct seq_file *s, int i)
+{
+ while (i--)
+ seq_puts(s, "-");
+
+ seq_puts(s, "\n");
+}
+
+static int fm10k_dbg_tx_desc_seq_show(struct seq_file *s, void *v)
+{
+ struct fm10k_ring *ring = s->private;
+ int i = *(loff_t *)v;
+ static const char tx_desc_hdr[] =
+ "DES BUFFER_ADDRESS LENGTH VLAN MSS HDRLEN FLAGS\n";
+
+ /* Generate header */
+ if (!i) {
+ seq_printf(s, tx_desc_hdr);
+ fm10k_dbg_desc_break(s, sizeof(tx_desc_hdr) - 1);
+ }
+
+ /* Validate descriptor allocation */
+ if (!ring->desc) {
+ seq_printf(s, "%03X Descriptor ring not allocated.\n", i);
+ } else {
+ struct fm10k_tx_desc *txd = FM10K_TX_DESC(ring, i);
+
+ seq_printf(s, "%03X %#018llx %#06x %#06x %#06x %#06x %#04x\n",
+ i, txd->buffer_addr, txd->buflen, txd->vlan,
+ txd->mss, txd->hdrlen, txd->flags);
+ }
+
+ return 0;
+}
+
+static int fm10k_dbg_rx_desc_seq_show(struct seq_file *s, void *v)
+{
+ struct fm10k_ring *ring = s->private;
+ int i = *(loff_t *)v;
+ static const char rx_desc_hdr[] =
+ "DES DATA RSS STATERR LENGTH VLAN DGLORT SGLORT TIMESTAMP\n";
+
+ /* Generate header */
+ if (!i) {
+ seq_printf(s, rx_desc_hdr);
+ fm10k_dbg_desc_break(s, sizeof(rx_desc_hdr) - 1);
+ }
+
+ /* Validate descriptor allocation */
+ if (!ring->desc) {
+ seq_printf(s, "%03X Descriptor ring not allocated.\n", i);
+ } else {
+ union fm10k_rx_desc *rxd = FM10K_RX_DESC(ring, i);
+
+ seq_printf(s,
+ "%03X %#010x %#010x %#010x %#06x %#06x %#06x %#06x %#018llx\n",
+ i, rxd->d.data, rxd->d.rss, rxd->d.staterr,
+ rxd->w.length, rxd->w.vlan, rxd->w.dglort,
+ rxd->w.sglort, rxd->q.timestamp);
+ }
+
+ return 0;
+}
+
+static const struct seq_operations fm10k_dbg_tx_desc_seq_ops = {
+ .start = fm10k_dbg_desc_seq_start,
+ .next = fm10k_dbg_desc_seq_next,
+ .stop = fm10k_dbg_desc_seq_stop,
+ .show = fm10k_dbg_tx_desc_seq_show,
+};
+
+static const struct seq_operations fm10k_dbg_rx_desc_seq_ops = {
+ .start = fm10k_dbg_desc_seq_start,
+ .next = fm10k_dbg_desc_seq_next,
+ .stop = fm10k_dbg_desc_seq_stop,
+ .show = fm10k_dbg_rx_desc_seq_show,
+};
+
+static int fm10k_dbg_desc_open(struct inode *inode, struct file *filep)
+{
+ struct fm10k_ring *ring = inode->i_private;
+ struct fm10k_q_vector *q_vector = ring->q_vector;
+ const struct seq_operations *desc_seq_ops;
+ int err;
+
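+ /* Tx rings are laid out before Rx rings in the q_vector ring array,
+ * so any ring below rx.ring must be a Tx ring
+ */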
+ if (ring < q_vector->rx.ring)
+ desc_seq_ops = &fm10k_dbg_tx_desc_seq_ops;
+ else
+ desc_seq_ops = &fm10k_dbg_rx_desc_seq_ops;
+
+ err = seq_open(filep, desc_seq_ops);
+ if (err)
+ return err;
+
+ ((struct seq_file *)filep->private_data)->private = ring;
+
+ return 0;
+}
+
+static const struct file_operations fm10k_dbg_desc_fops = {
+ .owner = THIS_MODULE,
+ .open = fm10k_dbg_desc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+/**
+ * fm10k_dbg_q_vector_init - setup debugfs for the q_vectors
+ * @q_vector: q_vector to allocate directories for
+ *
+ * A folder is created for each q_vector found. In each q_vector
+ * folder, a debugfs file is created for each tx and rx ring
+ * allocated to the q_vector.
+ **/
+void fm10k_dbg_q_vector_init(struct fm10k_q_vector *q_vector)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+ char name[16];
+ int i;
+
+ if (!interface->dbg_intfc)
+ return;
+
+ /* Generate a folder for each q_vector */
+ sprintf(name, "q_vector.%03d", q_vector->v_idx);
+
+ q_vector->dbg_q_vector = debugfs_create_dir(name, interface->dbg_intfc);
+ if (!q_vector->dbg_q_vector)
+ return;
+
+ /* Generate a file for each tx ring in the q_vector */
+ for (i = 0; i < q_vector->tx.count; i++) {
+ struct fm10k_ring *ring = &q_vector->tx.ring[i];
+
+ sprintf(name, "tx_ring.%03d", ring->queue_index);
+
+ debugfs_create_file(name, 0600,
+ q_vector->dbg_q_vector, ring,
+ &fm10k_dbg_desc_fops);
+ }
+
+ /* Generate a file for each rx ring in the q_vector */
+ for (i = 0; i < q_vector->rx.count; i++) {
+ struct fm10k_ring *ring = &q_vector->rx.ring[i];
+
+ sprintf(name, "rx_ring.%03d", ring->queue_index);
+
+ debugfs_create_file(name, 0600,
+ q_vector->dbg_q_vector, ring,
+ &fm10k_dbg_desc_fops);
+ }
+}
+
+/**
+ * fm10k_dbg_q_vector_exit - remove debugfs entries for the q_vector
+ * @q_vector: q_vector whose debugfs entries are removed
+ **/
+void fm10k_dbg_q_vector_exit(struct fm10k_q_vector *q_vector)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+
+ if (interface->dbg_intfc)
+ debugfs_remove_recursive(q_vector->dbg_q_vector);
+ q_vector->dbg_q_vector = NULL;
+}
+
+/**
+ * fm10k_dbg_intfc_init - setup the debugfs directory for the interface
+ * @interface: the interface that is starting up
+ **/
+void fm10k_dbg_intfc_init(struct fm10k_intfc *interface)
+{
+ const char *name = pci_name(interface->pdev);
+
+ if (dbg_root)
+ interface->dbg_intfc = debugfs_create_dir(name, dbg_root);
+}
+
+/**
+ * fm10k_dbg_intfc_exit - clean out the interface's debugfs entries
+ * @interface: the interface that is stopping
+ **/
+void fm10k_dbg_intfc_exit(struct fm10k_intfc *interface)
+{
+ if (dbg_root)
+ debugfs_remove_recursive(interface->dbg_intfc);
+ interface->dbg_intfc = NULL;
+}
+
+/**
+ * fm10k_dbg_init - start up debugfs for the driver
+ **/
+void fm10k_dbg_init(void)
+{
+ dbg_root = debugfs_create_dir(fm10k_driver_name, NULL);
+}
+
+/**
+ * fm10k_dbg_exit - clean out the driver's debugfs entries
+ **/
+void fm10k_dbg_exit(void)
+{
+ debugfs_remove_recursive(dbg_root);
+ dbg_root = NULL;
+}
+
+#endif /* CONFIG_DEBUG_FS */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
new file mode 100644
index 0000000..2d04464
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
@@ -0,0 +1,1071 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/vmalloc.h>
+
+#include "fm10k.h"
+
+struct fm10k_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ int sizeof_stat;
+ int stat_offset;
+};
+
+#define FM10K_NETDEV_STAT(_net_stat) { \
+ .stat_string = #_net_stat, \
+ .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
+ .stat_offset = offsetof(struct net_device_stats, _net_stat) \
+}
+
+static const struct fm10k_stats fm10k_gstrings_net_stats[] = {
+ FM10K_NETDEV_STAT(tx_packets),
+ FM10K_NETDEV_STAT(tx_bytes),
+ FM10K_NETDEV_STAT(tx_errors),
+ FM10K_NETDEV_STAT(rx_packets),
+ FM10K_NETDEV_STAT(rx_bytes),
+ FM10K_NETDEV_STAT(rx_errors),
+ FM10K_NETDEV_STAT(rx_dropped),
+
+ /* detailed Rx errors */
+ FM10K_NETDEV_STAT(rx_length_errors),
+ FM10K_NETDEV_STAT(rx_crc_errors),
+ FM10K_NETDEV_STAT(rx_fifo_errors),
+};
+
+#define FM10K_NETDEV_STATS_LEN ARRAY_SIZE(fm10k_gstrings_net_stats)
+
+#define FM10K_STAT(_name, _stat) { \
+ .stat_string = _name, \
+ .sizeof_stat = FIELD_SIZEOF(struct fm10k_intfc, _stat), \
+ .stat_offset = offsetof(struct fm10k_intfc, _stat) \
+}
+
+static const struct fm10k_stats fm10k_gstrings_stats[] = {
+ FM10K_STAT("tx_restart_queue", restart_queue),
+ FM10K_STAT("tx_busy", tx_busy),
+ FM10K_STAT("tx_csum_errors", tx_csum_errors),
+ FM10K_STAT("rx_alloc_failed", alloc_failed),
+ FM10K_STAT("rx_csum_errors", rx_csum_errors),
+ FM10K_STAT("rx_errors", rx_errors),
+
+ FM10K_STAT("tx_packets_nic", tx_packets_nic),
+ FM10K_STAT("tx_bytes_nic", tx_bytes_nic),
+ FM10K_STAT("rx_packets_nic", rx_packets_nic),
+ FM10K_STAT("rx_bytes_nic", rx_bytes_nic),
+ FM10K_STAT("rx_drops_nic", rx_drops_nic),
+ FM10K_STAT("rx_overrun_pf", rx_overrun_pf),
+ FM10K_STAT("rx_overrun_vf", rx_overrun_vf),
+
+ FM10K_STAT("timeout", stats.timeout.count),
+ FM10K_STAT("ur", stats.ur.count),
+ FM10K_STAT("ca", stats.ca.count),
+ FM10K_STAT("um", stats.um.count),
+ FM10K_STAT("xec", stats.xec.count),
+ FM10K_STAT("vlan_drop", stats.vlan_drop.count),
+ FM10K_STAT("loopback_drop", stats.loopback_drop.count),
+ FM10K_STAT("nodesc_drop", stats.nodesc_drop.count),
+
+ FM10K_STAT("swapi_status", hw.swapi.status),
+ FM10K_STAT("mac_rules_used", hw.swapi.mac.used),
+ FM10K_STAT("mac_rules_avail", hw.swapi.mac.avail),
+
+ FM10K_STAT("mbx_tx_busy", hw.mbx.tx_busy),
+ FM10K_STAT("mbx_tx_dropped", hw.mbx.tx_dropped),
+ FM10K_STAT("mbx_tx_messages", hw.mbx.tx_messages),
+ FM10K_STAT("mbx_tx_dwords", hw.mbx.tx_dwords),
+ FM10K_STAT("mbx_rx_messages", hw.mbx.rx_messages),
+ FM10K_STAT("mbx_rx_dwords", hw.mbx.rx_dwords),
+ FM10K_STAT("mbx_rx_parse_err", hw.mbx.rx_parse_err),
+
+ FM10K_STAT("tx_hwtstamp_timeouts", tx_hwtstamp_timeouts),
+};
+
+#define FM10K_GLOBAL_STATS_LEN ARRAY_SIZE(fm10k_gstrings_stats)
+
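+/* a packets and a bytes counter for both the Tx and Rx ring of every queue index */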
+#define FM10K_QUEUE_STATS_LEN \
+ (MAX_QUEUES * 2 * (sizeof(struct fm10k_queue_stats) / sizeof(u64)))
+
+#define FM10K_STATS_LEN (FM10K_GLOBAL_STATS_LEN + \
+ FM10K_NETDEV_STATS_LEN + \
+ FM10K_QUEUE_STATS_LEN)
+
+static const char fm10k_gstrings_test[][ETH_GSTRING_LEN] = {
+ "Mailbox test (on/offline)"
+};
+
+#define FM10K_TEST_LEN (sizeof(fm10k_gstrings_test) / ETH_GSTRING_LEN)
+
+enum fm10k_self_test_types {
+ FM10K_TEST_MBX,
+ FM10K_TEST_MAX = FM10K_TEST_LEN
+};
+
+static void fm10k_get_strings(struct net_device *dev, u32 stringset,
+ u8 *data)
+{
+ char *p = (char *)data;
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *fm10k_gstrings_test,
+ FM10K_TEST_LEN * ETH_GSTRING_LEN);
+ break;
+ case ETH_SS_STATS:
+ for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
+ memcpy(p, fm10k_gstrings_net_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
+ memcpy(p, fm10k_gstrings_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+
+ for (i = 0; i < MAX_QUEUES; i++) {
+ sprintf(p, "tx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ }
+ break;
+ }
+}
+
+static int fm10k_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_TEST:
+ return FM10K_TEST_LEN;
+ case ETH_SS_STATS:
+ return FM10K_STATS_LEN;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void fm10k_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ const int stat_count = sizeof(struct fm10k_queue_stats) / sizeof(u64);
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct net_device_stats *net_stats = &netdev->stats;
+ char *p;
+ int i, j;
+
+ fm10k_update_stats(interface);
+
+ for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
+ p = (char *)net_stats + fm10k_gstrings_net_stats[i].stat_offset;
+ *(data++) = (fm10k_gstrings_net_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
+ p = (char *)interface + fm10k_gstrings_stats[i].stat_offset;
+ *(data++) = (fm10k_gstrings_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
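+ /* queues that are not allocated still get entries so the data stays
+ * aligned with the string set; report zeros for them
+ */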
+ for (i = 0; i < MAX_QUEUES; i++) {
+ struct fm10k_ring *ring;
+ u64 *queue_stat;
+
+ ring = interface->tx_ring[i];
+ if (ring)
+ queue_stat = (u64 *)&ring->stats;
+ for (j = 0; j < stat_count; j++)
+ *(data++) = ring ? queue_stat[j] : 0;
+
+ ring = interface->rx_ring[i];
+ if (ring)
+ queue_stat = (u64 *)&ring->stats;
+ for (j = 0; j < stat_count; j++)
+ *(data++) = ring ? queue_stat[j] : 0;
+ }
+}
+
+/* If the function below adds more registers, this define needs to be updated */
+#define FM10K_REGS_LEN_Q 29
+
+static void fm10k_get_reg_q(struct fm10k_hw *hw, u32 *buff, int i)
+{
+ int idx = 0;
+
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDBAL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDBAH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDLEN(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TPH_RXCTRL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RDT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXQCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXDCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RXINT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_SRRCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPRC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPRDC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBRC_L(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBRC_H(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDBAL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDBAH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDLEN(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TPH_TXCTRL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDH(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TDT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXDCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXQCTL(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TXINT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QPTC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBTC_L(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_QBTC_H(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TQDLOC(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_TX_SGLORT(i));
+ buff[idx++] = fm10k_read_reg(hw, FM10K_PFVTCTL(i));
+
+ BUG_ON(idx != FM10K_REGS_LEN_Q);
+}
+
+/* If the function above adds more registers, this define needs to be updated */
+#define FM10K_REGS_LEN_VSI 43
+
+static void fm10k_get_reg_vsi(struct fm10k_hw *hw, u32 *buff, int i)
+{
+ int idx = 0, j;
+
+ buff[idx++] = fm10k_read_reg(hw, FM10K_MRQC(i));
+ for (j = 0; j < 10; j++)
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RSSRK(i, j));
+ for (j = 0; j < 32; j++)
+ buff[idx++] = fm10k_read_reg(hw, FM10K_RETA(i, j));
+
+ BUG_ON(idx != FM10K_REGS_LEN_VSI);
+}
+
+static void fm10k_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *p)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u32 *buff = p;
+ u16 i;
+
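+ /* report dump layout version 1 along with the revision and device IDs */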
+ regs->version = (1 << 24) | (hw->revision_id << 16) | hw->device_id;
+
+ switch (hw->mac.type) {
+ case fm10k_mac_pf:
+ /* General PF Registers */
+ *(buff++) = fm10k_read_reg(hw, FM10K_CTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_CTRL_EXT);
+ *(buff++) = fm10k_read_reg(hw, FM10K_GCR);
+ *(buff++) = fm10k_read_reg(hw, FM10K_GCR_EXT);
+
+ for (i = 0; i < 8; i++) {
+ *(buff++) = fm10k_read_reg(hw, FM10K_DGLORTMAP(i));
+ *(buff++) = fm10k_read_reg(hw, FM10K_DGLORTDEC(i));
+ }
+
+ for (i = 0; i < 65; i++) {
+ fm10k_get_reg_vsi(hw, buff, i);
+ buff += FM10K_REGS_LEN_VSI;
+ }
+
+ *(buff++) = fm10k_read_reg(hw, FM10K_DMA_CTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
+
+ for (i = 0; i < FM10K_MAX_QUEUES_PF; i++) {
+ fm10k_get_reg_q(hw, buff, i);
+ buff += FM10K_REGS_LEN_Q;
+ }
+
+ *(buff++) = fm10k_read_reg(hw, FM10K_TPH_CTRL);
+
+ for (i = 0; i < 8; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_INT_MAP(i));
+
+ /* Interrupt Throttling Registers */
+ for (i = 0; i < 130; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_ITR(i));
+
+ break;
+ case fm10k_mac_vf:
+ /* General VF registers */
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFCTRL);
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFINT_MAP);
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFSYSTIME);
+
+ /* Interrupt Throttling Registers */
+ for (i = 0; i < 8; i++)
+ *(buff++) = fm10k_read_reg(hw, FM10K_VFITR(i));
+
+ fm10k_get_reg_vsi(hw, buff, 0);
+ buff += FM10K_REGS_LEN_VSI;
+
+ for (i = 0; i < FM10K_MAX_QUEUES_POOL; i++) {
+ if (i < hw->mac.max_queues)
+ fm10k_get_reg_q(hw, buff, i);
+ else
+ memset(buff, 0, sizeof(u32) * FM10K_REGS_LEN_Q);
+ buff += FM10K_REGS_LEN_Q;
+ }
+
+ break;
+ default:
+ return;
+ }
+}
+
+/* If the function above adds more registers, these defines need to be updated */
+#define FM10K_REGS_LEN_PF \
+(162 + (65 * FM10K_REGS_LEN_VSI) + (FM10K_MAX_QUEUES_PF * FM10K_REGS_LEN_Q))
+#define FM10K_REGS_LEN_VF \
+(11 + FM10K_REGS_LEN_VSI + (FM10K_MAX_QUEUES_POOL * FM10K_REGS_LEN_Q))
+
+static int fm10k_get_regs_len(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ switch (hw->mac.type) {
+ case fm10k_mac_pf:
+ return FM10K_REGS_LEN_PF * sizeof(u32);
+ case fm10k_mac_vf:
+ return FM10K_REGS_LEN_VF * sizeof(u32);
+ default:
+ return 0;
+ }
+}
+
+static void fm10k_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ strncpy(info->driver, fm10k_driver_name,
+ sizeof(info->driver) - 1);
+ strncpy(info->version, fm10k_driver_version,
+ sizeof(info->version) - 1);
+ strncpy(info->bus_info, pci_name(interface->pdev),
+ sizeof(info->bus_info) - 1);
+
+ info->n_stats = FM10K_STATS_LEN;
+
+ info->regdump_len = fm10k_get_regs_len(dev);
+}
+
+static void fm10k_get_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* record fixed values for autoneg and tx pause */
+ pause->autoneg = 0;
+ pause->tx_pause = 1;
+
+ pause->rx_pause = interface->rx_pause ? 1 : 0;
+}
+
+static int fm10k_set_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (pause->autoneg || !pause->tx_pause)
+ return -EINVAL;
+
+ /* we can only support pause on the PF to avoid head-of-line blocking */
+ if (hw->mac.type == fm10k_mac_pf)
+ interface->rx_pause = pause->rx_pause ? ~0 : 0;
+ else if (pause->rx_pause)
+ return -EINVAL;
+
+ if (netif_running(dev))
+ fm10k_update_rx_drop_en(interface);
+
+ return 0;
+}
+
+static u32 fm10k_get_msglevel(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ return interface->msg_enable;
+}
+
+static void fm10k_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ interface->msg_enable = data;
+}
+
+static void fm10k_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ ring->rx_max_pending = FM10K_MAX_RXD;
+ ring->tx_max_pending = FM10K_MAX_TXD;
+ ring->rx_mini_max_pending = 0;
+ ring->rx_jumbo_max_pending = 0;
+ ring->rx_pending = interface->rx_ring_count;
+ ring->tx_pending = interface->tx_ring_count;
+ ring->rx_mini_pending = 0;
+ ring->rx_jumbo_pending = 0;
+}
+
+static int fm10k_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_ring *temp_ring;
+ int i, err = 0;
+ u32 new_rx_count, new_tx_count;
+
+ if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+ return -EINVAL;
+
+ new_tx_count = clamp_t(u32, ring->tx_pending,
+ FM10K_MIN_TXD, FM10K_MAX_TXD);
+ new_tx_count = ALIGN(new_tx_count, FM10K_REQ_TX_DESCRIPTOR_MULTIPLE);
+
+ new_rx_count = clamp_t(u32, ring->rx_pending,
+ FM10K_MIN_RXD, FM10K_MAX_RXD);
+ new_rx_count = ALIGN(new_rx_count, FM10K_REQ_RX_DESCRIPTOR_MULTIPLE);
+
+ if ((new_tx_count == interface->tx_ring_count) &&
+ (new_rx_count == interface->rx_ring_count)) {
+ /* nothing to do */
+ return 0;
+ }
+
+ while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
+ usleep_range(1000, 2000);
+
+ if (!netif_running(interface->netdev)) {
+ for (i = 0; i < interface->num_tx_queues; i++)
+ interface->tx_ring[i]->count = new_tx_count;
+ for (i = 0; i < interface->num_rx_queues; i++)
+ interface->rx_ring[i]->count = new_rx_count;
+ interface->tx_ring_count = new_tx_count;
+ interface->rx_ring_count = new_rx_count;
+ goto clear_reset;
+ }
+
+ /* allocate temporary buffer to store rings in */
+ i = max_t(int, interface->num_tx_queues, interface->num_rx_queues);
+ temp_ring = vmalloc(i * sizeof(struct fm10k_ring));
+
+ if (!temp_ring) {
+ err = -ENOMEM;
+ goto clear_reset;
+ }
+
+ fm10k_down(interface);
+
+ /* Setup new Tx resources and free the old Tx resources in that order.
+ * We can then assign the new resources to the rings via a memcpy.
+ * The advantage to this approach is that we are guaranteed to still
+ * have resources even in the case of an allocation failure.
+ */
+ if (new_tx_count != interface->tx_ring_count) {
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ memcpy(&temp_ring[i], interface->tx_ring[i],
+ sizeof(struct fm10k_ring));
+
+ temp_ring[i].count = new_tx_count;
+ err = fm10k_setup_tx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ fm10k_free_tx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+
+ memcpy(interface->tx_ring[i], &temp_ring[i],
+ sizeof(struct fm10k_ring));
+ }
+
+ interface->tx_ring_count = new_tx_count;
+ }
+
+ /* Repeat the process for the Rx rings if needed */
+ if (new_rx_count != interface->rx_ring_count) {
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ memcpy(&temp_ring[i], interface->rx_ring[i],
+ sizeof(struct fm10k_ring));
+
+ temp_ring[i].count = new_rx_count;
+ err = fm10k_setup_rx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ fm10k_free_rx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+
+ memcpy(interface->rx_ring[i], &temp_ring[i],
+ sizeof(struct fm10k_ring));
+ }
+
+ interface->rx_ring_count = new_rx_count;
+ }
+
+err_setup:
+ fm10k_up(interface);
+ vfree(temp_ring);
+clear_reset:
+ clear_bit(__FM10K_RESETTING, &interface->state);
+ return err;
+}
+
+static int fm10k_get_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *ec)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ ec->use_adaptive_tx_coalesce =
+ !!(interface->tx_itr & FM10K_ITR_ADAPTIVE);
+ ec->tx_coalesce_usecs = interface->tx_itr & ~FM10K_ITR_ADAPTIVE;
+
+ ec->use_adaptive_rx_coalesce =
+ !!(interface->rx_itr & FM10K_ITR_ADAPTIVE);
+ ec->rx_coalesce_usecs = interface->rx_itr & ~FM10K_ITR_ADAPTIVE;
+
+ return 0;
+}
+
+static int fm10k_set_coalesce(struct net_device *dev,
+ struct ethtool_coalesce *ec)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_q_vector *qv;
+ u16 tx_itr, rx_itr;
+ int i;
+
+ /* verify limits */
+ if ((ec->rx_coalesce_usecs > FM10K_ITR_MAX) ||
+ (ec->tx_coalesce_usecs > FM10K_ITR_MAX))
+ return -EINVAL;
+
+ /* record settings */
+ tx_itr = ec->tx_coalesce_usecs;
+ rx_itr = ec->rx_coalesce_usecs;
+
+ /* set initial values for adaptive ITR */
+ if (ec->use_adaptive_tx_coalesce)
+ tx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_10K;
+
+ if (ec->use_adaptive_rx_coalesce)
+ rx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_20K;
+
+ /* update interface */
+ interface->tx_itr = tx_itr;
+ interface->rx_itr = rx_itr;
+
+ /* update q_vectors */
+ for (i = 0; i < interface->num_q_vectors; i++) {
+ qv = interface->q_vector[i];
+ qv->tx.itr = tx_itr;
+ qv->rx.itr = rx_itr;
+ }
+
+ return 0;
+}
+
+static int fm10k_get_rss_hash_opts(struct fm10k_intfc *interface,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = 0;
+
+ /* Report default options for RSS on fm10k */
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case UDP_V4_FLOW:
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case SCTP_V4_FLOW:
+ case SCTP_V6_FLOW:
+ case AH_ESP_V4_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V4_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V4_FLOW:
+ case ESP_V6_FLOW:
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ case UDP_V6_FLOW:
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int fm10k_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ u32 *rule_locs)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = interface->num_rx_queues;
+ ret = 0;
+ break;
+ case ETHTOOL_GRXFH:
+ ret = fm10k_get_rss_hash_opts(interface, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+#define UDP_RSS_FLAGS (FM10K_FLAG_RSS_FIELD_IPV4_UDP | \
+ FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+static int fm10k_set_rss_hash_opt(struct fm10k_intfc *interface,
+ struct ethtool_rxnfc *nfc)
+{
+ u32 flags = interface->flags;
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3))
+ return -EINVAL;
+
+ switch (nfc->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ !(nfc->data & RXH_L4_B_0_1) ||
+ !(nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ case UDP_V4_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags &= ~FM10K_FLAG_RSS_FIELD_IPV4_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags |= FM10K_FLAG_RSS_FIELD_IPV4_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case UDP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags &= ~FM10K_FLAG_RSS_FIELD_IPV6_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags |= FM10K_FLAG_RSS_FIELD_IPV6_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case AH_ESP_V4_FLOW:
+ case AH_V4_FLOW:
+ case ESP_V4_FLOW:
+ case SCTP_V4_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V6_FLOW:
+ case SCTP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ (nfc->data & RXH_L4_B_0_1) ||
+ (nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* if we changed something we need to update flags */
+ if (flags != interface->flags) {
+ struct fm10k_hw *hw = &interface->hw;
+ u32 mrqc;
+
+ if ((flags & UDP_RSS_FLAGS) &&
+ !(interface->flags & UDP_RSS_FLAGS))
+ netif_warn(interface, drv, interface->netdev,
+ "enabling UDP RSS: fragmented packets may arrive out of order to the stack above\n");
+
+ interface->flags = flags;
+
+ /* Perform hash on these packet types */
+ mrqc = FM10K_MRQC_IPV4 |
+ FM10K_MRQC_TCP_IPV4 |
+ FM10K_MRQC_IPV6 |
+ FM10K_MRQC_TCP_IPV6;
+
+ if (flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV4;
+ if (flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV6;
+
+ fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
+ }
+
+ return 0;
+}
+
+static int fm10k_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ ret = fm10k_set_rss_hash_opt(interface, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int fm10k_mbx_test(struct fm10k_intfc *interface, u64 *data)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 attr_flag, test_msg[6];
+ unsigned long timeout;
+ int err;
+
+ /* For now this is a VF only feature */
+ if (hw->mac.type != fm10k_mac_vf)
+ return 0;
+
+ /* loop through both nested and unnested attribute types */
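+ /* attr_flag doubles each iteration, so each attribute type bit is tested once */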
+ for (attr_flag = (1 << FM10K_TEST_MSG_UNSET);
+ attr_flag < (1 << (2 * FM10K_TEST_MSG_NESTED));
+ attr_flag += attr_flag) {
+ /* generate message to be tested */
+ fm10k_tlv_msg_test_create(test_msg, attr_flag);
+
+ fm10k_mbx_lock(interface);
+ mbx->test_result = FM10K_NOT_IMPLEMENTED;
+ err = mbx->ops.enqueue_tx(hw, mbx, test_msg);
+ fm10k_mbx_unlock(interface);
+
+ /* wait up to 1 second for response */
+ timeout = jiffies + HZ;
+ do {
+ if (err < 0)
+ goto err_out;
+
+ usleep_range(500, 1000);
+
+ fm10k_mbx_lock(interface);
+ mbx->ops.process(hw, mbx);
+ fm10k_mbx_unlock(interface);
+
+ err = mbx->test_result;
+ if (!err)
+ break;
+ } while (time_is_after_jiffies(timeout));
+
+ /* reporting errors */
+ if (err)
+ goto err_out;
+ }
+
+err_out:
+ *data = err < 0 ? (attr_flag) : (err > 0);
+ return err;
+}
+
+static void fm10k_self_test(struct net_device *dev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ memset(data, 0, sizeof(*data) * FM10K_TEST_LEN);
+
+ if (FM10K_REMOVED(hw)) {
+ netif_err(interface, drv, dev,
+ "Interface removed - test blocked\n");
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ return;
+ }
+
+ if (fm10k_mbx_test(interface, &data[FM10K_TEST_MBX]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+}
+
+static u32 fm10k_get_reta_size(struct net_device *netdev)
+{
+ return FM10K_RETA_SIZE * FM10K_RETA_ENTRIES_PER_REG;
+}
+
+static int fm10k_get_reta(struct net_device *netdev, u32 *indir)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int i;
+
+ if (!indir)
+ return 0;
+
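+ /* each 32-bit RETA register packs four 8-bit queue indices */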
+ for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
+ u32 reta = interface->reta[i];
+
+ indir[0] = (reta << 24) >> 24;
+ indir[1] = (reta << 16) >> 24;
+ indir[2] = (reta << 8) >> 24;
+ indir[3] = (reta) >> 24;
+ }
+
+ return 0;
+}
+
+static int fm10k_set_reta(struct net_device *netdev, const u32 *indir)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+ u16 rss_i;
+
+ if (!indir)
+ return 0;
+
+ /* Verify user input. */
+ rss_i = interface->ring_feature[RING_F_RSS].indices;
+ for (i = fm10k_get_reta_size(netdev); i--;) {
+ if (indir[i] < rss_i)
+ continue;
+ return -EINVAL;
+ }
+
+ /* record entries to reta table */
+ for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
+ u32 reta = indir[0] |
+ (indir[1] << 8) |
+ (indir[2] << 16) |
+ (indir[3] << 24);
+
+ if (interface->reta[i] == reta)
+ continue;
+
+ interface->reta[i] = reta;
+ fm10k_write_reg(hw, FM10K_RETA(0, i), reta);
+ }
+
+ return 0;
+}
+
+static u32 fm10k_get_rssrk_size(struct net_device *netdev)
+{
+ return FM10K_RSSRK_SIZE * FM10K_RSSRK_ENTRIES_PER_REG;
+}
+
+static int fm10k_get_rssh(struct net_device *netdev, u32 *indir, u8 *key)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int i, err;
+
+ err = fm10k_get_reta(netdev, indir);
+ if (err || !key)
+ return err;
+
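+ /* the RSS key is exported as little-endian 32-bit words */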
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++, key += 4)
+ *(__le32 *)key = cpu_to_le32(interface->rssrk[i]);
+
+ return 0;
+}
+
+static int fm10k_set_rssh(struct net_device *netdev, const u32 *indir,
+ const u8 *key)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ int i, err;
+
+ err = fm10k_set_reta(netdev, indir);
+ if (err || !key)
+ return err;
+
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++, key += 4) {
+ u32 rssrk = le32_to_cpu(*(__le32 *)key);
+
+ if (interface->rssrk[i] == rssrk)
+ continue;
+
+ interface->rssrk[i] = rssrk;
+ fm10k_write_reg(hw, FM10K_RSSRK(0, i), rssrk);
+ }
+
+ return 0;
+}
+
+static unsigned int fm10k_max_channels(struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int max_combined = interface->hw.mac.max_queues;
+ u8 tcs = netdev_get_num_tc(dev);
+
+ /* For QoS report channels per traffic class */
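+ /* (rounded down to the largest power-of-two queue count per TC) */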
+ if (tcs > 1)
+ max_combined = 1 << (fls(max_combined / tcs) - 1);
+
+ return max_combined;
+}
+
+static void fm10k_get_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* report maximum channels */
+ ch->max_combined = fm10k_max_channels(dev);
+
+ /* report info for other vector */
+ ch->max_other = NON_Q_VECTORS(hw);
+ ch->other_count = ch->max_other;
+
+ /* record RSS queues */
+ ch->combined_count = interface->ring_feature[RING_F_RSS].indices;
+}
+
+static int fm10k_set_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int count = ch->combined_count;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* verify they are not requesting separate vectors */
+ if (!count || ch->rx_count || ch->tx_count)
+ return -EINVAL;
+
+ /* verify other_count has not changed */
+ if (ch->other_count != NON_Q_VECTORS(hw))
+ return -EINVAL;
+
+ /* verify the number of channels does not exceed hardware limits */
+ if (count > fm10k_max_channels(dev))
+ return -EINVAL;
+
+ interface->ring_feature[RING_F_RSS].limit = count;
+
+ /* use setup TC to update any traffic class queue mapping */
+ return fm10k_setup_tc(dev, netdev_get_num_tc(dev));
+}
+
+static int fm10k_get_ts_info(struct net_device *dev,
+ struct ethtool_ts_info *info)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ info->so_timestamping =
+ SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ if (interface->ptp_clock)
+ info->phc_index = ptp_clock_index(interface->ptp_clock);
+ else
+ info->phc_index = -1;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON);
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_ALL);
+
+ return 0;
+}
+
+static const struct ethtool_ops fm10k_ethtool_ops = {
+ .get_strings = fm10k_get_strings,
+ .get_sset_count = fm10k_get_sset_count,
+ .get_ethtool_stats = fm10k_get_ethtool_stats,
+ .get_drvinfo = fm10k_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_pauseparam = fm10k_get_pauseparam,
+ .set_pauseparam = fm10k_set_pauseparam,
+ .get_msglevel = fm10k_get_msglevel,
+ .set_msglevel = fm10k_set_msglevel,
+ .get_ringparam = fm10k_get_ringparam,
+ .set_ringparam = fm10k_set_ringparam,
+ .get_coalesce = fm10k_get_coalesce,
+ .set_coalesce = fm10k_set_coalesce,
+ .get_rxnfc = fm10k_get_rxnfc,
+ .set_rxnfc = fm10k_set_rxnfc,
+ .get_regs = fm10k_get_regs,
+ .get_regs_len = fm10k_get_regs_len,
+ .self_test = fm10k_self_test,
+ .get_rxfh_indir_size = fm10k_get_reta_size,
+ .get_rxfh_key_size = fm10k_get_rssrk_size,
+ .get_rxfh = fm10k_get_rssh,
+ .set_rxfh = fm10k_set_rssh,
+ .get_channels = fm10k_get_channels,
+ .set_channels = fm10k_set_channels,
+ .get_ts_info = fm10k_get_ts_info,
+};
+
+void fm10k_set_ethtool_ops(struct net_device *dev)
+{
+ dev->ethtool_ops = &fm10k_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_iov.c b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
new file mode 100644
index 0000000..0601908
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
@@ -0,0 +1,536 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+#include "fm10k_vf.h"
+#include "fm10k_pf.h"
+
+static s32 fm10k_iov_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ struct fm10k_intfc *interface = hw->back;
+ struct pci_dev *pdev = interface->pdev;
+
+ dev_err(&pdev->dev, "Unknown message ID %u on VF %d\n",
+ **results & FM10K_TLV_ID_MASK, vf_info->vf_idx);
+
+ return fm10k_tlv_msg_error(hw, results, mbx);
+}
+
+static const struct fm10k_msg_data iov_mbx_data[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_iov_msg_error),
+};
+
+s32 fm10k_iov_event(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_iov_data *iov_data;
+ s64 mbicr, vflre;
+ int i;
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return 0;
+
+ rcu_read_lock();
+
+ iov_data = interface->iov_data;
+
+ /* check again now that we are in the RCU block */
+ if (!iov_data)
+ goto read_unlock;
+
+ if (!(fm10k_read_reg(hw, FM10K_EICR) & FM10K_EICR_VFLR))
+ goto process_mbx;
+
+ /* read VFLRE to determine if any VFs have been reset */
+ do {
+ vflre = fm10k_read_reg(hw, FM10K_PFVFLRE(0));
+ vflre <<= 32;
+ vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(1));
+ vflre = (vflre << 32) | (vflre >> 32);
+ vflre |= fm10k_read_reg(hw, FM10K_PFVFLRE(0));
+
+ i = iov_data->num_vfs;
+
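+ /* walk VFs from the highest index down using the sign bit;
+ * a negative value means VFLR is pending for that VF
+ */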
+ for (vflre <<= 64 - i; vflre && i--; vflre += vflre) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ if (vflre >= 0)
+ continue;
+
+ hw->iov.ops.reset_resources(hw, vf_info);
+ vf_info->mbx.ops.connect(hw, &vf_info->mbx);
+ }
+ } while (i != iov_data->num_vfs);
+
+process_mbx:
+ /* read MBICR to determine which VFs require attention */
+ mbicr = fm10k_read_reg(hw, FM10K_MBICR(1));
+ mbicr <<= 32;
+ mbicr |= fm10k_read_reg(hw, FM10K_MBICR(0));
+
+ i = iov_data->next_vf_mbx ? : iov_data->num_vfs;
+
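+ /* same sign-bit walk as for VFLRE above, this time over the
+ * mailbox interrupt cause bits
+ */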
+ for (mbicr <<= 64 - i; i--; mbicr += mbicr) {
+ struct fm10k_mbx_info *mbx = &iov_data->vf_info[i].mbx;
+
+ if (mbicr >= 0)
+ continue;
+
+ if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU))
+ break;
+
+ mbx->ops.process(hw, mbx);
+ }
+
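+ /* if we stopped early remember where to resume; if we finished a
+ * partial pass, rescan from the top for the VFs skipped on entry
+ */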
+ if (i >= 0) {
+ iov_data->next_vf_mbx = i + 1;
+ } else if (iov_data->next_vf_mbx) {
+ iov_data->next_vf_mbx = 0;
+ goto process_mbx;
+ }
+read_unlock:
+ rcu_read_unlock();
+
+ return 0;
+}
+
+s32 fm10k_iov_mbx(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_iov_data *iov_data;
+ int i;
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return 0;
+
+ rcu_read_lock();
+
+ iov_data = interface->iov_data;
+
+ /* check again now that we are in the RCU block */
+ if (!iov_data)
+ goto read_unlock;
+
+ /* lock the mailbox for transmit and receive */
+ fm10k_mbx_lock(interface);
+
+process_mbx:
+ for (i = iov_data->next_vf_mbx ? : iov_data->num_vfs; i--;) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+ struct fm10k_mbx_info *mbx = &vf_info->mbx;
+ u16 glort = vf_info->glort;
+
+ /* verify the port mapping is valid; if not, reset the port */
+ if (vf_info->vf_flags && !fm10k_glort_valid_pf(hw, glort))
+ hw->iov.ops.reset_lport(hw, vf_info);
+
+ /* reset VFs whose mailbox has timed out */
+ if (!mbx->timeout) {
+ hw->iov.ops.reset_resources(hw, vf_info);
+ mbx->ops.connect(hw, mbx);
+ }
+
+ /* if there is no work pending just continue on to the next VF */
+ if (mbx->ops.tx_complete(mbx) && !mbx->ops.rx_ready(mbx))
+ continue;
+
+ /* guarantee we have free space in the SM mailbox */
+ if (!hw->mbx.ops.tx_ready(&hw->mbx, FM10K_VFMBX_MSG_MTU))
+ break;
+
+ /* cleanup mailbox and process received messages */
+ mbx->ops.process(hw, mbx);
+ }
+
+ if (i >= 0) {
+ iov_data->next_vf_mbx = i + 1;
+ } else if (iov_data->next_vf_mbx) {
+ iov_data->next_vf_mbx = 0;
+ goto process_mbx;
+ }
+
+ /* free the lock */
+ fm10k_mbx_unlock(interface);
+
+read_unlock:
+ rcu_read_unlock();
+
+ return 0;
+}
+
+void fm10k_iov_suspend(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ int num_vfs, i;
+
+ /* pull out num_vfs from iov_data */
+ num_vfs = iov_data ? iov_data->num_vfs : 0;
+
+ /* shut down queue mapping for VFs */
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(fm10k_dglort_vf_rss),
+ FM10K_DGLORTMAP_NONE);
+
+ /* Stop any active VFs and reset their resources */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ hw->iov.ops.reset_resources(hw, vf_info);
+ hw->iov.ops.reset_lport(hw, vf_info);
+ }
+}
+
+int fm10k_iov_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int num_vfs, i;
+
+ /* pull out num_vfs from iov_data */
+ num_vfs = iov_data ? iov_data->num_vfs : 0;
+
+ /* return error if iov_data is not already populated */
+ if (!iov_data)
+ return -ENOMEM;
+
+ /* allocate hardware resources for the VFs */
+ hw->iov.ops.assign_resources(hw, num_vfs, num_vfs);
+
+ /* configure DGLORT mapping for RSS */
+ dglort.glort = hw->mac.dglort_map & FM10K_DGLORTMAP_NONE;
+ dglort.idx = fm10k_dglort_vf_rss;
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(fm10k_queues_per_pool(hw) - 1);
+ dglort.queue_b = fm10k_vf_queue_index(hw, 0);
+ dglort.vsi_l = fls(hw->iov.total_vfs - 1);
+ dglort.vsi_b = 1;
+
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* assign resources to the device */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ /* allocate all but the last GLORT to the VFs */
+ if (i == ((~hw->mac.dglort_map) >> FM10K_DGLORTMAP_MASK_SHIFT))
+ break;
+
+ /* assign GLORT to VF, and restrict it to multicast */
+ hw->iov.ops.set_lport(hw, vf_info, i,
+ FM10K_VF_FLAG_MULTI_CAPABLE);
+
+ /* assign our default vid to the VF following reset */
+ vf_info->sw_vid = hw->mac.default_vid;
+
+ /* mailbox is disconnected so we don't send a message */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ /* now we are ready so we can connect */
+ vf_info->mbx.ops.connect(hw, &vf_info->mbx);
+ }
+
+ return 0;
+}
+
+s32 fm10k_iov_update_pvid(struct fm10k_intfc *interface, u16 glort, u16 pvid)
+{
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+ u16 vf_idx = (glort - hw->mac.dglort_map) & FM10K_DGLORTMAP_NONE;
+
+ /* no IOV support, not our message to process */
+ if (!iov_data)
+ return FM10K_ERR_PARAM;
+
+ /* glort outside our range, not our message to process */
+ if (vf_idx >= iov_data->num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine if an update has occurred and if so notify the VF */
+ vf_info = &iov_data->vf_info[vf_idx];
+ if (vf_info->sw_vid != pvid) {
+ vf_info->sw_vid = pvid;
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+ }
+
+ return 0;
+}
+
+static void fm10k_iov_free_data(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+
+ if (!interface->iov_data)
+ return;
+
+ /* reclaim hardware resources */
+ fm10k_iov_suspend(pdev);
+
+ /* drop iov_data from interface */
+ kfree_rcu(interface->iov_data, rcu);
+ interface->iov_data = NULL;
+}
+
+static s32 fm10k_iov_alloc_data(struct pci_dev *pdev, int num_vfs)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ size_t size;
+ int i, err;
+
+ /* return error if iov_data is already populated */
+ if (iov_data)
+ return -EBUSY;
+
+ /* The PF should always be able to assign resources */
+ if (!hw->iov.ops.assign_resources)
+ return -ENODEV;
+
+ /* nothing to do if no VFs are requested */
+ if (!num_vfs)
+ return 0;
+
+ /* allocate memory for VF storage */
+ size = offsetof(struct fm10k_iov_data, vf_info[num_vfs]);
+ iov_data = kzalloc(size, GFP_KERNEL);
+ if (!iov_data)
+ return -ENOMEM;
+
+ /* record number of VFs */
+ iov_data->num_vfs = num_vfs;
+
+ /* loop through vf_info structures initializing each entry */
+ for (i = 0; i < num_vfs; i++) {
+ struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
+
+ /* Record VF VSI value */
+ vf_info->vsi = i + 1;
+ vf_info->vf_idx = i;
+
+ /* initialize mailbox memory */
+ err = fm10k_pfvf_mbx_init(hw, &vf_info->mbx, iov_mbx_data, i);
+ if (err) {
+ dev_err(&pdev->dev,
+ "Unable to initialize SR-IOV mailbox\n");
+ kfree(iov_data);
+ return err;
+ }
+ }
+
+ /* assign iov_data to interface */
+ interface->iov_data = iov_data;
+
+ /* allocate hardware resources for the VFs */
+ fm10k_iov_resume(pdev);
+
+ return 0;
+}
+
+void fm10k_iov_disable(struct pci_dev *pdev)
+{
+ if (pci_num_vf(pdev) && pci_vfs_assigned(pdev))
+ dev_err(&pdev->dev,
+ "Cannot disable SR-IOV while VFs are assigned\n");
+ else
+ pci_disable_sriov(pdev);
+
+ fm10k_iov_free_data(pdev);
+}
+
+static void fm10k_disable_aer_comp_abort(struct pci_dev *pdev)
+{
+ u32 err_sev;
+ int pos;
+
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
+ if (!pos)
+ return;
+
+ pci_read_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, &err_sev);
+ err_sev &= ~PCI_ERR_UNC_COMP_ABORT;
+ pci_write_config_dword(pdev, pos + PCI_ERR_UNCOR_SEVER, err_sev);
+}
+
+int fm10k_iov_configure(struct pci_dev *pdev, int num_vfs)
+{
+ int current_vfs = pci_num_vf(pdev);
+ int err = 0;
+
+ if (current_vfs && pci_vfs_assigned(pdev)) {
+ dev_err(&pdev->dev,
+ "Cannot modify SR-IOV while VFs are assigned\n");
+ num_vfs = current_vfs;
+ } else {
+ pci_disable_sriov(pdev);
+ fm10k_iov_free_data(pdev);
+ }
+
+ /* allocate resources for the VFs */
+ err = fm10k_iov_alloc_data(pdev, num_vfs);
+ if (err)
+ return err;
+
+ /* allocate VFs if not already allocated */
+ if (num_vfs && (num_vfs != current_vfs)) {
+ /* Disable completer abort error reporting as
+ * the VFs can trigger this any time they read a queue
+ * that they don't own.
+ */
+ fm10k_disable_aer_comp_abort(pdev);
+
+ err = pci_enable_sriov(pdev, num_vfs);
+ if (err) {
+ dev_err(&pdev->dev,
+ "Enable PCI SR-IOV failed: %d\n", err);
+ return err;
+ }
+ }
+
+ return num_vfs;
+}
+
+int fm10k_ndo_set_vf_mac(struct net_device *netdev, int vf_idx, u8 *mac)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* verify MAC addr is valid */
+ if (!is_zero_ether_addr(mac) && !is_valid_ether_addr(mac))
+ return -EINVAL;
+
+ /* record new MAC address */
+ vf_info = &iov_data->vf_info[vf_idx];
+ ether_addr_copy(vf_info->mac, mac);
+
+ /* assigning the MAC will send a mailbox message so lock is needed */
+ fm10k_mbx_lock(interface);
+
+ /* assign MAC address to VF */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ fm10k_mbx_unlock(interface);
+
+ return 0;
+}
+
+int fm10k_ndo_set_vf_vlan(struct net_device *netdev, int vf_idx, u16 vid,
+ u8 qos)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* QoS is unsupported and accepted VLAN IDs range from 0 to 4094 */
+ if (qos || (vid > (VLAN_VID_MASK - 1)))
+ return -EINVAL;
+
+ vf_info = &iov_data->vf_info[vf_idx];
+
+ /* exit if there is nothing to do */
+ if (vf_info->pf_vid == vid)
+ return 0;
+
+ /* record default VLAN ID for VF */
+ vf_info->pf_vid = vid;
+
+ /* assigning the VLAN will send a mailbox message so lock is needed */
+ fm10k_mbx_lock(interface);
+
+ /* Clear the VLAN table for the VF */
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, vf_info->vsi, false);
+
+ /* Update VF assignment and trigger reset */
+ hw->iov.ops.assign_default_mac_vlan(hw, vf_info);
+
+ fm10k_mbx_unlock(interface);
+
+ return 0;
+}
+
+int fm10k_ndo_set_vf_bw(struct net_device *netdev, int vf_idx, int unused,
+ int rate)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ /* rate limit cannot be less than 10 Mbps or greater than the link speed */
+ if (rate && ((rate < FM10K_VF_TC_MIN) || rate > FM10K_VF_TC_MAX))
+ return -EINVAL;
+
+ /* store values */
+ iov_data->vf_info[vf_idx].rate = rate;
+
+ /* update hardware configuration */
+ hw->iov.ops.configure_tc(hw, vf_idx, rate);
+
+ return 0;
+}
+
+int fm10k_ndo_get_vf_config(struct net_device *netdev,
+ int vf_idx, struct ifla_vf_info *ivi)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_iov_data *iov_data = interface->iov_data;
+ struct fm10k_vf_info *vf_info;
+
+ /* verify SR-IOV is active and that vf idx is valid */
+ if (!iov_data || vf_idx >= iov_data->num_vfs)
+ return -EINVAL;
+
+ vf_info = &iov_data->vf_info[vf_idx];
+
+ ivi->vf = vf_idx;
+ ivi->max_tx_rate = vf_info->rate;
+ ivi->min_tx_rate = 0;
+ ether_addr_copy(ivi->mac, vf_info->mac);
+ ivi->vlan = vf_info->pf_vid;
+ ivi->qos = 0;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
new file mode 100644
index 0000000..6c800a3
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -0,0 +1,1979 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <net/ipv6.h>
+#include <net/ip.h>
+#include <net/tcp.h>
+#include <linux/if_macvlan.h>
+#include <linux/prefetch.h>
+
+#include "fm10k.h"
+
+#define DRV_VERSION "0.12.2-k"
+const char fm10k_driver_version[] = DRV_VERSION;
+char fm10k_driver_name[] = "fm10k";
+static const char fm10k_driver_string[] =
+ "Intel(R) Ethernet Switch Host Interface Driver";
+static const char fm10k_copyright[] =
+ "Copyright (c) 2013 Intel Corporation.";
+
+MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
+MODULE_DESCRIPTION("Intel(R) Ethernet Switch Host Interface Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRV_VERSION);
+
+/**
+ * fm10k_init_module - Driver Registration Routine
+ *
+ * fm10k_init_module is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ **/
+static int __init fm10k_init_module(void)
+{
+ pr_info("%s - version %s\n", fm10k_driver_string, fm10k_driver_version);
+ pr_info("%s\n", fm10k_copyright);
+
+ fm10k_dbg_init();
+
+ return fm10k_register_pci_driver();
+}
+module_init(fm10k_init_module);
+
+/**
+ * fm10k_exit_module - Driver Exit Cleanup Routine
+ *
+ * fm10k_exit_module is called just before the driver is removed
+ * from memory.
+ **/
+static void __exit fm10k_exit_module(void)
+{
+ fm10k_unregister_pci_driver();
+
+ fm10k_dbg_exit();
+}
+module_exit(fm10k_exit_module);
+
+static bool fm10k_alloc_mapped_page(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *bi)
+{
+ struct page *page = bi->page;
+ dma_addr_t dma;
+
+ /* page will only be NULL if the buffer was consumed */
+ if (likely(page))
+ return true;
+
+ /* alloc new page for storage */
+ page = alloc_page(GFP_ATOMIC | __GFP_COLD);
+ if (unlikely(!page)) {
+ rx_ring->rx_stats.alloc_failed++;
+ return false;
+ }
+
+ /* map page for use */
+ dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ __free_page(page);
+ bi->page = NULL;
+
+ rx_ring->rx_stats.alloc_failed++;
+ return false;
+ }
+
+ bi->dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
+
+ return true;
+}
+
+/**
+ * fm10k_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count)
+{
+ union fm10k_rx_desc *rx_desc;
+ struct fm10k_rx_buffer *bi;
+ u16 i = rx_ring->next_to_use;
+
+ /* nothing to do */
+ if (!cleaned_count)
+ return;
+
+ rx_desc = FM10K_RX_DESC(rx_ring, i);
+ bi = &rx_ring->rx_buffer[i];
+ i -= rx_ring->count;
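+ /* i is biased negative so the ring wrap check below is a simple test for zero */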
+
+ do {
+ if (!fm10k_alloc_mapped_page(rx_ring, bi))
+ break;
+
+ /* Refresh the desc even if buffer_addrs didn't change
+ * because each write-back erases this info.
+ */
+ rx_desc->q.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
+
+ rx_desc++;
+ bi++;
+ i++;
+ if (unlikely(!i)) {
+ rx_desc = FM10K_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_buffer;
+ i -= rx_ring->count;
+ }
+
+ /* clear the hdr_addr for the next_to_use descriptor */
+ rx_desc->q.hdr_addr = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ i += rx_ring->count;
+
+ if (rx_ring->next_to_use != i) {
+ /* record the next descriptor to use */
+ rx_ring->next_to_use = i;
+
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = i;
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64).
+ */
+ wmb();
+
+ /* notify hardware of new descriptors */
+ writel(i, rx_ring->tail);
+ }
+}
+
+/**
+ * fm10k_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the interface
+ **/
+static void fm10k_reuse_rx_page(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *old_buff)
+{
+ struct fm10k_rx_buffer *new_buff;
+ u16 nta = rx_ring->next_to_alloc;
+
+ new_buff = &rx_ring->rx_buffer[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
+
+ /* transfer page from old buffer to new buffer */
+ memcpy(new_buff, old_buff, sizeof(struct fm10k_rx_buffer));
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rx_ring->dev, old_buff->dma,
+ old_buff->page_offset,
+ FM10K_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+}
+
+static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
+ struct page *page,
+ unsigned int truesize)
+{
+ /* avoid re-using remote pages */
+ if (unlikely(page_to_nid(page) != numa_mem_id()))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= FM10K_RX_BUFSZ;
+
+ /* since we are the only owner of the page and we need to
+ * increment it, just set the value to 2 in order to avoid
+ * an unnecessary locked operation
+ */
+ atomic_set(&page->_count, 2);
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+
+ if (rx_buffer->page_offset > (PAGE_SIZE - FM10K_RX_BUFSZ))
+ return false;
+
+ /* bump ref count on page before it is given to the stack */
+ get_page(page);
+#endif
+
+ return true;
+}
+
+/**
+ * fm10k_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the interface.
+ **/
+static bool fm10k_add_rx_frag(struct fm10k_ring *rx_ring,
+ struct fm10k_rx_buffer *rx_buffer,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct page *page = rx_buffer->page;
+ unsigned int size = le16_to_cpu(rx_desc->w.length);
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = FM10K_RX_BUFSZ;
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+#endif
+
+ if ((size <= FM10K_RX_HDR_LEN) && !skb_is_nonlinear(skb)) {
+ unsigned char *va = page_address(page) + rx_buffer->page_offset;
+
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+
+ /* we can reuse buffer as-is, just make sure it is local */
+ if (likely(page_to_nid(page) == numa_mem_id()))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ put_page(page);
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ rx_buffer->page_offset, size, truesize);
+
+ return fm10k_can_reuse_rx_page(rx_buffer, page, truesize);
+}
+
+static struct sk_buff *fm10k_fetch_rx_buffer(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct fm10k_rx_buffer *rx_buffer;
+ struct page *page;
+
+ rx_buffer = &rx_ring->rx_buffer[rx_ring->next_to_clean];
+
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ if (likely(!skb)) {
+ void *page_addr = page_address(page) +
+ rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
+ FM10K_RX_HDR_LEN);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_failed++;
+ return NULL;
+ }
+
+ /* we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ FM10K_RX_BUFSZ,
+ DMA_FROM_DEVICE);
+
+ /* pull page into skb */
+ if (fm10k_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ fm10k_reuse_rx_page(rx_ring, rx_buffer);
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->dma,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of rx_buffer */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
+
+static inline void fm10k_rx_checksum(struct fm10k_ring *ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ skb_checksum_none_assert(skb);
+
+ /* Rx checksum disabled via ethtool */
+ if (!(ring->netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ /* TCP/UDP checksum error bit is set */
+ if (fm10k_test_staterr(rx_desc,
+ FM10K_RXD_STATUS_L4E |
+ FM10K_RXD_STATUS_L4E2 |
+ FM10K_RXD_STATUS_IPE |
+ FM10K_RXD_STATUS_IPE2)) {
+ ring->rx_stats.csum_err++;
+ return;
+ }
+
+ /* It must be a TCP or UDP packet with a valid checksum */
+ if (fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS2))
+ skb->encapsulation = true;
+ else if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_L4CS))
+ return;
+
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
+
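+/* RSS types that hash over L4 ports; all other types report an L3 hash */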
+#define FM10K_RSS_L4_TYPES_MASK \
+ ((1ul << FM10K_RSSTYPE_IPV4_TCP) | \
+ (1ul << FM10K_RSSTYPE_IPV4_UDP) | \
+ (1ul << FM10K_RSSTYPE_IPV6_TCP) | \
+ (1ul << FM10K_RSSTYPE_IPV6_UDP))
+
+static inline void fm10k_rx_hash(struct fm10k_ring *ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u16 rss_type;
+
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
+ return;
+
+ rss_type = le16_to_cpu(rx_desc->w.pkt_info) & FM10K_RXD_RSSTYPE_MASK;
+ if (!rss_type)
+ return;
+
+ skb_set_hash(skb, le32_to_cpu(rx_desc->d.rss),
+ (FM10K_RSS_L4_TYPES_MASK & (1ul << rss_type)) ?
+ PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
+}
+
+static void fm10k_rx_hwtstamp(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct fm10k_intfc *interface = rx_ring->q_vector->interface;
+
+ FM10K_CB(skb)->tstamp = rx_desc->q.timestamp;
+
+ if (unlikely(interface->flags & FM10K_FLAG_RX_TS_ENABLED))
+ fm10k_systime_to_hwtstamp(interface, skb_hwtstamps(skb),
+ le64_to_cpu(rx_desc->q.timestamp));
+}
+
+static void fm10k_type_trans(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct net_device *dev = rx_ring->netdev;
+ struct fm10k_l2_accel *l2_accel = rcu_dereference_bh(rx_ring->l2_accel);
+
+ /* check to see if DGLORT belongs to a MACVLAN */
+ if (l2_accel) {
+ u16 idx = le16_to_cpu(FM10K_CB(skb)->fi.w.dglort) - 1;
+
+ idx -= l2_accel->dglort;
+ if (idx < l2_accel->size && l2_accel->macvlan[idx])
+ dev = l2_accel->macvlan[idx];
+ else
+ l2_accel = NULL;
+ }
+
+ skb->protocol = eth_type_trans(skb, dev);
+
+ if (!l2_accel)
+ return;
+
+ /* update MACVLAN statistics */
+ macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, 1,
+ !!(rx_desc->w.hdr_info &
+ cpu_to_le16(FM10K_RXD_HDR_INFO_XC_MASK)));
+}
+
+/**
+ * fm10k_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, timestamp, protocol, and
+ * other fields within the skb.
+ **/
+static unsigned int fm10k_process_skb_fields(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ unsigned int len = skb->len;
+
+ fm10k_rx_hash(rx_ring, rx_desc, skb);
+
+ fm10k_rx_checksum(rx_ring, rx_desc, skb);
+
+ fm10k_rx_hwtstamp(rx_ring, rx_desc, skb);
+
+ FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
+
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+
+ FM10K_CB(skb)->fi.d.glort = rx_desc->d.glort;
+
+ if (rx_desc->w.vlan) {
+ u16 vid = le16_to_cpu(rx_desc->w.vlan);
+
+ if (vid != rx_ring->vid)
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
+ fm10k_type_trans(rx_ring, rx_desc, skb);
+
+ return len;
+}
+
+/**
+ * fm10k_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool fm10k_is_non_eop(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc)
+{
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(FM10K_RX_DESC(rx_ring, ntc));
+
+ if (likely(fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_EOP)))
+ return false;
+
+ return true;
+}
+
+/**
+ * fm10k_pull_tail - fm10k specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an fm10k specific version of __pskb_pull_tail. The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void fm10k_pull_tail(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va;
+ unsigned int pull_len;
+
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lowmem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+ /* we need the header to contain the greater of ETH_HLEN or, when
+ * skb->len is less than 60, the 60 bytes needed by skb_pad.
+ */
+ pull_len = eth_get_headlen(va, FM10K_RX_HDR_LEN);
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ frag->page_offset += pull_len;
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+/**
+ * fm10k_cleanup_headers - Correct corrupted or empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being fixed
+ *
+ * Address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool fm10k_cleanup_headers(struct fm10k_ring *rx_ring,
+ union fm10k_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ if (unlikely((fm10k_test_staterr(rx_desc,
+ FM10K_RXD_STATUS_RXE)))) {
+ dev_kfree_skb_any(skb);
+ rx_ring->rx_stats.errors++;
+ return true;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ fm10k_pull_tail(rx_ring, rx_desc, skb);
+
+ /* if skb_pad returns an error the skb was freed */
+ if (unlikely(skb->len < 60)) {
+ int pad_len = 60 - skb->len;
+
+ if (skb_pad(skb, pad_len))
+ return true;
+ __skb_put(skb, pad_len);
+ }
+
+ return false;
+}
+
+/**
+ * fm10k_receive_skb - helper function to handle rx indications
+ * @q_vector: structure containing interrupt and ring information
+ * @skb: packet to send up
+ **/
+static void fm10k_receive_skb(struct fm10k_q_vector *q_vector,
+ struct sk_buff *skb)
+{
+ napi_gro_receive(&q_vector->napi, skb);
+}
+
+static bool fm10k_clean_rx_irq(struct fm10k_q_vector *q_vector,
+ struct fm10k_ring *rx_ring,
+ int budget)
+{
+ struct sk_buff *skb = rx_ring->skb;
+ unsigned int total_bytes = 0, total_packets = 0;
+ u16 cleaned_count = fm10k_desc_unused(rx_ring);
+
+ do {
+ union fm10k_rx_desc *rx_desc;
+
+ /* return some buffers to hardware, one at a time is too slow */
+ if (cleaned_count >= FM10K_RX_BUFFER_WRITE) {
+ fm10k_alloc_rx_buffers(rx_ring, cleaned_count);
+ cleaned_count = 0;
+ }
+
+ rx_desc = FM10K_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
+ if (!fm10k_test_staterr(rx_desc, FM10K_RXD_STATUS_DD))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc until we know the
+ * RXD_STATUS_DD bit is set
+ */
+ rmb();
+
+ /* retrieve a buffer from the ring */
+ skb = fm10k_fetch_rx_buffer(rx_ring, rx_desc, skb);
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb)
+ break;
+
+ cleaned_count++;
+
+ /* fetch next buffer in frame if non-eop */
+ if (fm10k_is_non_eop(rx_ring, rx_desc))
+ continue;
+
+ /* verify the packet layout is correct */
+ if (fm10k_cleanup_headers(rx_ring, rx_desc, skb)) {
+ skb = NULL;
+ continue;
+ }
+
+ /* populate checksum, timestamp, VLAN, and protocol */
+ total_bytes += fm10k_process_skb_fields(rx_ring, rx_desc, skb);
+
+ fm10k_receive_skb(q_vector, skb);
+
+ /* reset skb pointer */
+ skb = NULL;
+
+ /* update budget accounting */
+ total_packets++;
+ } while (likely(total_packets < budget));
+
+ /* place incomplete frames back on ring for completion */
+ rx_ring->skb = skb;
+
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_packets;
+ rx_ring->stats.bytes += total_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
+ q_vector->rx.total_packets += total_packets;
+ q_vector->rx.total_bytes += total_bytes;
+
+ return total_packets < budget;
+}
+
+#define VXLAN_HLEN (sizeof(struct udphdr) + 8)
+static struct ethhdr *fm10k_port_is_vxlan(struct sk_buff *skb)
+{
+ struct fm10k_intfc *interface = netdev_priv(skb->dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* we can only offload a vxlan if we recognize it as such */
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+
+ if (!vxlan_port)
+ return NULL;
+ if (vxlan_port->port != udp_hdr(skb)->dest)
+ return NULL;
+
+ /* return offset of udp_hdr plus 8 bytes for VXLAN header */
+ return (struct ethhdr *)(skb_transport_header(skb) + VXLAN_HLEN);
+}
+
+#define FM10K_NVGRE_RESERVED0_FLAGS htons(0x9FFF)
+#define NVGRE_TNI htons(0x2000)
+struct fm10k_nvgre_hdr {
+ __be16 flags;
+ __be16 proto;
+ __be32 tni;
+};
+
+static struct ethhdr *fm10k_gre_is_nvgre(struct sk_buff *skb)
+{
+ struct fm10k_nvgre_hdr *nvgre_hdr;
+ int hlen = ip_hdrlen(skb);
+
+ /* currently only IPv4 is supported due to hlen above */
+ if (vlan_get_protocol(skb) != htons(ETH_P_IP))
+ return NULL;
+
+ /* our transport header should be NVGRE */
+ nvgre_hdr = (struct fm10k_nvgre_hdr *)(skb_network_header(skb) + hlen);
+
+ /* verify all reserved flags are 0 */
+ if (nvgre_hdr->flags & FM10K_NVGRE_RESERVED0_FLAGS)
+ return NULL;
+
+ /* verify protocol is transparent Ethernet bridging */
+ if (nvgre_hdr->proto != htons(ETH_P_TEB))
+ return NULL;
+
+ /* report start of ethernet header */
+ if (nvgre_hdr->flags & NVGRE_TNI)
+ return (struct ethhdr *)(nvgre_hdr + 1);
+
+ return (struct ethhdr *)(&nvgre_hdr->tni);
+}
+
+static __be16 fm10k_tx_encap_offload(struct sk_buff *skb)
+{
+ struct ethhdr *eth_hdr;
+ u8 l4_hdr = 0;
+
+ switch (vlan_get_protocol(skb)) {
+ case htons(ETH_P_IP):
+ l4_hdr = ip_hdr(skb)->protocol;
+ break;
+ case htons(ETH_P_IPV6):
+ l4_hdr = ipv6_hdr(skb)->nexthdr;
+ break;
+ default:
+ return 0;
+ }
+
+ switch (l4_hdr) {
+ case IPPROTO_UDP:
+ eth_hdr = fm10k_port_is_vxlan(skb);
+ break;
+ case IPPROTO_GRE:
+ eth_hdr = fm10k_gre_is_nvgre(skb);
+ break;
+ default:
+ return 0;
+ }
+
+ if (!eth_hdr)
+ return 0;
+
+ switch (eth_hdr->h_proto) {
+ case htons(ETH_P_IP):
+ case htons(ETH_P_IPV6):
+ break;
+ default:
+ return 0;
+ }
+
+ return eth_hdr->h_proto;
+}
+
+static int fm10k_tso(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_desc *tx_desc;
+ unsigned char *th;
+ u8 hdrlen;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ /* compute header lengths */
+ if (skb->encapsulation) {
+ if (!fm10k_tx_encap_offload(skb))
+ goto err_vxlan;
+ th = skb_inner_transport_header(skb);
+ } else {
+ th = skb_transport_header(skb);
+ }
+
+ /* compute offset from SOF to transport header and add header len */
+ hdrlen = (th - skb->data) + (((struct tcphdr *)th)->doff << 2);
+
+ first->tx_flags |= FM10K_TX_FLAGS_CSUM;
+
+ /* update gso size and bytecount with header size */
+ first->gso_segs = skb_shinfo(skb)->gso_segs;
+ first->bytecount += (first->gso_segs - 1) * hdrlen;
+
+ /* populate Tx descriptor header size and mss */
+ tx_desc = FM10K_TX_DESC(tx_ring, tx_ring->next_to_use);
+ tx_desc->hdrlen = hdrlen;
+ tx_desc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
+
+ return 1;
+err_vxlan:
+ tx_ring->netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ if (!net_ratelimit())
+ netdev_err(tx_ring->netdev,
+ "TSO requested for unsupported tunnel, disabling offload\n");
+ return -1;
+}
+
+static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_desc *tx_desc;
+ union {
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ u8 *raw;
+ } network_hdr;
+ __be16 protocol;
+ u8 l4_hdr = 0;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ goto no_csum;
+
+ if (skb->encapsulation) {
+ protocol = fm10k_tx_encap_offload(skb);
+ if (!protocol) {
+ if (skb_checksum_help(skb)) {
+ dev_warn(tx_ring->dev,
+ "failed to offload encap csum!\n");
+ tx_ring->tx_stats.csum_err++;
+ }
+ goto no_csum;
+ }
+ network_hdr.raw = skb_inner_network_header(skb);
+ } else {
+ protocol = vlan_get_protocol(skb);
+ network_hdr.raw = skb_network_header(skb);
+ }
+
+ switch (protocol) {
+ case htons(ETH_P_IP):
+ l4_hdr = network_hdr.ipv4->protocol;
+ break;
+ case htons(ETH_P_IPV6):
+ l4_hdr = network_hdr.ipv6->nexthdr;
+ break;
+ default:
+ if (unlikely(net_ratelimit())) {
+ dev_warn(tx_ring->dev,
+ "partial checksum but ip version=%x!\n",
+ protocol);
+ }
+ tx_ring->tx_stats.csum_err++;
+ goto no_csum;
+ }
+
+ switch (l4_hdr) {
+ case IPPROTO_TCP:
+ case IPPROTO_UDP:
+ break;
+ case IPPROTO_GRE:
+ if (skb->encapsulation)
+ break;
+ default:
+ if (unlikely(net_ratelimit())) {
+ dev_warn(tx_ring->dev,
+ "partial checksum but l4 proto=%x!\n",
+ l4_hdr);
+ }
+ tx_ring->tx_stats.csum_err++;
+ goto no_csum;
+ }
+
+ /* update TX checksum flag */
+ first->tx_flags |= FM10K_TX_FLAGS_CSUM;
+
+no_csum:
+ /* populate Tx descriptor header size and mss */
+ tx_desc = FM10K_TX_DESC(tx_ring, tx_ring->next_to_use);
+ tx_desc->hdrlen = 0;
+ tx_desc->mss = 0;
+}
+
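+/* translate a _flag bit in _input into the corresponding _result bit by scaling it up or down */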
+#define FM10K_SET_FLAG(_input, _flag, _result) \
+ ((_flag <= _result) ? \
+ ((u32)(_input & _flag) * (_result / _flag)) : \
+ ((u32)(_input & _flag) / (_flag / _result)))
+
+static u8 fm10k_tx_desc_flags(struct sk_buff *skb, u32 tx_flags)
+{
+ /* set type for advanced descriptor with frame checksum insertion */
+ u32 desc_flags = 0;
+
+ /* set timestamping bits */
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ likely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS))
+ desc_flags |= FM10K_TXD_FLAG_TIME;
+
+ /* set checksum offload bits */
+ desc_flags |= FM10K_SET_FLAG(tx_flags, FM10K_TX_FLAGS_CSUM,
+ FM10K_TXD_FLAG_CSUM);
+
+ return desc_flags;
+}
+
+static bool fm10k_tx_desc_push(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_desc *tx_desc, u16 i,
+ dma_addr_t dma, unsigned int size, u8 desc_flags)
+{
+ /* set RS and INT for last frame in a cache line */
+ if ((++i & (FM10K_TXD_WB_FIFO_SIZE - 1)) == 0)
+ desc_flags |= FM10K_TXD_FLAG_RS | FM10K_TXD_FLAG_INT;
+
+ /* record values to descriptor */
+ tx_desc->buffer_addr = cpu_to_le64(dma);
+ tx_desc->flags = desc_flags;
+ tx_desc->buflen = cpu_to_le16(size);
+
+ /* return true if we just wrapped the ring */
+ return i == tx_ring->count;
+}
+
+static void fm10k_tx_map(struct fm10k_ring *tx_ring,
+ struct fm10k_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_tx_desc *tx_desc;
+ struct skb_frag_struct *frag;
+ unsigned char *data;
+ dma_addr_t dma;
+ unsigned int data_len, size;
+ u32 tx_flags = first->tx_flags;
+ u16 i = tx_ring->next_to_use;
+ u8 flags = fm10k_tx_desc_flags(skb, tx_flags);
+
+ tx_desc = FM10K_TX_DESC(tx_ring, i);
+
+ /* add HW VLAN tag */
+ if (vlan_tx_tag_present(skb))
+ tx_desc->vlan = cpu_to_le16(vlan_tx_tag_get(skb));
+ else
+ tx_desc->vlan = 0;
+
+ size = skb_headlen(skb);
+ data = skb->data;
+
+ dma = dma_map_single(tx_ring->dev, data, size, DMA_TO_DEVICE);
+
+ data_len = skb->data_len;
+ tx_buffer = first;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ if (dma_mapping_error(tx_ring->dev, dma))
+ goto dma_error;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buffer, len, size);
+ dma_unmap_addr_set(tx_buffer, dma, dma);
+
+ while (unlikely(size > FM10K_MAX_DATA_PER_TXD)) {
+ if (fm10k_tx_desc_push(tx_ring, tx_desc++, i++, dma,
+ FM10K_MAX_DATA_PER_TXD, flags)) {
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+
+ dma += FM10K_MAX_DATA_PER_TXD;
+ size -= FM10K_MAX_DATA_PER_TXD;
+ }
+
+ if (likely(!data_len))
+ break;
+
+ if (fm10k_tx_desc_push(tx_ring, tx_desc++, i++,
+ dma, size, flags)) {
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+
+ size = skb_frag_size(frag);
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buffer = &tx_ring->tx_buffer[i];
+ }
+
+ /* write last descriptor with LAST bit set */
+ flags |= FM10K_TXD_FLAG_LAST;
+
+ if (fm10k_tx_desc_push(tx_ring, tx_desc, i++, dma, size, flags))
+ i = 0;
+
+ /* record bytecount for BQL */
+ netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
+
+ /* record SW timestamp if HW timestamp is not available */
+ skb_tx_timestamp(first->skb);
+
+ /* Force memory writes to complete before letting h/w know there
+ * are new descriptors to fetch. (Only applicable for weak-ordered
+ * memory model archs, such as IA-64).
+ *
+ * We also need this memory barrier to make certain all of the
+ * status bits have been updated before next_to_watch is written.
+ */
+ wmb();
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ tx_ring->next_to_use = i;
+
+ /* notify HW of packet */
+ writel(i, tx_ring->tail);
+
+ /* we need this if more than one processor can write to our tail
+ * at a time, it synchronizes IO on IA64/Altix systems
+ */
+ mmiowb();
+
+ return;
+dma_error:
+ dev_err(tx_ring->dev, "TX DMA map failed\n");
+
+ /* clear dma mappings for failed tx_buffer map */
+ for (;;) {
+ tx_buffer = &tx_ring->tx_buffer[i];
+ fm10k_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+ if (tx_buffer == first)
+ break;
+ if (i == 0)
+ i = tx_ring->count;
+ i--;
+ }
+
+ tx_ring->next_to_use = i;
+}
+
+static int __fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ smp_mb();
+
+ /* We need to check again in case another CPU has just
+ * made room available. */
+ if (likely(fm10k_desc_unused(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ return 0;
+}
+
+static inline int fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+ if (likely(fm10k_desc_unused(tx_ring) >= size))
+ return 0;
+ return __fm10k_maybe_stop_tx(tx_ring, size);
+}
+
+netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
+ struct fm10k_ring *tx_ring)
+{
+ struct fm10k_tx_buffer *first;
+ int tso;
+ u32 tx_flags = 0;
+#if PAGE_SIZE > FM10K_MAX_DATA_PER_TXD
+ unsigned short f;
+#endif
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+
+ /* need: 1 descriptor per page * PAGE_SIZE/FM10K_MAX_DATA_PER_TXD,
+ * + 1 desc for skb_headlen/FM10K_MAX_DATA_PER_TXD,
+ * + 2 desc gap to keep tail from touching head
+ * otherwise try next time
+ */
+#if PAGE_SIZE > FM10K_MAX_DATA_PER_TXD
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+ count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size);
+#else
+ count += skb_shinfo(skb)->nr_frags;
+#endif
+ if (fm10k_maybe_stop_tx(tx_ring, count + 3)) {
+ tx_ring->tx_stats.tx_busy++;
+ return NETDEV_TX_BUSY;
+ }
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_ring->tx_buffer[tx_ring->next_to_use];
+ first->skb = skb;
+ first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+ first->gso_segs = 1;
+
+ /* record initial flags and protocol */
+ first->tx_flags = tx_flags;
+
+ tso = fm10k_tso(tx_ring, first);
+ if (tso < 0)
+ goto out_drop;
+ else if (!tso)
+ fm10k_tx_csum(tx_ring, first);
+
+ fm10k_tx_map(tx_ring, first);
+
+ fm10k_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ return NETDEV_TX_OK;
+
+out_drop:
+ dev_kfree_skb_any(first->skb);
+ first->skb = NULL;
+
+ return NETDEV_TX_OK;
+}
+
+static u64 fm10k_get_tx_completed(struct fm10k_ring *ring)
+{
+ return ring->stats.packets;
+}
+
+static u64 fm10k_get_tx_pending(struct fm10k_ring *ring)
+{
+ /* use SW head and tail until we have real hardware */
+ u32 head = ring->next_to_clean;
+ u32 tail = ring->next_to_use;
+
+ return ((head <= tail) ? tail : tail + ring->count) - head;
+}
+
+bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring)
+{
+ u32 tx_done = fm10k_get_tx_completed(tx_ring);
+ u32 tx_done_old = tx_ring->tx_stats.tx_done_old;
+ u32 tx_pending = fm10k_get_tx_pending(tx_ring);
+
+ clear_check_for_tx_hang(tx_ring);
+
+ /* Check for a hung queue, but be thorough. This verifies
+ * that a transmit has been completed since the previous
+ * check AND there is at least one packet pending. By
+ * requiring this to fail twice we avoid races with
+ * clearing the ARMED bit and conditions where we
+ * run the check_tx_hang logic with a transmit completion
+ * pending but without time to complete it yet.
+ */
+ if (!tx_pending || (tx_done_old != tx_done)) {
+ /* update completed stats and continue */
+ tx_ring->tx_stats.tx_done_old = tx_done;
+ /* reset the countdown */
+ clear_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
+
+ return false;
+ }
+
+ /* make sure it is true for two checks in a row */
+ return test_and_set_bit(__FM10K_HANG_CHECK_ARMED, &tx_ring->state);
+}
+
+/**
+ * fm10k_tx_timeout_reset - initiate reset due to Tx timeout
+ * @interface: driver private struct
+ **/
+void fm10k_tx_timeout_reset(struct fm10k_intfc *interface)
+{
+ /* Do the reset outside of interrupt context */
+ if (!test_bit(__FM10K_DOWN, &interface->state)) {
+ netdev_err(interface->netdev, "Reset interface\n");
+ interface->tx_timeout_count++;
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+ fm10k_service_event_schedule(interface);
+ }
+}
+
+/**
+ * fm10k_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ **/
+static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
+ struct fm10k_ring *tx_ring)
+{
+ struct fm10k_intfc *interface = q_vector->interface;
+ struct fm10k_tx_buffer *tx_buffer;
+ struct fm10k_tx_desc *tx_desc;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int budget = q_vector->tx.work_limit;
+ unsigned int i = tx_ring->next_to_clean;
+
+ if (test_bit(__FM10K_DOWN, &interface->state))
+ return true;
+
+ tx_buffer = &tx_ring->tx_buffer[i];
+ tx_desc = FM10K_TX_DESC(tx_ring, i);
+ i -= tx_ring->count;
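+ /* i is offset by -count so the !i checks below detect a ring wrap */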
+
+ do {
+ struct fm10k_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+ /* if next_to_watch is not set then there is no work pending */
+ if (!eop_desc)
+ break;
+
+ /* prevent any other reads prior to eop_desc */
+ read_barrier_depends();
+
+ /* if DD is not set pending work has not been completed */
+ if (!(eop_desc->flags & FM10K_TXD_FLAG_DONE))
+ break;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buffer->next_to_watch = NULL;
+
+ /* update the statistics for this packet */
+ total_bytes += tx_buffer->bytecount;
+ total_packets += tx_buffer->gso_segs;
+
+ /* free the skb */
+ dev_consume_skb_any(tx_buffer->skb);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+
+ /* clear tx_buffer data */
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer;
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ }
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer;
+ tx_desc = FM10K_TX_DESC(tx_ring, 0);
+ }
+
+ /* issue prefetch for next Tx descriptor */
+ prefetch(tx_desc);
+
+ /* update budget accounting */
+ budget--;
+ } while (likely(budget));
+
+ i += tx_ring->count;
+ tx_ring->next_to_clean = i;
+ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
+ q_vector->tx.total_bytes += total_bytes;
+ q_vector->tx.total_packets += total_packets;
+
+ if (check_for_tx_hang(tx_ring) && fm10k_check_tx_hang(tx_ring)) {
+ /* schedule immediate reset if we believe we hung */
+ struct fm10k_hw *hw = &interface->hw;
+
+ netif_err(interface, drv, tx_ring->netdev,
+ "Detected Tx Unit Hang\n"
+ " Tx Queue <%d>\n"
+ " TDH, TDT <%x>, <%x>\n"
+ " next_to_use <%x>\n"
+ " next_to_clean <%x>\n",
+ tx_ring->queue_index,
+ fm10k_read_reg(hw, FM10K_TDH(tx_ring->reg_idx)),
+ fm10k_read_reg(hw, FM10K_TDT(tx_ring->reg_idx)),
+ tx_ring->next_to_use, i);
+
+ netif_stop_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+
+ netif_info(interface, probe, tx_ring->netdev,
+ "tx hang %d detected on queue %d, resetting interface\n",
+ interface->tx_timeout_count + 1,
+ tx_ring->queue_index);
+
+ fm10k_tx_timeout_reset(interface);
+
+ /* the netdev is about to reset, no point in enabling stuff */
+ return true;
+ }
+
+ /* notify netdev of completed buffers */
+ netdev_tx_completed_queue(txring_txq(tx_ring),
+ total_packets, total_bytes);
+
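+/* wake the queue once DESC_NEEDED * 2 descriptors are free, capped just
+ * below the minimum ring size
+ */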
+#define TX_WAKE_THRESHOLD min_t(u16, FM10K_MIN_TXD - 1, DESC_NEEDED * 2)
+ if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+ (fm10k_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+ /* Make sure that anybody stopping the queue after this
+ * sees the new next_to_clean.
+ */
+ smp_mb();
+ if (__netif_subqueue_stopped(tx_ring->netdev,
+ tx_ring->queue_index) &&
+ !test_bit(__FM10K_DOWN, &interface->state)) {
+ netif_wake_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ }
+ }
+
+ return !!budget;
+}
+
+/**
+ * fm10k_update_itr - update the dynamic ITR value based on packet size
+ *
+ * Stores a new ITR value based strictly on packet size. The
+ * divisors and thresholds used by this function were determined based
+ * on theoretical maximum wire speed and testing data, in order to
+ * minimize response time while increasing bulk throughput.
+ *
+ * @ring_container: Container for rings to have ITR updated
+ **/
+static void fm10k_update_itr(struct fm10k_ring_container *ring_container)
+{
+ unsigned int avg_wire_size, packets;
+
+ /* Only update ITR if we are using adaptive setting */
+ if (!(ring_container->itr & FM10K_ITR_ADAPTIVE))
+ goto clear_counts;
+
+ packets = ring_container->total_packets;
+ if (!packets)
+ goto clear_counts;
+
+ avg_wire_size = ring_container->total_bytes / packets;
+
+ /* Add 24 bytes to size to account for CRC, preamble, and gap */
+ avg_wire_size += 24;
+
+ /* Don't starve jumbo frames */
+ if (avg_wire_size > 3000)
+ avg_wire_size = 3000;
+
+ /* Give a little boost to mid-size frames */
+ if ((avg_wire_size > 300) && (avg_wire_size < 1200))
+ avg_wire_size /= 3;
+ else
+ avg_wire_size /= 2;
+
+ /* write back value and retain adaptive flag */
+ ring_container->itr = avg_wire_size | FM10K_ITR_ADAPTIVE;
+
+clear_counts:
+ ring_container->total_bytes = 0;
+ ring_container->total_packets = 0;
+}
+
+static void fm10k_qv_enable(struct fm10k_q_vector *q_vector)
+{
+ /* Enable auto-mask and clear the current mask */
+ u32 itr = FM10K_ITR_ENABLE;
+
+ /* Update Tx ITR */
+ fm10k_update_itr(&q_vector->tx);
+
+ /* Update Rx ITR */
+ fm10k_update_itr(&q_vector->rx);
+
+ /* Store Tx itr in timer slot 0 */
+ itr |= (q_vector->tx.itr & FM10K_ITR_MAX);
+
+ /* Shift Rx itr to timer slot 1 */
+ itr |= (q_vector->rx.itr & FM10K_ITR_MAX) << FM10K_ITR_INTERVAL1_SHIFT;
+
+ /* Write the final value to the ITR register */
+ writel(itr, q_vector->itr);
+}
+
+static int fm10k_poll(struct napi_struct *napi, int budget)
+{
+ struct fm10k_q_vector *q_vector =
+ container_of(napi, struct fm10k_q_vector, napi);
+ struct fm10k_ring *ring;
+ int per_ring_budget;
+ bool clean_complete = true;
+
+ fm10k_for_each_ring(ring, q_vector->tx)
+ clean_complete &= fm10k_clean_tx_irq(q_vector, ring);
+
+ /* attempt to distribute budget to each queue fairly, but don't
+ * allow the budget to go below 1 because we'll exit polling
+ */
+ if (q_vector->rx.count > 1)
+ per_ring_budget = max(budget/q_vector->rx.count, 1);
+ else
+ per_ring_budget = budget;
+
+ fm10k_for_each_ring(ring, q_vector->rx)
+ clean_complete &= fm10k_clean_rx_irq(q_vector, ring,
+ per_ring_budget);
+
+ /* If all work not completed, return budget and keep polling */
+ if (!clean_complete)
+ return budget;
+
+ /* all work done, exit the polling mode */
+ napi_complete(napi);
+
+ /* re-enable the q_vector */
+ fm10k_qv_enable(q_vector);
+
+ return 0;
+}
+
+/**
+ * fm10k_set_qos_queues: Allocate queues for a QoS-enabled device
+ * @interface: board private structure to initialize
+ *
+ * When QoS (Quality of Service) is enabled, allocate queues for
+ * each traffic class. If multiqueue isn't available, then abort QoS
+ * initialization.
+ *
+ * This function handles all combinations of QoS and RSS.
+ *
+ **/
+static bool fm10k_set_qos_queues(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ struct fm10k_ring_feature *f;
+ int rss_i, i;
+ int pcs;
+
+ /* Map queue offset and counts onto allocated tx queues */
+ pcs = netdev_get_num_tc(dev);
+
+ if (pcs <= 1)
+ return false;
+
+ /* set QoS mask and indices */
+ f = &interface->ring_feature[RING_F_QOS];
+ f->indices = pcs;
+ f->mask = (1 << fls(pcs - 1)) - 1;
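+ /* fls() rounds the class count up to a power of two for the mask */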
+
+ /* determine the upper limit for our current DCB mode */
+ rss_i = interface->hw.mac.max_queues / pcs;
+ rss_i = 1 << (fls(rss_i) - 1);
+
+ /* set RSS mask and indices */
+ f = &interface->ring_feature[RING_F_RSS];
+ rss_i = min_t(u16, rss_i, f->limit);
+ f->indices = rss_i;
+ f->mask = (1 << fls(rss_i - 1)) - 1;
+
+ /* configure pause class to queue mapping */
+ for (i = 0; i < pcs; i++)
+ netdev_set_tc_queue(dev, i, rss_i, rss_i * i);
+
+ interface->num_rx_queues = rss_i * pcs;
+ interface->num_tx_queues = rss_i * pcs;
+
+ return true;
+}
+
+/**
+ * fm10k_set_rss_queues: Allocate queues for RSS
+ * @interface: board private structure to initialize
+ *
+ * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try
+ * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU.
+ *
+ **/
+static bool fm10k_set_rss_queues(struct fm10k_intfc *interface)
+{
+ struct fm10k_ring_feature *f;
+ u16 rss_i;
+
+ f = &interface->ring_feature[RING_F_RSS];
+ rss_i = min_t(u16, interface->hw.mac.max_queues, f->limit);
+
+ /* record indices and power of 2 mask for RSS */
+ f->indices = rss_i;
+ f->mask = (1 << fls(rss_i - 1)) - 1;
+
+ interface->num_rx_queues = rss_i;
+ interface->num_tx_queues = rss_i;
+
+ return true;
+}
+
+/**
+ * fm10k_set_num_queues: Allocate queues for device, feature dependent
+ * @interface: board private structure to initialize
+ *
+ * This is the top level queue allocation routine. The order here is very
+ * important, starting with the "most" number of features turned on at once,
+ * and ending with the smallest set of features. This way large combinations
+ * can be allocated if they're turned on, and smaller combinations are the
+ * fallthrough conditions.
+ *
+ **/
+static void fm10k_set_num_queues(struct fm10k_intfc *interface)
+{
+ /* Start with base case */
+ interface->num_rx_queues = 1;
+ interface->num_tx_queues = 1;
+
+ if (fm10k_set_qos_queues(interface))
+ return;
+
+ fm10k_set_rss_queues(interface);
+}
+
+/**
+ * fm10k_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @interface: board private structure to initialize
+ * @v_count: q_vectors allocated on interface, used for ring interleaving
+ * @v_idx: index of vector in interface struct
+ * @txr_count: total number of Tx rings to allocate
+ * @txr_idx: index of first Tx ring to allocate
+ * @rxr_count: total number of Rx rings to allocate
+ * @rxr_idx: index of first Rx ring to allocate
+ *
+ * We allocate one q_vector. If allocation fails we return -ENOMEM.
+ **/
+static int fm10k_alloc_q_vector(struct fm10k_intfc *interface,
+ unsigned int v_count, unsigned int v_idx,
+ unsigned int txr_count, unsigned int txr_idx,
+ unsigned int rxr_count, unsigned int rxr_idx)
+{
+ struct fm10k_q_vector *q_vector;
+ struct fm10k_ring *ring;
+ int ring_count, size;
+
+ ring_count = txr_count + rxr_count;
+ size = sizeof(struct fm10k_q_vector) +
+ (sizeof(struct fm10k_ring) * ring_count);
+
+ /* allocate q_vector and rings */
+ q_vector = kzalloc(size, GFP_KERNEL);
+ if (!q_vector)
+ return -ENOMEM;
+
+ /* initialize NAPI */
+ netif_napi_add(interface->netdev, &q_vector->napi,
+ fm10k_poll, NAPI_POLL_WEIGHT);
+
+ /* tie q_vector and interface together */
+ interface->q_vector[v_idx] = q_vector;
+ q_vector->interface = interface;
+ q_vector->v_idx = v_idx;
+
+ /* initialize pointer to rings */
+ ring = q_vector->ring;
+
+ /* save Tx ring container info */
+ q_vector->tx.ring = ring;
+ q_vector->tx.work_limit = FM10K_DEFAULT_TX_WORK;
+ q_vector->tx.itr = interface->tx_itr;
+ q_vector->tx.count = txr_count;
+
+ while (txr_count) {
+ /* assign generic ring traits */
+ ring->dev = &interface->pdev->dev;
+ ring->netdev = interface->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* apply Tx specific ring traits */
+ ring->count = interface->tx_ring_count;
+ ring->queue_index = txr_idx;
+
+ /* assign ring to interface */
+ interface->tx_ring[txr_idx] = ring;
+
+ /* update count and index */
+ txr_count--;
+ txr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ /* save Rx ring container info */
+ q_vector->rx.ring = ring;
+ q_vector->rx.itr = interface->rx_itr;
+ q_vector->rx.count = rxr_count;
+
+ while (rxr_count) {
+ /* assign generic ring traits */
+ ring->dev = &interface->pdev->dev;
+ ring->netdev = interface->netdev;
+ rcu_assign_pointer(ring->l2_accel, interface->l2_accel);
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* apply Rx specific ring traits */
+ ring->count = interface->rx_ring_count;
+ ring->queue_index = rxr_idx;
+
+ /* assign ring to interface */
+ interface->rx_ring[rxr_idx] = ring;
+
+ /* update count and index */
+ rxr_count--;
+ rxr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ fm10k_dbg_q_vector_init(q_vector);
+
+ return 0;
+}
+
+/**
+ * fm10k_free_q_vector - Free memory allocated for specific interrupt vector
+ * @interface: board private structure to initialize
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void fm10k_free_q_vector(struct fm10k_intfc *interface, int v_idx)
+{
+ struct fm10k_q_vector *q_vector = interface->q_vector[v_idx];
+ struct fm10k_ring *ring;
+
+ fm10k_dbg_q_vector_exit(q_vector);
+
+ fm10k_for_each_ring(ring, q_vector->tx)
+ interface->tx_ring[ring->queue_index] = NULL;
+
+ fm10k_for_each_ring(ring, q_vector->rx)
+ interface->rx_ring[ring->queue_index] = NULL;
+
+ interface->q_vector[v_idx] = NULL;
+ netif_napi_del(&q_vector->napi);
+ kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * fm10k_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @interface: board private structure to initialize
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ **/
+static int fm10k_alloc_q_vectors(struct fm10k_intfc *interface)
+{
+ unsigned int q_vectors = interface->num_q_vectors;
+ unsigned int rxr_remaining = interface->num_rx_queues;
+ unsigned int txr_remaining = interface->num_tx_queues;
+ unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0;
+ int err;
+
+ if (q_vectors >= (rxr_remaining + txr_remaining)) {
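+ /* enough vectors for every ring: give each Rx ring its own
+ * vector, Tx rings are paired up in the loop below
+ */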
+ for (; rxr_remaining; v_idx++) {
+ err = fm10k_alloc_q_vector(interface, q_vectors, v_idx,
+ 0, 0, 1, rxr_idx);
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining--;
+ rxr_idx++;
+ }
+ }
+
+ for (; v_idx < q_vectors; v_idx++) {
+ int rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx);
+ int tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx);
+
+ err = fm10k_alloc_q_vector(interface, q_vectors, v_idx,
+ tqpv, txr_idx,
+ rqpv, rxr_idx);
+
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining -= rqpv;
+ txr_remaining -= tqpv;
+ rxr_idx++;
+ txr_idx++;
+ }
+
+ return 0;
+
+err_out:
+ interface->num_tx_queues = 0;
+ interface->num_rx_queues = 0;
+ interface->num_q_vectors = 0;
+
+ while (v_idx--)
+ fm10k_free_q_vector(interface, v_idx);
+
+ return -ENOMEM;
+}
+
+/**
+ * fm10k_free_q_vectors - Free memory allocated for interrupt vectors
+ * @interface: board private structure to initialize
+ *
+ * This function frees the memory allocated to the q_vectors. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void fm10k_free_q_vectors(struct fm10k_intfc *interface)
+{
+ int v_idx = interface->num_q_vectors;
+
+ interface->num_tx_queues = 0;
+ interface->num_rx_queues = 0;
+ interface->num_q_vectors = 0;
+
+ while (v_idx--)
+ fm10k_free_q_vector(interface, v_idx);
+}
+
+/**
+ * fm10k_reset_msix_capability - reset MSI-X capability
+ * @interface: board private structure to initialize
+ *
+ * Reset the MSI-X capability back to its starting state
+ **/
+static void fm10k_reset_msix_capability(struct fm10k_intfc *interface)
+{
+ pci_disable_msix(interface->pdev);
+ kfree(interface->msix_entries);
+ interface->msix_entries = NULL;
+}
+
+/**
+ * fm10k_init_msix_capability - configure MSI-X capability
+ * @interface: board private structure to initialize
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware and the kernel.
+ **/
+static int fm10k_init_msix_capability(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int v_budget, vector;
+
+ /* It's easy to be greedy for MSI-X vectors, but it really
+ * doesn't do us much good if we have a lot more vectors
+ * than CPUs. So let's be conservative and only ask for
+ * (roughly) the same number of vectors as there are CPUs.
+ * The default is to use pairs of vectors.
+ */
+ v_budget = max(interface->num_rx_queues, interface->num_tx_queues);
+ v_budget = min_t(u16, v_budget, num_online_cpus());
+
+ /* account for vectors not related to queues */
+ v_budget += NON_Q_VECTORS(hw);
+
+ /* At the same time, hardware can only support a maximum of
+ * hw.mac->max_msix_vectors vectors. With features
+ * such as RSS and VMDq, we can easily surpass the number of Rx and Tx
+ * descriptor queues supported by our device. Thus, we cap it off in
+ * those rare cases where the cpu count also exceeds our vector limit.
+ */
+ v_budget = min_t(int, v_budget, hw->mac.max_msix_vectors);
+
+ /* A failure in MSI-X entry allocation is fatal. */
+ interface->msix_entries = kcalloc(v_budget, sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!interface->msix_entries)
+ return -ENOMEM;
+
+ /* populate entry values */
+ for (vector = 0; vector < v_budget; vector++)
+ interface->msix_entries[vector].entry = vector;
+
+ /* Attempt to enable MSI-X with requested value */
+ v_budget = pci_enable_msix_range(interface->pdev,
+ interface->msix_entries,
+ MIN_MSIX_COUNT(hw),
+ v_budget);
+ if (v_budget < 0) {
+ kfree(interface->msix_entries);
+ interface->msix_entries = NULL;
+ return -ENOMEM;
+ }
+
+ /* record the number of queues available for q_vectors */
+ interface->num_q_vectors = v_budget - NON_Q_VECTORS(hw);
+
+ return 0;
+}
+
+/**
+ * fm10k_cache_ring_qos - Descriptor ring to register mapping for QoS
+ * @interface: Interface structure containing rings and devices
+ *
+ * Cache the descriptor ring offsets for QoS
+ **/
+static bool fm10k_cache_ring_qos(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ int pc, offset, rss_i, i, q_idx;
+ u16 pc_stride = interface->ring_feature[RING_F_QOS].mask + 1;
+ u8 num_pcs = netdev_get_num_tc(dev);
+
+ if (num_pcs <= 1)
+ return false;
+
+ rss_i = interface->ring_feature[RING_F_RSS].indices;
+
+ for (pc = 0, offset = 0; pc < num_pcs; pc++, offset += rss_i) {
+ q_idx = pc;
+ for (i = 0; i < rss_i; i++) {
+ interface->tx_ring[offset + i]->reg_idx = q_idx;
+ interface->tx_ring[offset + i]->qos_pc = pc;
+ interface->rx_ring[offset + i]->reg_idx = q_idx;
+ interface->rx_ring[offset + i]->qos_pc = pc;
+ q_idx += pc_stride;
+ }
+ }
+
+ return true;
+}
+
+/**
+ * fm10k_cache_ring_rss - Descriptor ring to register mapping for RSS
+ * @interface: Interface structure containing rings and devices
+ *
+ * Cache the descriptor ring offsets for RSS
+ **/
+static void fm10k_cache_ring_rss(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_rx_queues; i++)
+ interface->rx_ring[i]->reg_idx = i;
+
+ for (i = 0; i < interface->num_tx_queues; i++)
+ interface->tx_ring[i]->reg_idx = i;
+}
+
+/**
+ * fm10k_assign_rings - Map rings to network devices
+ * @interface: Interface structure containing rings and devices
+ *
+ * This function is meant to go through and configure both the network
+ * devices so that they contain rings, and configure the rings so that
+ * they function with their network devices.
+ **/
+static void fm10k_assign_rings(struct fm10k_intfc *interface)
+{
+ if (fm10k_cache_ring_qos(interface))
+ return;
+
+ fm10k_cache_ring_rss(interface);
+}
+
+static void fm10k_init_reta(struct fm10k_intfc *interface)
+{
+ u16 i, rss_i = interface->ring_feature[RING_F_RSS].indices;
+ u32 reta, base;
+
+ /* If the netdev is initialized we have to maintain the table if possible */
+ if (interface->netdev->reg_state) {
+ for (i = FM10K_RETA_SIZE; i--;) {
+ reta = interface->reta[i];
+ if ((((reta << 24) >> 24) < rss_i) &&
+ (((reta << 16) >> 24) < rss_i) &&
+ (((reta << 8) >> 24) < rss_i) &&
+ (((reta) >> 24) < rss_i))
+ continue;
+ goto repopulate_reta;
+ }
+
+ /* do nothing if all of the elements are in bounds */
+ return;
+ }
+
+repopulate_reta:
+ /* Populate the redirection table 4 entries at a time. To do this
+ * we are generating the results for n and n+2 and then interleaving
+ * those with the results for n+1 and n+3.
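+ * Each byte n of the table works out to (n * rss_i) >> 7, e.g. with
+ * rss_i = 4 entries 0-31 select queue 0, 32-63 queue 1, and so on.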
+ */
+ for (i = FM10K_RETA_SIZE; i--;) {
+ /* first pass generates n and n+2 */
+ base = ((i * 0x00040004) + 0x00020000) * rss_i;
+ reta = (base & 0x3F803F80) >> 7;
+
+ /* second pass generates n+1 and n+3 */
+ base += 0x00010001 * rss_i;
+ reta |= (base & 0x3F803F80) << 1;
+
+ interface->reta[i] = reta;
+ }
+}
+
+/**
+ * fm10k_init_queueing_scheme - Determine proper queueing scheme
+ * @interface: board private structure to initialize
+ *
+ * We determine which queueing scheme to use based on...
+ * - Hardware queue count (num_*_queues)
+ * - defined by miscellaneous hardware support/features (RSS, etc.)
+ **/
+int fm10k_init_queueing_scheme(struct fm10k_intfc *interface)
+{
+ int err;
+
+ /* Number of supported queues */
+ fm10k_set_num_queues(interface);
+
+ /* Configure MSI-X capability */
+ err = fm10k_init_msix_capability(interface);
+ if (err) {
+ dev_err(&interface->pdev->dev,
+ "Unable to initialize MSI-X capability\n");
+ return err;
+ }
+
+ /* Allocate memory for queues */
+ err = fm10k_alloc_q_vectors(interface);
+ if (err)
+ return err;
+
+ /* Map rings to devices, and map devices to physical queues */
+ fm10k_assign_rings(interface);
+
+ /* Initialize RSS redirection table */
+ fm10k_init_reta(interface);
+
+ return 0;
+}
+
+/**
+ * fm10k_clear_queueing_scheme - Clear the current queueing scheme settings
+ * @interface: board private structure to clear queueing scheme on
+ *
+ * We go through and clear queueing specific resources and reset the structure
+ * to pre-load conditions
+ **/
+void fm10k_clear_queueing_scheme(struct fm10k_intfc *interface)
+{
+ fm10k_free_q_vectors(interface);
+ fm10k_reset_msix_capability(interface);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
new file mode 100644
index 0000000..14a4ea7
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
@@ -0,0 +1,2125 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_common.h"
+
+/**
+ * fm10k_fifo_init - Initialize a message FIFO
+ * @fifo: pointer to FIFO
+ * @buffer: pointer to memory to be used to store FIFO
+ * @size: maximum message size to store in FIFO, must be 2^n - 1
+ **/
+static void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size)
+{
+ fifo->buffer = buffer;
+ fifo->size = size;
+ fifo->head = 0;
+ fifo->tail = 0;
+}
+
+/**
+ * fm10k_fifo_used - Retrieve used space in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the number of DWORDs used in the FIFO
+ **/
+static u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->tail - fifo->head;
+}
+
+/**
+ * fm10k_fifo_unused - Retrieve unused space in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the number of unused DWORDs in the FIFO
+ **/
+static u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->size + fifo->head - fifo->tail;
+}
+
+/**
+ * fm10k_fifo_empty - Test to verify if fifo is empty
+ * @fifo: pointer to FIFO
+ *
+ * This function returns true if the FIFO is empty, else false
+ **/
+static bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo)
+{
+ return fifo->head == fifo->tail;
+}
+
+/**
+ * fm10k_fifo_head_offset - returns index of head with given offset
+ * @fifo: pointer to FIFO
+ * @offset: offset to add to head
+ *
+ * This function returns the index into the FIFO based on head + offset
+ **/
+static u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset)
+{
+ return (fifo->head + offset) & (fifo->size - 1);
+}
+
+/**
+ * fm10k_fifo_tail_offset - returns index of tail with given offset
+ * @fifo: pointer to FIFO
+ * @offset: offset to add to tail
+ *
+ * This function returns the index into the FIFO based on tail + offset
+ **/
+static u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset)
+{
+ return (fifo->tail + offset) & (fifo->size - 1);
+}
+
+/**
+ * fm10k_fifo_head_len - Retrieve length of first message in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the size of the first message in the FIFO
+ **/
+static u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo)
+{
+ u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0);
+
+ /* verify there is at least 1 DWORD in the fifo so *head is valid */
+ if (fm10k_fifo_empty(fifo))
+ return 0;
+
+ /* retrieve the message length */
+ return FM10K_TLV_DWORD_LEN(*head);
+}
+
+/**
+ * fm10k_fifo_head_drop - Drop the first message in FIFO
+ * @fifo: pointer to FIFO
+ *
+ * This function returns the size of the message dropped from the FIFO
+ **/
+static u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo)
+{
+ u16 len = fm10k_fifo_head_len(fifo);
+
+ /* update head so it is at the start of next frame */
+ fifo->head += len;
+
+ return len;
+}
+
+/**
+ * fm10k_mbx_index_len - Convert a head/tail index into a length value
+ * @mbx: pointer to mailbox
+ * @head: head index
+ * @tail: tail index
+ *
+ * This function takes the head and tail index and determines the length
+ * of the data indicated by this pair.
+ **/
+static u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail)
+{
+ u16 len = tail - head;
+
+ /* we wrapped so subtract 2, one for index 0, one for all 1s index */
+ if (len > tail)
+ len -= 2;
+
+ return len & ((mbx->mbmem_len << 1) - 1);
+}
+
+/**
+ * fm10k_mbx_tail_add - Determine new tail value with added offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to tail offset
+ *
+ * This function takes the local tail index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* add/sub 1 because we cannot have offset 0 or all 1s */
+ return (tail > mbx->tail) ? --tail : ++tail;
+}
+
+/**
+ * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset
+ * @mbx: pointer to mailbox
+ * @offset: length to subtract from tail offset
+ *
+ * This function takes the local tail index and recomputes it for
+ * a given length subtracted as an offset.
+ **/
+static u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* sub/add 1 because we cannot have offset 0 or all 1s */
+ return (tail < mbx->tail) ? ++tail : --tail;
+}
+
+/**
+ * fm10k_mbx_head_add - Determine new head value with added offset
+ * @mbx: pointer to mailbox
+ * @offset: length to add to head offset
+ *
+ * This function takes the local head index and recomputes it for
+ * a given length added as an offset.
+ **/
+static u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* add/sub 1 because we cannot have offset 0 or all 1s */
+ return (head > mbx->head) ? --head : ++head;
+}
+
+/**
+ * fm10k_mbx_head_sub - Determine new head value with subtracted offset
+ * @mbx: pointer to mailbox
+ * @offset: length to subtract from head offset
+ *
+ * This function takes the local head index and recomputes it for
+ * a given length subtracted as an offset.
+ **/
+static u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset)
+{
+ u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1);
+
+ /* sub/add 1 because we cannot have offset 0 or all 1s */
+ return (head < mbx->head) ? ++head : --head;
+}
+
+/**
+ * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed
+ * @mbx: pointer to mailbox
+ *
+ * This function will return the length of the message currently being
+ * pushed onto the tail of the Rx queue.
+ **/
+static u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx)
+{
+ u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0);
+
+ /* pushed tail is only valid if pushed is set */
+ if (!mbx->pushed)
+ return 0;
+
+ return FM10K_TLV_DWORD_LEN(*tail);
+}
+
+/**
+ * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo
+ * @fifo: pointer to FIFO
+ * @msg: message array to read from
+ * @tail_offset: additional offset to add to tail pointer
+ * @len: length of the message to copy into the FIFO
+ *
+ * This function will take a message and copy it into a section of the
+ * FIFO. In order to get something into a location other than just
+ * the tail you can use tail_offset to adjust the pointer.
+ **/
+static void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo,
+ const u32 *msg, u16 tail_offset, u16 len)
+{
+ u16 end = fm10k_fifo_tail_offset(fifo, tail_offset);
+ u32 *tail = fifo->buffer + end;
+
+ /* track when we should cross the end of the FIFO */
+ end = fifo->size - end;
+
+ /* copy end of message before start of message */
+ if (end < len)
+ memcpy(fifo->buffer, msg + end, (len - end) << 2);
+ else
+ end = len;
+
+ /* Copy remaining message into Tx FIFO */
+ memcpy(tail, msg, end << 2);
+}
+
+/**
+ * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO
+ * @fifo: pointer to FIFO
+ * @msg: message array to read
+ *
+ * This function enqueues a message up to the size specified by the length
+ * contained in the first DWORD of the message and will place at the tail
+ * of the FIFO. It will return 0 on success, or a negative value on error.
+ **/
+static s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg)
+{
+ u16 len = FM10K_TLV_DWORD_LEN(*msg);
+
+ /* verify parameters */
+ if (len > fifo->size)
+ return FM10K_MBX_ERR_SIZE;
+
+ /* verify there is room for the message */
+ if (len > fm10k_fifo_unused(fifo))
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* Copy message into FIFO */
+ fm10k_fifo_write_copy(fifo, msg, 0, len);
+
+ /* memory barrier to guarantee FIFO is written before tail update */
+ wmb();
+
+ /* Update Tx FIFO tail */
+ fifo->tail += len;
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_validate_msg_size - Validate incoming message based on size
+ * @mbx: pointer to mailbox
+ * @len: length of data pushed onto buffer
+ *
+ * This function analyzes the frame and will return a non-zero value when
+ * the start of a message larger than the mailbox is detected.
+ **/
+static u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 total_len = 0, msg_len;
+ u32 *msg;
+
+ /* length should include previous amounts pushed */
+ len += mbx->pushed;
+
+ /* offset in message is based off of current message size */
+ do {
+ msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len);
+ msg_len = FM10K_TLV_DWORD_LEN(*msg);
+ total_len += msg_len;
+ } while (total_len < len);
+
+ /* message extends out of pushed section, but fits in FIFO */
+ if ((len < total_len) && (msg_len <= mbx->rx.size))
+ return 0;
+
+ /* return length of invalid section */
+ return (len < total_len) ? len : (len - total_len);
+}
+
+/**
+ * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will take a section of the Tx FIFO and copy it into the
+ * mailbox memory. The offset in mbmem is based on the lower bits of the
+ * tail and len determines the length to copy.
+ **/
+static void fm10k_mbx_write_copy(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ u32 mbmem = mbx->mbmem_reg;
+ u32 *head = fifo->buffer;
+ u16 end, len, tail, mask;
+
+ if (!mbx->tail_len)
+ return;
+
+ /* determine data length and mbmem tail index */
+ mask = mbx->mbmem_len - 1;
+ len = mbx->tail_len;
+ tail = fm10k_mbx_tail_sub(mbx, len);
+ if (tail > mask)
+ tail++;
+
+ /* determine offset in the ring */
+ end = fm10k_fifo_head_offset(fifo, mbx->pulled);
+ head += end;
+
+ /* memory barrier to guarantee data is ready to be read */
+ rmb();
+
+ /* Copy message from Tx FIFO */
+ for (end = fifo->size - end; len; head = fifo->buffer) {
+ do {
+ /* adjust tail to match offset for FIFO */
+ tail &= mask;
+ if (!tail)
+ tail++;
+
+ /* write message to hardware FIFO */
+ fm10k_write_reg(hw, mbmem + tail++, *(head++));
+ } while (--len && --end);
+ }
+}
+
+/**
+ * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number last received
+ *
+ * This function will push the tail index forward based on the remote
+ * head index. It will then pull up to mbmem_len DWORDs off of the
+ * head of the FIFO and will place it in the MBMEM registers
+ * associated with the mailbox.
+ **/
+static void fm10k_mbx_pull_head(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail);
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+
+ /* update number of bytes pulled and update bytes in transit */
+ mbx->pulled += mbx->tail_len - ack;
+
+ /* determine length of data to pull, reserve space for mbmem header */
+ mbmem_len = mbx->mbmem_len - 1;
+ len = fm10k_fifo_used(fifo) - mbx->pulled;
+ if (len > mbmem_len)
+ len = mbmem_len;
+
+ /* update tail and record number of bytes in transit */
+ mbx->tail = fm10k_mbx_tail_add(mbx, len - ack);
+ mbx->tail_len = len;
+
+ /* drop pulled messages from the FIFO */
+ for (len = fm10k_fifo_head_len(fifo);
+ len && (mbx->pulled >= len);
+ len = fm10k_fifo_head_len(fifo)) {
+ mbx->pulled -= fm10k_fifo_head_drop(fifo);
+ mbx->tx_messages++;
+ mbx->tx_dwords += len;
+ }
+
+ /* Copy message out from the Tx FIFO */
+ fm10k_mbx_write_copy(hw, mbx);
+}
+
+/**
+ * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will take a section of the mailbox memory and copy it
+ * into the Rx FIFO. The offset is based on the lower bits of the
+ * head and len determines the length to copy.
+ **/
+static void fm10k_mbx_read_copy(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len;
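+ /* mbmem_reg ^ mbmem_len addresses the other (remote) half of the
+ * shared mailbox memory window
+ */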
+ u32 *tail = fifo->buffer;
+ u16 end, len, head;
+
+ /* determine data length and mbmem head index */
+ len = mbx->head_len;
+ head = fm10k_mbx_head_sub(mbx, len);
+ if (head >= mbx->mbmem_len)
+ head++;
+
+ /* determine offset in the ring */
+ end = fm10k_fifo_tail_offset(fifo, mbx->pushed);
+ tail += end;
+
+ /* Copy message into Rx FIFO */
+ for (end = fifo->size - end; len; tail = fifo->buffer) {
+ do {
+ /* adjust head to match offset for FIFO */
+ head &= mbx->mbmem_len - 1;
+ if (!head)
+ head++;
+
+ /* read message from hardware FIFO */
+ *(tail++) = fm10k_read_reg(hw, mbmem + head++);
+ } while (--len && --end);
+ }
+
+ /* memory barrier to guarantee FIFO is written before tail update */
+ wmb();
+}
+
+/**
+ * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @tail: tail index of message
+ *
+ * This function will first validate the tail index and size for the
+ * incoming message. It then updates the acknowledgement number and
+ * copies the data into the FIFO. It will return the number of messages
+ * dequeued on success and a negative value on error.
+ **/
+static s32 fm10k_mbx_push_tail(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx,
+ u16 tail)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail);
+
+ /* determine length of data to push */
+ len = fm10k_fifo_unused(fifo) - mbx->pushed;
+ if (len > seq)
+ len = seq;
+
+ /* update head and record bytes received */
+ mbx->head = fm10k_mbx_head_add(mbx, len);
+ mbx->head_len = len;
+
+ /* nothing to do if there is no data */
+ if (!len)
+ return 0;
+
+ /* Copy msg into Rx FIFO */
+ fm10k_mbx_read_copy(hw, mbx);
+
+ /* determine if there are any invalid lengths in message */
+ if (fm10k_mbx_validate_msg_size(mbx, len))
+ return FM10K_MBX_ERR_SIZE;
+
+ /* Update pushed */
+ mbx->pushed += len;
+
+ /* flush any completed messages */
+ for (len = fm10k_mbx_pushed_tail_len(mbx);
+ len && (mbx->pushed >= len);
+ len = fm10k_mbx_pushed_tail_len(mbx)) {
+ fifo->tail += len;
+ mbx->pushed -= len;
+ mbx->rx_messages++;
+ mbx->rx_dwords += len;
+ }
+
+ return 0;
+}
+
+/* pre-generated data for generating the CRC based on the poly 0xAC9A. */
+static const u16 fm10k_crc_16b_table[256] = {
+ 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797,
+ 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678,
+ 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449,
+ 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6,
+ 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B,
+ 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4,
+ 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5,
+ 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A,
+ 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA,
+ 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035,
+ 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204,
+ 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB,
+ 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666,
+ 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789,
+ 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8,
+ 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457,
+ 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D,
+ 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2,
+ 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3,
+ 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C,
+ 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1,
+ 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E,
+ 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F,
+ 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80,
+ 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40,
+ 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF,
+ 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E,
+ 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71,
+ 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC,
+ 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13,
+ 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922,
+ 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD };
+
+/**
+ * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data
+ * @data: pointer to data to process
+ * @seed: seed value for CRC
+ * @len: length measured in 16-bit words
+ *
+ * This function will generate a CRC based on the polynomial 0xAC9A and
+ * whatever value is stored in the seed variable. Note that this
+ * value inverts the local seed and the result in order to capture all
+ * leading and trailing zeros.
+ */
+static u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len)
+{
+ u32 result = seed;
+
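+ /* each pass folds one 32-bit DWORD (two 16-bit words); an odd
+ * trailing 16-bit word exits early through the break below
+ */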
+ while (len--) {
+ result ^= *(data++);
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+
+ if (!(len--))
+ break;
+
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF];
+ }
+
+ return (u16)result;
+}
+
+/**
+ * fm10k_fifo_crc - generate a CRC based off of FIFO data
+ * @fifo: pointer to FIFO
+ * @offset: offset point for start of FIFO
+ * @len: number of DWORDs to process
+ * @seed: seed value for CRC
+ *
+ * This function generates a CRC for some region of the FIFO
+ **/
+static u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset,
+ u16 len, u16 seed)
+{
+ u32 *data = fifo->buffer + offset;
+
+ /* track when we should cross the end of the FIFO */
+ offset = fifo->size - offset;
+
+ /* if we are in 2 blocks process the end of the FIFO first */
+ if (offset < len) {
+ seed = fm10k_crc_16b(data, seed, offset * 2);
+ data = fifo->buffer;
+ len -= offset;
+ }
+
+ /* process any remaining bits */
+ return fm10k_crc_16b(data, seed, len * 2);
+}
+
+/**
+ * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data
+ * @mbx: pointer to mailbox
+ * @head: head index provided by remote mailbox
+ *
+ * This function will generate the CRC for all data from the end of the
+ * last head update to the current one. It uses the result of the
+ * previous CRC as the seed for this update. The result is stored in
+ * mbx->local.
+ **/
+static void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head)
+{
+ u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail);
+
+ /* determine the offset for the start of the region to be pulled */
+ head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled);
+
+ /* update local CRC to include all of the pulled data */
+ mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local);
+}
+
+/**
+ * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data
+ * @mbx: pointer to mailbox
+ *
+ * This function will take all data that has been provided from the remote
+ * end and generate a CRC for it. This is stored in mbx->remote. The
+ * CRC for the header is then computed and if the result is non-zero we
+ * signal an error, dropping all data and resetting the connection.
+ */
+static s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ u16 len = mbx->head_len;
+ u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len;
+ u16 crc;
+
+ /* update the remote CRC if new data has been received */
+ if (len)
+ mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote);
+
+ /* process the full header as we have to validate the CRC */
+ crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1);
+
+ /* notify other end if we have a problem */
+ return crc ? FM10K_MBX_ERR_CRC : 0;
+}
+
+/**
+ * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO
+ * @mbx: pointer to mailbox
+ *
+ * This function returns true if there is a message in the Rx FIFO to dequeue.
+ **/
+static bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx)
+{
+ u16 msg_size = fm10k_fifo_head_len(&mbx->rx);
+
+ return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size);
+}
+
+/**
+ * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx
+ * @mbx: pointer to mailbox
+ * @len: verify free space is >= this value
+ *
+ * This function returns true if the mailbox is in a state ready to transmit.
+ **/
+static bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len)
+{
+ u16 fifo_unused = fm10k_fifo_unused(&mbx->tx);
+
+ return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len);
+}
+
+/**
+ * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied
+ * @mbx: pointer to mailbox
+ *
+ * This function returns true if the Tx FIFO is empty.
+ **/
+static bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx)
+{
+ return fm10k_fifo_empty(&mbx->tx);
+}
+
+/**
+ * fm10k_mbx_dequeue_rx - Dequeues the message from the head of the Rx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function dequeues messages and hands them off to the tlv parser.
+ * It will return the number of messages processed when called.
+ **/
+static u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->rx;
+ s32 err;
+ u16 cnt;
+
+ /* parse Rx messages out of the Rx FIFO to empty it */
+ for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) {
+ err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head,
+ mbx, mbx->msg_data);
+ if (err < 0)
+ mbx->rx_parse_err++;
+
+ fm10k_fifo_head_drop(fifo);
+ }
+
+ /* shift remaining bytes back to start of FIFO */
+ memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2);
+
+ /* shift head and tail based on the memory we moved */
+ fifo->tail -= fifo->head;
+ fifo->head = 0;
+
+ return cnt;
+}
+
+/**
+ * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg: message array to read
+ *
+ * This function enqueues a message up to the size specified by the length
+ * contained in the first DWORD of the message and will place at the tail
+ * of the FIFO. It will return 0 on success, or a negative value on error.
+ **/
+static s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, const u32 *msg)
+{
+ u32 countdown = mbx->timeout;
+ s32 err;
+
+ switch (mbx->state) {
+ case FM10K_STATE_CLOSED:
+ case FM10K_STATE_DISCONNECT:
+ return FM10K_MBX_ERR_NO_MBX;
+ default:
+ break;
+ }
+
+ /* enqueue the message on the Tx FIFO */
+ err = fm10k_fifo_enqueue(&mbx->tx, msg);
+
+ /* if it failed give the FIFO a chance to drain */
+ while (err && countdown) {
+ countdown--;
+ udelay(mbx->udelay);
+ mbx->ops.process(hw, mbx);
+ err = fm10k_fifo_enqueue(&mbx->tx, msg);
+ }
+
+ /* if the FIFO never drained, give up and count the Tx as busy */
+ if (err) {
+ mbx->timeout = 0;
+ mbx->tx_busy++;
+ }
+
+ /* begin processing message, ignore errors as this is just meant
+ * to start the mailbox flow so we are not concerned if there
+ * is a bad error, or the mailbox is already busy with a request
+ */
+ if (!mbx->tail_len)
+ mbx->ops.process(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_read - Copies the mbmem to local message buffer
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function copies the message from the mbmem to the message array
+ **/
+static s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* only allow one reader in here at a time */
+ if (mbx->mbx_hdr)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* read to capture initial interrupt bits */
+ if (fm10k_read_reg(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT)
+ mbx->mbx_lock = FM10K_MBX_ACK;
+
+ /* write back interrupt bits to clear */
+ fm10k_write_reg(hw, mbx->mbx_reg,
+ FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT);
+
+ /* read remote header */
+ mbx->mbx_hdr = fm10k_read_reg(hw, mbx->mbmem_reg ^ mbx->mbmem_len);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_write - Copies the local message buffer to mbmem
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function copies the message from the message array to mbmem
+ **/
+static void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ u32 mbmem = mbx->mbmem_reg;
+
+ /* write new msg header to notify recipient of change */
+ fm10k_write_reg(hw, mbmem, mbx->mbx_hdr);
+
+ /* write mailbox to send interrupt */
+ if (mbx->mbx_lock)
+ fm10k_write_reg(hw, mbx->mbx_reg, mbx->mbx_lock);
+
+ /* we no longer are using the header so free it */
+ mbx->mbx_hdr = 0;
+ mbx->mbx_lock = 0;
+}
+
+/**
+ * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a connection mailbox header
+ **/
+static void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx)
+{
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE);
+}
+
+/**
+ * fm10k_mbx_create_data_hdr - Generate a data mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a data mailbox header
+ **/
+static void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx)
+{
+ u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ u16 crc;
+
+ if (mbx->tail_len)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ /* generate CRC for data in flight and header */
+ crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled),
+ mbx->tail_len, mbx->local);
+ crc = fm10k_crc_16b(&hdr, crc, 1);
+
+ /* load header to memory to be written */
+ mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC);
+}
+
+/**
+ * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a disconnect mailbox header
+ **/
+static void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx)
+{
+ u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+ u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1);
+
+ mbx->mbx_lock |= FM10K_MBX_ACK;
+
+ /* load header to memory to be written */
+ mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC);
+}
+
+/**
+ * fm10k_mbx_create_error_msg - Generate an error message
+ * @mbx: pointer to mailbox
+ * @err: local error encountered
+ *
+ * This function will interpret the error provided by err, and based on
+ * that it may shift the message by 1 DWORD and then place an error header
+ * at the start of the message.
+ **/
+static void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err)
+{
+ /* only generate an error message for these types */
+ switch (err) {
+ case FM10K_MBX_ERR_TAIL:
+ case FM10K_MBX_ERR_HEAD:
+ case FM10K_MBX_ERR_TYPE:
+ case FM10K_MBX_ERR_SIZE:
+ case FM10K_MBX_ERR_RSVD0:
+ case FM10K_MBX_ERR_CRC:
+ break;
+ default:
+ return;
+ }
+
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) |
+ FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD);
+}
+
+/**
+ * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header
+ * @mbx: pointer to mailbox
+ *
+ * This function will parse up the fields in the mailbox header and return
+ * an error if the header contains any of a number of invalid configurations
+ * including unrecognized type, invalid route, or a malformed message.
+ **/
+static s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx)
+{
+ u16 type, rsvd0, head, tail, size;
+ const u32 *hdr = &mbx->mbx_hdr;
+
+ type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE);
+ rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE);
+
+ if (rsvd0)
+ return FM10K_MBX_ERR_RSVD0;
+
+ switch (type) {
+ case FM10K_MSG_DISCONNECT:
+ /* validate that all data has been received */
+ if (tail != mbx->head)
+ return FM10K_MBX_ERR_TAIL;
+
+ /* fall through */
+ case FM10K_MSG_DATA:
+ /* validate that head is moving correctly */
+ if (!head || (head == FM10K_MSG_HDR_MASK(HEAD)))
+ return FM10K_MBX_ERR_HEAD;
+ if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len)
+ return FM10K_MBX_ERR_HEAD;
+
+ /* validate that tail is moving correctly */
+ if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL)))
+ return FM10K_MBX_ERR_TAIL;
+ if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len)
+ break;
+
+ return FM10K_MBX_ERR_TAIL;
+ case FM10K_MSG_CONNECT:
+ /* validate size is in range and is power of 2 mask */
+ if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1)))
+ return FM10K_MBX_ERR_SIZE;
+
+ /* fall through */
+ case FM10K_MSG_ERROR:
+ if (!head || (head == FM10K_MSG_HDR_MASK(HEAD)))
+ return FM10K_MBX_ERR_HEAD;
+ /* neither create nor error include a tail offset */
+ if (tail)
+ return FM10K_MBX_ERR_TAIL;
+
+ break;
+ default:
+ return FM10K_MBX_ERR_TYPE;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_create_reply - Generate reply based on state and remote head
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number
+ *
+ * This function will generate an outgoing message based on the current
+ * mailbox state and the remote fifo head. It will return the length
+ * of the outgoing message excluding header on success, and a negative value
+ * on error.
+ **/
+static s32 fm10k_mbx_create_reply(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* update our checksum for the outgoing data */
+ fm10k_mbx_update_local_crc(mbx, head);
+
+ /* as long as other end recognizes us keep sending data */
+ fm10k_mbx_pull_head(hw, mbx, head);
+
+ /* generate new header based on data */
+ if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN))
+ fm10k_mbx_create_data_hdr(mbx);
+ else
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* send disconnect even if we aren't connected */
+ fm10k_mbx_create_connect_hdr(mbx);
+ break;
+ case FM10K_STATE_CLOSED:
+ /* generate new header based on data */
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_reset_work - Reset internal pointers for any pending work
+ * @mbx: pointer to mailbox
+ *
+ * This function will reset all internal pointers so any work in progress
+ * is dropped. This call should occur every time we transition from the
+ * open state to the connect state.
+ **/
+static void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx)
+{
+ /* reset our outgoing max size back to Rx limits */
+ mbx->max_size = mbx->rx.size - 1;
+
+ /* just do a quick resync to start of message */
+ mbx->pushed = 0;
+ mbx->pulled = 0;
+ mbx->tail_len = 0;
+ mbx->head_len = 0;
+ mbx->rx.tail = 0;
+ mbx->rx.head = 0;
+}
+
+/**
+ * fm10k_mbx_update_max_size - Update the max_size and drop any large messages
+ * @mbx: pointer to mailbox
+ * @size: new value for max_size
+ *
+ * This function will update the max_size value and drop any outgoing messages
+ * from the head of the Tx FIFO that are larger than max_size.
+ **/
+static void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size)
+{
+ u16 len;
+
+ mbx->max_size = size;
+
+ /* flush any oversized messages from the queue */
+ for (len = fm10k_fifo_head_len(&mbx->tx);
+ len > size;
+ len = fm10k_fifo_head_len(&mbx->tx)) {
+ fm10k_fifo_head_drop(&mbx->tx);
+ mbx->tx_dropped++;
+ }
+}
+
+/**
+ * fm10k_mbx_connect_reset - Reset following request for reset
+ * @mbx: pointer to mailbox
+ *
+ * This function resets the mailbox to either a disconnected state
+ * or a connect state depending on the current mailbox state
+ **/
+static void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx)
+{
+ /* just do a quick resync to start of frame */
+ fm10k_mbx_reset_work(mbx);
+
+ /* reset CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* we cannot exit connect until the size is good */
+ if (mbx->state == FM10K_STATE_OPEN)
+ mbx->state = FM10K_STATE_CONNECT;
+ else
+ mbx->state = FM10K_STATE_CLOSED;
+}
+
+/**
+ * fm10k_mbx_process_connect - Process connect header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming connect header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_connect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 size, head;
+
+ /* we will need to pull all of the fields for verification */
+ size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ case FM10K_STATE_OPEN:
+ /* reset any in-progress work */
+ fm10k_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* we cannot exit connect until the size is good */
+ if (size > mbx->rx.size) {
+ mbx->max_size = mbx->rx.size - 1;
+ } else {
+ /* record the remote system requesting connection */
+ mbx->state = FM10K_STATE_OPEN;
+
+ fm10k_mbx_update_max_size(mbx, size);
+ }
+ break;
+ default:
+ break;
+ }
+
+ /* align our tail index to remote head index */
+ mbx->tail = head;
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_data - Process data header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming data header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_data(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 err;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+
+ /* if we are in connect just update our data and go */
+ if (mbx->state == FM10K_STATE_CONNECT) {
+ mbx->tail = head;
+ mbx->state = FM10K_STATE_OPEN;
+ }
+
+ /* abort on message size errors */
+ err = fm10k_mbx_push_tail(hw, mbx, tail);
+ if (err < 0)
+ return err;
+
+ /* verify the checksum on the incoming data */
+ err = fm10k_mbx_verify_remote_crc(mbx);
+ if (err)
+ return err;
+
+ /* process messages if we have received any */
+ fm10k_mbx_dequeue_rx(hw, mbx);
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_disconnect - Process disconnect header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming disconnect header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 err;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL);
+
+ /* We should not be receiving disconnect if Rx is incomplete */
+ if (mbx->pushed)
+ return FM10K_MBX_ERR_TAIL;
+
+ /* we have already verified mbx->head == tail so we know this is 0 */
+ mbx->head_len = 0;
+
+ /* verify the checksum on the incoming header is correct */
+ err = fm10k_mbx_verify_remote_crc(mbx);
+ if (err)
+ return err;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ case FM10K_STATE_OPEN:
+ /* state doesn't change if we still have work to do */
+ if (!fm10k_mbx_tx_complete(mbx))
+ break;
+
+ /* verify the head indicates we completed all transmits */
+ if (head != mbx->tail)
+ return FM10K_MBX_ERR_HEAD;
+
+ /* reset any in-progress work */
+ fm10k_mbx_connect_reset(mbx);
+ break;
+ default:
+ break;
+ }
+
+ return fm10k_mbx_create_reply(hw, mbx, head);
+}
+
+/**
+ * fm10k_mbx_process_error - Process error header
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will read an incoming error header and reply with the
+ * appropriate message. It will return a value indicating the number of
+ * data DWORDs on success, or will return a negative value on failure.
+ **/
+static s32 fm10k_mbx_process_error(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ s32 err_no;
+ u16 head;
+
+ /* we will need to pull all of the fields for verification */
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD);
+
+ /* we only have the lower bits of the error number, so add the upper bits */
+ err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO);
+ err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO);
+
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* flush any uncompleted work */
+ fm10k_mbx_reset_work(mbx);
+
+ /* reset CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* reset tail index and size to prepare for reconnect */
+ mbx->tail = head;
+
+ /* if open then reset max_size and go back to connect */
+ if (mbx->state == FM10K_STATE_OPEN) {
+ mbx->state = FM10K_STATE_CONNECT;
+ break;
+ }
+
+ /* send a connect message to get data flowing again */
+ fm10k_mbx_create_connect_hdr(mbx);
+ return 0;
+ default:
+ break;
+ }
+
+ return fm10k_mbx_create_reply(hw, mbx, mbx->tail);
+}
+
+/**
+ * fm10k_mbx_process - Process mailbox interrupt
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will process incoming mailbox events and generate mailbox
+ * replies. It will return a value indicating the number of DWORDs
+ * transmitted excluding header on success or a negative value on error.
+ **/
+static s32 fm10k_mbx_process(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ s32 err;
+
+ /* we do not read mailbox if closed */
+ if (mbx->state == FM10K_STATE_CLOSED)
+ return 0;
+
+ /* copy data from mailbox */
+ err = fm10k_mbx_read(hw, mbx);
+ if (err)
+ return err;
+
+ /* validate type, source, and destination */
+ err = fm10k_mbx_validate_msg_hdr(mbx);
+ if (err < 0)
+ goto msg_err;
+
+ switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) {
+ case FM10K_MSG_CONNECT:
+ err = fm10k_mbx_process_connect(hw, mbx);
+ break;
+ case FM10K_MSG_DATA:
+ err = fm10k_mbx_process_data(hw, mbx);
+ break;
+ case FM10K_MSG_DISCONNECT:
+ err = fm10k_mbx_process_disconnect(hw, mbx);
+ break;
+ case FM10K_MSG_ERROR:
+ err = fm10k_mbx_process_error(hw, mbx);
+ break;
+ default:
+ err = FM10K_MBX_ERR_TYPE;
+ break;
+ }
+
+msg_err:
+ /* notify partner of errors on our end */
+ if (err < 0)
+ fm10k_mbx_create_error_msg(mbx, err);
+
+ /* copy data from mailbox */
+ fm10k_mbx_write(hw, mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_mbx_disconnect - Shutdown mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will shut down the mailbox. It first places the mailbox
+ * in the disconnect state, then allows up to a predefined timeout for
+ * the mailbox to transition to close on its own. If this does not occur
+ * then the mailbox will be forced into the closed state.
+ *
+ * Any mailbox transactions not completed before calling this function
+ * are not guaranteed to complete and may be dropped.
+ **/
+static void fm10k_mbx_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0;
+
+ /* Place mbx in ready to disconnect state */
+ mbx->state = FM10K_STATE_DISCONNECT;
+
+ /* trigger interrupt to start shutdown process */
+ fm10k_write_reg(hw, mbx->mbx_reg, FM10K_MBX_REQ |
+ FM10K_MBX_INTERRUPT_DISABLE);
+ do {
+ udelay(FM10K_MBX_POLL_DELAY);
+ mbx->ops.process(hw, mbx);
+ timeout -= FM10K_MBX_POLL_DELAY;
+ } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED));
+
+ /* in case we didn't close just force the mailbox into shutdown */
+ fm10k_mbx_connect_reset(mbx);
+ fm10k_mbx_update_max_size(mbx, 0);
+
+ fm10k_write_reg(hw, mbx->mbmem_reg, 0);
+}
+
+/**
+ * fm10k_mbx_connect - Start mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will initiate a mailbox connection. It will populate the
+ * mailbox with a broadcast connect message and then initialize the lock.
+ * This is safe since the connect message is a single DWORD so the mailbox
+ * transaction is guaranteed to be atomic.
+ *
+ * This function will return an error if the mailbox has not been initiated
+ * or is currently in use.
+ **/
+static s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* we cannot connect an uninitialized mailbox */
+ if (!mbx->rx.buffer)
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* we cannot connect an already connected mailbox */
+ if (mbx->state != FM10K_STATE_CLOSED)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* mailbox timeout can now become active */
+ mbx->timeout = FM10K_MBX_INIT_TIMEOUT;
+
+ /* Place mbx in ready to connect state */
+ mbx->state = FM10K_STATE_CONNECT;
+
+ /* initialize header of remote mailbox */
+ fm10k_mbx_create_disconnect_hdr(mbx);
+ fm10k_write_reg(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr);
+
+ /* enable interrupt and notify other party of new message */
+ mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT |
+ FM10K_MBX_INTERRUPT_ENABLE;
+
+ /* generate and load connect header into mailbox */
+ fm10k_mbx_create_connect_hdr(mbx);
+ fm10k_mbx_write(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_validate_handlers - Validate layout of message parsing data
+ * @msg_data: handlers for mailbox events
+ *
+ * This function validates the layout of the message parsing data. This
+ * should be mostly static, but it is important to catch any errors that
+ * are made when constructing the parsers.
+ **/
+static s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data)
+{
+ const struct fm10k_tlv_attr *attr;
+ unsigned int id;
+
+ /* Allow NULL mailboxes that transmit but don't receive */
+ if (!msg_data)
+ return 0;
+
+ while (msg_data->id != FM10K_TLV_ERROR) {
+ /* all messages should have a function handler */
+ if (!msg_data->func)
+ return FM10K_ERR_PARAM;
+
+ /* parser is optional */
+ attr = msg_data->attr;
+ if (attr) {
+ while (attr->id != FM10K_TLV_ERROR) {
+ id = attr->id;
+ attr++;
+ /* ID should always be increasing */
+ if (id >= attr->id)
+ return FM10K_ERR_PARAM;
+ /* ID should fit in results array */
+ if (id >= FM10K_TLV_RESULTS_MAX)
+ return FM10K_ERR_PARAM;
+ }
+
+ /* verify terminator is in the list */
+ if (attr->id != FM10K_TLV_ERROR)
+ return FM10K_ERR_PARAM;
+ }
+
+ id = msg_data->id;
+ msg_data++;
+ /* ID should always be increasing */
+ if (id >= msg_data->id)
+ return FM10K_ERR_PARAM;
+ }
+
+ /* verify terminator is in the list */
+ if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func)
+ return FM10K_ERR_PARAM;
+
+ return 0;
+}
+
+/**
+ * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ *
+ * This function associates a set of message handling ops with a mailbox.
+ **/
+static s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data)
+{
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ return 0;
+}
+
+/**
+ * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes
+ *
+ * This function initializes the mailbox for use. It will split the
+ * buffer provided and use it to populate both the Tx and Rx FIFOs.
+ * In order to allow for easy masking of head/tail the value reported
+ * in size must be a power of 2 and is reported in DWORDs, not bytes.
+ * Any invalid values will cause the mailbox to return an error.
+ **/
+s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data, u8 id)
+{
+ /* initialize registers */
+ switch (hw->mac.type) {
+ case fm10k_mac_vf:
+ mbx->mbx_reg = FM10K_VFMBX;
+ mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR);
+ break;
+ case fm10k_mac_pf:
+ /* there are only 64 VF <-> PF mailboxes */
+ if (id < 64) {
+ mbx->mbx_reg = FM10K_MBX(id);
+ mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0);
+ break;
+ }
+ /* fall through */
+ default:
+ return FM10K_MBX_ERR_NO_MBX;
+ }
+
+ /* start out in closed state */
+ mbx->state = FM10K_STATE_CLOSED;
+
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ /* start mailbox as timed out and let the reset_hw call
+ * set the timeout value to begin communications
+ */
+ mbx->timeout = 0;
+ mbx->udelay = FM10K_MBX_INIT_DELAY;
+
+ /* initialize tail and head */
+ mbx->tail = 1;
+ mbx->head = 1;
+
+ /* initialize CRC seeds */
+ mbx->local = FM10K_MBX_CRC_SEED;
+ mbx->remote = FM10K_MBX_CRC_SEED;
+
+ /* Split buffer for use by Tx/Rx FIFOs */
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+ mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR;
+
+ /* initialize the FIFOs, sizes are in 4 byte increments */
+ fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE);
+ fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE],
+ FM10K_MBX_RX_BUFFER_SIZE);
+
+ /* initialize function pointers */
+ mbx->ops.connect = fm10k_mbx_connect;
+ mbx->ops.disconnect = fm10k_mbx_disconnect;
+ mbx->ops.rx_ready = fm10k_mbx_rx_ready;
+ mbx->ops.tx_ready = fm10k_mbx_tx_ready;
+ mbx->ops.tx_complete = fm10k_mbx_tx_complete;
+ mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx;
+ mbx->ops.process = fm10k_mbx_process;
+ mbx->ops.register_handlers = fm10k_mbx_register_handlers;
+
+ return 0;
+}
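+
+/* Illustrative call sequence for bringing up a PF/VF mailbox (sketch only;
+ * assumes the caller owns the struct fm10k_mbx_info, for example embedded
+ * in its hw structure, and provides a validated msg_data table):
+ *
+ *	err = fm10k_pfvf_mbx_init(hw, mbx, msg_data, 0);
+ *	if (!err)
+ *		err = mbx->ops.connect(hw, mbx);
+ */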
+
+/**
+ * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO
+ * @mbx: pointer to mailbox
+ *
+ * This function returns a data mailbox header
+ **/
+static void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx)
+{
+ if (mbx->tail_len)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD);
+}
+
+/**
+ * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO
+ * @mbx: pointer to mailbox
+ * @err: error flags to report if any
+ *
+ * This function returns a connection mailbox header
+ **/
+static void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err)
+{
+ if (mbx->local)
+ mbx->mbx_lock |= FM10K_MBX_REQ;
+
+ mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) |
+ FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) |
+ FM10K_MSG_HDR_FIELD_SET(err, SM_ERR);
+}
+
+/**
+ * fm10k_sm_mbx_connect_reset - Reset following request for reset
+ * @mbx: pointer to mailbox
+ *
+ * This function resets the mailbox to a just connected state
+ **/
+static void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx)
+{
+ /* flush any uncompleted work */
+ fm10k_mbx_reset_work(mbx);
+
+ /* set local version to max and remote version to 0 */
+ mbx->local = FM10K_SM_MBX_VERSION;
+ mbx->remote = 0;
+
+ /* initialize tail and head */
+ mbx->tail = 1;
+ mbx->head = 1;
+
+ /* reset state back to connect */
+ mbx->state = FM10K_STATE_CONNECT;
+}
+
+/**
+ * fm10k_sm_mbx_connect - Start switch manager mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will initiate a mailbox connection with the switch
+ * manager. To do this it will first disconnect the mailbox, and then
+ * reconnect it in order to complete a reset of the mailbox.
+ *
+ * This function will return an error if the mailbox has not been initiated
+ * or is currently in use.
+ **/
+static s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx)
+{
+ /* we cannot connect an uninitialized mailbox */
+ if (!mbx->rx.buffer)
+ return FM10K_MBX_ERR_NO_SPACE;
+
+ /* we cannot connect an already connected mailbox */
+ if (mbx->state != FM10K_STATE_CLOSED)
+ return FM10K_MBX_ERR_BUSY;
+
+ /* mailbox timeout can now become active */
+ mbx->timeout = FM10K_MBX_INIT_TIMEOUT;
+
+ /* Place mbx in ready to connect state */
+ mbx->state = FM10K_STATE_CONNECT;
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+
+ /* reset interface back to connect */
+ fm10k_sm_mbx_connect_reset(mbx);
+
+ /* enable interrupt and notify other party of new message */
+ mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT |
+ FM10K_MBX_INTERRUPT_ENABLE;
+
+ /* generate and load connect header into mailbox */
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ fm10k_mbx_write(hw, mbx);
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_disconnect - Shutdown mailbox connection
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will shut down the mailbox. It first places the mailbox
+ * in the disconnect state, then allows up to a predefined timeout for
+ * the mailbox to transition to close on its own. If this does not occur
+ * then the mailbox will be forced into the closed state.
+ *
+ * Any mailbox transactions not completed before calling this function
+ * are not guaranteed to complete and may be dropped.
+ **/
+static void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0;
+
+ /* Place mbx in ready to disconnect state */
+ mbx->state = FM10K_STATE_DISCONNECT;
+
+ /* trigger interrupt to start shutdown process */
+ fm10k_write_reg(hw, mbx->mbx_reg, FM10K_MBX_REQ |
+ FM10K_MBX_INTERRUPT_DISABLE);
+ do {
+ udelay(FM10K_MBX_POLL_DELAY);
+ mbx->ops.process(hw, mbx);
+ timeout -= FM10K_MBX_POLL_DELAY;
+ } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED));
+
+ /* in case we didn't close just force the mailbox into shutdown */
+ mbx->state = FM10K_STATE_CLOSED;
+ mbx->remote = 0;
+ fm10k_mbx_reset_work(mbx);
+ fm10k_mbx_update_max_size(mbx, 0);
+
+ fm10k_write_reg(hw, mbx->mbmem_reg, 0);
+}
+
+/**
+ * fm10k_sm_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header
+ * @mbx: pointer to mailbox
+ *
+ * This function will parse up the fields in the mailbox header and return
+ * an error if the header contains any of a number of invalid configurations
+ * including unrecognized offsets or version numbers.
+ **/
+static s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 tail, head, ver;
+
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL);
+ ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD);
+
+ switch (ver) {
+ case 0:
+ break;
+ case FM10K_SM_MBX_VERSION:
+ if (!head || head > FM10K_SM_MBX_FIFO_LEN)
+ return FM10K_MBX_ERR_HEAD;
+ if (!tail || tail > FM10K_SM_MBX_FIFO_LEN)
+ return FM10K_MBX_ERR_TAIL;
+ if (mbx->tail < head)
+ head += mbx->mbmem_len - 1;
+ if (tail < mbx->head)
+ tail += mbx->mbmem_len - 1;
+ if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len)
+ return FM10K_MBX_ERR_HEAD;
+ if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len)
+ break;
+ return FM10K_MBX_ERR_TAIL;
+ default:
+ return FM10K_MBX_ERR_SRC;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_process_error - Process header with error flag set
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to respond to a request where the error flag
+ * is set. As a result we will terminate a connection if one is present
+ * and fall back into the reset state with a connection header of version
+ * 0 (RESET).
+ **/
+static void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ /* if there is an error just disconnect */
+ mbx->remote = 0;
+ break;
+ case FM10K_STATE_OPEN:
+ /* flush any uncompleted work */
+ fm10k_sm_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* try connecting at lower version */
+ if (mbx->remote) {
+ while (mbx->local > 1)
+ mbx->local--;
+ mbx->remote = 0;
+ }
+ break;
+ default:
+ break;
+ }
+
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+}
+
+/**
+ * fm10k_sm_mbx_create_error_msg - Process an error in FIFO hdr
+ * @mbx: pointer to mailbox
+ * @err: local error encountered
+ *
+ * This function will interpret the error provided by err, and based on
+ * that it may set the error bit in the local message header
+ **/
+static void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err)
+{
+ /* only generate an error message for these types */
+ switch (err) {
+ case FM10K_MBX_ERR_TAIL:
+ case FM10K_MBX_ERR_HEAD:
+ case FM10K_MBX_ERR_SRC:
+ case FM10K_MBX_ERR_SIZE:
+ case FM10K_MBX_ERR_RSVD0:
+ break;
+ default:
+ return;
+ }
+
+ /* process it as though we received an error, and send error reply */
+ fm10k_sm_mbx_process_error(mbx);
+ fm10k_sm_mbx_create_connect_hdr(mbx, 1);
+}
+
+/**
+ * fm10k_sm_mbx_receive - Take message from switch manager FIFO and put it in Rx FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @tail: tail index of message
+ *
+ * This function will dequeue one message from the Rx switch manager mailbox
+ * FIFO and place it in the Rx mailbox FIFO for processing by software.
+ **/
+static s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx,
+ u16 tail)
+{
+ /* reduce length by 1 to convert to a mask */
+ u16 mbmem_len = mbx->mbmem_len - 1;
+ s32 err;
+
+ /* push tail in front of head */
+ if (tail < mbx->head)
+ tail += mbmem_len;
+
+ /* copy data to the Rx FIFO */
+ err = fm10k_mbx_push_tail(hw, mbx, tail);
+ if (err < 0)
+ return err;
+
+ /* process messages if we have received any */
+ fm10k_mbx_dequeue_rx(hw, mbx);
+
+ /* guarantee head aligns with the end of the last message */
+ mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed);
+ mbx->pushed = 0;
+
+ /* clear any extra bits left over since index adds 1 extra bit */
+ if (mbx->head > mbmem_len)
+ mbx->head -= mbmem_len;
+
+ return err;
+}
+
+/**
+ * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: head index of message
+ *
+ * This function will dequeue one message from the Tx mailbox FIFO and place
+ * it in the Tx switch manager mailbox FIFO for processing by hardware.
+ **/
+static void fm10k_sm_mbx_transmit(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ struct fm10k_mbx_fifo *fifo = &mbx->tx;
+ /* reduce length by 1 to convert to a mask */
+ u16 mbmem_len = mbx->mbmem_len - 1;
+ u16 tail_len, len = 0;
+ u32 *msg;
+
+ /* push head behind tail */
+ if (mbx->tail < head)
+ head += mbmem_len;
+
+ fm10k_mbx_pull_head(hw, mbx, head);
+
+ /* determine msg aligned offset for end of buffer */
+ do {
+ msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len);
+ tail_len = len;
+ len += FM10K_TLV_DWORD_LEN(*msg);
+ } while ((len <= mbx->tail_len) && (len < mbmem_len));
+
+ /* guarantee we stop on a message boundary */
+ if (mbx->tail_len > tail_len) {
+ mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len);
+ mbx->tail_len = tail_len;
+ }
+
+ /* clear any extra bits left over since index adds 1 extra bit */
+ if (mbx->tail > mbmem_len)
+ mbx->tail -= mbmem_len;
+}
+
+/**
+ * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @head: acknowledgement number
+ *
+ * This function will generate an outgoing message based on the current
+ * mailbox state and the remote FIFO head.
+ **/
+static void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx, u16 head)
+{
+ switch (mbx->state) {
+ case FM10K_STATE_OPEN:
+ case FM10K_STATE_DISCONNECT:
+ /* flush out Tx data */
+ fm10k_sm_mbx_transmit(hw, mbx, head);
+
+ /* generate new header based on data */
+ if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) {
+ fm10k_sm_mbx_create_data_hdr(mbx);
+ } else {
+ mbx->remote = 0;
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ }
+ break;
+ case FM10K_STATE_CONNECT:
+ case FM10K_STATE_CLOSED:
+ fm10k_sm_mbx_create_connect_hdr(mbx, 0);
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET)
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to respond to a request where the version data
+ * is set to 0. As such we will either terminate the connection or go
+ * into the connect state in order to re-establish the connection. This
+ * function can also be used to respond to an error as the connection
+ * resetting would also be a means of dealing with errors.
+ **/
+static void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const enum fm10k_mbx_state state = mbx->state;
+
+ switch (state) {
+ case FM10K_STATE_DISCONNECT:
+ /* drop remote connections and disconnect */
+ mbx->state = FM10K_STATE_CLOSED;
+ mbx->remote = 0;
+ mbx->local = 0;
+ break;
+ case FM10K_STATE_OPEN:
+ /* flush any incomplete work */
+ fm10k_sm_mbx_connect_reset(mbx);
+ break;
+ case FM10K_STATE_CONNECT:
+ /* Update remote value to match local value */
+ mbx->remote = mbx->local;
+ default:
+ break;
+ }
+
+ fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail);
+}
+
+/**
+ * fm10k_sm_mbx_process_version_1 - Process header with version == 1
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function is meant to process messages received when the remote
+ * mailbox is active.
+ **/
+static s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ const u32 *hdr = &mbx->mbx_hdr;
+ u16 head, tail;
+ s32 len;
+
+ /* pull all fields needed for verification */
+ tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL);
+ head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD);
+
+ /* if we are in connect and wanting version 1 then start up and go */
+ if (mbx->state == FM10K_STATE_CONNECT) {
+ if (!mbx->remote)
+ goto send_reply;
+ if (mbx->remote != 1)
+ return FM10K_MBX_ERR_SRC;
+
+ mbx->state = FM10K_STATE_OPEN;
+ }
+
+ do {
+ /* abort on message size errors */
+ len = fm10k_sm_mbx_receive(hw, mbx, tail);
+ if (len < 0)
+ return len;
+
+ /* continue until we have flushed the Rx FIFO */
+ } while (len);
+
+send_reply:
+ fm10k_sm_mbx_create_reply(hw, mbx, head);
+
+ return 0;
+}
+
+/**
+ * fm10k_sm_mbx_process - Process switch manager mailbox interrupt
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ *
+ * This function will process incoming mailbox events and generate mailbox
+ * replies. It will return a value indicating the number of DWORDs
+ * transmitted excluding header on success or a negative value on error.
+ **/
+static s32 fm10k_sm_mbx_process(struct fm10k_hw *hw,
+ struct fm10k_mbx_info *mbx)
+{
+ s32 err;
+
+ /* we do not read mailbox if closed */
+ if (mbx->state == FM10K_STATE_CLOSED)
+ return 0;
+
+ /* retrieve data from switch manager */
+ err = fm10k_mbx_read(hw, mbx);
+ if (err)
+ return err;
+
+ err = fm10k_sm_mbx_validate_fifo_hdr(mbx);
+ if (err < 0)
+ goto fifo_err;
+
+ if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) {
+ fm10k_sm_mbx_process_error(mbx);
+ goto fifo_err;
+ }
+
+ switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) {
+ case 0:
+ fm10k_sm_mbx_process_reset(hw, mbx);
+ break;
+ case FM10K_SM_MBX_VERSION:
+ err = fm10k_sm_mbx_process_version_1(hw, mbx);
+ break;
+ }
+
+fifo_err:
+ if (err < 0)
+ fm10k_sm_mbx_create_error_msg(mbx, err);
+
+ /* report data to switch manager */
+ fm10k_mbx_write(hw, mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox
+ * @hw: pointer to hardware structure
+ * @mbx: pointer to mailbox
+ * @msg_data: handlers for mailbox events
+ *
+ * This function for now is used to stub out the PF/SM mailbox
+ **/
+s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *msg_data)
+{
+ mbx->mbx_reg = FM10K_GMBX;
+ mbx->mbmem_reg = FM10K_MBMEM_PF(0);
+ /* start out in closed state */
+ mbx->state = FM10K_STATE_CLOSED;
+
+ /* validate layout of handlers before assigning them */
+ if (fm10k_mbx_validate_handlers(msg_data))
+ return FM10K_ERR_PARAM;
+
+ /* initialize the message handlers */
+ mbx->msg_data = msg_data;
+
+ /* start mailbox as timed out and let the reset_hw call
+ * set the timeout value to begin communications
+ */
+ mbx->timeout = 0;
+ mbx->udelay = FM10K_MBX_INIT_DELAY;
+
+ /* Split buffer for use by Tx/Rx FIFOs */
+ mbx->max_size = FM10K_MBX_MSG_MAX_SIZE;
+ mbx->mbmem_len = FM10K_MBMEM_PF_XOR;
+
+ /* initialize the FIFOs, sizes are in 4 byte increments */
+ fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE);
+ fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE],
+ FM10K_MBX_RX_BUFFER_SIZE);
+
+ /* initialize function pointers */
+ mbx->ops.connect = fm10k_sm_mbx_connect;
+ mbx->ops.disconnect = fm10k_sm_mbx_disconnect;
+ mbx->ops.rx_ready = fm10k_mbx_rx_ready;
+ mbx->ops.tx_ready = fm10k_mbx_tx_ready;
+ mbx->ops.tx_complete = fm10k_mbx_tx_complete;
+ mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx;
+ mbx->ops.process = fm10k_sm_mbx_process;
+ mbx->ops.register_handlers = fm10k_mbx_register_handlers;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
new file mode 100644
index 0000000..0419a7f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
@@ -0,0 +1,307 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_MBX_H_
+#define _FM10K_MBX_H_
+
+/* forward declaration */
+struct fm10k_mbx_info;
+
+#include "fm10k_type.h"
+#include "fm10k_tlv.h"
+
+/* PF Mailbox Registers */
+#define FM10K_MBMEM(_n) ((_n) + 0x18000)
+#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000)
+#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400)
+#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600)
+/* XOR provides means of switching from Tx to Rx FIFO */
+#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0))
+#define FM10K_MBX(_n) ((_n) + 0x18800)
+#define FM10K_MBX_REQ 0x00000002
+#define FM10K_MBX_ACK 0x00000004
+#define FM10K_MBX_REQ_INTERRUPT 0x00000008
+#define FM10K_MBX_ACK_INTERRUPT 0x00000010
+#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020
+#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040
+#define FM10K_MBICR(_n) ((_n) + 0x18840)
+#define FM10K_GMBX 0x18842
+
+/* VF Mailbox Registers */
+#define FM10K_VFMBX 0x00010
+#define FM10K_VFMBMEM(_n) ((_n) + 0x00020)
+#define FM10K_VFMBMEM_LEN 16
+#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2)
+
+/* Delays/timeouts */
+#define FM10K_MBX_DISCONNECT_TIMEOUT 500
+#define FM10K_MBX_POLL_DELAY 19
+#define FM10K_MBX_INT_DELAY 20
+
+/* PF/VF Mailbox state machine
+ *
+ * +----------+ connect() +----------+
+ * | CLOSED | --------------> | CONNECT |
+ * +----------+ +----------+
+ * ^ ^ |
+ * | rcv: rcv: | | rcv:
+ * | Connect Disconnect | | Connect
+ * | Disconnect Error | | Data
+ * | | |
+ * | | V
+ * +----------+ disconnect() +----------+
+ * |DISCONNECT| <-------------- | OPEN |
+ * +----------+ +----------+
+ *
+ * The diagram above describes the PF/VF mailbox state machine. There
+ * are four main states to this machine.
+ * Closed: This state represents a mailbox that is in a standby state
+ * with interrupts disabled. In this state the mailbox should not
+ * read the mailbox or write any data. The only means of exiting
+ * this state is for the system to make the connect() call for the
+ * mailbox, it will then transition to the connect state.
+ * Connect: In this state the mailbox is seeking a connection. It will
+ * post a connect message with no specified destination and will
+ * wait for a reply from the other side of the mailbox. This state
+ * is exited when either a connect with the local mailbox as the
+ * destination is received or when a data message is received with
+ * a valid sequence number.
+ * Open: In this state the mailbox is able to transfer data between the local
+ * entity and the remote. It will fall back to connect in the event of
+ * receiving either an error message, or a disconnect message. It will
+ * transition to disconnect on a call to disconnect();
+ * Disconnect: In this state the mailbox is attempting to gracefully terminate
+ * the connection. It will do so at the first point where it knows
+ * that the remote endpoint is either done sending, or when the
+ * remote endpoint has fallen back into connect.
+ */
+enum fm10k_mbx_state {
+ FM10K_STATE_CLOSED,
+ FM10K_STATE_CONNECT,
+ FM10K_STATE_OPEN,
+ FM10K_STATE_DISCONNECT,
+};
+
+/* PF/VF Mailbox header format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * The layout above describes the format for the header used in the PF/VF
+ * mailbox. The header is broken out into the following fields:
+ * Type: There are 4 supported message types
+ * 0x8: Data header - used to transport message data
+ * 0xC: Connect header - used to establish connection
+ * 0xD: Disconnect header - used to tear down a connection
+ * 0xE: Error header - used to address message exceptions
+ * Tail: Tail index for local FIFO
+ * Tail index actually consists of two parts. The MSB of
+ * the head is a loop tracker, it is 0 on an even numbered
+ * loop through the FIFO, and 1 on the odd numbered loops.
+ * To get the actual mailbox offset based on the tail it
+ * is necessary to add bit 3 to bit 0 and clear bit 3. This
+ * gives us a valid range of 0x1 - 0xE.
+ * Head: Head index for remote FIFO
+ * Head index follows the same format as the tail index.
+ * Rsvd0: Reserved 0 portion of the mailbox header
+ * CRC: Running CRC for all data since connect plus current message header
+ * Size: Maximum message size - Applies only to connect headers
+ * The maximum message size is provided during connect to avoid
+ * jamming the mailbox with messages that do not fit.
+ * Err_no: Error number - Applies only to error headers
+ * The error number provides an indication of the type of error
+ * experienced.
+ */
+
+/* macros for retrieving and setting header values */
+#define FM10K_MSG_HDR_MASK(name) \
+ ((0x1u << FM10K_MSG_##name##_SIZE) - 1)
+#define FM10K_MSG_HDR_FIELD_SET(value, name) \
+ (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT)
+#define FM10K_MSG_HDR_FIELD_GET(value, name) \
+ ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name))
+
+/* offsets shared between all headers */
+#define FM10K_MSG_TYPE_SHIFT 0
+#define FM10K_MSG_TYPE_SIZE 4
+#define FM10K_MSG_TAIL_SHIFT 4
+#define FM10K_MSG_TAIL_SIZE 4
+#define FM10K_MSG_HEAD_SHIFT 8
+#define FM10K_MSG_HEAD_SIZE 4
+#define FM10K_MSG_RSVD0_SHIFT 12
+#define FM10K_MSG_RSVD0_SIZE 4
+
+/* offsets for data/disconnect headers */
+#define FM10K_MSG_CRC_SHIFT 16
+#define FM10K_MSG_CRC_SIZE 16
+
+/* offsets for connect headers */
+#define FM10K_MSG_CONNECT_SIZE_SHIFT 16
+#define FM10K_MSG_CONNECT_SIZE_SIZE 16
+
+/* offsets for error headers */
+#define FM10K_MSG_ERR_NO_SHIFT 16
+#define FM10K_MSG_ERR_NO_SIZE 16
+
+enum fm10k_msg_type {
+ FM10K_MSG_DATA = 0x8,
+ FM10K_MSG_CONNECT = 0xC,
+ FM10K_MSG_DISCONNECT = 0xD,
+ FM10K_MSG_ERROR = 0xE,
+};
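+
+/* Illustrative use of the accessor macros above (example only, not used by
+ * the driver). A data header with hypothetical tail 3 and head 5:
+ *
+ *	u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
+ *		  FM10K_MSG_HDR_FIELD_SET(3, TAIL) |
+ *		  FM10K_MSG_HDR_FIELD_SET(5, HEAD);
+ *
+ * yields hdr == 0x538, and FM10K_MSG_HDR_FIELD_GET(hdr, HEAD) returns 5.
+ */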
+
+/* HNI/SM Mailbox FIFO format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-------+-----------------------+-------+-----------------------+
+ * | Error | Remote Head |Version| Local Tail |
+ * +-------+-----------------------+-------+-----------------------+
+ * | |
+ * . Local FIFO Data .
+ * . .
+ * +-------+-----------------------+-------+-----------------------+
+ *
+ * The layout above describes the format for the FIFOs used by the host
+ * network interface and the switch manager to communicate messages back
+ * and forth. Both the HNI and the switch maintain one such FIFO. The
+ * layout in memory has the switch manager FIFO followed immediately by
+ * the HNI FIFO. For this reason I am using just the pointer to the
+ * HNI FIFO in the mailbox ops as the offset between the two is fixed.
+ *
+ * The header for the FIFO is broken out into the following fields:
+ * Local Tail: Offset into FIFO region for next DWORD to write.
+ * Version: Version info for mailbox, only values of 0/1 are supported.
+ * Remote Head: Offset into remote FIFO to indicate how much we have read.
+ * Error: Error indication, values TBD.
+ */
+
+/* version number for switch manager mailboxes */
+#define FM10K_SM_MBX_VERSION 1
+#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1)
+
+/* offsets shared between all SM FIFO headers */
+#define FM10K_MSG_SM_TAIL_SHIFT 0
+#define FM10K_MSG_SM_TAIL_SIZE 12
+#define FM10K_MSG_SM_VER_SHIFT 12
+#define FM10K_MSG_SM_VER_SIZE 4
+#define FM10K_MSG_SM_HEAD_SHIFT 16
+#define FM10K_MSG_SM_HEAD_SIZE 12
+#define FM10K_MSG_SM_ERR_SHIFT 28
+#define FM10K_MSG_SM_ERR_SIZE 4
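+
+/* Illustrative SM FIFO header built with the fields above (example only;
+ * tail and head here are hypothetical local values):
+ *
+ *	u32 hdr = FM10K_MSG_HDR_FIELD_SET(tail, SM_TAIL) |
+ *		  FM10K_MSG_HDR_FIELD_SET(FM10K_SM_MBX_VERSION, SM_VER) |
+ *		  FM10K_MSG_HDR_FIELD_SET(head, SM_HEAD);
+ */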
+
+/* All error messages returned by mailbox functions
+ * The value -511 is 0xFE01 in hex. The idea is to order the errors
+ * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox
+ * messages. This also helps to avoid error number collisions as Linux
+ * doesn't appear to use error numbers 256 - 511.
+ */
+#define FM10K_MBX_ERR(_n) ((_n) - 512)
+#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01)
+#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03)
+#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05)
+#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06)
+#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08)
+#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09)
+#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B)
+#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C)
+#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E)
+#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F)
+
+#define FM10K_MBX_CRC_SEED 0xFFFF
+
+struct fm10k_mbx_ops {
+ s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ bool (*rx_ready)(struct fm10k_mbx_info *);
+ bool (*tx_ready)(struct fm10k_mbx_info *, u16);
+ bool (*tx_complete)(struct fm10k_mbx_info *);
+ s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const u32 *);
+ s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *);
+ s32 (*register_handlers)(struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+};
+
+struct fm10k_mbx_fifo {
+ u32 *buffer;
+ u16 head;
+ u16 tail;
+ u16 size;
+};
+
+/* size of buffer to be stored in mailbox for FIFOs */
+#define FM10K_MBX_TX_BUFFER_SIZE 512
+#define FM10K_MBX_RX_BUFFER_SIZE 128
+#define FM10K_MBX_BUFFER_SIZE \
+ (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE)
+
+/* minimum and maximum message size in dwords */
+#define FM10K_MBX_MSG_MAX_SIZE \
+ ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1))
+#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1)
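+
+/* With the buffer sizes above, FM10K_MBX_MSG_MAX_SIZE works out to 127 DWORDs
+ * and FM10K_VFMBX_MSG_MTU to 7 DWORDs (informational arithmetic only).
+ */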
+
+#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */
+#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */
+
+struct fm10k_mbx_info {
+ /* function pointers for mailbox operations */
+ struct fm10k_mbx_ops ops;
+ const struct fm10k_msg_data *msg_data;
+
+ /* message FIFOs */
+ struct fm10k_mbx_fifo rx;
+ struct fm10k_mbx_fifo tx;
+
+ /* delay for handling timeouts */
+ u32 timeout;
+ u32 udelay;
+
+ /* mailbox state info */
+ u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr;
+ u16 max_size, mbmem_len;
+ u16 tail, tail_len, pulled;
+ u16 head, head_len, pushed;
+ u16 local, remote;
+ enum fm10k_mbx_state state;
+
+ /* result of last mailbox test */
+ s32 test_result;
+
+ /* statistics */
+ u64 tx_busy;
+ u64 tx_dropped;
+ u64 tx_messages;
+ u64 tx_dwords;
+ u64 rx_messages;
+ u64 rx_dwords;
+ u64 rx_parse_err;
+
+ /* Buffer to store messages */
+ u32 buffer[FM10K_MBX_BUFFER_SIZE];
+};
+
+s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *, u8);
+s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+
+#endif /* _FM10K_MBX_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
new file mode 100644
index 0000000..bf44a8f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
@@ -0,0 +1,1435 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k.h"
+#include <linux/vmalloc.h>
+#if IS_ENABLED(CONFIG_VXLAN)
+#include <net/vxlan.h>
+#endif /* CONFIG_VXLAN */
+
+/**
+ * fm10k_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring: tx descriptor ring (for a specific queue) to setup
+ *
+ * Return 0 on success, negative on failure
+ **/
+int fm10k_setup_tx_resources(struct fm10k_ring *tx_ring)
+{
+ struct device *dev = tx_ring->dev;
+ int size;
+
+ size = sizeof(struct fm10k_tx_buffer) * tx_ring->count;
+
+ tx_ring->tx_buffer = vzalloc(size);
+ if (!tx_ring->tx_buffer)
+ goto err;
+
+ u64_stats_init(&tx_ring->syncp);
+
+ /* round up to nearest 4K */
+ tx_ring->size = tx_ring->count * sizeof(struct fm10k_tx_desc);
+ tx_ring->size = ALIGN(tx_ring->size, 4096);
+
+ tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+ &tx_ring->dma, GFP_KERNEL);
+ if (!tx_ring->desc)
+ goto err;
+
+ return 0;
+
+err:
+ vfree(tx_ring->tx_buffer);
+ tx_ring->tx_buffer = NULL;
+ return -ENOMEM;
+}
+
+/**
+ * fm10k_setup_all_tx_resources - allocate all queues Tx resources
+ * @interface: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int fm10k_setup_all_tx_resources(struct fm10k_intfc *interface)
+{
+ int i, err = 0;
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ err = fm10k_setup_tx_resources(interface->tx_ring[i]);
+ if (!err)
+ continue;
+
+ netif_err(interface, probe, interface->netdev,
+ "Allocation for Tx Queue %u failed\n", i);
+ goto err_setup_tx;
+ }
+
+ return 0;
+err_setup_tx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+ return err;
+}
+
+/**
+ * fm10k_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring: rx descriptor ring (for a specific queue) to setup
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int fm10k_setup_rx_resources(struct fm10k_ring *rx_ring)
+{
+ struct device *dev = rx_ring->dev;
+ int size;
+
+ size = sizeof(struct fm10k_rx_buffer) * rx_ring->count;
+
+ rx_ring->rx_buffer = vzalloc(size);
+ if (!rx_ring->rx_buffer)
+ goto err;
+
+ u64_stats_init(&rx_ring->syncp);
+
+ /* Round up to nearest 4K */
+ rx_ring->size = rx_ring->count * sizeof(union fm10k_rx_desc);
+ rx_ring->size = ALIGN(rx_ring->size, 4096);
+
+ rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+ &rx_ring->dma, GFP_KERNEL);
+ if (!rx_ring->desc)
+ goto err;
+
+ return 0;
+err:
+ vfree(rx_ring->rx_buffer);
+ rx_ring->rx_buffer = NULL;
+ return -ENOMEM;
+}
+
+/**
+ * fm10k_setup_all_rx_resources - allocate all queues Rx resources
+ * @interface: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * caller's duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int fm10k_setup_all_rx_resources(struct fm10k_intfc *interface)
+{
+ int i, err = 0;
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ err = fm10k_setup_rx_resources(interface->rx_ring[i]);
+ if (!err)
+ continue;
+
+ netif_err(interface, probe, interface->netdev,
+ "Allocation for Rx Queue %u failed\n", i);
+ goto err_setup_rx;
+ }
+
+ return 0;
+err_setup_rx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+ return err;
+}
+
+void fm10k_unmap_and_free_tx_resource(struct fm10k_ring *ring,
+ struct fm10k_tx_buffer *tx_buffer)
+{
+ if (tx_buffer->skb) {
+ dev_kfree_skb_any(tx_buffer->skb);
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_single(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ } else if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ }
+ tx_buffer->next_to_watch = NULL;
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+ /* tx_buffer must be completely set up in the transmit path */
+}
+
+/**
+ * fm10k_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void fm10k_clean_tx_ring(struct fm10k_ring *tx_ring)
+{
+ struct fm10k_tx_buffer *tx_buffer;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!tx_ring->tx_buffer)
+ return;
+
+ /* Free all the Tx ring sk_buffs */
+ for (i = 0; i < tx_ring->count; i++) {
+ tx_buffer = &tx_ring->tx_buffer[i];
+ fm10k_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+ }
+
+ /* reset BQL values */
+ netdev_tx_reset_queue(txring_txq(tx_ring));
+
+ size = sizeof(struct fm10k_tx_buffer) * tx_ring->count;
+ memset(tx_ring->tx_buffer, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+/**
+ * fm10k_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: Tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+void fm10k_free_tx_resources(struct fm10k_ring *tx_ring)
+{
+ fm10k_clean_tx_ring(tx_ring);
+
+ vfree(tx_ring->tx_buffer);
+ tx_ring->tx_buffer = NULL;
+
+ /* if not set, then don't free */
+ if (!tx_ring->desc)
+ return;
+
+ dma_free_coherent(tx_ring->dev, tx_ring->size,
+ tx_ring->desc, tx_ring->dma);
+ tx_ring->desc = NULL;
+}
+
+/**
+ * fm10k_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @interface: board private structure
+ **/
+void fm10k_clean_all_tx_rings(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_clean_tx_ring(interface->tx_ring[i]);
+
+ /* remove any stale timestamp buffers and free them */
+ skb_queue_purge(&interface->ts_tx_skb_queue);
+}
+
+/**
+ * fm10k_free_all_tx_resources - Free Tx Resources for All Queues
+ * @interface: board private structure
+ *
+ * Free all transmit software resources
+ **/
+static void fm10k_free_all_tx_resources(struct fm10k_intfc *interface)
+{
+ int i = interface->num_tx_queues;
+
+ while (i--)
+ fm10k_free_tx_resources(interface->tx_ring[i]);
+}
+
+/**
+ * fm10k_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void fm10k_clean_rx_ring(struct fm10k_ring *rx_ring)
+{
+ unsigned long size;
+ u16 i;
+
+ if (!rx_ring->rx_buffer)
+ return;
+
+ if (rx_ring->skb)
+ dev_kfree_skb(rx_ring->skb);
+ rx_ring->skb = NULL;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < rx_ring->count; i++) {
+ struct fm10k_rx_buffer *buffer = &rx_ring->rx_buffer[i];
+ /* clean-up will only set page pointer to NULL */
+ if (!buffer->page)
+ continue;
+
+ dma_unmap_page(rx_ring->dev, buffer->dma,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ __free_page(buffer->page);
+
+ buffer->page = NULL;
+ }
+
+ size = sizeof(struct fm10k_rx_buffer) * rx_ring->count;
+ memset(rx_ring->rx_buffer, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(rx_ring->desc, 0, rx_ring->size);
+
+ rx_ring->next_to_alloc = 0;
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+}
+
+/**
+ * fm10k_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+void fm10k_free_rx_resources(struct fm10k_ring *rx_ring)
+{
+ fm10k_clean_rx_ring(rx_ring);
+
+ vfree(rx_ring->rx_buffer);
+ rx_ring->rx_buffer = NULL;
+
+ /* if not set, then don't free */
+ if (!rx_ring->desc)
+ return;
+
+ dma_free_coherent(rx_ring->dev, rx_ring->size,
+ rx_ring->desc, rx_ring->dma);
+
+ rx_ring->desc = NULL;
+}
+
+/**
+ * fm10k_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @interface: board private structure
+ **/
+void fm10k_clean_all_rx_rings(struct fm10k_intfc *interface)
+{
+ int i;
+
+ for (i = 0; i < interface->num_rx_queues; i++)
+ fm10k_clean_rx_ring(interface->rx_ring[i]);
+}
+
+/**
+ * fm10k_free_all_rx_resources - Free Rx Resources for All Queues
+ * @interface: board private structure
+ *
+ * Free all receive software resources
+ **/
+static void fm10k_free_all_rx_resources(struct fm10k_intfc *interface)
+{
+ int i = interface->num_rx_queues;
+
+ while (i--)
+ fm10k_free_rx_resources(interface->rx_ring[i]);
+}
+
+/**
+ * fm10k_request_glort_range - Request GLORTs for use in configuring rules
+ * @interface: board private structure
+ *
+ * This function allocates a range of GLORTs for this interface to use.
+ **/
+static void fm10k_request_glort_range(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u16 mask = (~hw->mac.dglort_map) >> FM10K_DGLORTMAP_MASK_SHIFT;
+
+ /* establish GLORT base */
+ interface->glort = hw->mac.dglort_map & FM10K_DGLORTMAP_NONE;
+ interface->glort_count = 0;
+
+ /* nothing we can do until mask is allocated */
+ if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
+ return;
+
+ /* we support 3 possible GLORT configurations.
+ * 1: VFs consume all but the last 1
+ * 2: VFs and PF split glorts with possible gap between
+ * 3: VFs allocated first 64, all others belong to PF
+ */
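+ /* worked example (illustrative values only): with mask = 0xFF and
+ * total_vfs = 48 the final case applies, so the first 64 GLORTs are
+ * left to the VFs and the PF claims the remaining 192
+ */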
+ if (mask <= hw->iov.total_vfs) {
+ interface->glort_count = 1;
+ interface->glort += mask;
+ } else if (mask < 64) {
+ interface->glort_count = (mask + 1) / 2;
+ interface->glort += interface->glort_count;
+ } else {
+ interface->glort_count = mask - 63;
+ interface->glort += 64;
+ }
+}
+
+/**
+ * fm10k_del_vxlan_port_all
+ * @interface: board private structure
+ *
+ * This function frees the entire vxlan_port list
+ **/
+static void fm10k_del_vxlan_port_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* flush all entries from list */
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+ while (vxlan_port) {
+ list_del(&vxlan_port->list);
+ kfree(vxlan_port);
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port,
+ list);
+ }
+}
+
+/**
+ * fm10k_restore_vxlan_port
+ * @interface: board private structure
+ *
+ * This function restores the value in the tunnel_cfg register after reset
+ **/
+static void fm10k_restore_vxlan_port(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* only the PF supports configuring tunnels */
+ if (hw->mac.type != fm10k_mac_pf)
+ return;
+
+ vxlan_port = list_first_entry_or_null(&interface->vxlan_port,
+ struct fm10k_vxlan_port, list);
+
+ /* restore tunnel configuration register: the VXLAN UDP port occupies
+ * the low bits and the NVGRE EtherType (TEB) the upper field
+ */
+ fm10k_write_reg(hw, FM10K_TUNNEL_CFG,
+ (vxlan_port ? ntohs(vxlan_port->port) : 0) |
+ (ETH_P_TEB << FM10K_TUNNEL_CFG_NVGRE_SHIFT));
+}
+
+/**
+ * fm10k_add_vxlan_port
+ * @dev: network interface device structure
+ * @sa_family: Address family of new port
+ * @port: port number used for VXLAN
+ *
+ * This function is called when a new VXLAN interface has added a new port
+ * number to the range that is currently in use for VXLAN. The new port
+ * number is always added to the tail so that the port number list matches
+ * the order in which the ports were allocated. The head of the list
+ * is always used as the VXLAN port number for offloads.
+ **/
+static void fm10k_add_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ /* only the PF supports configuring tunnels */
+ if (interface->hw.mac.type != fm10k_mac_pf)
+ return;
+
+ /* existing ports are pulled out so our new entry is always last */
+ fm10k_vxlan_port_for_each(vxlan_port, interface) {
+ if ((vxlan_port->port == port) &&
+ (vxlan_port->sa_family == sa_family)) {
+ list_del(&vxlan_port->list);
+ goto insert_tail;
+ }
+ }
+
+ /* allocate memory to track ports */
+ vxlan_port = kmalloc(sizeof(*vxlan_port), GFP_ATOMIC);
+ if (!vxlan_port)
+ return;
+ vxlan_port->port = port;
+ vxlan_port->sa_family = sa_family;
+
+insert_tail:
+ /* add new port value to list */
+ list_add_tail(&vxlan_port->list, &interface->vxlan_port);
+
+ fm10k_restore_vxlan_port(interface);
+}
+
+/**
+ * fm10k_del_vxlan_port
+ * @dev: network interface device structure
+ * @sa_family: Address family of freed port
+ * @port: port number used for VXLAN
+ *
+ * This function is called when a VXLAN interface has freed a port
+ * number from the range that is currently in use for VXLAN. The freed
+ * port is removed from the list and the new head is used to determine
+ * the port number for offloads.
+ **/
+static void fm10k_del_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_vxlan_port *vxlan_port;
+
+ if (interface->hw.mac.type != fm10k_mac_pf)
+ return;
+
+ /* find the port in the list and free it */
+ fm10k_vxlan_port_for_each(vxlan_port, interface) {
+ if ((vxlan_port->port == port) &&
+ (vxlan_port->sa_family == sa_family)) {
+ list_del(&vxlan_port->list);
+ kfree(vxlan_port);
+ break;
+ }
+ }
+
+ fm10k_restore_vxlan_port(interface);
+}
+
+/**
+ * fm10k_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP). At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the watchdog timer is started,
+ * and the stack is notified that the interface is ready.
+ **/
+int fm10k_open(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ int err;
+
+ /* allocate transmit descriptors */
+ err = fm10k_setup_all_tx_resources(interface);
+ if (err)
+ goto err_setup_tx;
+
+ /* allocate receive descriptors */
+ err = fm10k_setup_all_rx_resources(interface);
+ if (err)
+ goto err_setup_rx;
+
+ /* allocate interrupt resources */
+ err = fm10k_qv_request_irq(interface);
+ if (err)
+ goto err_req_irq;
+
+ /* setup GLORT assignment for this port */
+ fm10k_request_glort_range(interface);
+
+ /* Notify the stack of the actual queue counts */
+ err = netif_set_real_num_tx_queues(netdev,
+ interface->num_tx_queues);
+ if (err)
+ goto err_set_queues;
+
+ err = netif_set_real_num_rx_queues(netdev,
+ interface->num_rx_queues);
+ if (err)
+ goto err_set_queues;
+
+#if IS_ENABLED(CONFIG_VXLAN)
+ /* update VXLAN port configuration */
+ vxlan_get_rx_port(netdev);
+
+#endif
+ fm10k_up(interface);
+
+ return 0;
+
+err_set_queues:
+ fm10k_qv_free_irq(interface);
+err_req_irq:
+ fm10k_free_all_rx_resources(interface);
+err_setup_rx:
+ fm10k_free_all_tx_resources(interface);
+err_setup_tx:
+ return err;
+}
+
+/**
+ * fm10k_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS. The hardware is still under the drivers control, but
+ * needs to be disabled. A global MAC reset is issued to stop the
+ * hardware, and all transmit and receive resources are freed.
+ **/
+int fm10k_close(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+
+ fm10k_down(interface);
+
+ fm10k_qv_free_irq(interface);
+
+ fm10k_del_vxlan_port_all(interface);
+
+ fm10k_free_all_tx_resources(interface);
+ fm10k_free_all_rx_resources(interface);
+
+ return 0;
+}
+
+static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ unsigned int r_idx = skb->queue_mapping;
+ int err;
+
+ if ((skb->protocol == htons(ETH_P_8021Q)) &&
+ !vlan_tx_tag_present(skb)) {
+ /* FM10K only supports hardware tagging, any tags in frame
+ * are considered 2nd level or "outer" tags
+ */
+ struct vlan_hdr *vhdr;
+ __be16 proto;
+
+ /* make sure skb is not shared */
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ return NETDEV_TX_OK;
+
+ /* make sure there is enough room to move the ethernet header */
+ if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
+ return NETDEV_TX_OK;
+
+ /* verify the skb head is not shared */
+ err = skb_cow_head(skb, 0);
+ if (err)
+ return NETDEV_TX_OK;
+
+ /* locate vlan header */
+ vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
+
+ /* pull the 2 key pieces of data out of it */
+ __vlan_hwaccel_put_tag(skb,
+ htons(ETH_P_8021Q),
+ ntohs(vhdr->h_vlan_TCI));
+ proto = vhdr->h_vlan_encapsulated_proto;
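+ /* values of 1536 (0x600) and above are EtherTypes; anything smaller
+ * is an 802.3 length field, so fall back to the 802.2 protocol
+ */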
+ skb->protocol = (ntohs(proto) >= 1536) ? proto :
+ htons(ETH_P_802_2);
+
+ /* squash it by moving the ethernet addresses up 4 bytes */
+ memmove(skb->data + VLAN_HLEN, skb->data, 12);
+ __skb_pull(skb, VLAN_HLEN);
+ skb_reset_mac_header(skb);
+ }
+
+ /* The minimum packet size for a single buffer is 17B so pad the skb
+ * in order to meet this minimum size requirement.
+ */
+ if (unlikely(skb->len < 17)) {
+ int pad_len = 17 - skb->len;
+
+ if (skb_pad(skb, pad_len))
+ return NETDEV_TX_OK;
+ __skb_put(skb, pad_len);
+ }
+
+ /* prepare packet for hardware time stamping */
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+ fm10k_ts_tx_enqueue(interface, skb);
+
+ if (r_idx >= interface->num_tx_queues)
+ r_idx %= interface->num_tx_queues;
+
+ err = fm10k_xmit_frame_ring(skb, interface->tx_ring[r_idx]);
+
+ return err;
+}
+
+static int fm10k_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu < 68 || new_mtu > FM10K_MAX_JUMBO_FRAME_SIZE)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+/**
+ * fm10k_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ **/
+static void fm10k_tx_timeout(struct net_device *netdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ bool real_tx_hang = false;
+ int i;
+
+#define TX_TIMEO_LIMIT 16000
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ if (check_for_tx_hang(tx_ring) && fm10k_check_tx_hang(tx_ring))
+ real_tx_hang = true;
+ }
+
+ if (real_tx_hang) {
+ fm10k_tx_timeout_reset(interface);
+ } else {
+ netif_info(interface, drv, netdev,
+ "Fake Tx hang detected with timeout of %d seconds\n",
+ netdev->watchdog_timeo/HZ);
+
+ /* fake Tx hang - increase the kernel timeout */
+ if (netdev->watchdog_timeo < TX_TIMEO_LIMIT)
+ netdev->watchdog_timeo *= 2;
+ }
+}
+
+static int fm10k_uc_vlan_unsync(struct net_device *netdev,
+ const unsigned char *uc_addr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 glort = interface->glort;
+ u16 vid = interface->vid;
+ bool set = !!(vid / VLAN_N_VID);
+ int err;
+
+ /* drop the set/clear flag (encoded above VLAN_N_VID) to recover the VLAN ID */
+ vid &= VLAN_N_VID - 1;
+
+ err = hw->mac.ops.update_uc_addr(hw, glort, uc_addr, vid, set, 0);
+ if (err)
+ return err;
+
+ /* return non-zero value as we are only doing a partial sync/unsync */
+ return 1;
+}
+
+static int fm10k_mc_vlan_unsync(struct net_device *netdev,
+ const unsigned char *mc_addr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 glort = interface->glort;
+ u16 vid = interface->vid;
+ bool set = !!(vid / VLAN_N_VID);
+ int err;
+
+ /* drop the set/clear flag (encoded above VLAN_N_VID) to recover the VLAN ID */
+ vid &= VLAN_N_VID - 1;
+
+ err = hw->mac.ops.update_mc_addr(hw, glort, mc_addr, vid, set);
+ if (err)
+ return err;
+
+ /* return non-zero value as we are only doing a partial sync/unsync */
+ return 1;
+}
+
+static int fm10k_update_vid(struct net_device *netdev, u16 vid, bool set)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_hw *hw = &interface->hw;
+ s32 err;
+
+ /* updates do not apply to VLAN 0 */
+ if (!vid)
+ return 0;
+
+ if (vid >= VLAN_N_VID)
+ return -EINVAL;
+
+ /* Verify we have permission to add VLANs */
+ if (hw->mac.vlan_override)
+ return -EACCES;
+
+ /* if default VLAN is already present do nothing */
+ if (vid == hw->mac.default_vid)
+ return -EBUSY;
+
+ /* update active_vlans bitmask */
+ set_bit(vid, interface->active_vlans);
+ if (!set)
+ clear_bit(vid, interface->active_vlans);
+
+ fm10k_mbx_lock(interface);
+
+ /* only need to update the VLAN if not in promiscuous mode */
+ if (!(netdev->flags & IFF_PROMISC)) {
+ err = hw->mac.ops.update_vlan(hw, vid, 0, set);
+ if (err)
+ return err;
+ }
+
+ /* update our base MAC address */
+ err = hw->mac.ops.update_uc_addr(hw, interface->glort, hw->mac.addr,
+ vid, set, 0);
+ if (err)
+ return err;
+
+ /* set vid prior to syncing/unsyncing the VLAN; encode the set/clear
+ * request by offsetting the VID by VLAN_N_VID so the unsync callbacks
+ * above can recover it
+ */
+ interface->vid = vid + (set ? VLAN_N_VID : 0);
+
+ /* Update the unicast and multicast address list to add/drop VLAN */
+ __dev_uc_unsync(netdev, fm10k_uc_vlan_unsync);
+ __dev_mc_unsync(netdev, fm10k_mc_vlan_unsync);
+
+ fm10k_mbx_unlock(interface);
+
+ return 0;
+}
+
+static int fm10k_vlan_rx_add_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ /* update VLAN and address table based on changes */
+ return fm10k_update_vid(netdev, vid, true);
+}
+
+static int fm10k_vlan_rx_kill_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ /* update VLAN and address table based on changes */
+ return fm10k_update_vid(netdev, vid, false);
+}
+
+static u16 fm10k_find_next_vlan(struct fm10k_intfc *interface, u16 vid)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u16 default_vid = hw->mac.default_vid;
+ u16 vid_limit = vid < default_vid ? default_vid : VLAN_N_VID;
+
+ vid = find_next_bit(interface->active_vlans, vid_limit, ++vid);
+
+ return vid;
+}
+
+static void fm10k_clear_unused_vlans(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u32 vid, prev_vid;
+
+ /* loop through and find any gaps in the table */
+ for (vid = 0, prev_vid = 0;
+ prev_vid < VLAN_N_VID;
+ prev_vid = vid + 1, vid = fm10k_find_next_vlan(interface, vid)) {
+ if (prev_vid == vid)
+ continue;
+
+ /* send request to clear multiple bits at a time; the length of the
+ * run is encoded above the starting VID via FM10K_VLAN_LENGTH_SHIFT
+ */
+ prev_vid += (vid - prev_vid - 1) << FM10K_VLAN_LENGTH_SHIFT;
+ hw->mac.ops.update_vlan(hw, prev_vid, 0, false);
+ }
+}
+
+static int __fm10k_uc_sync(struct net_device *dev,
+ const unsigned char *addr, bool sync)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 vid, glort = interface->glort;
+ s32 err;
+
+ if (!is_valid_ether_addr(addr))
+ return -EADDRNOTAVAIL;
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ err = hw->mac.ops.update_uc_addr(hw, glort, addr,
+ vid, sync, 0);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int fm10k_uc_sync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_uc_sync(dev, addr, true);
+}
+
+static int fm10k_uc_unsync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_uc_sync(dev, addr, false);
+}
+
+static int fm10k_set_mac(struct net_device *dev, void *p)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ struct sockaddr *addr = p;
+ s32 err = 0;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (dev->flags & IFF_UP) {
+ /* setting MAC address requires mailbox */
+ fm10k_mbx_lock(interface);
+
+ err = fm10k_uc_sync(dev, addr->sa_data);
+ if (!err)
+ fm10k_uc_unsync(dev, hw->mac.addr);
+
+ fm10k_mbx_unlock(interface);
+ }
+
+ if (!err) {
+ ether_addr_copy(dev->dev_addr, addr->sa_data);
+ ether_addr_copy(hw->mac.addr, addr->sa_data);
+ dev->addr_assign_type &= ~NET_ADDR_RANDOM;
+ }
+
+ /* if we had a mailbox error suggest trying again */
+ return err ? -EAGAIN : 0;
+}
+
+static int __fm10k_mc_sync(struct net_device *dev,
+ const unsigned char *addr, bool sync)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ u16 vid, glort = interface->glort;
+ s32 err;
+
+ if (!is_multicast_ether_addr(addr))
+ return -EADDRNOTAVAIL;
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ err = hw->mac.ops.update_mc_addr(hw, glort, addr, vid, sync);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int fm10k_mc_sync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_mc_sync(dev, addr, true);
+}
+
+static int fm10k_mc_unsync(struct net_device *dev,
+ const unsigned char *addr)
+{
+ return __fm10k_mc_sync(dev, addr, false);
+}
+
+static void fm10k_set_rx_mode(struct net_device *dev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_hw *hw = &interface->hw;
+ int xcast_mode;
+
+ /* no need to update the hardware if we are not running */
+ if (!(dev->flags & IFF_UP))
+ return;
+
+ /* determine new mode based on flags */
+ xcast_mode = (dev->flags & IFF_PROMISC) ? FM10K_XCAST_MODE_PROMISC :
+ (dev->flags & IFF_ALLMULTI) ? FM10K_XCAST_MODE_ALLMULTI :
+ (dev->flags & (IFF_BROADCAST | IFF_MULTICAST)) ?
+ FM10K_XCAST_MODE_MULTI : FM10K_XCAST_MODE_NONE;
+
+ fm10k_mbx_lock(interface);
+
+ /* synchronize all of the addresses */
+ if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
+ __dev_uc_sync(dev, fm10k_uc_sync, fm10k_uc_unsync);
+ if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
+ __dev_mc_sync(dev, fm10k_mc_sync, fm10k_mc_unsync);
+ }
+
+ /* if we aren't changing modes there is nothing to do */
+ if (interface->xcast_mode != xcast_mode) {
+ /* update VLAN table */
+ if (xcast_mode == FM10K_XCAST_MODE_PROMISC)
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0, true);
+ if (interface->xcast_mode == FM10K_XCAST_MODE_PROMISC)
+ fm10k_clear_unused_vlans(interface);
+
+ /* update xcast mode */
+ hw->mac.ops.update_xcast_mode(hw, interface->glort, xcast_mode);
+
+ /* record updated xcast mode state */
+ interface->xcast_mode = xcast_mode;
+ }
+
+ fm10k_mbx_unlock(interface);
+}
+
+void fm10k_restore_rx_state(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int xcast_mode;
+ u16 vid, glort;
+
+ /* restore our address if perm_addr is set */
+ if (hw->mac.type == fm10k_mac_vf) {
+ if (is_valid_ether_addr(hw->mac.perm_addr)) {
+ ether_addr_copy(hw->mac.addr, hw->mac.perm_addr);
+ ether_addr_copy(netdev->perm_addr, hw->mac.perm_addr);
+ ether_addr_copy(netdev->dev_addr, hw->mac.perm_addr);
+ netdev->addr_assign_type &= ~NET_ADDR_RANDOM;
+ }
+
+ if (hw->mac.vlan_override)
+ netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+ else
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
+ }
+
+ /* record glort for this interface */
+ glort = interface->glort;
+
+ /* convert interface flags to xcast mode */
+ if (netdev->flags & IFF_PROMISC)
+ xcast_mode = FM10K_XCAST_MODE_PROMISC;
+ else if (netdev->flags & IFF_ALLMULTI)
+ xcast_mode = FM10K_XCAST_MODE_ALLMULTI;
+ else if (netdev->flags & (IFF_BROADCAST | IFF_MULTICAST))
+ xcast_mode = FM10K_XCAST_MODE_MULTI;
+ else
+ xcast_mode = FM10K_XCAST_MODE_NONE;
+
+ fm10k_mbx_lock(interface);
+
+ /* Enable logical port */
+ hw->mac.ops.update_lport_state(hw, glort, interface->glort_count, true);
+
+ /* update VLAN table */
+ hw->mac.ops.update_vlan(hw, FM10K_VLAN_ALL, 0,
+ xcast_mode == FM10K_XCAST_MODE_PROMISC);
+
+ /* Add filter for VLAN 0 */
+ hw->mac.ops.update_vlan(hw, 0, 0, true);
+
+ /* update table with current entries */
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ vid < VLAN_N_VID;
+ vid = fm10k_find_next_vlan(interface, vid)) {
+ hw->mac.ops.update_vlan(hw, vid, 0, true);
+ hw->mac.ops.update_uc_addr(hw, glort, hw->mac.addr,
+ vid, true, 0);
+ }
+
+ /* synchronize all of the addresses */
+ if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
+ __dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
+ if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
+ __dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
+ }
+
+ /* update xcast mode */
+ hw->mac.ops.update_xcast_mode(hw, glort, xcast_mode);
+
+ fm10k_mbx_unlock(interface);
+
+ /* record updated xcast mode state */
+ interface->xcast_mode = xcast_mode;
+
+ /* Restore tunnel configuration */
+ fm10k_restore_vxlan_port(interface);
+}
+
+void fm10k_reset_rx_state(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ fm10k_mbx_lock(interface);
+
+ /* clear the logical port state on lower device */
+ hw->mac.ops.update_lport_state(hw, interface->glort,
+ interface->glort_count, false);
+
+ fm10k_mbx_unlock(interface);
+
+ /* reset flags to default state */
+ interface->xcast_mode = FM10K_XCAST_MODE_NONE;
+
+ /* clear the sync flag since the lport has been dropped */
+ __dev_uc_unsync(netdev, NULL);
+ __dev_mc_unsync(netdev, NULL);
+}
+
+/**
+ * fm10k_get_stats64 - Get System Network Statistics
+ * @netdev: network interface device structure
+ * @stats: storage space for 64bit statistics
+ *
+ * Returns 64bit statistics, for use in the ndo_get_stats64 callback. This
+ * function replaces fm10k_get_stats for kernels which support it.
+ */
+static struct rtnl_link_stats64 *fm10k_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct fm10k_ring *ring;
+ unsigned int start, i;
+ u64 bytes, packets;
+
+ rcu_read_lock();
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ ring = ACCESS_ONCE(interface->rx_ring[i]);
+
+ if (!ring)
+ continue;
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ stats->rx_packets += packets;
+ stats->rx_bytes += bytes;
+ }
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ ring = ACCESS_ONCE(interface->tx_ring[i]);
+
+ if (!ring)
+ continue;
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ stats->tx_packets += packets;
+ stats->tx_bytes += bytes;
+ }
+
+ rcu_read_unlock();
+
+ /* following stats updated by fm10k_service_task() */
+ stats->rx_missed_errors = netdev->stats.rx_missed_errors;
+
+ return stats;
+}
+
+int fm10k_setup_tc(struct net_device *dev, u8 tc)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+
+ /* Currently only the PF supports priority classes */
+ if (tc && (interface->hw.mac.type != fm10k_mac_pf))
+ return -EINVAL;
+
+ /* Hardware supports up to 8 traffic classes */
+ if (tc > 8)
+ return -EINVAL;
+
+ /* Hardware has to reinitialize queues to match packet
+ * buffer alignment. Unfortunately, the hardware is not
+ * flexible enough to do this dynamically.
+ */
+ if (netif_running(dev))
+ fm10k_close(dev);
+
+ fm10k_mbx_free_irq(interface);
+
+ fm10k_clear_queueing_scheme(interface);
+
+ /* we expect the prio_tc map to be repopulated later */
+ netdev_reset_tc(dev);
+ netdev_set_num_tc(dev, tc);
+
+ fm10k_init_queueing_scheme(interface);
+
+ fm10k_mbx_request_irq(interface);
+
+ if (netif_running(dev))
+ fm10k_open(dev);
+
+ /* flag to indicate SWPRI has yet to be updated */
+ interface->flags |= FM10K_FLAG_SWPRI_CONFIG;
+
+ return 0;
+}
+
+static int fm10k_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ switch (cmd) {
+ case SIOCGHWTSTAMP:
+ return fm10k_get_ts_config(netdev, ifr);
+ case SIOCSHWTSTAMP:
+ return fm10k_set_ts_config(netdev, ifr);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void fm10k_assign_l2_accel(struct fm10k_intfc *interface,
+ struct fm10k_l2_accel *l2_accel)
+{
+ struct fm10k_ring *ring;
+ int i;
+
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ ring = interface->rx_ring[i];
+ rcu_assign_pointer(ring->l2_accel, l2_accel);
+ }
+
+ interface->l2_accel = l2_accel;
+}
+
+static void *fm10k_dfwd_add_station(struct net_device *dev,
+ struct net_device *sdev)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_l2_accel *l2_accel = interface->l2_accel;
+ struct fm10k_l2_accel *old_l2_accel = NULL;
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int size = 0, i;
+ u16 glort;
+
+ /* allocate l2 accel structure if it is not available */
+ if (!l2_accel) {
+ /* verify there are enough free GLORTs to support l2_accel */
+ if (interface->glort_count < 7)
+ return ERR_PTR(-EBUSY);
+
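+ /* size the allocation for 7 entries of the macvlan[] array that
+ * sits at the end of the structure
+ */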
+ size = offsetof(struct fm10k_l2_accel, macvlan[7]);
+ l2_accel = kzalloc(size, GFP_KERNEL);
+ if (!l2_accel)
+ return ERR_PTR(-ENOMEM);
+
+ l2_accel->size = 7;
+ l2_accel->dglort = interface->glort;
+
+ /* update pointers */
+ fm10k_assign_l2_accel(interface, l2_accel);
+ /* do not expand if we are at our limit */
+ } else if ((l2_accel->count == FM10K_MAX_STATIONS) ||
+ (l2_accel->count == (interface->glort_count - 1))) {
+ return ERR_PTR(-EBUSY);
+ /* expand if we have hit the size limit */
+ } else if (l2_accel->count == l2_accel->size) {
+ old_l2_accel = l2_accel;
+ size = offsetof(struct fm10k_l2_accel,
+ macvlan[(l2_accel->size * 2) + 1]);
+ l2_accel = kzalloc(size, GFP_KERNEL);
+ if (!l2_accel)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(l2_accel, old_l2_accel,
+ offsetof(struct fm10k_l2_accel,
+ macvlan[old_l2_accel->size]));
+
+ l2_accel->size = (old_l2_accel->size * 2) + 1;
+
+ /* update pointers */
+ fm10k_assign_l2_accel(interface, l2_accel);
+ kfree_rcu(old_l2_accel, rcu);
+ }
+
+ /* add macvlan to accel table, and record GLORT for position */
+ for (i = 0; i < l2_accel->size; i++) {
+ if (!l2_accel->macvlan[i])
+ break;
+ }
+
+ /* record station */
+ l2_accel->macvlan[i] = sdev;
+ l2_accel->count++;
+
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ dglort.glort = interface->glort;
+ dglort.shared_l = fls(l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* Add rules for this specific dglort to the switch */
+ fm10k_mbx_lock(interface);
+
+ glort = l2_accel->dglort + 1 + i;
+ hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_MULTI);
+ hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, true, 0);
+
+ fm10k_mbx_unlock(interface);
+
+ return sdev;
+}
+
+static void fm10k_dfwd_del_station(struct net_device *dev, void *priv)
+{
+ struct fm10k_intfc *interface = netdev_priv(dev);
+ struct fm10k_l2_accel *l2_accel = ACCESS_ONCE(interface->l2_accel);
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ struct net_device *sdev = priv;
+ int i;
+ u16 glort;
+
+ if (!l2_accel)
+ return;
+
+ /* search table for matching interface */
+ for (i = 0; i < l2_accel->size; i++) {
+ if (l2_accel->macvlan[i] == sdev)
+ break;
+ }
+
+ /* exit if macvlan not found */
+ if (i == l2_accel->size)
+ return;
+
+ /* Remove any rules specific to this dglort */
+ fm10k_mbx_lock(interface);
+
+ glort = l2_accel->dglort + 1 + i;
+ hw->mac.ops.update_xcast_mode(hw, glort, FM10K_XCAST_MODE_NONE);
+ hw->mac.ops.update_uc_addr(hw, glort, sdev->dev_addr, 0, false, 0);
+
+ fm10k_mbx_unlock(interface);
+
+ /* record removal */
+ l2_accel->macvlan[i] = NULL;
+ l2_accel->count--;
+
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ dglort.glort = interface->glort;
+ if (l2_accel)
+ dglort.shared_l = fls(l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* If table is empty remove it */
+ if (l2_accel->count == 0) {
+ fm10k_assign_l2_accel(interface, NULL);
+ kfree_rcu(l2_accel, rcu);
+ }
+}
+
+static const struct net_device_ops fm10k_netdev_ops = {
+ .ndo_open = fm10k_open,
+ .ndo_stop = fm10k_close,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_start_xmit = fm10k_xmit_frame,
+ .ndo_set_mac_address = fm10k_set_mac,
+ .ndo_change_mtu = fm10k_change_mtu,
+ .ndo_tx_timeout = fm10k_tx_timeout,
+ .ndo_vlan_rx_add_vid = fm10k_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = fm10k_vlan_rx_kill_vid,
+ .ndo_set_rx_mode = fm10k_set_rx_mode,
+ .ndo_get_stats64 = fm10k_get_stats64,
+ .ndo_setup_tc = fm10k_setup_tc,
+ .ndo_set_vf_mac = fm10k_ndo_set_vf_mac,
+ .ndo_set_vf_vlan = fm10k_ndo_set_vf_vlan,
+ .ndo_set_vf_rate = fm10k_ndo_set_vf_bw,
+ .ndo_get_vf_config = fm10k_ndo_get_vf_config,
+ .ndo_add_vxlan_port = fm10k_add_vxlan_port,
+ .ndo_del_vxlan_port = fm10k_del_vxlan_port,
+ .ndo_do_ioctl = fm10k_ioctl,
+ .ndo_dfwd_add_station = fm10k_dfwd_add_station,
+ .ndo_dfwd_del_station = fm10k_dfwd_del_station,
+};
+
+#define DEFAULT_DEBUG_LEVEL_SHIFT 3
+
+struct net_device *fm10k_alloc_netdev(void)
+{
+ struct fm10k_intfc *interface;
+ struct net_device *dev;
+
+ dev = alloc_etherdev_mq(sizeof(struct fm10k_intfc), MAX_QUEUES);
+ if (!dev)
+ return NULL;
+
+ /* set net device and ethtool ops */
+ dev->netdev_ops = &fm10k_netdev_ops;
+ fm10k_set_ethtool_ops(dev);
+
+ /* configure default debug level */
+ interface = netdev_priv(dev);
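+ /* (1 << 3) - 1 enables the drv, probe and link netif message types */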
+ interface->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+
+ /* configure default features */
+ dev->features |= NETIF_F_IP_CSUM |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_SG |
+ NETIF_F_TSO |
+ NETIF_F_TSO6 |
+ NETIF_F_TSO_ECN |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_RXHASH |
+ NETIF_F_RXCSUM;
+
+ /* all features defined to this point should be changeable */
+ dev->hw_features |= dev->features;
+
+ /* allow user to enable L2 forwarding acceleration */
+ dev->hw_features |= NETIF_F_HW_L2FW_DOFFLOAD;
+
+ /* configure VLAN features */
+ dev->vlan_features |= dev->features;
+
+ /* configure tunnel offloads */
+ dev->hw_enc_features = NETIF_F_IP_CSUM |
+ NETIF_F_TSO |
+ NETIF_F_TSO6 |
+ NETIF_F_TSO_ECN |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_SG;
+
+ /* we want to leave these both on as we cannot disable VLAN tag
+ * insertion or stripping on the hardware since it is contained
+ * in the FTAG and not in the frame itself.
+ */
+ dev->features |= NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_FILTER;
+
+ dev->priv_flags |= IFF_UNICAST_FLT;
+
+ return dev;
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
new file mode 100644
index 0000000..e02036c
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -0,0 +1,2166 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/module.h>
+#include <linux/aer.h>
+
+#include "fm10k.h"
+
+static const struct fm10k_info *fm10k_info_tbl[] = {
+ [fm10k_device_pf] = &fm10k_pf_info,
+ [fm10k_device_vf] = &fm10k_vf_info,
+};
+
+/**
+ * fm10k_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ * Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id fm10k_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, FM10K_DEV_ID_PF), fm10k_device_pf },
+ { PCI_VDEVICE(INTEL, FM10K_DEV_ID_VF), fm10k_device_vf },
+ /* required last entry */
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, fm10k_pci_tbl);
+
+u16 fm10k_read_pci_cfg_word(struct fm10k_hw *hw, u32 reg)
+{
+ struct fm10k_intfc *interface = hw->back;
+ u16 value = 0;
+
+ if (FM10K_REMOVED(hw->hw_addr))
+ return ~value;
+
+ pci_read_config_word(interface->pdev, reg, &value);
+ if (value == 0xFFFF)
+ fm10k_write_flush(hw);
+
+ return value;
+}
+
+u32 fm10k_read_reg(struct fm10k_hw *hw, int reg)
+{
+ u32 __iomem *hw_addr = ACCESS_ONCE(hw->hw_addr);
+ u32 value = 0;
+
+ if (FM10K_REMOVED(hw_addr))
+ return ~value;
+
+ value = readl(&hw_addr[reg]);
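+ /* an all-ones read typically means the device has dropped off the
+ * bus; confirm by also checking register 0 before detaching the netdev
+ */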
+ if (!(~value) && (!reg || !(~readl(hw_addr)))) {
+ struct fm10k_intfc *interface = hw->back;
+ struct net_device *netdev = interface->netdev;
+
+ hw->hw_addr = NULL;
+ netif_device_detach(netdev);
+ netdev_err(netdev, "PCIe link lost, device now detached\n");
+ }
+
+ return value;
+}
+
+static int fm10k_hw_ready(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ fm10k_write_flush(hw);
+
+ return FM10K_REMOVED(hw->hw_addr) ? -ENODEV : 0;
+}
+
+void fm10k_service_event_schedule(struct fm10k_intfc *interface)
+{
+ if (!test_bit(__FM10K_SERVICE_DISABLE, &interface->state) &&
+ !test_and_set_bit(__FM10K_SERVICE_SCHED, &interface->state))
+ schedule_work(&interface->service_task);
+}
+
+static void fm10k_service_event_complete(struct fm10k_intfc *interface)
+{
+ BUG_ON(!test_bit(__FM10K_SERVICE_SCHED, &interface->state));
+
+ /* flush memory to make sure state is correct before next watchdog */
+ smp_mb__before_atomic();
+ clear_bit(__FM10K_SERVICE_SCHED, &interface->state);
+}
+
+/**
+ * fm10k_service_timer - Timer Call-back
+ * @data: pointer to interface cast into an unsigned long
+ **/
+static void fm10k_service_timer(unsigned long data)
+{
+ struct fm10k_intfc *interface = (struct fm10k_intfc *)data;
+
+ /* Reset the timer */
+ mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
+
+ fm10k_service_event_schedule(interface);
+}
+
+static void fm10k_detach_subtask(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* do nothing if device is still present or hw_addr is set */
+ if (netif_device_present(netdev) || interface->hw.hw_addr)
+ return;
+
+ rtnl_lock();
+
+ if (netif_running(netdev))
+ dev_close(netdev);
+
+ rtnl_unlock();
+}
+
+static void fm10k_reinit(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ WARN_ON(in_interrupt());
+
+ /* put off any impending NetWatchDogTimeout */
+ netdev->trans_start = jiffies;
+
+ while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
+ usleep_range(1000, 2000);
+
+ rtnl_lock();
+
+ fm10k_iov_suspend(interface->pdev);
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ /* delay any future reset requests */
+ interface->last_reset = jiffies + (10 * HZ);
+
+ /* reset and initialize the hardware so it is in a known state */
+ err = hw->mac.ops.reset_hw(hw) ? : hw->mac.ops.init_hw(hw);
+ if (err)
+ dev_err(&interface->pdev->dev, "init_hw failed: %d\n", err);
+
+ /* reassociate interrupts */
+ fm10k_mbx_request_irq(interface);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ if (netif_running(netdev))
+ fm10k_open(netdev);
+
+ fm10k_iov_resume(interface->pdev);
+
+ rtnl_unlock();
+
+ clear_bit(__FM10K_RESETTING, &interface->state);
+}
+
+static void fm10k_reset_subtask(struct fm10k_intfc *interface)
+{
+ if (!(interface->flags & FM10K_FLAG_RESET_REQUESTED))
+ return;
+
+ interface->flags &= ~FM10K_FLAG_RESET_REQUESTED;
+
+ netdev_err(interface->netdev, "Reset interface\n");
+ interface->tx_timeout_count++;
+
+ fm10k_reinit(interface);
+}
+
+/**
+ * fm10k_configure_swpri_map - Configure Receive SWPRI to PC mapping
+ * @interface: board private structure
+ *
+ * Configure the SWPRI to PC mapping for the port.
+ **/
+static void fm10k_configure_swpri_map(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+
+ /* clear flag indicating update is needed */
+ interface->flags &= ~FM10K_FLAG_SWPRI_CONFIG;
+
+ /* these registers are only available on the PF */
+ if (hw->mac.type != fm10k_mac_pf)
+ return;
+
+ /* configure SWPRI to PC map */
+ for (i = 0; i < FM10K_SWPRI_MAX; i++)
+ fm10k_write_reg(hw, FM10K_SWPRI_MAP(i),
+ netdev_get_prio_tc_map(netdev, i));
+}
+
+/**
+ * fm10k_watchdog_update_host_state - Update the link status based on host.
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_update_host_state(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ s32 err;
+
+ if (test_bit(__FM10K_LINK_DOWN, &interface->state)) {
+ interface->host_ready = false;
+ if (time_is_after_jiffies(interface->link_down_event))
+ return;
+ clear_bit(__FM10K_LINK_DOWN, &interface->state);
+ }
+
+ if (interface->flags & FM10K_FLAG_SWPRI_CONFIG) {
+ if (rtnl_trylock()) {
+ fm10k_configure_swpri_map(interface);
+ rtnl_unlock();
+ }
+ }
+
+ /* lock the mailbox for transmit and receive */
+ fm10k_mbx_lock(interface);
+
+ err = hw->mac.ops.get_host_state(hw, &interface->host_ready);
+ if (err && time_is_before_jiffies(interface->last_reset))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ /* free the lock */
+ fm10k_mbx_unlock(interface);
+}
+
+/**
+ * fm10k_mbx_subtask - Process upstream and downstream mailboxes
+ * @interface: board private structure
+ *
+ * This function will process both the upstream and downstream mailboxes.
+ * It is necessary for us to hold the rtnl_lock while doing this as the
+ * mailbox accesses are protected by this lock.
+ **/
+static void fm10k_mbx_subtask(struct fm10k_intfc *interface)
+{
+ /* process upstream mailbox and update device state */
+ fm10k_watchdog_update_host_state(interface);
+
+ /* process downstream mailboxes */
+ fm10k_iov_mbx(interface);
+}
+
+/**
+ * fm10k_watchdog_host_is_ready - Update netdev status based on host ready
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_host_is_ready(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* only continue if link state is currently down */
+ if (netif_carrier_ok(netdev))
+ return;
+
+ netif_info(interface, drv, netdev, "NIC Link is up\n");
+
+ netif_carrier_on(netdev);
+ netif_tx_wake_all_queues(netdev);
+}
+
+/**
+ * fm10k_watchdog_host_not_ready - Update netdev status based on host not ready
+ * @interface: board private structure
+ **/
+static void fm10k_watchdog_host_not_ready(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+
+ /* only continue if link state is currently up */
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ netif_info(interface, drv, netdev, "NIC Link is down\n");
+
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+}
+
+/**
+ * fm10k_update_stats - Update the board statistics counters.
+ * @interface: board private structure
+ **/
+void fm10k_update_stats(struct fm10k_intfc *interface)
+{
+ struct net_device_stats *net_stats = &interface->netdev->stats;
+ struct fm10k_hw *hw = &interface->hw;
+ u64 rx_errors = 0, rx_csum_errors = 0, tx_csum_errors = 0;
+ u64 restart_queue = 0, tx_busy = 0, alloc_failed = 0;
+ u64 rx_bytes_nic = 0, rx_pkts_nic = 0, rx_drops_nic = 0;
+ u64 tx_bytes_nic = 0, tx_pkts_nic = 0;
+ u64 bytes, pkts;
+ int i;
+
+ /* do not allow stats update via service task for next second */
+ interface->next_stats_update = jiffies + HZ;
+
+ /* gather some stats to the interface struct that are per queue */
+ for (bytes = 0, pkts = 0, i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ restart_queue += tx_ring->tx_stats.restart_queue;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ tx_csum_errors += tx_ring->tx_stats.csum_err;
+ bytes += tx_ring->stats.bytes;
+ pkts += tx_ring->stats.packets;
+ }
+
+ interface->restart_queue = restart_queue;
+ interface->tx_busy = tx_busy;
+ net_stats->tx_bytes = bytes;
+ net_stats->tx_packets = pkts;
+ interface->tx_csum_errors = tx_csum_errors;
+ /* gather some stats to the interface struct that are per queue */
+ for (bytes = 0, pkts = 0, i = 0; i < interface->num_rx_queues; i++) {
+ struct fm10k_ring *rx_ring = interface->rx_ring[i];
+
+ bytes += rx_ring->stats.bytes;
+ pkts += rx_ring->stats.packets;
+ alloc_failed += rx_ring->rx_stats.alloc_failed;
+ rx_csum_errors += rx_ring->rx_stats.csum_err;
+ rx_errors += rx_ring->rx_stats.errors;
+ }
+
+ net_stats->rx_bytes = bytes;
+ net_stats->rx_packets = pkts;
+ interface->alloc_failed = alloc_failed;
+ interface->rx_csum_errors = rx_csum_errors;
+ interface->rx_errors = rx_errors;
+
+ hw->mac.ops.update_hw_stats(hw, &interface->stats);
+
+ for (i = 0; i < FM10K_MAX_QUEUES_PF; i++) {
+ struct fm10k_hw_stats_q *q = &interface->stats.q[i];
+
+ tx_bytes_nic += q->tx_bytes.count;
+ tx_pkts_nic += q->tx_packets.count;
+ rx_bytes_nic += q->rx_bytes.count;
+ rx_pkts_nic += q->rx_packets.count;
+ rx_drops_nic += q->rx_drops.count;
+ }
+
+ interface->tx_bytes_nic = tx_bytes_nic;
+ interface->tx_packets_nic = tx_pkts_nic;
+ interface->rx_bytes_nic = rx_bytes_nic;
+ interface->rx_packets_nic = rx_pkts_nic;
+ interface->rx_drops_nic = rx_drops_nic;
+
+ /* Fill out the OS statistics structure */
+ net_stats->rx_errors = interface->stats.xec.count;
+ net_stats->rx_dropped = interface->stats.nodesc_drop.count;
+}
+
+/**
+ * fm10k_watchdog_flush_tx - flush queues on host not ready
+ * @interface - pointer to the device interface structure
+ **/
+static void fm10k_watchdog_flush_tx(struct fm10k_intfc *interface)
+{
+ int some_tx_pending = 0;
+ int i;
+
+ /* nothing to do if carrier is up */
+ if (netif_carrier_ok(interface->netdev))
+ return;
+
+ for (i = 0; i < interface->num_tx_queues; i++) {
+ struct fm10k_ring *tx_ring = interface->tx_ring[i];
+
+ if (tx_ring->next_to_use != tx_ring->next_to_clean) {
+ some_tx_pending = 1;
+ break;
+ }
+ }
+
+ /* We've lost link, so the controller stops DMA, but we've got
+ * queued Tx work that's never going to get done, so reset
+ * controller to flush Tx.
+ */
+ if (some_tx_pending)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+}
+
+/**
+ * fm10k_watchdog_subtask - check and bring link up
+ * @interface: pointer to the device interface structure
+ **/
+static void fm10k_watchdog_subtask(struct fm10k_intfc *interface)
+{
+ /* if interface is down do nothing */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ if (interface->host_ready)
+ fm10k_watchdog_host_is_ready(interface);
+ else
+ fm10k_watchdog_host_not_ready(interface);
+
+ /* update stats only once every second */
+ if (time_is_before_jiffies(interface->next_stats_update))
+ fm10k_update_stats(interface);
+
+ /* flush any uncompleted work */
+ fm10k_watchdog_flush_tx(interface);
+}
+
+/**
+ * fm10k_check_hang_subtask - check for hung queues and dropped interrupts
+ * @interface: pointer to the device interface structure
+ *
+ * This function serves two purposes. First it strobes the interrupt lines
+ * in order to make certain interrupts are occurring. Secondly it sets the
+ * bits needed to check for TX hangs. As a result we should immediately
+ * determine if a hang has occurred.
+ */
+static void fm10k_check_hang_subtask(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* If we're down or resetting, just bail */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ /* rate limit tx hang checks to only once every 2 seconds */
+ if (time_is_after_eq_jiffies(interface->next_tx_hang_check))
+ return;
+ interface->next_tx_hang_check = jiffies + (2 * HZ);
+
+ if (netif_carrier_ok(interface->netdev)) {
+ /* Force detection of hung controller */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ set_check_for_tx_hang(interface->tx_ring[i]);
+
+ /* Rearm all in-use q_vectors for immediate firing */
+ for (i = 0; i < interface->num_q_vectors; i++) {
+ struct fm10k_q_vector *qv = interface->q_vector[i];
+
+ if (!qv->tx.count && !qv->rx.count)
+ continue;
+ writel(FM10K_ITR_ENABLE | FM10K_ITR_PENDING2, qv->itr);
+ }
+ }
+}
+
+/**
+ * fm10k_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void fm10k_service_task(struct work_struct *work)
+{
+ struct fm10k_intfc *interface;
+
+ interface = container_of(work, struct fm10k_intfc, service_task);
+
+ /* tasks always capable of running, but must be rtnl protected */
+ fm10k_mbx_subtask(interface);
+ fm10k_detach_subtask(interface);
+ fm10k_reset_subtask(interface);
+
+ /* tasks only run when interface is up */
+ fm10k_watchdog_subtask(interface);
+ fm10k_check_hang_subtask(interface);
+ fm10k_ts_tx_subtask(interface);
+
+ /* release lock on service events to allow scheduling next event */
+ fm10k_service_event_complete(interface);
+}
+
+/**
+ * fm10k_configure_tx_ring - Configure Tx ring after Reset
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Tx descriptor ring after a reset.
+ **/
+static void fm10k_configure_tx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u64 tdba = ring->dma;
+ u32 size = ring->count * sizeof(struct fm10k_tx_desc);
+ u32 txint = FM10K_INT_MAP_DISABLE;
+ u32 txdctl = FM10K_TXDCTL_ENABLE | (1 << FM10K_TXDCTL_MAX_TIME_SHIFT);
+ u8 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ fm10k_write_reg(hw, FM10K_TXDCTL(reg_idx), 0);
+ fm10k_write_flush(hw);
+
+ /* possible poll here to verify ring resources have been cleaned */
+
+ /* set location and size for descriptor ring */
+ fm10k_write_reg(hw, FM10K_TDBAL(reg_idx), tdba & DMA_BIT_MASK(32));
+ fm10k_write_reg(hw, FM10K_TDBAH(reg_idx), tdba >> 32);
+ fm10k_write_reg(hw, FM10K_TDLEN(reg_idx), size);
+
+ /* reset head and tail pointers */
+ fm10k_write_reg(hw, FM10K_TDH(reg_idx), 0);
+ fm10k_write_reg(hw, FM10K_TDT(reg_idx), 0);
+
+ /* store tail pointer */
+ ring->tail = &interface->uc_addr[FM10K_TDT(reg_idx)];
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+
+ /* Map interrupt */
+ if (ring->q_vector) {
+ txint = ring->q_vector->v_idx + NON_Q_VECTORS(hw);
+ txint |= FM10K_INT_MAP_TIMER0;
+ }
+
+ fm10k_write_reg(hw, FM10K_TXINT(reg_idx), txint);
+
+ /* enable use of FTAG bit in Tx descriptor, register is RO for VF */
+ fm10k_write_reg(hw, FM10K_PFVTCTL(reg_idx),
+ FM10K_PFVTCTL_FTAG_DESC_ENABLE);
+
+ /* enable queue */
+ fm10k_write_reg(hw, FM10K_TXDCTL(reg_idx), txdctl);
+}
+
+/**
+ * fm10k_enable_tx_ring - Verify Tx ring is enabled after configuration
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Verify the Tx descriptor ring is ready for transmit.
+ **/
+static void fm10k_enable_tx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int wait_loop = 10;
+ u32 txdctl;
+ u8 reg_idx = ring->reg_idx;
+
+ /* if we are already enabled just exit */
+ if (fm10k_read_reg(hw, FM10K_TXDCTL(reg_idx)) & FM10K_TXDCTL_ENABLE)
+ return;
+
+ /* poll to verify queue is enabled */
+ do {
+ usleep_range(1000, 2000);
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(reg_idx));
+ } while (!(txdctl & FM10K_TXDCTL_ENABLE) && --wait_loop);
+ if (!wait_loop)
+ netif_err(interface, drv, interface->netdev,
+ "Could not enable Tx Queue %d\n", reg_idx);
+}
+
+/**
+ * fm10k_configure_tx - Configure Transmit Unit after Reset
+ * @interface: board private structure
+ *
+ * Configure the Tx unit of the MAC after a reset.
+ **/
+static void fm10k_configure_tx(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* Setup the HW Tx Head and Tail descriptor pointers */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_configure_tx_ring(interface, interface->tx_ring[i]);
+
+ /* poll here to verify that Tx rings are now enabled */
+ for (i = 0; i < interface->num_tx_queues; i++)
+ fm10k_enable_tx_ring(interface, interface->tx_ring[i]);
+}
+
+/**
+ * fm10k_configure_rx_ring - Configure Rx ring after Reset
+ * @interface: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Rx descriptor ring after a reset.
+ **/
+static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
+ struct fm10k_ring *ring)
+{
+ u64 rdba = ring->dma;
+ struct fm10k_hw *hw = &interface->hw;
+ u32 size = ring->count * sizeof(union fm10k_rx_desc);
+ u32 rxqctl = FM10K_RXQCTL_ENABLE | FM10K_RXQCTL_PF;
+ u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u32 srrctl = FM10K_SRRCTL_BUFFER_CHAINING_EN;
+ u32 rxint = FM10K_INT_MAP_DISABLE;
+ u8 rx_pause = interface->rx_pause;
+ u8 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), 0);
+ fm10k_write_flush(hw);
+
+ /* possible poll here to verify ring resources have been cleaned */
+
+ /* set location and size for descriptor ring */
+ fm10k_write_reg(hw, FM10K_RDBAL(reg_idx), rdba & DMA_BIT_MASK(32));
+ fm10k_write_reg(hw, FM10K_RDBAH(reg_idx), rdba >> 32);
+ fm10k_write_reg(hw, FM10K_RDLEN(reg_idx), size);
+
+ /* reset head and tail pointers */
+ fm10k_write_reg(hw, FM10K_RDH(reg_idx), 0);
+ fm10k_write_reg(hw, FM10K_RDT(reg_idx), 0);
+
+ /* store tail pointer */
+ ring->tail = &interface->uc_addr[FM10K_RDT(reg_idx)];
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+ ring->next_to_alloc = 0;
+
+ /* Configure the Rx buffer size for one buff without split */
+ srrctl |= FM10K_RX_BUFSZ >> FM10K_SRRCTL_BSIZEPKT_SHIFT;
+
+ /* Configure the Rx ring to suppress loopback packets */
+ srrctl |= FM10K_SRRCTL_LOOPBACK_SUPPRESS;
+ fm10k_write_reg(hw, FM10K_SRRCTL(reg_idx), srrctl);
+
+ /* Enable drop on empty */
+#ifdef CONFIG_DCB
+ if (interface->pfc_en)
+ rx_pause = interface->pfc_en;
+#endif
+ if (!(rx_pause & (1 << ring->qos_pc)))
+ rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
+
+ fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
+
+ /* assign default VLAN to queue */
+ ring->vid = hw->mac.default_vid;
+
+ /* Map interrupt */
+ if (ring->q_vector) {
+ rxint = ring->q_vector->v_idx + NON_Q_VECTORS(hw);
+ rxint |= FM10K_INT_MAP_TIMER1;
+ }
+
+ fm10k_write_reg(hw, FM10K_RXINT(reg_idx), rxint);
+
+ /* enable queue */
+ fm10k_write_reg(hw, FM10K_RXQCTL(reg_idx), rxqctl);
+
+ /* place buffers on ring for receive data */
+ fm10k_alloc_rx_buffers(ring, fm10k_desc_unused(ring));
+}
+
+/**
+ * fm10k_update_rx_drop_en - Configures the drop enable bits for Rx rings
+ * @interface: board private structure
+ *
+ * Configure the drop enable bits for the Rx rings.
+ **/
+void fm10k_update_rx_drop_en(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ u8 rx_pause = interface->rx_pause;
+ int i;
+
+#ifdef CONFIG_DCB
+ if (interface->pfc_en)
+ rx_pause = interface->pfc_en;
+
+#endif
+ for (i = 0; i < interface->num_rx_queues; i++) {
+ struct fm10k_ring *ring = interface->rx_ring[i];
+ u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u8 reg_idx = ring->reg_idx;
+
+ if (!(rx_pause & (1 << ring->qos_pc)))
+ rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
+
+ fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
+ }
+}
+
+/**
+ * fm10k_configure_dglort - Configure Receive DGLORT after reset
+ * @interface: board private structure
+ *
+ * Configure the DGLORT description and RSS tables.
+ **/
+static void fm10k_configure_dglort(struct fm10k_intfc *interface)
+{
+ struct fm10k_dglort_cfg dglort = { 0 };
+ struct fm10k_hw *hw = &interface->hw;
+ int i;
+ u32 mrqc;
+
+ /* Fill out hash function seeds */
+ for (i = 0; i < FM10K_RSSRK_SIZE; i++)
+ fm10k_write_reg(hw, FM10K_RSSRK(0, i), interface->rssrk[i]);
+
+ /* Write RETA table to hardware */
+ for (i = 0; i < FM10K_RETA_SIZE; i++)
+ fm10k_write_reg(hw, FM10K_RETA(0, i), interface->reta[i]);
+
+ /* Generate RSS hash based on packet types, TCP/UDP
+ * port numbers and/or IPv4/v6 src and dst addresses
+ */
+ mrqc = FM10K_MRQC_IPV4 |
+ FM10K_MRQC_TCP_IPV4 |
+ FM10K_MRQC_IPV6 |
+ FM10K_MRQC_TCP_IPV6;
+
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV4_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV4;
+ if (interface->flags & FM10K_FLAG_RSS_FIELD_IPV6_UDP)
+ mrqc |= FM10K_MRQC_UDP_IPV6;
+
+ fm10k_write_reg(hw, FM10K_MRQC(0), mrqc);
+
+ /* configure default DGLORT mapping for RSS/DCB */
+ dglort.inner_rss = 1;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+
+ /* assign GLORT per queue for queue mapped testing */
+ if (interface->glort_count > 64) {
+ memset(&dglort, 0, sizeof(dglort));
+ dglort.inner_rss = 1;
+ dglort.glort = interface->glort + 64;
+ dglort.idx = fm10k_dglort_pf_queue;
+ dglort.queue_l = fls(interface->num_rx_queues - 1);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+ }
+
+ /* assign glort value for RSS/DCB specific to this interface */
+ memset(&dglort, 0, sizeof(dglort));
+ dglort.inner_rss = 1;
+ dglort.glort = interface->glort;
+ dglort.rss_l = fls(interface->ring_feature[RING_F_RSS].mask);
+ dglort.pc_l = fls(interface->ring_feature[RING_F_QOS].mask);
+ /* configure DGLORT mapping for RSS/DCB */
+ dglort.idx = fm10k_dglort_pf_rss;
+ if (interface->l2_accel)
+ dglort.shared_l = fls(interface->l2_accel->size);
+ hw->mac.ops.configure_dglort_map(hw, &dglort);
+}
+
+/**
+ * fm10k_configure_rx - Configure Receive Unit after Reset
+ * @interface: board private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+static void fm10k_configure_rx(struct fm10k_intfc *interface)
+{
+ int i;
+
+ /* Configure SWPRI to PC map */
+ fm10k_configure_swpri_map(interface);
+
+ /* Configure RSS and DGLORT map */
+ fm10k_configure_dglort(interface);
+
+ /* Setup the HW Rx Head and Tail descriptor pointers */
+ for (i = 0; i < interface->num_rx_queues; i++)
+ fm10k_configure_rx_ring(interface, interface->rx_ring[i]);
+
+ /* possible poll here to verify that Rx rings are now enabled */
+}
+
+static void fm10k_napi_enable_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < interface->num_q_vectors; q_idx++) {
+ q_vector = interface->q_vector[q_idx];
+ napi_enable(&q_vector->napi);
+ }
+}
+
+static irqreturn_t fm10k_msix_clean_rings(int irq, void *data)
+{
+ struct fm10k_q_vector *q_vector = data;
+
+ if (q_vector->rx.count || q_vector->tx.count)
+ napi_schedule(&q_vector->napi);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t fm10k_msix_mbx_vf(int irq, void *data)
+{
+ struct fm10k_intfc *interface = data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+
+ /* re-enable mailbox interrupt and indicate 20us delay */
+ fm10k_write_reg(hw, FM10K_VFITR(FM10K_MBX_VECTOR),
+ FM10K_ITR_ENABLE | FM10K_MBX_INT_DELAY);
+
+ /* service upstream mailbox */
+ if (fm10k_mbx_trylock(interface)) {
+ mbx->ops.process(hw, mbx);
+ fm10k_mbx_unlock(interface);
+ }
+
+ hw->mac.get_host_state = 1;
+ fm10k_service_event_schedule(interface);
+
+ return IRQ_HANDLED;
+}
+
+#define FM10K_ERR_MSG(type) case (type): error = #type; break
+static void fm10k_print_fault(struct fm10k_intfc *interface, int type,
+ struct fm10k_fault *fault)
+{
+ struct pci_dev *pdev = interface->pdev;
+ char *error;
+
+ switch (type) {
+ case FM10K_PCA_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown PCA error";
+ break;
+ FM10K_ERR_MSG(PCA_NO_FAULT);
+ FM10K_ERR_MSG(PCA_UNMAPPED_ADDR);
+ FM10K_ERR_MSG(PCA_BAD_QACCESS_PF);
+ FM10K_ERR_MSG(PCA_BAD_QACCESS_VF);
+ FM10K_ERR_MSG(PCA_MALICIOUS_REQ);
+ FM10K_ERR_MSG(PCA_POISONED_TLP);
+ FM10K_ERR_MSG(PCA_TLP_ABORT);
+ }
+ break;
+ case FM10K_THI_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown THI error";
+ break;
+ FM10K_ERR_MSG(THI_NO_FAULT);
+ FM10K_ERR_MSG(THI_MAL_DIS_Q_FAULT);
+ }
+ break;
+ case FM10K_FUM_FAULT:
+ switch (fault->type) {
+ default:
+ error = "Unknown FUM error";
+ break;
+ FM10K_ERR_MSG(FUM_NO_FAULT);
+ FM10K_ERR_MSG(FUM_UNMAPPED_ADDR);
+ FM10K_ERR_MSG(FUM_BAD_VF_QACCESS);
+ FM10K_ERR_MSG(FUM_ADD_DECODE_ERR);
+ FM10K_ERR_MSG(FUM_RO_ERROR);
+ FM10K_ERR_MSG(FUM_QPRC_CRC_ERROR);
+ FM10K_ERR_MSG(FUM_CSR_TIMEOUT);
+ FM10K_ERR_MSG(FUM_INVALID_TYPE);
+ FM10K_ERR_MSG(FUM_INVALID_LENGTH);
+ FM10K_ERR_MSG(FUM_INVALID_BE);
+ FM10K_ERR_MSG(FUM_INVALID_ALIGN);
+ }
+ break;
+ default:
+ error = "Undocumented fault";
+ break;
+ }
+
+ dev_warn(&pdev->dev,
+ "%s Address: 0x%llx SpecInfo: 0x%x Func: %02x.%0x\n",
+ error, fault->address, fault->specinfo,
+ PCI_SLOT(fault->func), PCI_FUNC(fault->func));
+}
+
+static void fm10k_report_fault(struct fm10k_intfc *interface, u32 eicr)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_fault fault = { 0 };
+ int type, err;
+
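+ /* walk the fault bits one at a time; each set bit selects the next
+ * fault register block, FM10K_FAULT_SIZE apart
+ */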
+ for (eicr &= FM10K_EICR_FAULT_MASK, type = FM10K_PCA_FAULT;
+ eicr;
+ eicr >>= 1, type += FM10K_FAULT_SIZE) {
+ /* only check if there is an error reported */
+ if (!(eicr & 0x1))
+ continue;
+
+ /* retrieve fault info */
+ err = hw->mac.ops.get_fault(hw, type, &fault);
+ if (err) {
+ dev_err(&interface->pdev->dev,
+ "error reading fault\n");
+ continue;
+ }
+
+ fm10k_print_fault(interface, type, &fault);
+ }
+}
+
+static void fm10k_reset_drop_on_empty(struct fm10k_intfc *interface, u32 eicr)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ const u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
+ u32 maxholdq;
+ int q;
+
+ if (!(eicr & FM10K_EICR_MAXHOLDTIME))
+ return;
+
+ maxholdq = fm10k_read_reg(hw, FM10K_MAXHOLDQ(7));
+ if (maxholdq)
+ fm10k_write_reg(hw, FM10K_MAXHOLDQ(7), maxholdq);
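+ /* walk queues 255 down to 0, one 32-queue MAXHOLDQ word at a time;
+ * the current queue's flag sits in bit 31 and the word is shifted
+ * left as we step down, skipping ahead when no bits remain set
+ */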
+ for (q = 255;;) {
+ if (maxholdq & (1 << 31)) {
+ if (q < FM10K_MAX_QUEUES_PF) {
+ interface->rx_overrun_pf++;
+ fm10k_write_reg(hw, FM10K_RXDCTL(q), rxdctl);
+ } else {
+ interface->rx_overrun_vf++;
+ }
+ }
+
+ maxholdq *= 2;
+ if (!maxholdq)
+ q &= ~(32 - 1);
+
+ if (!q)
+ break;
+
+ if (q-- % 32)
+ continue;
+
+ maxholdq = fm10k_read_reg(hw, FM10K_MAXHOLDQ(q / 32));
+ if (maxholdq)
+ fm10k_write_reg(hw, FM10K_MAXHOLDQ(q / 32), maxholdq);
+ }
+}
+
+static irqreturn_t fm10k_msix_mbx_pf(int irq, void *data)
+{
+ struct fm10k_intfc *interface = data;
+ struct fm10k_hw *hw = &interface->hw;
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 eicr;
+
+ /* unmask any set bits related to this interrupt */
+ eicr = fm10k_read_reg(hw, FM10K_EICR);
+ fm10k_write_reg(hw, FM10K_EICR, eicr & (FM10K_EICR_MAILBOX |
+ FM10K_EICR_SWITCHREADY |
+ FM10K_EICR_SWITCHNOTREADY));
+
+ /* report any faults found to the message log */
+ fm10k_report_fault(interface, eicr);
+
+ /* reset any queues disabled due to receiver overrun */
+ fm10k_reset_drop_on_empty(interface, eicr);
+
+ /* service mailboxes */
+ if (fm10k_mbx_trylock(interface)) {
+ mbx->ops.process(hw, mbx);
+ fm10k_iov_event(interface);
+ fm10k_mbx_unlock(interface);
+ }
+
+ /* if switch toggled state we should reset GLORTs */
+ if (eicr & FM10K_EICR_SWITCHNOTREADY) {
+ /* force link down for at least 4 seconds */
+ interface->link_down_event = jiffies + (4 * HZ);
+ set_bit(__FM10K_LINK_DOWN, &interface->state);
+
+ /* reset dglort_map back to no config */
+ hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
+ }
+
+ /* we should validate host state after interrupt event */
+ hw->mac.get_host_state = 1;
+ fm10k_service_event_schedule(interface);
+
+ /* re-enable mailbox interrupt and indicate 20us delay */
+ fm10k_write_reg(hw, FM10K_ITR(FM10K_MBX_VECTOR),
+ FM10K_ITR_ENABLE | FM10K_MBX_INT_DELAY);
+
+ return IRQ_HANDLED;
+}
+
+void fm10k_mbx_free_irq(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct fm10k_hw *hw = &interface->hw;
+ int itr_reg;
+
+ /* disconnect the mailbox */
+ hw->mbx.ops.disconnect(hw, &hw->mbx);
+
+ /* disable Mailbox cause */
+ if (hw->mac.type == fm10k_mac_pf) {
+ fm10k_write_reg(hw, FM10K_EIMR,
+ FM10K_EIMR_DISABLE(PCA_FAULT) |
+ FM10K_EIMR_DISABLE(FUM_FAULT) |
+ FM10K_EIMR_DISABLE(MAILBOX) |
+ FM10K_EIMR_DISABLE(SWITCHREADY) |
+ FM10K_EIMR_DISABLE(SWITCHNOTREADY) |
+ FM10K_EIMR_DISABLE(SRAMERROR) |
+ FM10K_EIMR_DISABLE(VFLR) |
+ FM10K_EIMR_DISABLE(MAXHOLDTIME));
+ itr_reg = FM10K_ITR(FM10K_MBX_VECTOR);
+ } else {
+ itr_reg = FM10K_VFITR(FM10K_MBX_VECTOR);
+ }
+
+ fm10k_write_reg(hw, itr_reg, FM10K_ITR_MASK_SET);
+
+ free_irq(entry->vector, interface);
+}
+
+static s32 fm10k_mbx_mac_addr(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ bool vlan_override = hw->mac.vlan_override;
+ u16 default_vid = hw->mac.default_vid;
+ struct fm10k_intfc *interface;
+ s32 err;
+
+ err = fm10k_msg_mac_vlan_vf(hw, results, mbx);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* MAC was changed so we need reset */
+ if (is_valid_ether_addr(hw->mac.perm_addr) &&
+ memcmp(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ /* VLAN override was changed, or default VLAN changed */
+ if ((vlan_override != hw->mac.vlan_override) ||
+ (default_vid != hw->mac.default_vid))
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ return 0;
+}
+
+static s32 fm10k_1588_msg_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u64 timestamp;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u64(results[FM10K_1588_MSG_TIMESTAMP],
+ &timestamp);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ fm10k_ts_tx_hwtstamp(interface, 0, timestamp);
+
+ return 0;
+}
+
+/* generic error handler for mailbox issues */
+static s32 fm10k_mbx_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ struct pci_dev *pdev;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+ pdev = interface->pdev;
+
+ dev_err(&pdev->dev, "Unknown message ID %u\n",
+ **results & FM10K_TLV_ID_MASK);
+
+ return 0;
+}
+
+static const struct fm10k_msg_data vf_mbx_data[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_mbx_mac_addr),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
+ FM10K_VF_MSG_1588_HANDLER(fm10k_1588_msg_vf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
+};
+
+static int fm10k_mbx_request_irq_vf(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* Use timer0 for interrupt moderation on the mailbox */
+ u32 itr = FM10K_INT_MAP_TIMER0 | entry->entry;
+
+ /* register mailbox handlers */
+ err = hw->mbx.ops.register_handlers(&hw->mbx, vf_mbx_data);
+ if (err)
+ return err;
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, fm10k_msix_mbx_vf, 0,
+ dev->name, interface);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq for msix_mbx failed: %d\n", err);
+ return err;
+ }
+
+ /* map all of the interrupt sources */
+ fm10k_write_reg(hw, FM10K_VFINT_MAP, itr);
+
+ /* enable interrupt */
+ fm10k_write_reg(hw, FM10K_VFITR(entry->entry), FM10K_ITR_ENABLE);
+
+ return 0;
+}
+
+static s32 fm10k_lport_map(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u32 dglort_map = hw->mac.dglort_map;
+ s32 err;
+
+ err = fm10k_msg_lport_map_pf(hw, results, mbx);
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* we need to reset if port count was just updated */
+ if (dglort_map != hw->mac.dglort_map)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ return 0;
+}
+
+static s32 fm10k_update_pvid(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_intfc *interface;
+ u16 glort, pvid;
+ u32 pvid_update;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID],
+ &pvid_update);
+ if (err)
+ return err;
+
+ /* extract values from the pvid update */
+ glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT);
+ pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID);
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* verify VID is valid */
+ if (pvid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ /* check to see if this belongs to one of the VFs */
+ err = fm10k_iov_update_pvid(interface, glort, pvid);
+ if (!err)
+ return 0;
+
+ /* we need to reset if default VLAN was just updated */
+ if (pvid != hw->mac.default_vid)
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+
+ hw->mac.default_vid = pvid;
+
+ return 0;
+}
+
+static s32 fm10k_1588_msg_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_swapi_1588_timestamp timestamp;
+ struct fm10k_iov_data *iov_data;
+ struct fm10k_intfc *interface;
+ u16 sglort, vf_idx;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_le_struct(
+ results[FM10K_PF_ATTR_ID_1588_TIMESTAMP],
+ &timestamp, sizeof(timestamp));
+ if (err)
+ return err;
+
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ if (timestamp.dglort) {
+ fm10k_ts_tx_hwtstamp(interface, timestamp.dglort,
+ le64_to_cpu(timestamp.egress));
+ return 0;
+ }
+
+ /* either dglort or sglort must be set */
+ if (!timestamp.sglort)
+ return FM10K_ERR_PARAM;
+
+ /* verify GLORT is at least one of the ones we own */
+ sglort = le16_to_cpu(timestamp.sglort);
+ if (!fm10k_glort_valid_pf(hw, sglort))
+ return FM10K_ERR_PARAM;
+
+ if (sglort == interface->glort) {
+ fm10k_ts_tx_hwtstamp(interface, 0,
+ le64_to_cpu(timestamp.ingress));
+ return 0;
+ }
+
+ /* if there is no iov_data then there are no mailboxes to process */
+ if (!ACCESS_ONCE(interface->iov_data))
+ return FM10K_ERR_PARAM;
+
+ rcu_read_lock();
+
+ /* notify VF if this timestamp belongs to it */
+ iov_data = interface->iov_data;
+ vf_idx = (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE) - sglort;
+
+ if (!iov_data || vf_idx >= iov_data->num_vfs) {
+ err = FM10K_ERR_PARAM;
+ goto err_unlock;
+ }
+
+ err = hw->iov.ops.report_timestamp(hw, &iov_data->vf_info[vf_idx],
+ le64_to_cpu(timestamp.ingress));
+
+err_unlock:
+ rcu_read_unlock();
+
+ return err;
+}
+
+static const struct fm10k_msg_data pf_mbx_data[] = {
+ FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_lport_map),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_update_pvid),
+ FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(fm10k_1588_msg_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
+};
+
+static int fm10k_mbx_request_irq_pf(struct fm10k_intfc *interface)
+{
+ struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* Use timer0 for interrupt moderation on the mailbox */
+ u32 mbx_itr = FM10K_INT_MAP_TIMER0 | entry->entry;
+ u32 other_itr = FM10K_INT_MAP_IMMEDIATE | entry->entry;
+
+ /* register mailbox handlers */
+ err = hw->mbx.ops.register_handlers(&hw->mbx, pf_mbx_data);
+ if (err)
+ return err;
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, fm10k_msix_mbx_pf, 0,
+ dev->name, interface);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq for msix_mbx failed: %d\n", err);
+ return err;
+ }
+
+ /* Enable interrupts w/ no moderation for "other" interrupts */
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_SRAM), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_MaxHoldTime), other_itr);
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_VFLR), other_itr);
+
+ /* Enable interrupts w/ moderation for mailbox */
+ fm10k_write_reg(hw, FM10K_INT_MAP(fm10k_int_Mailbox), mbx_itr);
+
+ /* Enable individual interrupt causes */
+ fm10k_write_reg(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) |
+ FM10K_EIMR_ENABLE(FUM_FAULT) |
+ FM10K_EIMR_ENABLE(MAILBOX) |
+ FM10K_EIMR_ENABLE(SWITCHREADY) |
+ FM10K_EIMR_ENABLE(SWITCHNOTREADY) |
+ FM10K_EIMR_ENABLE(SRAMERROR) |
+ FM10K_EIMR_ENABLE(VFLR) |
+ FM10K_EIMR_ENABLE(MAXHOLDTIME));
+
+ /* enable interrupt */
+ fm10k_write_reg(hw, FM10K_ITR(entry->entry), FM10K_ITR_ENABLE);
+
+ return 0;
+}
+
+int fm10k_mbx_request_irq(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+ int err;
+
+ /* enable Mailbox cause */
+ if (hw->mac.type == fm10k_mac_pf)
+ err = fm10k_mbx_request_irq_pf(interface);
+ else
+ err = fm10k_mbx_request_irq_vf(interface);
+
+ /* connect mailbox */
+ if (!err)
+ err = hw->mbx.ops.connect(hw, &hw->mbx);
+
+ return err;
+}
+
+/**
+ * fm10k_qv_free_irq - release interrupts associated with queue vectors
+ * @interface: board private structure
+ *
+ * Release all interrupts associated with this interface
+ **/
+void fm10k_qv_free_irq(struct fm10k_intfc *interface)
+{
+ int vector = interface->num_q_vectors;
+ struct fm10k_hw *hw = &interface->hw;
+ struct msix_entry *entry;
+
+ entry = &interface->msix_entries[NON_Q_VECTORS(hw) + vector];
+
+ while (vector) {
+ struct fm10k_q_vector *q_vector;
+
+ vector--;
+ entry--;
+ q_vector = interface->q_vector[vector];
+
+ if (!q_vector->tx.count && !q_vector->rx.count)
+ continue;
+
+ /* disable interrupts */
+
+ writel(FM10K_ITR_MASK_SET, q_vector->itr);
+
+ free_irq(entry->vector, q_vector);
+ }
+}
+
+/**
+ * fm10k_qv_request_irq - initialize interrupts for queue vectors
+ * @interface: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+int fm10k_qv_request_irq(struct fm10k_intfc *interface)
+{
+ struct net_device *dev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ struct msix_entry *entry;
+ int ri = 0, ti = 0;
+ int vector, err;
+
+ entry = &interface->msix_entries[NON_Q_VECTORS(hw)];
+
+ for (vector = 0; vector < interface->num_q_vectors; vector++) {
+ struct fm10k_q_vector *q_vector = interface->q_vector[vector];
+
+ /* name the vector */
+ if (q_vector->tx.count && q_vector->rx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-TxRx-%d", dev->name, ri++);
+ ti++;
+ } else if (q_vector->rx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-rx-%d", dev->name, ri++);
+ } else if (q_vector->tx.count) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-tx-%d", dev->name, ti++);
+ } else {
+ /* skip this unused q_vector */
+ continue;
+ }
+
+ /* Assign ITR register to q_vector */
+ q_vector->itr = (hw->mac.type == fm10k_mac_pf) ?
+ &interface->uc_addr[FM10K_ITR(entry->entry)] :
+ &interface->uc_addr[FM10K_VFITR(entry->entry)];
+
+ /* request the IRQ */
+ err = request_irq(entry->vector, &fm10k_msix_clean_rings, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ netif_err(interface, probe, dev,
+ "request_irq failed for MSIX interrupt Error: %d\n",
+ err);
+ goto err_out;
+ }
+
+ /* Enable q_vector */
+ writel(FM10K_ITR_ENABLE, q_vector->itr);
+
+ entry++;
+ }
+
+ return 0;
+
+err_out:
+ /* wind through the ring freeing all entries and vectors */
+ while (vector) {
+ struct fm10k_q_vector *q_vector;
+
+ entry--;
+ vector--;
+ q_vector = interface->q_vector[vector];
+
+ if (!q_vector->tx.count && !q_vector->rx.count)
+ continue;
+
+ /* disable interrupts */
+
+ writel(FM10K_ITR_MASK_SET, q_vector->itr);
+
+ free_irq(entry->vector, q_vector);
+ }
+
+ return err;
+}
+
+void fm10k_up(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* Enable Tx/Rx DMA */
+ hw->mac.ops.start_hw(hw);
+
+ /* configure Tx descriptor rings */
+ fm10k_configure_tx(interface);
+
+ /* configure Rx descriptor rings */
+ fm10k_configure_rx(interface);
+
+ /* configure interrupts */
+ hw->mac.ops.update_int_moderator(hw);
+
+ /* clear down bit to indicate we are ready to go */
+ clear_bit(__FM10K_DOWN, &interface->state);
+
+ /* enable polling cleanups */
+ fm10k_napi_enable_all(interface);
+
+ /* re-establish Rx filters */
+ fm10k_restore_rx_state(interface);
+
+ /* enable transmits */
+ netif_tx_start_all_queues(interface->netdev);
+
+ /* kick off the service timer */
+ mod_timer(&interface->service_timer, jiffies);
+}
+
+static void fm10k_napi_disable_all(struct fm10k_intfc *interface)
+{
+ struct fm10k_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < interface->num_q_vectors; q_idx++) {
+ q_vector = interface->q_vector[q_idx];
+ napi_disable(&q_vector->napi);
+ }
+}
+
+void fm10k_down(struct fm10k_intfc *interface)
+{
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ /* signal that we are down to the interrupt handler and service task */
+ set_bit(__FM10K_DOWN, &interface->state);
+
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+
+ /* disable transmits */
+ netif_tx_stop_all_queues(netdev);
+ netif_tx_disable(netdev);
+
+ /* reset Rx filters */
+ fm10k_reset_rx_state(interface);
+
+ /* allow 10ms for device to quiesce */
+ usleep_range(10000, 20000);
+
+ /* disable polling routines */
+ fm10k_napi_disable_all(interface);
+
+ del_timer_sync(&interface->service_timer);
+
+ /* capture stats one last time before stopping interface */
+ fm10k_update_stats(interface);
+
+ /* Disable DMA engine for Tx/Rx */
+ hw->mac.ops.stop_hw(hw);
+
+ /* free any buffers still on the rings */
+ fm10k_clean_all_tx_rings(interface);
+}
+
+/**
+ * fm10k_sw_init - Initialize general software structures
+ * @interface: host interface private structure to initialize
+ *
+ * fm10k_sw_init initializes the interface private data structure.
+ * Fields are initialized based on PCI device information and
+ * OS network device settings (MTU size).
+ **/
+static int fm10k_sw_init(struct fm10k_intfc *interface,
+ const struct pci_device_id *ent)
+{
+ static const u32 seed[FM10K_RSSRK_SIZE] = { 0xda565a6d, 0xc20e5b25,
+ 0x3d256741, 0xb08fa343,
+ 0xcb2bcad0, 0xb4307bae,
+ 0xa32dcb77, 0x0cf23080,
+ 0x3bb7426a, 0xfa01acbe };
+ const struct fm10k_info *fi = fm10k_info_tbl[ent->driver_data];
+ struct fm10k_hw *hw = &interface->hw;
+ struct pci_dev *pdev = interface->pdev;
+ struct net_device *netdev = interface->netdev;
+ unsigned int rss;
+ int err;
+
+ /* initialize back pointer */
+ hw->back = interface;
+ hw->hw_addr = interface->uc_addr;
+
+ /* PCI config space info */
+ hw->vendor_id = pdev->vendor;
+ hw->device_id = pdev->device;
+ hw->revision_id = pdev->revision;
+ hw->subsystem_vendor_id = pdev->subsystem_vendor;
+ hw->subsystem_device_id = pdev->subsystem_device;
+
+ /* Setup hw api */
+ memcpy(&hw->mac.ops, fi->mac_ops, sizeof(hw->mac.ops));
+ hw->mac.type = fi->mac;
+
+ /* Setup IOV handlers */
+ if (fi->iov_ops)
+ memcpy(&hw->iov.ops, fi->iov_ops, sizeof(hw->iov.ops));
+
+ /* Set common capability flags and settings */
+ rss = min_t(int, FM10K_MAX_RSS_INDICES, num_online_cpus());
+ interface->ring_feature[RING_F_RSS].limit = rss;
+ fi->get_invariants(hw);
+
+ /* pick up the PCIe bus settings for reporting later */
+ if (hw->mac.ops.get_bus_info)
+ hw->mac.ops.get_bus_info(hw);
+
+ /* limit the usable DMA range */
+ if (hw->mac.ops.set_dma_mask)
+ hw->mac.ops.set_dma_mask(hw, dma_get_mask(&pdev->dev));
+
+ /* update netdev with DMA restrictions */
+ if (dma_get_mask(&pdev->dev) > DMA_BIT_MASK(32)) {
+ netdev->features |= NETIF_F_HIGHDMA;
+ netdev->vlan_features |= NETIF_F_HIGHDMA;
+ }
+
+ /* delay any future reset requests */
+ interface->last_reset = jiffies + (10 * HZ);
+
+ /* reset and initialize the hardware so it is in a known state */
+ err = hw->mac.ops.reset_hw(hw) ? : hw->mac.ops.init_hw(hw);
+ if (err) {
+ dev_err(&pdev->dev, "init_hw failed: %d\n", err);
+ return err;
+ }
+
+ /* initialize hardware statistics */
+ hw->mac.ops.update_hw_stats(hw, &interface->stats);
+
+ /* Set upper limit on IOV VFs that can be allocated */
+ pci_sriov_set_totalvfs(pdev, hw->iov.total_vfs);
+
+ /* Start with random Ethernet address */
+ eth_random_addr(hw->mac.addr);
+
+ /* Initialize MAC address from hardware */
+ err = hw->mac.ops.read_mac_addr(hw);
+ if (err) {
+ dev_warn(&pdev->dev,
+ "Failed to obtain MAC address defaulting to random\n");
+ /* tag address assignment as random */
+ netdev->addr_assign_type |= NET_ADDR_RANDOM;
+ }
+
+ memcpy(netdev->dev_addr, hw->mac.addr, netdev->addr_len);
+ memcpy(netdev->perm_addr, hw->mac.addr, netdev->addr_len);
+
+ if (!is_valid_ether_addr(netdev->perm_addr)) {
+ dev_err(&pdev->dev, "Invalid MAC Address\n");
+ return -EIO;
+ }
+
+ /* assign BAR 4 resources for use with PTP */
+ if (fm10k_read_reg(hw, FM10K_CTRL) & FM10K_CTRL_BAR4_ALLOWED)
+ interface->sw_addr = ioremap(pci_resource_start(pdev, 4),
+ pci_resource_len(pdev, 4));
+ hw->sw_addr = interface->sw_addr;
+
+ /* Only the PF can support VXLAN and NVGRE offloads */
+ if (hw->mac.type != fm10k_mac_pf) {
+ netdev->hw_enc_features = 0;
+ netdev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ netdev->hw_features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ }
+
+ /* initialize DCBNL interface */
+ fm10k_dcbnl_set_ops(netdev);
+
+ /* Initialize service timer and service task */
+ set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+ setup_timer(&interface->service_timer, &fm10k_service_timer,
+ (unsigned long)interface);
+ INIT_WORK(&interface->service_task, fm10k_service_task);
+
+ /* Initialize timestamp data */
+ fm10k_ts_init(interface);
+
+ /* set default ring sizes */
+ interface->tx_ring_count = FM10K_DEFAULT_TXD;
+ interface->rx_ring_count = FM10K_DEFAULT_RXD;
+
+ /* set default interrupt moderation */
+ interface->tx_itr = FM10K_ITR_10K;
+ interface->rx_itr = FM10K_ITR_ADAPTIVE | FM10K_ITR_20K;
+
+ /* initialize vxlan_port list */
+ INIT_LIST_HEAD(&interface->vxlan_port);
+
+ /* initialize RSS key */
+ memcpy(interface->rssrk, seed, sizeof(seed));
+
+ /* Start off interface as being down */
+ set_bit(__FM10K_DOWN, &interface->state);
+
+ return 0;
+}
+
+static void fm10k_slot_warn(struct fm10k_intfc *interface)
+{
+ struct device *dev = &interface->pdev->dev;
+ struct fm10k_hw *hw = &interface->hw;
+
+ if (hw->mac.ops.is_slot_appropriate(hw))
+ return;
+
+ dev_warn(dev,
+ "For optimal performance, a %s %s slot is recommended.\n",
+ (hw->bus_caps.width == fm10k_bus_width_pcie_x1 ? "x1" :
+ hw->bus_caps.width == fm10k_bus_width_pcie_x4 ? "x4" :
+ "x8"),
+ (hw->bus_caps.speed == fm10k_bus_speed_2500 ? "2.5GT/s" :
+ hw->bus_caps.speed == fm10k_bus_speed_5000 ? "5.0GT/s" :
+ "8.0GT/s"));
+ dev_warn(dev,
+ "A slot with more lanes and/or higher speed is suggested.\n");
+}
+
+/**
+ * fm10k_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @ent: entry in fm10k_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ *
+ * fm10k_probe initializes an interface identified by a pci_dev structure.
+ * The OS initialization, configuring of the interface private structure,
+ * and a hardware reset occur.
+ **/
+static int fm10k_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ int err;
+ u64 dma_mask;
+
+ err = pci_enable_device_mem(pdev);
+ if (err)
+ return err;
+
+ /* By default fm10k only supports a 48 bit DMA mask */
+ dma_mask = DMA_BIT_MASK(48) | dma_get_required_mask(&pdev->dev);
+
+ if ((dma_mask <= DMA_BIT_MASK(32)) ||
+ dma_set_mask_and_coherent(&pdev->dev, dma_mask)) {
+ dma_mask &= DMA_BIT_MASK(32);
+
+ err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (err) {
+ err = dma_set_coherent_mask(&pdev->dev,
+ DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(&pdev->dev,
+ "No usable DMA configuration, aborting\n");
+ goto err_dma;
+ }
+ }
+ }
+
+ err = pci_request_selected_regions(pdev,
+ pci_select_bars(pdev,
+ IORESOURCE_MEM),
+ fm10k_driver_name);
+ if (err) {
+ dev_err(&pdev->dev,
+ "pci_request_selected_regions failed 0x%x\n", err);
+ goto err_pci_reg;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+ pci_save_state(pdev);
+
+ netdev = fm10k_alloc_netdev();
+ if (!netdev) {
+ err = -ENOMEM;
+ goto err_alloc_netdev;
+ }
+
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+
+ interface = netdev_priv(netdev);
+ pci_set_drvdata(pdev, interface);
+
+ interface->netdev = netdev;
+ interface->pdev = pdev;
+ hw = &interface->hw;
+
+ interface->uc_addr = ioremap(pci_resource_start(pdev, 0),
+ FM10K_UC_ADDR_SIZE);
+ if (!interface->uc_addr) {
+ err = -EIO;
+ goto err_ioremap;
+ }
+
+ err = fm10k_sw_init(interface, ent);
+ if (err)
+ goto err_sw_init;
+
+ /* enable debugfs support */
+ fm10k_dbg_intfc_init(interface);
+
+ err = fm10k_init_queueing_scheme(interface);
+ if (err)
+ goto err_sw_init;
+
+ err = fm10k_mbx_request_irq(interface);
+ if (err)
+ goto err_mbx_interrupt;
+
+ /* final check of hardware state before registering the interface */
+ err = fm10k_hw_ready(interface);
+ if (err)
+ goto err_register;
+
+ err = register_netdev(netdev);
+ if (err)
+ goto err_register;
+
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+
+ /* stop all the transmit queues from transmitting until link is up */
+ netif_tx_stop_all_queues(netdev);
+
+ /* Register PTP interface */
+ fm10k_ptp_register(interface);
+
+ /* print bus type/speed/width info */
+ dev_info(&pdev->dev, "(PCI Express:%s Width: %s Payload: %s)\n",
+ (hw->bus.speed == fm10k_bus_speed_8000 ? "8.0GT/s" :
+ hw->bus.speed == fm10k_bus_speed_5000 ? "5.0GT/s" :
+ hw->bus.speed == fm10k_bus_speed_2500 ? "2.5GT/s" :
+ "Unknown"),
+ (hw->bus.width == fm10k_bus_width_pcie_x8 ? "x8" :
+ hw->bus.width == fm10k_bus_width_pcie_x4 ? "x4" :
+ hw->bus.width == fm10k_bus_width_pcie_x1 ? "x1" :
+ "Unknown"),
+ (hw->bus.payload == fm10k_bus_payload_128 ? "128B" :
+ hw->bus.payload == fm10k_bus_payload_256 ? "256B" :
+ hw->bus.payload == fm10k_bus_payload_512 ? "512B" :
+ "Unknown"));
+
+ /* print warning for non-optimal configurations */
+ fm10k_slot_warn(interface);
+
+ /* enable SR-IOV after registering netdev to enforce PF/VF ordering */
+ fm10k_iov_configure(pdev, 0);
+
+ /* clear the service task disable bit to allow service task to start */
+ clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+
+ return 0;
+
+err_register:
+ fm10k_mbx_free_irq(interface);
+err_mbx_interrupt:
+ fm10k_clear_queueing_scheme(interface);
+err_sw_init:
+ if (interface->sw_addr)
+ iounmap(interface->sw_addr);
+ iounmap(interface->uc_addr);
+err_ioremap:
+ free_netdev(netdev);
+err_alloc_netdev:
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+err_pci_reg:
+err_dma:
+ pci_disable_device(pdev);
+ return err;
+}
+
+/**
+ * fm10k_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * fm10k_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device. This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void fm10k_remove(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+
+ set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+ cancel_work_sync(&interface->service_task);
+
+ /* free netdev, this may bounce the interrupts due to setup_tc */
+ if (netdev->reg_state == NETREG_REGISTERED)
+ unregister_netdev(netdev);
+
+ /* cleanup timestamp handling */
+ fm10k_ptp_unregister(interface);
+
+ /* release VFs */
+ fm10k_iov_disable(pdev);
+
+ /* disable mailbox interrupt */
+ fm10k_mbx_free_irq(interface);
+
+ /* free interrupts */
+ fm10k_clear_queueing_scheme(interface);
+
+ /* remove any debugfs interfaces */
+ fm10k_dbg_intfc_exit(interface);
+
+ if (interface->sw_addr)
+ iounmap(interface->sw_addr);
+ iounmap(interface->uc_addr);
+
+ free_netdev(netdev);
+
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+
+ pci_disable_pcie_error_reporting(pdev);
+
+ pci_disable_device(pdev);
+}
+
+#ifdef CONFIG_PM
+/**
+ * fm10k_resume - Restore device to pre-sleep state
+ * @pdev: PCI device information struct
+ *
+ * fm10k_resume is called after the system has powered back up from a sleep
+ * state and is ready to resume operation. This function is meant to restore
+ * the device back to its pre-sleep state.
+ **/
+static int fm10k_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ u32 err;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+
+ /* pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
+
+ err = pci_enable_device_mem(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "Cannot enable PCI device from suspend\n");
+ return err;
+ }
+ pci_set_master(pdev);
+
+ pci_wake_from_d3(pdev, false);
+
+ /* refresh hw_addr in case it was dropped */
+ hw->hw_addr = interface->uc_addr;
+
+ /* reset hardware to known state */
+ err = hw->mac.ops.init_hw(&interface->hw);
+ if (err)
+ return err;
+
+ /* reset statistics starting values */
+ hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ rtnl_lock();
+
+ err = fm10k_init_queueing_scheme(interface);
+ if (!err) {
+ fm10k_mbx_request_irq(interface);
+ if (netif_running(netdev))
+ err = fm10k_open(netdev);
+ }
+
+ rtnl_unlock();
+
+ if (err)
+ return err;
+
+ /* restore SR-IOV interface */
+ fm10k_iov_resume(pdev);
+
+ netif_device_attach(netdev);
+
+ return 0;
+}
+
+/**
+ * fm10k_suspend - Prepare the device for a system sleep state
+ * @pdev: PCI device information struct
+ *
+ * fm10k_suspend is meant to shutdown the device prior to the system entering
+ * a sleep state. The fm10k hardware does not support wake on lan so the
+ * driver simply needs to shut down the device so it is in a low power state.
+ **/
+static int fm10k_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ int err = 0;
+
+ netif_device_detach(netdev);
+
+ fm10k_iov_suspend(pdev);
+
+ rtnl_lock();
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ fm10k_clear_queueing_scheme(interface);
+
+ rtnl_unlock();
+
+ err = pci_save_state(pdev);
+ if (err)
+ return err;
+
+ pci_disable_device(pdev);
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+
+ return 0;
+}
+
+#endif /* CONFIG_PM */
+/**
+ * fm10k_io_error_detected - called when PCI error is detected
+ * @pdev: Pointer to PCI device
+ * @state: The current pci connection state
+ *
+ * This function is called after a PCI bus error affecting
+ * this device has been detected.
+ */
+static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+
+ netif_device_detach(netdev);
+
+ if (state == pci_channel_io_perm_failure)
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ if (netif_running(netdev))
+ fm10k_close(netdev);
+
+ fm10k_mbx_free_irq(interface);
+
+ pci_disable_device(pdev);
+
+ /* Request a slot reset. */
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+/**
+ * fm10k_io_slot_reset - called after the pci bus has been reset.
+ * @pdev: Pointer to PCI device
+ *
+ * Restart the card from scratch, as if from a cold-boot.
+ */
+static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ pci_ers_result_t result;
+
+ if (pci_enable_device_mem(pdev)) {
+ dev_err(&pdev->dev,
+ "Cannot re-enable PCI device after reset.\n");
+ result = PCI_ERS_RESULT_DISCONNECT;
+ } else {
+ pci_set_master(pdev);
+ pci_restore_state(pdev);
+
+ /* After second error pci->state_saved is false, this
+ * resets it so EEH doesn't break.
+ */
+ pci_save_state(pdev);
+
+ pci_wake_from_d3(pdev, false);
+
+ /* refresh hw_addr in case it was dropped */
+ interface->hw.hw_addr = interface->uc_addr;
+
+ interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+ fm10k_service_event_schedule(interface);
+
+ result = PCI_ERS_RESULT_RECOVERED;
+ }
+
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+
+ return result;
+}
+
+/**
+ * fm10k_io_resume - called when traffic can start flowing again.
+ * @pdev: Pointer to PCI device
+ *
+ * This callback is called when the error recovery driver tells us that
+ * it's OK to resume normal operation.
+ */
+static void fm10k_io_resume(struct pci_dev *pdev)
+{
+ struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+ struct net_device *netdev = interface->netdev;
+ struct fm10k_hw *hw = &interface->hw;
+ int err = 0;
+
+ /* reset hardware to known state */
+ hw->mac.ops.init_hw(&interface->hw);
+
+ /* reset statistics starting values */
+ hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+
+ /* reassociate interrupts */
+ fm10k_mbx_request_irq(interface);
+
+ /* reset clock */
+ fm10k_ts_reset(interface);
+
+ if (netif_running(netdev))
+ err = fm10k_open(netdev);
+
+ /* final check of hardware state before registering the interface */
+ err = err ? : fm10k_hw_ready(interface);
+
+ if (!err)
+ netif_device_attach(netdev);
+}
+
+static const struct pci_error_handlers fm10k_err_handler = {
+ .error_detected = fm10k_io_error_detected,
+ .slot_reset = fm10k_io_slot_reset,
+ .resume = fm10k_io_resume,
+};
+
+static struct pci_driver fm10k_driver = {
+ .name = fm10k_driver_name,
+ .id_table = fm10k_pci_tbl,
+ .probe = fm10k_probe,
+ .remove = fm10k_remove,
+#ifdef CONFIG_PM
+ .suspend = fm10k_suspend,
+ .resume = fm10k_resume,
+#endif
+ .sriov_configure = fm10k_iov_configure,
+ .err_handler = &fm10k_err_handler
+};
+
+/**
+ * fm10k_register_pci_driver - register driver interface
+ *
+ * This function is called on module load in order to register the driver.
+ **/
+int fm10k_register_pci_driver(void)
+{
+ return pci_register_driver(&fm10k_driver);
+}
+
+/**
+ * fm10k_unregister_pci_driver - unregister driver interface
+ *
+ * This function is called on module unload in order to remove the driver.
+ **/
+void fm10k_unregister_pci_driver(void)
+{
+ pci_unregister_driver(&fm10k_driver);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
new file mode 100644
index 0000000..275423d
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
@@ -0,0 +1,1880 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_pf.h"
+#include "fm10k_vf.h"
+
+/**
+ * fm10k_reset_hw_pf - PF hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * This function should return the hardware to a state similar to the
+ * one it is in after being powered on.
+ **/
+static s32 fm10k_reset_hw_pf(struct fm10k_hw *hw)
+{
+ s32 err;
+ u32 reg;
+ u16 i;
+
+ /* Disable interrupts */
+ fm10k_write_reg(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL));
+
+ /* Lock ITR2 reg 0 into itself and disable interrupt moderation */
+ fm10k_write_reg(hw, FM10K_ITR2(0), 0);
+ fm10k_write_reg(hw, FM10K_INT_CTRL, 0);
+
+ /* We assume here Tx and Rx queue 0 are owned by the PF */
+
+ /* Shut off VF access to their queues forcing them to queue 0 */
+ for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(i), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(i), 0);
+ }
+
+ /* shut down all rings */
+ err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES);
+ if (err)
+ return err;
+
+ /* Verify that DMA is no longer active */
+ reg = fm10k_read_reg(hw, FM10K_DMA_CTRL);
+ if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE))
+ return FM10K_ERR_DMA_PENDING;
+
+ /* Initiate data path reset */
+ reg |= FM10K_DMA_CTRL_DATAPATH_RESET;
+ fm10k_write_reg(hw, FM10K_DMA_CTRL, reg);
+
+ /* Flush write and allow 100us for reset to complete */
+ fm10k_write_flush(hw);
+ udelay(FM10K_RESET_TIMEOUT);
+
+ /* Verify we made it out of reset */
+ reg = fm10k_read_reg(hw, FM10K_IP);
+ if (!(reg & FM10K_IP_NOTINRESET))
+ err = FM10K_ERR_RESET_FAILED;
+
+ return err;
+}
+
+/**
+ * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the ARI hierarchy bit to determine whether ARI is supported or not.
+ **/
+static bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw)
+{
+ u16 sriov_ctrl = fm10k_read_pci_cfg_word(hw, FM10K_PCIE_SRIOV_CTRL);
+
+ return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI);
+}
+
+/**
+ * fm10k_init_hw_pf - PF hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ **/
+static s32 fm10k_init_hw_pf(struct fm10k_hw *hw)
+{
+ u32 dma_ctrl, txqctl;
+ u16 i;
+
+ /* Establish default VSI as valid */
+ fm10k_write_reg(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0);
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(fm10k_dglort_default),
+ FM10K_DGLORTMAP_ANY);
+
+ /* Invalidate all other GLORT entries */
+ for (i = 1; i < FM10K_DGLORT_COUNT; i++)
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE);
+
+ /* reset ITR2(0) to point to itself */
+ fm10k_write_reg(hw, FM10K_ITR2(0), 0);
+
+ /* reset VF ITR2(0) to point to 0 to avoid PF registers */
+ fm10k_write_reg(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0);
+
+ /* loop through all PF ITR2 registers pointing them to the previous */
+ for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++)
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - 1);
+
+ /* Enable interrupt moderator if not already enabled */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR);
+
+ /* compute the default txqctl configuration */
+ txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW |
+ (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT);
+
+ for (i = 0; i < FM10K_MAX_QUEUES; i++) {
+ /* configure rings for 256 Queue / 32 Descriptor cache mode */
+ fm10k_write_reg(hw, FM10K_TQDLOC(i),
+ (i * FM10K_TQDLOC_BASE_32_DESC) |
+ FM10K_TQDLOC_SIZE_32_DESC);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), txqctl);
+
+ /* configure rings to provide TPH processing hints */
+ fm10k_write_reg(hw, FM10K_TPH_TXCTRL(i),
+ FM10K_TPH_TXCTRL_DESC_TPHEN |
+ FM10K_TPH_TXCTRL_DESC_RROEN |
+ FM10K_TPH_TXCTRL_DESC_WROEN |
+ FM10K_TPH_TXCTRL_DATA_RROEN);
+ fm10k_write_reg(hw, FM10K_TPH_RXCTRL(i),
+ FM10K_TPH_RXCTRL_DESC_TPHEN |
+ FM10K_TPH_RXCTRL_DESC_RROEN |
+ FM10K_TPH_RXCTRL_DATA_WROEN |
+ FM10K_TPH_RXCTRL_HDR_WROEN);
+ }
+
+ /* set max hold interval to align with 1.024 usec in all modes */
+ switch (hw->bus.speed) {
+ case fm10k_bus_speed_2500:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1;
+ break;
+ case fm10k_bus_speed_5000:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2;
+ break;
+ case fm10k_bus_speed_8000:
+ dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3;
+ break;
+ default:
+ dma_ctrl = 0;
+ break;
+ }
+
+ /* Configure TSO flags */
+ fm10k_write_reg(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW);
+ fm10k_write_reg(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI);
+
+ /* Enable DMA engine
+ * Set Rx Descriptor size to 32
+ * Set Minimum MSS to 64
+ * Set Maximum number of Rx queues to 256 / 32 Descriptor
+ */
+ dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE |
+ FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 |
+ FM10K_DMA_CTRL_32_DESC;
+
+ fm10k_write_reg(hw, FM10K_DMA_CTRL, dma_ctrl);
+
+ /* record maximum queue count, we limit ourselves to 128 */
+ hw->mac.max_queues = FM10K_MAX_QUEUES_PF;
+
+ /* We support either 64 VFs or 7 VFs depending on if we have ARI */
+ hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 64 : 7;
+
+ return 0;
+}
+
+/**
+ * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the PCIe bus info to confirm whether or not this slot can support
+ * the necessary bandwidth for this device.
+ **/
+static bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw)
+{
+ return (hw->bus.speed == hw->bus_caps.speed) &&
+ (hw->bus.width == hw->bus_caps.width);
+}
+
+/**
+ * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vid: VLAN ID to add to table
+ * @vsi: Index indicating VF ID or PF ID in table
+ * @set: Indicates if this is a set or clear operation
+ *
+ * This function adds or removes the corresponding VLAN ID from the VLAN
+ * filter table for the corresponding function. In addition to the
+ * standard set/clear that supports one bit a multi-bit write is
+ * supported to set 64 bits at a time.
+ **/
+static s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
+{
+ u32 vlan_table, reg, mask, bit, len;
+
+ /* verify the VSI index is valid */
+ if (vsi > FM10K_VLAN_TABLE_VSI_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* VLAN multi-bit write:
+ * The multi-bit write has several parts to it.
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | RSVD0 | Length |C|RSVD0| VLAN ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * VLAN ID: VLAN starting value
+ * RSVD0: Reserved section, must be 0
+ * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message)
+ * Length: Number of times to repeat the bit being set
+ */
+ len = vid >> 16;
+ vid = (vid << 17) >> 17;
+
+ /* verify the reserved 0 fields are 0 */
+ if (len >= FM10K_VLAN_TABLE_VID_MAX ||
+ vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* Loop through the table updating all required VLANs */
+ for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32;
+ len < FM10K_VLAN_TABLE_VID_MAX;
+ len -= 32 - bit, reg++, bit = 0) {
+ /* record the initial state of the register */
+ vlan_table = fm10k_read_reg(hw, reg);
+
+ /* truncate mask if we are at the start or end of the run */
+ mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit;
+
+ /* make necessary modifications to the register */
+ mask &= set ? ~vlan_table : vlan_table;
+ if (mask)
+ fm10k_write_reg(hw, reg, vlan_table ^ mask);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_read_mac_addr_pf - Read device MAC address
+ * @hw: pointer to the HW structure
+ *
+ * Reads the device MAC address from the SM_AREA and stores the value.
+ **/
+static s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw)
+{
+ u8 perm_addr[ETH_ALEN];
+ u32 serial_num;
+ int i;
+
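+ /* the MAC address is spread across two SM_AREA words: bytes 0-2 sit in
+ * the upper 24 bits of word 1 and bytes 3-5 in the lower 24 bits of
+ * word 0, with the unused byte of each word reading as all 1's
+ */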
+ serial_num = fm10k_read_reg(hw, FM10K_SM_AREA(1));
+
+ /* last byte should be all 1's */
+ if ((~serial_num) << 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[0] = (u8)(serial_num >> 24);
+ perm_addr[1] = (u8)(serial_num >> 16);
+ perm_addr[2] = (u8)(serial_num >> 8);
+
+ serial_num = fm10k_read_reg(hw, FM10K_SM_AREA(0));
+
+ /* first byte should be all 1's */
+ if ((~serial_num) >> 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[3] = (u8)(serial_num >> 16);
+ perm_addr[4] = (u8)(serial_num >> 8);
+ perm_addr[5] = (u8)(serial_num);
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ hw->mac.perm_addr[i] = perm_addr[i];
+ hw->mac.addr[i] = perm_addr[i];
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_glort_valid_pf - Validate that the provided glort is valid
+ * @hw: pointer to the HW structure
+ * @glort: base glort to be validated
+ *
+ * This function will return an error if the provided glort is invalid
+ **/
+bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort)
+{
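+ /* a glort is ours if, once masked with the map's mask field, it
+ * matches the base value programmed into dglort_map
+ */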
+ glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT;
+
+ return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE);
+}
+
+/**
+ * fm10k_update_xc_addr_pf - Update device addresses
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure
+ *
+ * This function generates a message to the Switch API requesting
+ * that the given logical port add/remove the given L2 MAC/VLAN address.
+ **/
+static s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ struct fm10k_mac_update mac_update;
+ u32 msg[5];
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* drop upper 4 bits of VLAN ID */
+ vid = (vid << 4) >> 4;
+
+ /* record fields */
+ mac_update.mac_lower = cpu_to_le32(((u32)mac[2] << 24) |
+ ((u32)mac[3] << 16) |
+ ((u32)mac[4] << 8) |
+ ((u32)mac[5]));
+ mac_update.mac_upper = cpu_to_le16(((u32)mac[0] << 8) |
+ ((u32)mac[1]));
+ mac_update.vlan = cpu_to_le16(vid);
+ mac_update.glort = cpu_to_le16(glort);
+ mac_update.action = add ? 0 : 1;
+ mac_update.flags = flags;
+
+ /* populate mac_update fields */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE);
+ fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE,
+ &mac_update, sizeof(mac_update));
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_uc_addr_pf - Update device unicast address
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure
+ *
+ * This function is used to add or remove unicast addresses for
+ * the PF.
+ **/
+static s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ /* verify MAC address is valid */
+ if (!is_valid_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags);
+}
+
+/**
+ * fm10k_update_mc_addr_pf - Update device multicast addresses
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ *
+ * This function is used to add or remove multicast MAC addresses for
+ * the PF.
+ **/
+static s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add)
+{
+ /* verify multicast address is valid */
+ if (!is_multicast_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0);
+}
+
+/**
+ * fm10k_update_xcast_mode_pf - Request update of multicast mode
+ * @hw: pointer to hardware structure
+ * @glort: base resource tag for this request
+ * @mode: integer value indicating mode being requested
+ *
+ * This function will attempt to request a higher mode for the port
+ * so that it can enable either multicast, multicast promiscuous, or
+ * promiscuous mode of operation.
+ **/
+static s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3], xcast_mode;
+
+ if (mode > FM10K_XCAST_MODE_NONE)
+ return FM10K_ERR_PARAM;
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* write xcast mode as a single u32 value,
+ * lower 16 bits: glort
+ * upper 16 bits: mode
+ */
+ xcast_mode = ((u32)mode << 16) | glort;
+
+ /* generate message requesting to change xcast mode */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES);
+ fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_int_moderator_pf - Update interrupt moderator linked list
+ * @hw: pointer to hardware structure
+ *
+ * This function walks through the MSI-X vector table to determine the
+ * number of active interrupts and based on that information updates the
+ * interrupt moderator linked list.
+ **/
+static void fm10k_update_int_moderator_pf(struct fm10k_hw *hw)
+{
+ u32 i;
+
+ /* Disable interrupt moderator */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, 0);
+
+ /* loop through PF from last to first looking for enabled vectors */
+ for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) {
+ if (!fm10k_read_reg(hw, FM10K_MSIX_VECTOR_MASK(i)))
+ break;
+ }
+
+ /* always reset VFITR2[0] to point to last enabled PF vector */
+ fm10k_write_reg(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i);
+
+ /* reset ITR2[0] to point to last enabled PF vector */
+ if (!hw->iov.num_vfs)
+ fm10k_write_reg(hw, FM10K_ITR2(0), i);
+
+ /* Enable interrupt moderator */
+ fm10k_write_reg(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR);
+}
+
+/**
+ * fm10k_update_lport_state_pf - Notify the switch of a change in port state
+ * @hw: pointer to the HW structure
+ * @glort: base resource tag for this request
+ * @count: number of logical ports being updated
+ * @enable: boolean value indicating enable or disable
+ *
+ * This function is used to add/remove a logical port from the switch.
+ **/
+static s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort,
+ u16 count, bool enable)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3], lport_msg;
+
+ /* do nothing if we are being asked to create or destroy 0 ports */
+ if (!count)
+ return 0;
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* construct the lport message from the 2 pieces of data we have */
+ lport_msg = ((u32)count << 16) | glort;
+
+ /* generate lport create/delete message */
+ fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE :
+ FM10K_PF_MSG_ID_LPORT_DELETE);
+ fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues
+ * @hw: pointer to hardware structure
+ * @dglort: pointer to dglort configuration structure
+ *
+ * Reads the configuration structure contained in dglort_cfg and uses
+ * that information to then populate a DGLORTMAP/DEC entry and the queues
+ * to which it has been assigned.
+ **/
+static s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw,
+ struct fm10k_dglort_cfg *dglort)
+{
+ u16 glort, queue_count, vsi_count, pc_count;
+ u16 vsi, queue, pc, q_idx;
+ u32 txqctl, dglortdec, dglortmap;
+
+ /* verify the dglort pointer */
+ if (!dglort)
+ return FM10K_ERR_PARAM;
+
+ /* verify the dglort values */
+ if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) ||
+ (dglort->vsi_l > 6) || (dglort->vsi_b > 64) ||
+ (dglort->queue_l > 8) || (dglort->queue_b >= 256))
+ return FM10K_ERR_PARAM;
+
+ /* determine count of VSIs and queues */
+ queue_count = 1 << (dglort->rss_l + dglort->pc_l);
+ vsi_count = 1 << (dglort->vsi_l + dglort->queue_l);
+ glort = dglort->glort;
+ q_idx = dglort->queue_b;
+
+ /* configure SGLORT for queues */
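+ /* each VSI receives its own glort; the queues belonging to that VSI
+ * are tagged with the glort via the Tx/Rx SGLORT registers below
+ */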
+ for (vsi = 0; vsi < vsi_count; vsi++, glort++) {
+ for (queue = 0; queue < queue_count; queue++, q_idx++) {
+ if (q_idx >= FM10K_MAX_QUEUES)
+ break;
+
+ fm10k_write_reg(hw, FM10K_TX_SGLORT(q_idx), glort);
+ fm10k_write_reg(hw, FM10K_RX_SGLORT(q_idx), glort);
+ }
+ }
+
+ /* determine count of PCs and queues */
+ queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l);
+ pc_count = 1 << dglort->pc_l;
+
+ /* configure PC for Tx queues */
+ for (pc = 0; pc < pc_count; pc++) {
+ q_idx = pc + dglort->queue_b;
+ for (queue = 0; queue < queue_count; queue++) {
+ if (q_idx >= FM10K_MAX_QUEUES)
+ break;
+
+ txqctl = fm10k_read_reg(hw, FM10K_TXQCTL(q_idx));
+ txqctl &= ~FM10K_TXQCTL_PC_MASK;
+ txqctl |= pc << FM10K_TXQCTL_PC_SHIFT;
+ fm10k_write_reg(hw, FM10K_TXQCTL(q_idx), txqctl);
+
+ q_idx += pc_count;
+ }
+ }
+
+ /* configure DGLORTDEC */
+ dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) |
+ ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) |
+ ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) |
+ ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) |
+ ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) |
+ ((u32)(dglort->queue_l));
+ if (dglort->inner_rss)
+ dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE;
+
+ /* configure DGLORTMAP */
+ dglortmap = (dglort->idx == fm10k_dglort_default) ?
+ FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO;
+ dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l;
+ dglortmap |= dglort->glort;
+
+ /* write values to hardware */
+ fm10k_write_reg(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec);
+ fm10k_write_reg(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap);
+
+ return 0;
+}
+
+u16 fm10k_queues_per_pool(struct fm10k_hw *hw)
+{
+ u16 num_pools = hw->iov.num_pools;
+
+ return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ?
+ 8 : FM10K_MAX_QUEUES_POOL;
+}
+
+u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 num_vfs = hw->iov.num_vfs;
+ u16 vf_q_idx = FM10K_MAX_QUEUES;
+
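+ /* VF pools are carved from the top of the queue space working
+ * downward, so the last VF ends at FM10K_MAX_QUEUES and VF 0 sits
+ * lowest
+ */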
+ vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx);
+
+ return vf_q_idx;
+}
+
+static u16 fm10k_vectors_per_pool(struct fm10k_hw *hw)
+{
+ u16 num_pools = hw->iov.num_pools;
+
+ return (num_pools > 32) ? 8 : (num_pools > 16) ? 16 :
+ FM10K_MAX_VECTORS_POOL;
+}
+
+static u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 vf_v_idx = FM10K_MAX_VECTORS_PF;
+
+ vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx;
+
+ return vf_v_idx;
+}
+
+/**
+ * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization
+ * @hw: pointer to the HW structure
+ * @num_vfs: number of VFs to be allocated
+ * @num_pools: number of virtualization pools to be allocated
+ *
+ * Allocates queues and traffic classes to virtualization entities to prepare
+ * the PF for SR-IOV and VMDq
+ **/
+static s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs,
+ u16 num_pools)
+{
+ u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx;
+ u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT;
+ int i, j;
+
+ /* hardware only supports up to 64 pools */
+ if (num_pools > 64)
+ return FM10K_ERR_PARAM;
+
+ /* the number of VFs cannot exceed the number of pools */
+ if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs))
+ return FM10K_ERR_PARAM;
+
+ /* record number of virtualization entities */
+ hw->iov.num_vfs = num_vfs;
+ hw->iov.num_pools = num_pools;
+
+ /* determine qmap offsets and counts */
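+ /* with more than 8 VFs each pool gets a 32-entry stride in the queue
+ * map; with 8 or fewer, each pool spans a full 256 entries
+ */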
+ qmap_stride = (num_vfs > 8) ? 32 : 256;
+ qpp = fm10k_queues_per_pool(hw);
+ vpp = fm10k_vectors_per_pool(hw);
+
+ /* calculate starting index for queues */
+ vf_q_idx = fm10k_vf_queue_index(hw, 0);
+ qmap_idx = 0;
+
+ /* establish TCs with -1 credits and no quanta to prevent transmit */
+ for (i = 0; i < num_vfs; i++) {
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(i), 0);
+ fm10k_write_reg(hw, FM10K_TC_RATE(i), 0);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(i),
+ FM10K_TC_CREDIT_CREDIT_MASK);
+ }
+
+ /* zero out all mbmem registers */
+ for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;)
+ fm10k_write_reg(hw, FM10K_MBMEM(i), 0);
+
+ /* clear event notification of VF FLR */
+ fm10k_write_reg(hw, FM10K_PFVFLREC(0), ~0);
+ fm10k_write_reg(hw, FM10K_PFVFLREC(1), ~0);
+
+ /* loop through unallocated rings assigning them back to PF */
+ for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) {
+ fm10k_write_reg(hw, FM10K_TXDCTL(i), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid);
+ fm10k_write_reg(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF);
+ }
+
+ /* PF should have already updated VFITR2[0] */
+
+ /* update all ITR registers to flow to VFITR2[0] */
+ for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) {
+ if (!(i & (vpp - 1)))
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - vpp);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(i), i - 1);
+ }
+
+ /* update PF ITR2[0] to reference the last vector */
+ fm10k_write_reg(hw, FM10K_ITR2(0),
+ fm10k_vf_vector_index(hw, num_vfs - 1));
+
+ /* loop through rings populating rings and TCs */
+ for (i = 0; i < num_vfs; i++) {
+ /* record index for VF queue 0 for use in end of loop */
+ vf_q_idx0 = vf_q_idx;
+
+ for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) {
+ /* assign VF and locked TC to queues */
+ fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx),
+ (i << FM10K_TXQCTL_TC_SHIFT) | i |
+ FM10K_TXQCTL_VF | vid);
+ fm10k_write_reg(hw, FM10K_RXDCTL(vf_q_idx),
+ FM10K_RXDCTL_WRITE_BACK_MIN_DELAY |
+ FM10K_RXDCTL_DROP_ON_EMPTY);
+ fm10k_write_reg(hw, FM10K_RXQCTL(vf_q_idx),
+ FM10K_RXQCTL_VF |
+ (i << FM10K_RXQCTL_VF_SHIFT));
+
+ /* map queue pair to VF */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), vf_q_idx);
+ }
+
+ /* repeat the first ring for all of the remaining VF rings */
+ for (; j < qmap_stride; j++, qmap_idx++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0);
+ }
+ }
+
+ /* loop through remaining indexes assigning all to queue 0 */
+ while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) {
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx), 0);
+ qmap_idx++;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_configure_tc_pf - Configure the shaping group for VF
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ * @rate: Rate indicated in Mb/s
+ *
+ * Configures the TC for a given VF to allow only up to a given number
+ * of Mb/s of outgoing Tx throughput.
+ **/
+static s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate)
+{
+ /* configure defaults */
+ u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3;
+ u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* set interval to align with 4.096 usec in all modes */
+ switch (hw->bus.speed) {
+ case fm10k_bus_speed_2500:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN1;
+ break;
+ case fm10k_bus_speed_5000:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN2;
+ break;
+ default:
+ break;
+ }
+
+ if (rate) {
+ if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN)
+ return FM10K_ERR_PARAM;
+
+ /* The quanta is measured in Bytes per 4.096 or 8.192 usec
+ * The rate is provided in Mbits per second
+ * To translate from rate to quanta we need to multiply the
+ * rate by 8.192 usec and divide by 8 bits/byte. To avoid
+ * dealing with floating point we can round the values up
+ * to the nearest whole number ratio which gives us 128 / 125.
+ */
+ tc_rate = (rate * 128) / 125;
+
+ /* try to keep the rate limiting accurate by increasing
+ * the number of credits and interval for rates less than 4Gb/s
+ */
+ if (rate < 4000)
+ interval <<= 1;
+ else
+ tc_rate >>= 1;
+ }
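+ /* as a worked example, a 1000 Mb/s request yields 1000 * 128 / 125 =
+ * 1024 bytes per interval with the interval doubled to 8.192 usec,
+ * while a 10000 Mb/s request yields 10240 / 2 = 5120 bytes per
+ * 4.096 usec interval
+ */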
+
+ /* update rate limiter with new values */
+ fm10k_write_reg(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval);
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ *
+ * Update the interrupt moderator linked list to include any MSI-X
+ * interrupts which the VF has enabled in the MSI-X vector table.
+ **/
+static s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx)
+{
+ u16 vf_v_idx, vf_v_limit, i;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine vector offset and count */
+ vf_v_idx = fm10k_vf_vector_index(hw, vf_idx);
+ vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw);
+
+ /* search for first vector that is not masked */
+ for (i = vf_v_limit - 1; i > vf_v_idx; i--) {
+ if (!fm10k_read_reg(hw, FM10K_MSIX_VECTOR_MASK(i)))
+ break;
+ }
+
+ /* reset linked list so it now includes our active vectors */
+ if (vf_idx == (hw->iov.num_vfs - 1))
+ fm10k_write_reg(hw, FM10K_ITR2(0), i);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_limit), i);
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF
+ * @hw: pointer to the HW structure
+ * @vf_info: pointer to VF information structure
+ *
+ * Assign a MAC address and default VLAN to a VF and notify it of the update
+ **/
+static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i;
+ u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0;
+ s32 err = 0;
+ u16 vf_idx, vf_vid;
+
+ /* verify vf is in range */
+ if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* determine qmap offsets and counts */
+ qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256;
+ queues_per_pool = fm10k_queues_per_pool(hw);
+
+ /* calculate starting index for queues */
+ vf_idx = vf_info->vf_idx;
+ vf_q_idx = fm10k_vf_queue_index(hw, vf_idx);
+ qmap_idx = qmap_stride * vf_idx;
+
+ /* MAP Tx queue back to 0 temporarily, and disable it */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
+ fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
+
+ /* determine correct default VLAN ID */
+ if (vf_info->pf_vid)
+ vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR;
+ else
+ vf_vid = vf_info->sw_vid;
+
+ /* generate MAC_ADDR request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC,
+ vf_info->mac, vf_vid);
+
+ /* load onto outgoing mailbox, ignore any errors on enqueue */
+ if (vf_info->mbx.ops.enqueue_tx)
+ vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+
+ /* verify the ring has been disabled before modifying base address registers */
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
+ for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) {
+ /* limit ourselves to a 1ms timeout */
+ if (timeout == 10) {
+ err = FM10K_ERR_DMA_PENDING;
+ goto err_out;
+ }
+
+ usleep_range(100, 200);
+ txdctl = fm10k_read_reg(hw, FM10K_TXDCTL(vf_q_idx));
+ }
+
+ /* Update base address registers to contain MAC address */
+ if (is_valid_ether_addr(vf_info->mac)) {
+ tdbal = (((u32)vf_info->mac[3]) << 24) |
+ (((u32)vf_info->mac[4]) << 16) |
+ (((u32)vf_info->mac[5]) << 8);
+
+ tdbah = (((u32)0xFF) << 24) |
+ (((u32)vf_info->mac[0]) << 16) |
+ (((u32)vf_info->mac[1]) << 8) |
+ ((u32)vf_info->mac[2]);
+ }
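+ /* the MAC address is encoded into the base address registers as
+ * TDBAH = FF:mac[0]:mac[1]:mac[2] and TDBAL = mac[3]:mac[4]:mac[5]:00,
+ * presumably so the VF can recover its assigned address from queue 0
+ */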
+
+ /* Record the base address into queue 0 */
+ fm10k_write_reg(hw, FM10K_TDBAL(vf_q_idx), tdbal);
+ fm10k_write_reg(hw, FM10K_TDBAH(vf_q_idx), tdbah);
+
+err_out:
+ /* configure Queue control register */
+ txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) &
+ FM10K_TXQCTL_VID_MASK;
+ txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
+ FM10K_TXQCTL_VF | vf_idx;
+
+ /* assign VID */
+ for (i = 0; i < queues_per_pool; i++)
+ fm10k_write_reg(hw, FM10K_TXQCTL(vf_q_idx + i), txqctl);
+
+ /* restore the queue back to VF ownership */
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
+ return err;
+}
+
+/**
+ * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF
+ * @hw: pointer to the HW structure
+ * @vf_info: pointer to VF information structure
+ *
+ * Reassign the interrupts and queues to a VF following an FLR
+ **/
+static s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx;
+ u32 tdbal = 0, tdbah = 0, txqctl, rxqctl;
+ u16 vf_v_idx, vf_v_limit, vf_vid;
+ u8 vf_idx = vf_info->vf_idx;
+ int i;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* clear event notification of VF FLR */
+ fm10k_write_reg(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32));
+
+ /* force timeout and then disconnect the mailbox */
+ vf_info->mbx.timeout = 0;
+ if (vf_info->mbx.ops.disconnect)
+ vf_info->mbx.ops.disconnect(hw, &vf_info->mbx);
+
+ /* determine vector offset and count */
+ vf_v_idx = fm10k_vf_vector_index(hw, vf_idx);
+ vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw);
+
+ /* determine qmap offsets and counts */
+ qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256;
+ queues_per_pool = fm10k_queues_per_pool(hw);
+ qmap_idx = qmap_stride * vf_idx;
+
+ /* make all the queues inaccessible to the VF */
+ for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) {
+ fm10k_write_reg(hw, FM10K_TQMAP(i), 0);
+ fm10k_write_reg(hw, FM10K_RQMAP(i), 0);
+ }
+
+ /* calculate starting index for queues */
+ vf_q_idx = fm10k_vf_queue_index(hw, vf_idx);
+
+ /* determine correct default VLAN ID */
+ if (vf_info->pf_vid)
+ vf_vid = vf_info->pf_vid;
+ else
+ vf_vid = vf_info->sw_vid;
+
+ /* configure Queue control register */
+ txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) |
+ (vf_idx << FM10K_TXQCTL_TC_SHIFT) |
+ FM10K_TXQCTL_VF | vf_idx;
+ rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT);
+
+ /* stop further DMA and reset queue ownership back to VF */
+ for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) {
+ fm10k_write_reg(hw, FM10K_TXDCTL(i), 0);
+ fm10k_write_reg(hw, FM10K_TXQCTL(i), txqctl);
+ fm10k_write_reg(hw, FM10K_RXDCTL(i),
+ FM10K_RXDCTL_WRITE_BACK_MIN_DELAY |
+ FM10K_RXDCTL_DROP_ON_EMPTY);
+ fm10k_write_reg(hw, FM10K_RXQCTL(i), rxqctl);
+ }
+
+ /* reset TC with -1 credits and no quanta to prevent transmit */
+ fm10k_write_reg(hw, FM10K_TC_MAXCREDIT(vf_idx), 0);
+ fm10k_write_reg(hw, FM10K_TC_RATE(vf_idx), 0);
+ fm10k_write_reg(hw, FM10K_TC_CREDIT(vf_idx),
+ FM10K_TC_CREDIT_CREDIT_MASK);
+
+ /* update our first entry in the table based on previous VF */
+ if (!vf_idx)
+ hw->mac.ops.update_int_moderator(hw);
+ else
+ hw->iov.ops.assign_int_moderator(hw, vf_idx - 1);
+
+ /* reset linked list so it now includes our active vectors */
+ if (vf_idx == (hw->iov.num_vfs - 1))
+ fm10k_write_reg(hw, FM10K_ITR2(0), vf_v_idx);
+ else
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_limit), vf_v_idx);
+
+ /* link remaining vectors so that next points to previous */
+ for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++)
+ fm10k_write_reg(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1);
+
+ /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */
+ for (i = FM10K_VFMBMEM_LEN; i--;)
+ fm10k_write_reg(hw, FM10K_MBMEM_VF(vf_idx, i), 0);
+ for (i = FM10K_VLAN_TABLE_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0);
+ for (i = FM10K_RETA_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_RETA(vf_info->vsi, i), 0);
+ for (i = FM10K_RSSRK_SIZE; i--;)
+ fm10k_write_reg(hw, FM10K_RSSRK(vf_info->vsi, i), 0);
+ fm10k_write_reg(hw, FM10K_MRQC(vf_info->vsi), 0);
+
+ /* Update base address registers to contain MAC address */
+ if (is_valid_ether_addr(vf_info->mac)) {
+ tdbal = (((u32)vf_info->mac[3]) << 24) |
+ (((u32)vf_info->mac[4]) << 16) |
+ (((u32)vf_info->mac[5]) << 8);
+ tdbah = (((u32)0xFF) << 24) |
+ (((u32)vf_info->mac[0]) << 16) |
+ (((u32)vf_info->mac[1]) << 8) |
+ ((u32)vf_info->mac[2]);
+ }
+
+ /* map queue pairs back to VF from last to first */
+ for (i = queues_per_pool; i--;) {
+ fm10k_write_reg(hw, FM10K_TDBAL(vf_q_idx + i), tdbal);
+ fm10k_write_reg(hw, FM10K_TDBAH(vf_q_idx + i), tdbah);
+ fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i);
+ fm10k_write_reg(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF
+ * @hw: pointer to hardware structure
+ * @vf_info: pointer to VF information structure
+ * @lport_idx: Logical port offset from the hardware glort
+ * @flags: Set of capability flags to extend port beyond basic functionality
+ *
+ * This function allows enabling a VF port by assigning it a GLORT and
+ * setting the flags so that it can enable an Rx mode.
+ **/
+static s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info,
+ u16 lport_idx, u8 flags)
+{
+ u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE;
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE;
+ vf_info->glort = glort;
+
+ return 0;
+}
+
+/**
+ * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF
+ * @hw: pointer to hardware structure
+ * @vf_info: pointer to VF information structure
+ *
+ * This function disables a VF port by stripping it of a GLORT and
+ * setting the flags so that it cannot enable any Rx mode.
+ **/
+static void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info)
+{
+ u32 msg[1];
+
+ /* need to disable the port if it is already enabled */
+ if (FM10K_VF_FLAG_ENABLED(vf_info)) {
+ /* notify switch that this port has been disabled */
+ fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false);
+
+ /* generate port state response to notify VF it is not ready */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+ }
+
+ /* clear flags and glort if it exists */
+ vf_info->vf_flags = 0;
+ vf_info->glort = 0;
+}
+
+/**
+ * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs
+ * @hw: pointer to hardware structure
+ * @q: stats for all queues of a VF
+ * @vf_idx: index of VF
+ *
+ * This function collects queue stats for VFs.
+ **/
+static void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u16 vf_idx)
+{
+ u32 idx, qpp;
+
+ /* get stats for all of the queues */
+ qpp = fm10k_queues_per_pool(hw);
+ idx = fm10k_vf_queue_index(hw, vf_idx);
+ fm10k_update_hw_stats_q(hw, q, idx, qpp);
+}
+
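+/**
+ * fm10k_iov_report_timestamp_pf - Report Tx timestamp to a VF
+ * @hw: pointer to the HW structure
+ * @vf_info: pointer to VF information structure
+ * @timestamp: 64 bit timestamp value to report
+ *
+ * This function builds a 1588 timestamp message and places it on the
+ * outgoing mailbox for the VF.
+ **/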
+static s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info,
+ u64 timestamp)
+{
+ u32 msg[4];
+
+ /* generate 1588 timestamp message to forward to the VF */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588);
+ fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp);
+
+ return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+}
+
+/**
+ * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MSI-X requests from the VF. The
+ * assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ u8 vf_idx = vf_info->vf_idx;
+
+ return hw->iov.ops.assign_int_moderator(hw, vf_idx);
+}
+
+/**
+ * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MAC/VLAN requests from the VF.
+ * The assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ int err = 0;
+ u8 mac[ETH_ALEN];
+ u32 *result;
+ u16 vlan;
+ u32 vid;
+
+ /* we shouldn't be updating rules on a disabled interface */
+ if (!FM10K_VF_FLAG_ENABLED(vf_info))
+ err = FM10K_ERR_PARAM;
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) {
+ result = results[FM10K_MAC_VLAN_MSG_VLAN];
+
+ /* record VLAN id requested */
+ err = fm10k_tlv_attr_get_u32(result, &vid);
+ if (err)
+ return err;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vid || (vid == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vid |= vf_info->pf_vid;
+ else
+ vid |= vf_info->sw_vid;
+ } else if (vid != vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* update VSI info for VF in regards to VLAN table */
+ err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi,
+ !(vid & FM10K_VLAN_CLEAR));
+ }
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) {
+ result = results[FM10K_MAC_VLAN_MSG_MAC];
+
+ /* record unicast MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
+ if (err)
+ return err;
+
+ /* block attempts to set MAC for a locked device */
+ if (is_valid_ether_addr(vf_info->mac) &&
+ memcmp(mac, vf_info->mac, ETH_ALEN))
+ return FM10K_ERR_PARAM;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vlan || (vlan == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vlan |= vf_info->pf_vid;
+ else
+ vlan |= vf_info->sw_vid;
+ } else if (vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* notify switch of request for new unicast address */
+ err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan,
+ !(vlan & FM10K_VLAN_CLEAR), 0);
+ }
+
+ if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) {
+ result = results[FM10K_MAC_VLAN_MSG_MULTICAST];
+
+ /* record multicast MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan);
+ if (err)
+ return err;
+
+ /* verify that the VF is allowed to request multicast */
+ if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED))
+ return FM10K_ERR_PARAM;
+
+ /* if VLAN ID is 0, set the default VLAN ID instead of 0 */
+ if (!vlan || (vlan == FM10K_VLAN_CLEAR)) {
+ if (vf_info->pf_vid)
+ vlan |= vf_info->pf_vid;
+ else
+ vlan |= vf_info->sw_vid;
+ } else if (vf_info->pf_vid) {
+ return FM10K_ERR_PARAM;
+ }
+
+ /* notify switch of request for new multicast address */
+ err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac,
+ !(vlan & FM10K_VLAN_CLEAR), 0);
+ }
+
+ return err;
+}
+
+/**
+ * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode
+ * @vf_info: VF info structure containing capability flags
+ * @mode: Requested xcast mode
+ *
+ * This function outputs the mode that most closely matches the requested
+ * mode. If no modes match it will request that we disable the port
+ **/
+static u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info,
+ u8 mode)
+{
+ u8 vf_flags = vf_info->vf_flags;
+
+ /* match up mode to capabilities as best as possible */
+ switch (mode) {
+ case FM10K_XCAST_MODE_PROMISC:
+ if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE)
+ return FM10K_XCAST_MODE_PROMISC;
+ /* fall through */
+ case FM10K_XCAST_MODE_ALLMULTI:
+ if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE)
+ return FM10K_XCAST_MODE_ALLMULTI;
+ /* fall through */
+ case FM10K_XCAST_MODE_MULTI:
+ if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE)
+ return FM10K_XCAST_MODE_MULTI;
+ /* fall through */
+ case FM10K_XCAST_MODE_NONE:
+ if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)
+ return FM10K_XCAST_MODE_NONE;
+ /* fall through */
+ default:
+ break;
+ }
+
+ /* disable interface as it should not be able to request any */
+ return FM10K_XCAST_MODE_DISABLE;
+}
+
+/**
+ * fm10k_iov_msg_lport_state_pf - Message handler for port state requests
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for port state requests. The port
+ * state requests for now are basic and consist of enabling or disabling
+ * the port.
+ **/
+s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ u32 *result;
+ s32 err = 0;
+ u32 msg[2];
+ u8 mode = 0;
+
+ /* verify VF is allowed to enable even minimal mode */
+ if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE))
+ return FM10K_ERR_PARAM;
+
+ if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) {
+ result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE];
+
+ /* XCAST mode update requested */
+ err = fm10k_tlv_attr_get_u8(result, &mode);
+ if (err)
+ return FM10K_ERR_PARAM;
+
+ /* prep for possible demotion depending on capabilities */
+ mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode);
+
+ /* if mode is not currently enabled, enable it */
+ if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode)))
+ fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode);
+
+ /* swap mode back to a bit flag */
+ mode = FM10K_VF_FLAG_SET_MODE(mode);
+ } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) {
+ /* need to disable the port if it is already enabled */
+ if (FM10K_VF_FLAG_ENABLED(vf_info))
+ err = fm10k_update_lport_state_pf(hw, vf_info->glort,
+ 1, false);
+
+ /* when enabling the port we should reset the rate limiters */
+ hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate);
+
+ /* set mode for minimal functionality */
+ mode = FM10K_VF_FLAG_SET_MODE_NONE;
+
+ /* generate port state response to notify VF it is ready */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY);
+ mbx->ops.enqueue_tx(hw, mbx, msg);
+ }
+
+ /* if enable state toggled note the update */
+ if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode))
+ err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1,
+ !!mode);
+
+ /* if state change succeeded, then update our stored state */
+ mode |= FM10K_VF_FLAG_CAPABLE(vf_info);
+ if (!err)
+ vf_info->vf_flags = mode;
+
+ return err;
+}
+
+const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+/**
+ * fm10k_update_hw_stats_pf - Updates hardware related statistics of PF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function collects and aggregates global and per queue hardware
+ * statistics.
+ **/
+static void fm10k_update_hw_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop;
+ u32 id, id_prev;
+
+ /* Use Tx queue 0 as a canary to detect a reset */
+ id = fm10k_read_reg(hw, FM10K_TXQCTL(0));
+
+ /* Read Global Statistics */
+ do {
+ timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT,
+ &stats->timeout);
+ ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur);
+ ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca);
+ um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um);
+ xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec);
+ vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP,
+ &stats->vlan_drop);
+ loopback_drop = fm10k_read_hw_stats_32b(hw,
+ FM10K_STATS_LOOPBACK_DROP,
+ &stats->loopback_drop);
+ nodesc_drop = fm10k_read_hw_stats_32b(hw,
+ FM10K_STATS_NODESC_DROP,
+ &stats->nodesc_drop);
+
+ /* if value has not changed then we have consistent data */
+ id_prev = id;
+ id = fm10k_read_reg(hw, FM10K_TXQCTL(0));
+ } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK);
+
+ /* drop non-ID bits and set VALID ID bit */
+ id &= FM10K_TXQCTL_ID_MASK;
+ id |= FM10K_STAT_VALID;
+
+ /* Update Global Statistics */
+ if (stats->stats_idx == id) {
+ stats->timeout.count += timeout;
+ stats->ur.count += ur;
+ stats->ca.count += ca;
+ stats->um.count += um;
+ stats->xec.count += xec;
+ stats->vlan_drop.count += vlan_drop;
+ stats->loopback_drop.count += loopback_drop;
+ stats->nodesc_drop.count += nodesc_drop;
+ }
+
+ /* Update bases and record current PF id */
+ fm10k_update_hw_base_32b(&stats->timeout, timeout);
+ fm10k_update_hw_base_32b(&stats->ur, ur);
+ fm10k_update_hw_base_32b(&stats->ca, ca);
+ fm10k_update_hw_base_32b(&stats->um, um);
+ fm10k_update_hw_base_32b(&stats->xec, xec);
+ fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop);
+ fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop);
+ fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop);
+ stats->stats_idx = id;
+
+ /* Update Queue Statistics */
+ fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function resets the base for global and per queue hardware
+ * statistics.
+ **/
+static void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ /* Unbind Global Statistics */
+ fm10k_unbind_hw_stats_32b(&stats->timeout);
+ fm10k_unbind_hw_stats_32b(&stats->ur);
+ fm10k_unbind_hw_stats_32b(&stats->ca);
+ fm10k_unbind_hw_stats_32b(&stats->um);
+ fm10k_unbind_hw_stats_32b(&stats->xec);
+ fm10k_unbind_hw_stats_32b(&stats->vlan_drop);
+ fm10k_unbind_hw_stats_32b(&stats->loopback_drop);
+ fm10k_unbind_hw_stats_32b(&stats->nodesc_drop);
+
+ /* Unbind Queue Statistics */
+ fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues);
+
+ /* Reinitialize bases for all stats */
+ fm10k_update_hw_stats_pf(hw, stats);
+}
+
+/**
+ * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system
+ * @hw: pointer to hardware structure
+ * @dma_mask: 64 bit DMA mask required for platform
+ *
+ * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order
+ * to limit the access to memory beyond what is physically in the system.
+ **/
+static void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask)
+{
+ /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */
+ u32 phyaddr = (u32)(dma_mask >> 32);
+
+ fm10k_write_reg(hw, FM10K_PHYADDR, phyaddr);
+}
+
+/**
+ * fm10k_get_fault_pf - Record a fault in one of the interface units
+ * @hw: pointer to hardware structure
+ * @type: pointer to fault type register offset
+ * @fault: pointer to memory location to record the fault
+ *
+ * Record the fault register contents to the fault data structure and
+ * clear the entry from the register.
+ *
+ * Returns ERR_PARAM if invalid register is specified or no error is present.
+ **/
+static s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type,
+ struct fm10k_fault *fault)
+{
+ u32 func;
+
+ /* verify the fault register is in range and is aligned */
+ switch (type) {
+ case FM10K_PCA_FAULT:
+ case FM10K_THI_FAULT:
+ case FM10K_FUM_FAULT:
+ break;
+ default:
+ return FM10K_ERR_PARAM;
+ }
+
+ /* only service faults that are valid */
+ func = fm10k_read_reg(hw, type + FM10K_FAULT_FUNC);
+ if (!(func & FM10K_FAULT_FUNC_VALID))
+ return FM10K_ERR_PARAM;
+
+ /* read remaining fields */
+ fault->address = fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_HI);
+ fault->address <<= 32;
+ fault->address |= fm10k_read_reg(hw, type + FM10K_FAULT_ADDR_LO);
+ fault->specinfo = fm10k_read_reg(hw, type + FM10K_FAULT_SPECINFO);
+
+ /* clear valid bit to allow for next error */
+ fm10k_write_reg(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID);
+
+ /* Record which function triggered the error */
+ if (func & FM10K_FAULT_FUNC_PF)
+ fault->func = 0;
+ else
+ fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >>
+ FM10K_FAULT_FUNC_VF_SHIFT);
+
+ /* record fault type */
+ fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK;
+
+ return 0;
+}
+
+/**
+ * fm10k_request_lport_map_pf - Request LPORT map from the switch API
+ * @hw: pointer to hardware structure
+ *
+ * This function sends a message on the PF mailbox requesting the LPORT
+ * map from the switch API.
+ **/
+static s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[1];
+
+ /* issue request asking for LPORT map */
+ fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_get_host_state_pf - Returns the state of the switch and mailbox
+ * @hw: pointer to hardware structure
+ * @switch_ready: pointer to boolean value that will record switch state
+ *
+ * This function will check the DMA_CTRL2 register and mailbox in order
+ * to determine if the switch is ready for the PF to begin requesting
+ * addresses and mapping traffic to the local interface.
+ **/
+static s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready)
+{
+ s32 ret_val = 0;
+ u32 dma_ctrl2;
+
+ /* verify the switch is ready for interaction */
+ dma_ctrl2 = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
+ if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY))
+ goto out;
+
+ /* retrieve generic host state info */
+ ret_val = fm10k_get_host_state_generic(hw, switch_ready);
+ if (ret_val)
+ goto out;
+
+ /* interface cannot receive traffic without logical ports */
+ if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
+ ret_val = fm10k_request_lport_map_pf(hw);
+
+out:
+ return ret_val;
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler configures the lport mapping based on the reply from the
+ * switch API.
+ **/
+s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u16 glort, mask;
+ u32 dglort_map;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP],
+ &dglort_map);
+ if (err)
+ return err;
+
+ /* extract values out of the header */
+ glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT);
+ mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK);
+
+ /* verify mask is set and none of the masked bits in glort are set */
+ if (!mask || (glort & ~mask))
+ return FM10K_ERR_PARAM;
+
+ /* verify the mask is contiguous, and that it is 1's followed by 0's */
+ if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE)
+ return FM10K_ERR_PARAM;
+
+ /* record the glort, mask, and port count */
+ hw->mac.dglort_map = dglort_map;
+
+ return 0;
+}
+
+const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler configures the default VLAN for the PF
+ **/
+s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u16 glort, pvid;
+ u32 pvid_update;
+ s32 err;
+
+ err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID],
+ &pvid_update);
+ if (err)
+ return err;
+
+ /* extract values from the pvid update */
+ glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT);
+ pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID);
+
+ /* if glort is not valid return error */
+ if (!fm10k_glort_valid_pf(hw, glort))
+ return FM10K_ERR_PARAM;
+
+ /* verify VID is valid */
+ if (pvid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* record the port VLAN ID value */
+ hw->mac.default_vid = pvid;
+
+ return 0;
+}
+
+/**
+ * fm10k_record_global_table_data - Move global table data to swapi table info
+ * @from: pointer to source table data structure
+ * @to: pointer to destination table info structure
+ *
+ * This function will copy table_data to the table_info contained in
+ * the hw struct.
+ **/
+static void fm10k_record_global_table_data(struct fm10k_global_table_data *from,
+ struct fm10k_swapi_table_info *to)
+{
+ /* convert from le32 struct to CPU byte ordered values */
+ to->used = le32_to_cpu(from->used);
+ to->avail = le32_to_cpu(from->avail);
+}
+
+const struct fm10k_tlv_attr fm10k_err_msg_attr[] = {
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR,
+ sizeof(struct fm10k_swapi_error)),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_err_pf - Message handler for error reply
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler will capture the data for any error replies to previous
+ * messages that the PF has sent.
+ **/
+s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_swapi_error err_msg;
+ s32 err;
+
+ /* extract structure from message */
+ err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR],
+ &err_msg, sizeof(err_msg));
+ if (err)
+ return err;
+
+ /* record table status */
+ fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac);
+ fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop);
+ fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu);
+
+ /* record SW API status value */
+ hw->swapi.status = le32_to_cpu(err_msg.status);
+
+ return 0;
+}
+
+const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = {
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP,
+ sizeof(struct fm10k_swapi_1588_timestamp)),
+ FM10K_TLV_ATTR_LAST
+};
+
+/* currently there is no shared 1588 timestamp handler */
+
+/**
+ * fm10k_adjust_systime_pf - Adjust systime frequency
+ * @hw: pointer to hardware structure
+ * @ppb: adjustment rate in parts per billion
+ *
+ * This function will adjust the SYSTIME_ADJUST register contained in BAR 4
+ * if BAR 4 access is supported. The adjustment amount
+ * is based on the parts per billion value provided and adjusted to a
+ * value based on parts per 2^48 clock cycles.
+ *
+ * If adjustment is not supported or the requested value is too large
+ * we will return an error.
+ **/
+static s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb)
+{
+ u64 systime_adjust;
+
+ /* if sw_addr is not set we don't have switch register access */
+ if (!hw->sw_addr)
+ return ppb ? FM10K_ERR_PARAM : 0;
+
+ /* we must convert the value from parts per billion to parts per
+ * 2^48 cycles. In addition I have opted to only use the 30 most
+ * significant bits of the adjustment value as the 8 least
+ * significant bits are located in another register and represent
+ * a value significantly less than a part per billion. The result
+ * of dropping the 8 least significant bits is that the adjustment
+ * value is effectively multiplied by 2^8 when we write it.
+ *
+ * As a result of all this the math for this breaks down as follows:
+ * ppb / 10^9 == adjust * 2^8 / 2^48
+ * If we solve this for adjust, and simplify it comes out as:
+ * ppb * 2^31 / 5^9 == adjust
+ */
+ systime_adjust = (ppb < 0) ? -ppb : ppb;
+ systime_adjust <<= 31;
+ do_div(systime_adjust, 1953125);
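+ /* for example, a request of 1000 ppb works out to
+ * (1000 << 31) / 5^9 = 1099511 in this representation
+ */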
+
+ /* verify the requested adjustment value is in range */
+ if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK)
+ return FM10K_ERR_PARAM;
+
+ if (ppb < 0)
+ systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE;
+
+ fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust);
+
+ return 0;
+}
+
+/**
+ * fm10k_read_systime_pf - Reads value of systime registers
+ * @hw: pointer to the hardware structure
+ *
+ * Function reads the content of 2 registers, combined to represent a 64 bit
+ * value measured in nanoseconds. In order to guarantee the value is accurate
+ * we check the 32 most significant bits both before and after reading the
+ * 32 least significant bits to verify they didn't change as we were reading
+ * the registers.
+ **/
+static u64 fm10k_read_systime_pf(struct fm10k_hw *hw)
+{
+ u32 systime_l, systime_h, systime_tmp;
+
+ systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
+
+ do {
+ systime_tmp = systime_h;
+ systime_l = fm10k_read_reg(hw, FM10K_SYSTIME);
+ systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
+ } while (systime_tmp != systime_h);
+
+ return ((u64)systime_h << 32) | systime_l;
+}
+
+static const struct fm10k_msg_data fm10k_msg_data_pf[] = {
+ FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf),
+ FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+static struct fm10k_mac_ops mac_ops_pf = {
+ .get_bus_info = &fm10k_get_bus_info_generic,
+ .reset_hw = &fm10k_reset_hw_pf,
+ .init_hw = &fm10k_init_hw_pf,
+ .start_hw = &fm10k_start_hw_generic,
+ .stop_hw = &fm10k_stop_hw_generic,
+ .is_slot_appropriate = &fm10k_is_slot_appropriate_pf,
+ .update_vlan = &fm10k_update_vlan_pf,
+ .read_mac_addr = &fm10k_read_mac_addr_pf,
+ .update_uc_addr = &fm10k_update_uc_addr_pf,
+ .update_mc_addr = &fm10k_update_mc_addr_pf,
+ .update_xcast_mode = &fm10k_update_xcast_mode_pf,
+ .update_int_moderator = &fm10k_update_int_moderator_pf,
+ .update_lport_state = &fm10k_update_lport_state_pf,
+ .update_hw_stats = &fm10k_update_hw_stats_pf,
+ .rebind_hw_stats = &fm10k_rebind_hw_stats_pf,
+ .configure_dglort_map = &fm10k_configure_dglort_map_pf,
+ .set_dma_mask = &fm10k_set_dma_mask_pf,
+ .get_fault = &fm10k_get_fault_pf,
+ .get_host_state = &fm10k_get_host_state_pf,
+ .adjust_systime = &fm10k_adjust_systime_pf,
+ .read_systime = &fm10k_read_systime_pf,
+};
+
+static struct fm10k_iov_ops iov_ops_pf = {
+ .assign_resources = &fm10k_iov_assign_resources_pf,
+ .configure_tc = &fm10k_iov_configure_tc_pf,
+ .assign_int_moderator = &fm10k_iov_assign_int_moderator_pf,
+ .assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf,
+ .reset_resources = &fm10k_iov_reset_resources_pf,
+ .set_lport = &fm10k_iov_set_lport_pf,
+ .reset_lport = &fm10k_iov_reset_lport_pf,
+ .update_stats = &fm10k_iov_update_stats_pf,
+ .report_timestamp = &fm10k_iov_report_timestamp_pf,
+};
+
+static s32 fm10k_get_invariants_pf(struct fm10k_hw *hw)
+{
+ fm10k_get_invariants_generic(hw);
+
+ return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf);
+}
+
+struct fm10k_info fm10k_pf_info = {
+ .mac = fm10k_mac_pf,
+ .get_invariants = &fm10k_get_invariants_pf,
+ .mac_ops = &mac_ops_pf,
+ .iov_ops = &iov_ops_pf,
+};
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.h b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
new file mode 100644
index 0000000..7ab1db4
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
@@ -0,0 +1,135 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_PF_H_
+#define _FM10K_PF_H_
+
+#include "fm10k_type.h"
+#include "fm10k_common.h"
+
+bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort);
+u16 fm10k_queues_per_pool(struct fm10k_hw *hw);
+u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx);
+
+enum fm10k_pf_tlv_msg_id_v1 {
+ FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */
+ FM10K_PF_MSG_ID_XCAST_MODES = 0x001,
+ FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002,
+ FM10K_PF_MSG_ID_LPORT_MAP = 0x100,
+ FM10K_PF_MSG_ID_LPORT_CREATE = 0x200,
+ FM10K_PF_MSG_ID_LPORT_DELETE = 0x201,
+ FM10K_PF_MSG_ID_CONFIG = 0x300,
+ FM10K_PF_MSG_ID_UPDATE_PVID = 0x400,
+ FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501,
+ FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502,
+ FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503,
+ FM10K_PF_MSG_ID_DELETE_FLOW = 0x504,
+ FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505,
+ FM10K_PF_MSG_ID_GET_1588_INFO = 0x506,
+ FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701,
+};
+
+enum fm10k_pf_tlv_attr_id_v1 {
+ FM10K_PF_ATTR_ID_ERR = 0x00,
+ FM10K_PF_ATTR_ID_LPORT_MAP = 0x01,
+ FM10K_PF_ATTR_ID_XCAST_MODE = 0x02,
+ FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03,
+ FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04,
+ FM10K_PF_ATTR_ID_CONFIG = 0x05,
+ FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06,
+ FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07,
+ FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08,
+ FM10K_PF_ATTR_ID_FLOW_STATE = 0x09,
+ FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A,
+ FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B,
+ FM10K_PF_ATTR_ID_PORT = 0x0C,
+ FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D,
+ FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10,
+};
+
+#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0
+#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16
+#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16
+#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16
+
+#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0
+#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16
+#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16
+#define FM10K_MSG_UPDATE_PVID_PVID_SIZE 16
+
+struct fm10k_mac_update {
+ __le32 mac_lower;
+ __le16 mac_upper;
+ __le16 vlan;
+ __le16 glort;
+ u8 flags;
+ u8 action;
+};
+
+struct fm10k_global_table_data {
+ __le32 used;
+ __le32 avail;
+};
+
+struct fm10k_swapi_error {
+ __le32 status;
+ struct fm10k_global_table_data mac;
+ struct fm10k_global_table_data nexthop;
+ struct fm10k_global_table_data ffu;
+};
+
+struct fm10k_swapi_1588_timestamp {
+ __le64 egress;
+ __le64 ingress;
+ __le16 dglort;
+ __le16 sglort;
+};
+
+s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[];
+#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \
+ fm10k_lport_map_msg_attr, func)
+s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[];
+#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \
+ fm10k_update_pvid_msg_attr, func)
+
+s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_err_msg_attr[];
+#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func)
+
+extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[];
+#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \
+ fm10k_1588_timestamp_msg_attr, func)
+
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[];
+
+extern struct fm10k_info fm10k_pf_info;
+#endif /* _FM10K_PF_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c b/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
new file mode 100644
index 0000000..7822809
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
@@ -0,0 +1,463 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include <linux/ptp_classify.h>
+#include <linux/ptp_clock_kernel.h>
+
+#include "fm10k.h"
+
+#define FM10K_TS_TX_TIMEOUT (HZ * 15)
+
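+/* Convert a raw hardware systime value into a kernel hwtstamp by applying
+ * the software ptp_adjust offset under the systime read lock.
+ */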
+void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
+ struct skb_shared_hwtstamps *hwtstamp,
+ u64 systime)
+{
+ unsigned long flags;
+
+ read_lock_irqsave(&interface->systime_lock, flags);
+ systime += interface->ptp_adjust;
+ read_unlock_irqrestore(&interface->systime_lock, flags);
+
+ hwtstamp->hwtstamp = ns_to_ktime(systime);
+}
+
+static struct sk_buff *fm10k_ts_tx_skb(struct fm10k_intfc *interface,
+ __le16 dglort)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb;
+
+ skb_queue_walk(list, skb) {
+ if (FM10K_CB(skb)->fi.w.dglort == dglort)
+ return skb;
+ }
+
+ return NULL;
+}
+
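+/* Queue a clone of an outgoing skb so its Tx timestamp can be reported
+ * once hardware provides it; only one outstanding request per dglort is
+ * kept at a time.
+ */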
+void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *clone;
+ unsigned long flags;
+ __le16 dglort;
+
+ /* create clone for us to return on the Tx path */
+ clone = skb_clone_sk(skb);
+ if (!clone)
+ return;
+
+ FM10K_CB(clone)->ts_tx_timeout = jiffies + FM10K_TS_TX_TIMEOUT;
+ dglort = FM10K_CB(clone)->fi.w.dglort;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* attempt to locate any buffers with the same dglort;
+ * if none are present then insert the clone at the tail of the list
+ */
+ skb = fm10k_ts_tx_skb(interface, FM10K_CB(clone)->fi.w.dglort);
+ if (!skb)
+ __skb_queue_tail(list, clone);
+
+ spin_unlock_irqrestore(&list->lock, flags);
+
+ /* if the list already has one then we just free the clone */
+ if (skb)
+ kfree_skb(skb);
+ else
+ skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+}
+
+void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
+ u64 systime)
+{
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* attempt to locate and pull the sk_buff out of the list */
+ skb = fm10k_ts_tx_skb(interface, dglort);
+ if (skb)
+ __skb_unlink(skb, list);
+
+ spin_unlock_irqrestore(&list->lock, flags);
+
+ /* if not found do nothing */
+ if (!skb)
+ return;
+
+ /* timestamp the sk_buff and return it to the socket */
+ fm10k_systime_to_hwtstamp(interface, &shhwtstamps, systime);
+ skb_complete_tx_timestamp(skb, &shhwtstamps);
+}
+
+void fm10k_ts_tx_subtask(struct fm10k_intfc *interface)
+{
+ struct sk_buff_head *list = &interface->ts_tx_skb_queue;
+ struct sk_buff *skb, *tmp;
+ unsigned long flags;
+
+ /* If we're down or resetting, just bail */
+ if (test_bit(__FM10K_DOWN, &interface->state) ||
+ test_bit(__FM10K_RESETTING, &interface->state))
+ return;
+
+ spin_lock_irqsave(&list->lock, flags);
+
+ /* walk through the list and flush any expired timestamp packets */
+ skb_queue_walk_safe(list, skb, tmp) {
+ if (time_is_after_jiffies(FM10K_CB(skb)->ts_tx_timeout))
+ continue;
+ __skb_unlink(skb, list);
+ kfree_skb(skb);
+ interface->tx_hwtstamp_timeouts++;
+ }
+
+ spin_unlock_irqrestore(&list->lock, flags);
+}
+
+static u64 fm10k_systime_read(struct fm10k_intfc *interface)
+{
+ struct fm10k_hw *hw = &interface->hw;
+
+ return hw->mac.ops.read_systime(hw);
+}
+
+void fm10k_ts_reset(struct fm10k_intfc *interface)
+{
+ s64 ns = ktime_to_ns(ktime_get_real());
+ unsigned long flags;
+
+ /* reinitialize the clock */
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust = fm10k_systime_read(interface) - ns;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+}
+
+void fm10k_ts_init(struct fm10k_intfc *interface)
+{
+ /* Initialize lock protecting systime access */
+ rwlock_init(&interface->systime_lock);
+
+ /* Initialize skb queue for pending timestamp requests */
+ skb_queue_head_init(&interface->ts_tx_skb_queue);
+
+ /* reset the clock to current kernel time */
+ fm10k_ts_reset(interface);
+}
+
+/**
+ * fm10k_get_ts_config - get current hardware timestamping configuration
+ * @netdev: network interface device structure
+ * @ifr: ioctl data
+ *
+ * This function returns the current timestamping settings. Rather than
+ * attempt to deconstruct registers to fill in the values, simply keep a copy
+ * of the old settings around, and return a copy when requested.
+ */
+int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct hwtstamp_config *config = &interface->ts_config;
+
+ return copy_to_user(ifr->ifr_data, config, sizeof(*config)) ?
+ -EFAULT : 0;
+}
+
+/**
+ * fm10k_set_ts_config - control hardware time stamping
+ * @netdev: network interface device structure
+ * @ifr: ioctl data
+ *
+ * Outgoing time stamping can be enabled and disabled. Play nice and
+ * disable it when requested, although it shouldn't cause any overhead
+ * when no packet needs it. At most one packet in the queue may be
+ * marked for time stamping, otherwise it would be impossible to tell
+ * for sure to which packet the hardware time stamp belongs.
+ *
+ * Incoming time stamping has to be configured via the hardware
+ * filters. Not all combinations are supported, in particular event
+ * type has to be specified. Matching the kind of event packet is
+ * not supported, with the exception of "all V2 events regardless of
+ * level 2 or 4".
+ *
+ * Since hardware always timestamps Path delay packets when timestamping V2
+ * packets, regardless of the type specified in the register, only use V2
+ * Event mode. This more accurately tells the user what the hardware is going
+ * to do anyways.
+ */
+int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct hwtstamp_config ts_config;
+
+ if (copy_from_user(&ts_config, ifr->ifr_data, sizeof(ts_config)))
+ return -EFAULT;
+
+ /* reserved for future extensions */
+ if (ts_config.flags)
+ return -EINVAL;
+
+ switch (ts_config.tx_type) {
+ case HWTSTAMP_TX_OFF:
+ break;
+ case HWTSTAMP_TX_ON:
+ /* we likely need some check here to see if this is supported */
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (ts_config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ interface->flags &= ~FM10K_FLAG_RX_TS_ENABLED;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ case HWTSTAMP_FILTER_ALL:
+ interface->flags |= FM10K_FLAG_RX_TS_ENABLED;
+ ts_config.rx_filter = HWTSTAMP_FILTER_ALL;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ /* save these settings for future reference */
+ interface->ts_config = ts_config;
+
+ return copy_to_user(ifr->ifr_data, &ts_config, sizeof(ts_config)) ?
+ -EFAULT : 0;
+}
+
+static int fm10k_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ int err;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+ hw = &interface->hw;
+
+ err = hw->mac.ops.adjust_systime(hw, ppb);
+
+ /* the only error we should see is if the value is out of range */
+ return (err == FM10K_ERR_PARAM) ? -ERANGE : err;
+}
+
+static int fm10k_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust += delta;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+
+ return 0;
+}
+
+static int fm10k_ptp_gettime(struct ptp_clock_info *ptp, struct timespec *ts)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+ u64 now;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ read_lock_irqsave(&interface->systime_lock, flags);
+ now = fm10k_systime_read(interface) + interface->ptp_adjust;
+ read_unlock_irqrestore(&interface->systime_lock, flags);
+
+ *ts = ns_to_timespec(now);
+
+ return 0;
+}
+
+static int fm10k_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec *ts)
+{
+ struct fm10k_intfc *interface;
+ unsigned long flags;
+ u64 ns = timespec_to_ns(ts);
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+
+ write_lock_irqsave(&interface->systime_lock, flags);
+ interface->ptp_adjust = fm10k_systime_read(interface) - ns;
+ write_unlock_irqrestore(&interface->systime_lock, flags);
+
+ return 0;
+}
+
+static int fm10k_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct ptp_clock_time *t = &rq->perout.period;
+ struct fm10k_intfc *interface;
+ struct fm10k_hw *hw;
+ u64 period;
+ u32 step;
+
+ /* we can only support periodic output */
+ if (rq->type != PTP_CLK_REQ_PEROUT)
+ return -EINVAL;
+
+ /* verify the requested channel is there */
+ if (rq->perout.index >= ptp->n_per_out)
+ return -EINVAL;
+
+ /* we cannot enforce start time as there is no
+ * mechanism for that in the hardware; we can only control
+ * the period.
+ */
+
+ /* we cannot support periods greater than 4 seconds due to reg limit */
+ if (t->sec > 4 || t->sec < 0)
+ return -ERANGE;
+
+ interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
+ hw = &interface->hw;
+
+ /* we simply cannot support the operation if we don't have BAR4 */
+ if (!hw->sw_addr)
+ return -ENOTSUPP;
+
+ /* convert to unsigned 64b ns, verify we can put it in a 32b register */
+ period = t->sec * 1000000000LL + t->nsec;
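+ /* e.g. the largest allowed period of 4 seconds is 4,000,000,000 ns,
+ * which still fits in the 32 bit register
+ */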
+
+ /* determine the minimum size for period */
+ step = 2 * (fm10k_read_reg(hw, FM10K_SYSTIME_CFG) &
+ FM10K_SYSTIME_CFG_STEP_MASK);
+
+ /* verify the value is in range supported by hardware */
+ if ((period && (period < step)) || (period > U32_MAX))
+ return -ERANGE;
+
+ /* notify hardware of request to begin sending pulses */
+ fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_PULSE(rq->perout.index),
+ (u32)period);
+
+ return 0;
+}
+
+static struct ptp_pin_desc fm10k_ptp_pd[2] = {
+ {
+ .name = "IEEE1588_PULSE0",
+ .index = 0,
+ .func = PTP_PF_PEROUT,
+ .chan = 0
+ },
+ {
+ .name = "IEEE1588_PULSE1",
+ .index = 1,
+ .func = PTP_PF_PEROUT,
+ .chan = 1
+ }
+};
+
+static int fm10k_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+{
+ /* verify the requested pin is there */
+ if (pin >= ptp->n_pins || !ptp->pin_config)
+ return -EINVAL;
+
+ /* enforce locked channels, no changing them */
+ if (chan != ptp->pin_config[pin].chan)
+ return -EINVAL;
+
+ /* we want to keep the functions locked as well */
+ if (func != ptp->pin_config[pin].func)
+ return -EINVAL;
+
+ return 0;
+}
+
+void fm10k_ptp_register(struct fm10k_intfc *interface)
+{
+ struct ptp_clock_info *ptp_caps = &interface->ptp_caps;
+ struct device *dev = &interface->pdev->dev;
+ struct ptp_clock *ptp_clock;
+
+ snprintf(ptp_caps->name, sizeof(ptp_caps->name),
+ "%s", interface->netdev->name);
+ ptp_caps->owner = THIS_MODULE;
+ /* This math is simply the inverse of the math in
+ * fm10k_adjust_systime_pf applied to an adjustment value
+ * of 2^30 - 1 which is the maximum value of the register:
+ * max_ppb == ((2^30 - 1) * 5^9) / 2^31
+ */
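+ /* which evaluates to roughly 976562.5 and is truncated below */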
+ ptp_caps->max_adj = 976562;
+ ptp_caps->adjfreq = fm10k_ptp_adjfreq;
+ ptp_caps->adjtime = fm10k_ptp_adjtime;
+ ptp_caps->gettime = fm10k_ptp_gettime;
+ ptp_caps->settime = fm10k_ptp_settime;
+
+ /* provide pins if BAR4 is accessible */
+ if (interface->sw_addr) {
+ /* enable periodic outputs */
+ ptp_caps->n_per_out = 2;
+ ptp_caps->enable = fm10k_ptp_enable;
+
+ /* enable clock pins */
+ ptp_caps->verify = fm10k_ptp_verify;
+ ptp_caps->n_pins = 2;
+ ptp_caps->pin_config = fm10k_ptp_pd;
+ }
+
+ ptp_clock = ptp_clock_register(ptp_caps, dev);
+ if (IS_ERR(ptp_clock)) {
+ ptp_clock = NULL;
+ dev_err(dev, "ptp_clock_register failed\n");
+ } else {
+ dev_info(dev, "registered PHC device %s\n", ptp_caps->name);
+ }
+
+ interface->ptp_clock = ptp_clock;
+}
+
+void fm10k_ptp_unregister(struct fm10k_intfc *interface)
+{
+ struct ptp_clock *ptp_clock = interface->ptp_clock;
+ struct device *dev = &interface->pdev->dev;
+
+ if (!ptp_clock)
+ return;
+
+ interface->ptp_clock = NULL;
+
+ ptp_clock_unregister(ptp_clock);
+ dev_info(dev, "removed PHC %s\n", interface->ptp_caps.name);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
new file mode 100644
index 0000000..fd0a05f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
@@ -0,0 +1,863 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_tlv.h"
+
+/**
+ * fm10k_tlv_msg_init - Initialize message block for TLV data storage
+ * @msg: Pointer to message block
+ * @msg_id: Message ID indicating message type
+ *
+ * This function returns success if provided with a valid message pointer
+ **/
+s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id)
+{
+ /* verify pointer is not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id;
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_null_string - Place null terminated string on message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @string: Pointer to string to be stored in attribute
+ *
+ * This function will reorder a string to be CPU endian and store it in
+ * the attribute buffer. It will return success if provided with valid
+ * pointers.
+ **/
+s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id,
+ const unsigned char *string)
+{
+ u32 attr_data = 0, len = 0;
+ u32 *attr;
+
+ /* verify pointers are not NULL */
+ if (!string || !msg)
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* copy string into local variable and then write to msg */
+ do {
+ /* write data to message */
+ if (len && !(len % 4)) {
+ attr[len / 4] = attr_data;
+ attr_data = 0;
+ }
+
+ /* record character to offset location */
+ attr_data |= (u32)(*string) << (8 * (len % 4));
+ len++;
+
+ /* test for NULL and then increment */
+ } while (*(string++));
+
+ /* write last piece of data to message */
+ attr[(len + 3) / 4] = attr_data;
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute
+ * @attr: Pointer to attribute
+ * @string: Pointer to location of destination string
+ *
+ * This function pulls the string back out of the attribute and will place
+ * it in the array pointed to by string. It will return success if provided
+ * with valid pointers.
+ **/
+s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string)
+{
+ u32 len;
+
+ /* verify pointers are not NULL */
+ if (!string || !attr)
+ return FM10K_ERR_PARAM;
+
+ len = *attr >> FM10K_TLV_LEN_SHIFT;
+ attr++;
+
+ while (len--)
+ string[len] = (u8)(attr[len / 4] >> (8 * (len % 4)));
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @mac_addr: MAC address to be stored
+ * @vlan: VLAN number to be stored with the MAC address
+ *
+ * This function will reorder a MAC address to be CPU endian and store it
+ * in the attribute buffer. It will return success if provided with
+ * valid pointers.
+ **/
+s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id,
+ const u8 *mac_addr, u16 vlan)
+{
+ u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT;
+ u32 *attr;
+
+ /* verify pointers are not NULL */
+ if (!msg || !mac_addr)
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* record attribute header, update message length */
+ attr[0] = len | attr_id;
+
+ /* copy value into local variable and then write to msg */
+ attr[1] = le32_to_cpu(*(const __le32 *)&mac_addr[0]);
+ attr[2] = le16_to_cpu(*(const __le16 *)&mac_addr[4]);
+ attr[2] |= (u32)vlan << 16;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute
+ * @attr: Pointer to attribute
+ * @mac_addr: location of buffer to store MAC address
+ * @vlan: location of buffer to store VLAN number
+ *
+ * This function pulls the MAC address back out of the attribute and will
+ * place it in the array pointed to by mac_addr. It will return success
+ * if provided with valid pointers.
+ **/
+s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan)
+{
+ /* verify pointers are not NULL */
+ if (!mac_addr || !attr)
+ return FM10K_ERR_PARAM;
+
+ *(__le32 *)&mac_addr[0] = cpu_to_le32(attr[1]);
+ *(__le16 *)&mac_addr[4] = cpu_to_le16((u16)(attr[2]));
+ *vlan = (u16)(attr[2] >> 16);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_bool - Add header indicating value "true"
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ *
+ * This function will simply add an attribute header; the presence of
+ * the header means the attribute value is true, otherwise it is false.
+ * The function will return success if provided with a valid pointer.
+ **/
+s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id)
+{
+ /* verify pointers are not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ /* record attribute header */
+ msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id;
+
+ /* add header length to length */
+ *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_value - Store integer value attribute in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @value: Value to be written
+ * @len: Size of value
+ *
+ * This function will place an integer value of up to 8 bytes in size
+ * in a message attribute. The function will return success provided
+ * that msg is a valid pointer, and len is 1, 2, 4, or 8.
+ **/
+s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len)
+{
+ u32 *attr;
+
+ /* verify non-null msg and len is 1, 2, 4, or 8 */
+ if (!msg || !len || len > 8 || (len & (len - 1)))
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ if (len < 4) {
+ attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1);
+ } else {
+ attr[1] = (u32)value;
+ if (len > 4)
+ attr[2] = (u32)(value >> 32);
+ }
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_value - Get integer value stored in attribute
+ * @attr: Pointer to attribute
+ * @value: Pointer to destination buffer
+ * @len: Size of value
+ *
+ * This function will place an integer value of up to 8 bytes in size
+ * in the offset pointed to by value. The function will return success
+ * provided that pointers are valid and the len value matches the
+ * attribute length.
+ **/
+s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len)
+{
+ /* verify pointers are not NULL */
+ if (!attr || !value)
+ return FM10K_ERR_PARAM;
+
+ if ((*attr >> FM10K_TLV_LEN_SHIFT) != len)
+ return FM10K_ERR_PARAM;
+
+ if (len == 8)
+ *(u64 *)value = ((u64)attr[2] << 32) | attr[1];
+ else if (len == 4)
+ *(u32 *)value = attr[1];
+ else if (len == 2)
+ *(u16 *)value = (u16)attr[1];
+ else
+ *(u8 *)value = (u8)attr[1];
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_put_le_struct - Store little endian structure in message
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ * @le_struct: Pointer to structure to be written
+ * @len: Size of le_struct
+ *
+ * This function will place a little endian structure value in a message
+ * attribute. The function will return success provided that all pointers
+ * are valid and length is a non-zero multiple of 4.
+ **/
+s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id,
+ const void *le_struct, u32 len)
+{
+ const __le32 *le32_ptr = (const __le32 *)le_struct;
+ u32 *attr;
+ u32 i;
+
+ /* verify non-null msg and len is in 32 bit words */
+ if (!msg || !len || (len % 4))
+ return FM10K_ERR_PARAM;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ /* copy le32 structure into host byte order at 32b boundaries */
+ for (i = 0; i < (len / 4); i++)
+ attr[i + 1] = le32_to_cpu(le32_ptr[i]);
+
+ /* record attribute header, update message length */
+ len <<= FM10K_TLV_LEN_SHIFT;
+ attr[0] = len | attr_id;
+
+ /* add header length to length */
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += FM10K_TLV_LEN_ALIGN(len);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_get_le_struct - Get little endian struct from attribute
+ * @attr: Pointer to attribute
+ * @le_struct: Pointer to structure to be written
+ * @len: Size of structure
+ *
+ * This function will place a little endian structure in the buffer
+ * pointed to by le_struct. The function will return success
+ * provided that pointers are valid and the len value matches the
+ * attribute length.
+ **/
+s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len)
+{
+ __le32 *le32_ptr = (__le32 *)le_struct;
+ u32 i;
+
+ /* verify pointers are not NULL */
+ if (!le_struct || !attr)
+ return FM10K_ERR_PARAM;
+
+ if ((*attr >> FM10K_TLV_LEN_SHIFT) != len)
+ return FM10K_ERR_PARAM;
+
+ attr++;
+
+ for (i = 0; len; i++, len -= 4)
+ le32_ptr[i] = cpu_to_le32(attr[i]);
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_nest_start - Start a set of nested attributes
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ *
+ * This function will mark off a new nested region for encapsulating
+ * a given set of attributes. The idea is that if you wish to place a
+ * secondary structure within the message, this mechanism allows for that. The
+ * function will return NULL on failure, and a pointer to the start
+ * of the nested attributes on success.
+ **/
+u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id)
+{
+ u32 *attr;
+
+ /* verify pointer is not NULL */
+ if (!msg)
+ return NULL;
+
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+ attr[0] = attr_id;
+
+ /* return pointer to nest header */
+ return attr;
+}
+
+/**
+ * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes
+ * @msg: Pointer to message block
+ *
+ * This function closes off an existing set of nested attributes. The
+ * message pointer should be pointing to the parent of the nest. So in
+ * the case of a nest within the nest this would be the outer nest pointer.
+ * This function will return success provided all pointers are valid.
+ **/
+s32 fm10k_tlv_attr_nest_stop(u32 *msg)
+{
+ u32 *attr;
+ u32 len;
+
+ /* verify pointer is not NULL */
+ if (!msg)
+ return FM10K_ERR_PARAM;
+
+ /* locate the nested header and retrieve its length */
+ attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+ len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT;
+
+ /* only include nest if data was added to it */
+ if (len) {
+ len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+ *msg += len;
+ }
+
+ return 0;
+}
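+
+/* Illustrative usage sketch (not part of the original source): attributes
+ * belonging to a nest are written through the pointer returned by
+ * fm10k_tlv_attr_nest_start(), and the nest is folded back into the parent
+ * message by fm10k_tlv_attr_nest_stop():
+ *
+ *	nest = fm10k_tlv_attr_nest_start(msg, NESTED_ATTR_ID);
+ *	fm10k_tlv_attr_put_u32(nest, SOME_ATTR_ID, value);
+ *	fm10k_tlv_attr_nest_stop(msg);
+ *
+ * NESTED_ATTR_ID, SOME_ATTR_ID and value are placeholders; see
+ * fm10k_tlv_msg_test_create() below for the in-tree usage.
+ */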
+
+/**
+ * fm10k_tlv_attr_validate - Validate attribute metadata
+ * @attr: Pointer to attribute
+ * @tlv_attr: Type and length info for attribute
+ *
+ * This function does some basic validation of the input TLV. It
+ * verifies the length, and in the case of null terminated strings
+ * it verifies that the last byte is null. The function will return
+ * FM10K_ERR_PARAM if the attribute is malformed, FM10K_NOT_IMPLEMENTED
+ * if the attribute ID is not found in tlv_attr, and 0 on success.
+ **/
+static s32 fm10k_tlv_attr_validate(u32 *attr,
+ const struct fm10k_tlv_attr *tlv_attr)
+{
+ u32 attr_id = *attr & FM10K_TLV_ID_MASK;
+ u16 len = *attr >> FM10K_TLV_LEN_SHIFT;
+
+ /* verify this is an attribute and not a message */
+ if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))
+ return FM10K_ERR_PARAM;
+
+ /* search through the list of attributes to find a matching ID */
+ while (tlv_attr->id < attr_id)
+ tlv_attr++;
+
+ /* if we didn't find a match then we should exit */
+ if (tlv_attr->id != attr_id)
+ return FM10K_NOT_IMPLEMENTED;
+
+ /* move to start of attribute data */
+ attr++;
+
+ switch (tlv_attr->type) {
+ case FM10K_TLV_NULL_STRING:
+ if (!len ||
+ (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4)))))
+ return FM10K_ERR_PARAM;
+ if (len > tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_MAC_ADDR:
+ if (len != ETH_ALEN)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_BOOL:
+ if (len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_UNSIGNED:
+ case FM10K_TLV_SIGNED:
+ if (len != tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_LE_STRUCT:
+ /* struct must be 4 byte aligned */
+ if ((len % 4) || len != tlv_attr->len)
+ return FM10K_ERR_PARAM;
+ break;
+ case FM10K_TLV_NESTED:
+ /* nested attributes must be 4 byte aligned */
+ if (len % 4)
+ return FM10K_ERR_PARAM;
+ break;
+ default:
+ /* attribute id is mapped to bad value */
+ return FM10K_ERR_PARAM;
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_attr_parse - Parses stream of attribute data
+ * @attr: Pointer to attribute list
+ * @results: Pointer array to store pointers to attributes
+ * @tlv_attr: Type and length info for attributes
+ *
+ * This function validates a stream of attributes and parses them
+ * up into an array of pointers stored in results. The function will
+ * return FM10K_ERR_PARAM on any input or message error,
+ * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array
+ * and 0 on success.
+ **/
+s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
+ const struct fm10k_tlv_attr *tlv_attr)
+{
+ u32 i, attr_id, offset = 0;
+ s32 err = 0;
+ u16 len;
+
+ /* verify pointers are not NULL */
+ if (!attr || !results)
+ return FM10K_ERR_PARAM;
+
+ /* initialize results to NULL */
+ for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++)
+ results[i] = NULL;
+
+ /* pull length from the message header */
+ len = *attr >> FM10K_TLV_LEN_SHIFT;
+
+ /* no attributes to parse if there is no length */
+ if (!len)
+ return 0;
+
+ /* no attributes to parse, just raw data, message becomes attribute */
+ if (!tlv_attr) {
+ results[0] = attr;
+ return 0;
+ }
+
+ /* move to start of attribute data */
+ attr++;
+
+ /* run through list parsing all attributes */
+ while (offset < len) {
+ attr_id = *attr & FM10K_TLV_ID_MASK;
+
+ if (attr_id < FM10K_TLV_RESULTS_MAX)
+ err = fm10k_tlv_attr_validate(attr, tlv_attr);
+ else
+ err = FM10K_NOT_IMPLEMENTED;
+
+ if (err < 0)
+ return err;
+ if (!err)
+ results[attr_id] = attr;
+
+ /* update offset */
+ offset += FM10K_TLV_DWORD_LEN(*attr) * 4;
+
+ /* move to next attribute */
+ attr = &attr[FM10K_TLV_DWORD_LEN(*attr)];
+ }
+
+ /* we should find ourselves at the end of the list */
+ if (offset != len)
+ return FM10K_ERR_PARAM;
+
+ return 0;
+}
+
+/**
+ * fm10k_tlv_msg_parse - Parses message header and calls function handler
+ * @hw: Pointer to hardware structure
+ * @msg: Pointer to message
+ * @mbx: Pointer to mailbox information structure
+ * @data: Array containing the list of message handling function data
+ *
+ * This function should be the first function called upon receiving a
+ * message. The handler will identify the message type and call the correct
+ * handler for the given message. It will return the value from the function
+ * call on a recognized message type, otherwise it will return
+ * FM10K_NOT_IMPLEMENTED on an unrecognized type.
+ **/
+s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg,
+ struct fm10k_mbx_info *mbx,
+ const struct fm10k_msg_data *data)
+{
+ u32 *results[FM10K_TLV_RESULTS_MAX];
+ u32 msg_id;
+ s32 err;
+
+ /* verify pointers are not NULL */
+ if (!msg || !data)
+ return FM10K_ERR_PARAM;
+
+ /* verify this is a message and not an attribute */
+ if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)))
+ return FM10K_ERR_PARAM;
+
+ /* grab message ID */
+ msg_id = *msg & FM10K_TLV_ID_MASK;
+
+ while (data->id < msg_id)
+ data++;
+
+ /* if we didn't find it then pass it up as an error */
+ if (data->id != msg_id) {
+ while (data->id != FM10K_TLV_ERROR)
+ data++;
+ }
+
+ /* parse the attributes into the results list */
+ err = fm10k_tlv_attr_parse(msg, results, data->attr);
+ if (err < 0)
+ return err;
+
+ return data->func(hw, results, mbx);
+}
+
+/**
+ * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to attributes; results[0] points to the raw message
+ * @mbx: Unused mailbox pointer
+ *
+ * This function is a default handler for unrecognized messages. At a
+ * minimum it just indicates that the message requested was
+ * unimplemented.
+ **/
+s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ return FM10K_NOT_IMPLEMENTED;
+}
+
+static const unsigned char test_str[] = "fm10k";
+static const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56,
+ 0x78, 0x9a, 0xbc };
+static const u16 test_vlan = 0x0FED;
+static const u64 test_u64 = 0xfedcba9876543210ull;
+static const u32 test_u32 = 0x87654321;
+static const u16 test_u16 = 0x8765;
+static const u8 test_u8 = 0x87;
+static const s64 test_s64 = -0x123456789abcdef0ll;
+static const s32 test_s32 = -0x1235678;
+static const s16 test_s16 = -0x1234;
+static const s8 test_s8 = -0x12;
+static const __le32 test_le[2] = { cpu_to_le32(0x12345678),
+ cpu_to_le32(0x9abcdef0)};
+
+/* The message below is meant to be used as a test message to demonstrate
+ * how to use the TLV interface and to test the types. Normally this code
+ * would be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG
+ */
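+/* Note: fm10k_tlv_attr_validate() walks tables like the one below assuming
+ * the entries are sorted by ascending attribute ID and terminated with
+ * FM10K_TLV_ATTR_LAST; the test table keeps both properties.
+ */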
+const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = {
+ FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR),
+ FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8),
+ FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16),
+ FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32),
+ FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64),
+ FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8),
+ FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16),
+ FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32),
+ FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64),
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8),
+ FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED),
+ FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_tlv_msg_test_generate_data - Stuff message with data
+ * @msg: Pointer to message
+ * @attr_flags: List of flags indicating what attributes to add
+ *
+ * This function is meant to load a message buffer with attribute data
+ **/
+static void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags)
+{
+ if (attr_flags & (1 << FM10K_TEST_MSG_STRING))
+ fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING,
+ test_str);
+ if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR))
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR,
+ test_mac, test_vlan);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U8))
+ fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U16))
+ fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U32))
+ fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32);
+ if (attr_flags & (1 << FM10K_TEST_MSG_U64))
+ fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S8))
+ fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S16))
+ fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S32))
+ fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32);
+ if (attr_flags & (1 << FM10K_TEST_MSG_S64))
+ fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64);
+ if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT))
+ fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
+ test_le, 8);
+}
+
+/**
+ * fm10k_tlv_msg_test_create - Create a test message testing all attributes
+ * @msg: Pointer to message
+ * @attr_flags: List of flags indicating what attributes to add
+ *
+ * This function is meant to load a message buffer with all attribute types
+ * including a nested attribute.
+ **/
+void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags)
+{
+ u32 *nest = NULL;
+
+ fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
+
+ fm10k_tlv_msg_test_generate_data(msg, attr_flags);
+
+ /* check for nested attributes */
+ attr_flags >>= FM10K_TEST_MSG_NESTED;
+
+ if (attr_flags) {
+ nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
+
+ fm10k_tlv_msg_test_generate_data(nest, attr_flags);
+
+ fm10k_tlv_attr_nest_stop(msg);
+ }
+}
+
+/**
+ * fm10k_tlv_msg_test - Validate all results on test message receive
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to attributes in the message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function does a check to verify all attributes match what the test
+ * message placed in the message buffer. It is the default handler
+ * for TLV test messages.
+ **/
+s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u32 *nest_results[FM10K_TLV_RESULTS_MAX];
+ unsigned char result_str[80];
+ unsigned char result_mac[ETH_ALEN];
+ s32 err = 0;
+ __le32 result_le[2];
+ u16 result_vlan;
+ u64 result_u64;
+ u32 result_u32;
+ u16 result_u16;
+ u8 result_u8;
+ s64 result_s64;
+ s32 result_s32;
+ s16 result_s16;
+ s8 result_s8;
+ u32 reply[3];
+
+ /* retrieve results of a previous test */
+ if (!!results[FM10K_TEST_MSG_RESULT])
+ return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT],
+ &mbx->test_result);
+
+parse_nested:
+ if (!!results[FM10K_TEST_MSG_STRING]) {
+ err = fm10k_tlv_attr_get_null_string(
+ results[FM10K_TEST_MSG_STRING],
+ result_str);
+ if (!err && memcmp(test_str, result_str, sizeof(test_str)))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_MAC_ADDR]) {
+ err = fm10k_tlv_attr_get_mac_vlan(
+ results[FM10K_TEST_MSG_MAC_ADDR],
+ result_mac, &result_vlan);
+ if (!err && memcmp(test_mac, result_mac, ETH_ALEN))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (!err && test_vlan != result_vlan)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U8]) {
+ err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8],
+ &result_u8);
+ if (!err && test_u8 != result_u8)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U16]) {
+ err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16],
+ &result_u16);
+ if (!err && test_u16 != result_u16)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U32]) {
+ err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32],
+ &result_u32);
+ if (!err && test_u32 != result_u32)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_U64]) {
+ err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64],
+ &result_u64);
+ if (!err && test_u64 != result_u64)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S8]) {
+ err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8],
+ &result_s8);
+ if (!err && test_s8 != result_s8)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S16]) {
+ err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16],
+ &result_s16);
+ if (!err && test_s16 != result_s16)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S32]) {
+ err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32],
+ &result_s32);
+ if (!err && test_s32 != result_s32)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_S64]) {
+ err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64],
+ &result_s64);
+ if (!err && test_s64 != result_s64)
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+ if (!!results[FM10K_TEST_MSG_LE_STRUCT]) {
+ err = fm10k_tlv_attr_get_le_struct(
+ results[FM10K_TEST_MSG_LE_STRUCT],
+ result_le,
+ sizeof(result_le));
+ if (!err && memcmp(test_le, result_le, sizeof(test_le)))
+ err = FM10K_ERR_INVALID_VALUE;
+ if (err)
+ goto report_result;
+ }
+
+ if (!!results[FM10K_TEST_MSG_NESTED]) {
+ /* clear any pointers */
+ memset(nest_results, 0, sizeof(nest_results));
+
+ /* parse the nested attributes into the nest results list */
+ err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED],
+ nest_results,
+ fm10k_tlv_msg_test_attr);
+ if (err)
+ goto report_result;
+
+ /* loop back through to the start */
+ results = nest_results;
+ goto parse_nested;
+ }
+
+report_result:
+ /* generate reply with test result */
+ fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST);
+ fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, reply);
+}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
new file mode 100644
index 0000000..7e045e8
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
@@ -0,0 +1,186 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_TLV_H_
+#define _FM10K_TLV_H_
+
+/* forward declaration */
+struct fm10k_msg_data;
+
+#include "fm10k_type.h"
+
+/* Message / Argument header format
+ * 3 2 1 0
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Length | Flags | Type / ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * The message header format described here is used for messages that are
+ * passed between the PF and the VF. To allow for messages larger than the
+ * mailbox size we will provide a message with the above header and it
+ * will be segmented and transported through the mailbox to the other side
+ * where it is reassembled. It contains the following fields:
+ * Length: Length of the message in bytes excluding the message header
+ * Flags: TBD
+ * Type / ID: The message/argument type being passed
+ */
+/* message data header */
+#define FM10K_TLV_ID_SHIFT 0
+#define FM10K_TLV_ID_SIZE 16
+#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1)
+#define FM10K_TLV_FLAGS_SHIFT 16
+#define FM10K_TLV_FLAGS_MSG 0x1
+#define FM10K_TLV_FLAGS_SIZE 4
+#define FM10K_TLV_LEN_SHIFT 20
+#define FM10K_TLV_LEN_SIZE 12
+
+#define FM10K_TLV_HDR_LEN 4ul
+#define FM10K_TLV_LEN_ALIGN_MASK \
+ ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT)
+#define FM10K_TLV_LEN_ALIGN(tlv) \
+ (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK)
+#define FM10K_TLV_DWORD_LEN(tlv) \
+ ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1)
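+
+/* Worked example (illustrative, not from the original header): a MAC/VLAN
+ * attribute carries a 6 byte payload, so its header records a length of 6.
+ * FM10K_TLV_LEN_ALIGN() rounds that length up to 8 bytes, and
+ * FM10K_TLV_DWORD_LEN() then reports (8 / 4) + 1 = 3 DWORDs: one header
+ * DWORD plus two DWORDs of padded payload. The same macros are applied to
+ * message headers, whose length covers all attributes in the message.
+ */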
+
+#define FM10K_TLV_RESULTS_MAX 32
+
+enum fm10k_tlv_type {
+ FM10K_TLV_NULL_STRING,
+ FM10K_TLV_MAC_ADDR,
+ FM10K_TLV_BOOL,
+ FM10K_TLV_UNSIGNED,
+ FM10K_TLV_SIGNED,
+ FM10K_TLV_LE_STRUCT,
+ FM10K_TLV_NESTED,
+ FM10K_TLV_MAX_TYPE
+};
+
+#define FM10K_TLV_ERROR (~0u)
+
+struct fm10k_tlv_attr {
+ unsigned int id;
+ enum fm10k_tlv_type type;
+ u16 len;
+};
+
+#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len }
+#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 }
+#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 }
+#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 }
+#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 }
+#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 }
+#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 }
+#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 }
+#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 }
+#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 }
+#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 }
+#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len }
+#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED }
+#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR }
+
+struct fm10k_msg_data {
+ unsigned int id;
+ const struct fm10k_tlv_attr *attr;
+ s32 (*func)(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+};
+
+#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func }
+
+s32 fm10k_tlv_msg_init(u32 *, u16);
+s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *);
+s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *);
+s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16);
+s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *);
+s32 fm10k_tlv_attr_put_bool(u32 *, u16);
+s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32);
+#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 1)
+#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 2)
+#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 4)
+#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 8)
+#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 1)
+#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 2)
+#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 4)
+#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \
+ fm10k_tlv_attr_put_value(msg, attr_id, val, 8)
+s32 fm10k_tlv_attr_get_value(u32 *, void *, u32);
+#define fm10k_tlv_attr_get_u8(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8))
+#define fm10k_tlv_attr_get_u16(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16))
+#define fm10k_tlv_attr_get_u32(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32))
+#define fm10k_tlv_attr_get_u64(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64))
+#define fm10k_tlv_attr_get_s8(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8))
+#define fm10k_tlv_attr_get_s16(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16))
+#define fm10k_tlv_attr_get_s32(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32))
+#define fm10k_tlv_attr_get_s64(attr, ptr) \
+ fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64))
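+/* Minimal usage sketch (illustrative only; SOME_MSG_ID and SOME_ATTR_ID are
+ * placeholders, not real driver identifiers):
+ *
+ *	u32 msg[32], *results[FM10K_TLV_RESULTS_MAX];
+ *	u32 value;
+ *
+ *	fm10k_tlv_msg_init(msg, SOME_MSG_ID);
+ *	fm10k_tlv_attr_put_u32(msg, SOME_ATTR_ID, 0x12345678);
+ *
+ * Each put helper appends its attribute and grows the length recorded in
+ * the message header, so attributes can be added back to back. On the
+ * receive side, once fm10k_tlv_attr_parse() has filled in results[]:
+ *
+ *	if (results[SOME_ATTR_ID])
+ *		fm10k_tlv_attr_get_u32(results[SOME_ATTR_ID], &value);
+ */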
+s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32);
+s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32);
+u32 *fm10k_tlv_attr_nest_start(u32 *, u16);
+s32 fm10k_tlv_attr_nest_stop(u32 *);
+s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *);
+s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *,
+ const struct fm10k_msg_data *);
+s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *);
+
+#define FM10K_TLV_MSG_ID_TEST 0
+
+enum fm10k_tlv_test_attr_id {
+ FM10K_TEST_MSG_UNSET,
+ FM10K_TEST_MSG_STRING,
+ FM10K_TEST_MSG_MAC_ADDR,
+ FM10K_TEST_MSG_U8,
+ FM10K_TEST_MSG_U16,
+ FM10K_TEST_MSG_U32,
+ FM10K_TEST_MSG_U64,
+ FM10K_TEST_MSG_S8,
+ FM10K_TEST_MSG_S16,
+ FM10K_TEST_MSG_S32,
+ FM10K_TEST_MSG_S64,
+ FM10K_TEST_MSG_LE_STRUCT,
+ FM10K_TEST_MSG_NESTED,
+ FM10K_TEST_MSG_RESULT,
+ FM10K_TEST_MSG_MAX
+};
+
+extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[];
+void fm10k_tlv_msg_test_create(u32 *, u32);
+s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+
+#define FM10K_TLV_MSG_TEST_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func)
+#define FM10K_TLV_MSG_ERROR_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func)
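+
+/* A hypothetical handler table (shown for illustration only): entries are
+ * walked in ascending message ID order by fm10k_tlv_msg_parse() and the
+ * list must end with an FM10K_TLV_ERROR entry, e.g.
+ *
+ *	static const struct fm10k_msg_data example_msg_data[] = {
+ *		FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ *		FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+ *	};
+ */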
+#endif /* _FM10K_TLV_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_type.h b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
new file mode 100644
index 0000000..280296f
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
@@ -0,0 +1,770 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_TYPE_H_
+#define _FM10K_TYPE_H_
+
+/* forward declaration */
+struct fm10k_hw;
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <linux/etherdevice.h>
+
+#include "fm10k_mbx.h"
+
+#define FM10K_DEV_ID_PF 0x15A4
+#define FM10K_DEV_ID_VF 0x15A5
+
+#define FM10K_MAX_QUEUES 256
+#define FM10K_MAX_QUEUES_PF 128
+#define FM10K_MAX_QUEUES_POOL 16
+
+#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull
+#define FM10K_STAT_VALID 0x80000000
+
+/* PCI Bus Info */
+#define FM10K_PCIE_LINK_CAP 0x7C
+#define FM10K_PCIE_LINK_STATUS 0x82
+#define FM10K_PCIE_LINK_WIDTH 0x3F0
+#define FM10K_PCIE_LINK_WIDTH_1 0x10
+#define FM10K_PCIE_LINK_WIDTH_2 0x20
+#define FM10K_PCIE_LINK_WIDTH_4 0x40
+#define FM10K_PCIE_LINK_WIDTH_8 0x80
+#define FM10K_PCIE_LINK_SPEED 0xF
+#define FM10K_PCIE_LINK_SPEED_2500 0x1
+#define FM10K_PCIE_LINK_SPEED_5000 0x2
+#define FM10K_PCIE_LINK_SPEED_8000 0x3
+
+/* PCIe payload size */
+#define FM10K_PCIE_DEV_CAP 0x74
+#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01
+#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02
+#define FM10K_PCIE_DEV_CTRL 0x78
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20
+#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40
+
+/* PCIe MSI-X Capability info */
+#define FM10K_PCI_MSIX_MSG_CTRL 0xB2
+#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF
+#define FM10K_MAX_MSIX_VECTORS 256
+#define FM10K_MAX_VECTORS_PF 256
+#define FM10K_MAX_VECTORS_POOL 32
+
+/* PCIe SR-IOV Info */
+#define FM10K_PCIE_SRIOV_CTRL 0x190
+#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10
+
+#define FM10K_ERR_PARAM -2
+#define FM10K_ERR_REQUESTS_PENDING -4
+#define FM10K_ERR_RESET_REQUESTED -5
+#define FM10K_ERR_DMA_PENDING -6
+#define FM10K_ERR_RESET_FAILED -7
+#define FM10K_ERR_INVALID_MAC_ADDR -8
+#define FM10K_ERR_INVALID_VALUE -9
+#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF
+
+/* Start of PF registers */
+#define FM10K_CTRL 0x0000
+#define FM10K_CTRL_BAR4_ALLOWED 0x00000004
+
+#define FM10K_CTRL_EXT 0x0001
+#define FM10K_GCR 0x0003
+#define FM10K_GCR_EXT 0x0005
+
+/* Interrupt control registers */
+#define FM10K_EICR 0x0006
+#define FM10K_EICR_FAULT_MASK 0x0000003F
+#define FM10K_EICR_MAILBOX 0x00000040
+#define FM10K_EICR_SWITCHREADY 0x00000080
+#define FM10K_EICR_SWITCHNOTREADY 0x00000100
+#define FM10K_EICR_SWITCHINTERRUPT 0x00000200
+#define FM10K_EICR_VFLR 0x00000800
+#define FM10K_EICR_MAXHOLDTIME 0x00001000
+#define FM10K_EIMR 0x0007
+#define FM10K_EIMR_PCA_FAULT 0x00000001
+#define FM10K_EIMR_THI_FAULT 0x00000010
+#define FM10K_EIMR_FUM_FAULT 0x00000400
+#define FM10K_EIMR_MAILBOX 0x00001000
+#define FM10K_EIMR_SWITCHREADY 0x00004000
+#define FM10K_EIMR_SWITCHNOTREADY 0x00010000
+#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000
+#define FM10K_EIMR_SRAMERROR 0x00100000
+#define FM10K_EIMR_VFLR 0x00400000
+#define FM10K_EIMR_MAXHOLDTIME 0x01000000
+#define FM10K_EIMR_ALL 0x55555555
+#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0)
+#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1)
+#define FM10K_FAULT_ADDR_LO 0x0
+#define FM10K_FAULT_ADDR_HI 0x1
+#define FM10K_FAULT_SPECINFO 0x2
+#define FM10K_FAULT_FUNC 0x3
+#define FM10K_FAULT_SIZE 0x4
+#define FM10K_FAULT_FUNC_VALID 0x00008000
+#define FM10K_FAULT_FUNC_PF 0x00004000
+#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00
+#define FM10K_FAULT_FUNC_VF_SHIFT 8
+#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF
+
+#define FM10K_PCA_FAULT 0x0008
+#define FM10K_THI_FAULT 0x0010
+#define FM10K_FUM_FAULT 0x001C
+
+/* Rx queue timeout indicator */
+#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020)
+
+/* Switch Manager info */
+#define FM10K_SM_AREA(_n) ((_n) + 0x0028)
+
+/* GLORT mapping registers */
+#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030)
+#define FM10K_DGLORT_COUNT 8
+#define FM10K_DGLORTMAP_MASK_SHIFT 16
+#define FM10K_DGLORTMAP_ANY 0x00000000
+#define FM10K_DGLORTMAP_NONE 0x0000FFFF
+#define FM10K_DGLORTMAP_ZERO 0xFFFF0000
+#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038)
+#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4
+#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7
+#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14
+#define FM10K_DGLORTDEC_QBASE_SHIFT 16
+#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24
+#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000
+#define FM10K_TUNNEL_CFG 0x0040
+#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16
+#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050)
+#define FM10K_SWPRI_MAX 16
+#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800)
+#define FM10K_RSSRK_SIZE 10
+#define FM10K_RSSRK_ENTRIES_PER_REG 4
+#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000)
+#define FM10K_RETA_SIZE 32
+#define FM10K_RETA_ENTRIES_PER_REG 4
+#define FM10K_MAX_RSS_INDICES 128
+
+/* Rate limiting registers */
+#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000)
+#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF
+#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040)
+#define FM10K_TC_MAXCREDIT_64K 0x00010000
+#define FM10K_TC_RATE(_n) ((_n) + 0x2080)
+#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF
+#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000
+#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000
+#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000
+
+/* DMA control registers */
+#define FM10K_DMA_CTRL 0x20C3
+#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001
+#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008
+#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010
+#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080
+#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100
+#define FM10K_DMA_CTRL_MINMSS_64 0x00008000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000
+#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000
+#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000
+#define FM10K_DMA_CTRL_32_DESC 0x00000000
+
+#define FM10K_DMA_CTRL2 0x20C4
+#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000
+
+/* TSO flags configuration
+ * First packet contains all flags except for fin and psh
+ * Middle packet contains only urg and ack
+ * Last packet contains urg, ack, fin, and psh
+ */
+#define FM10K_TSO_FLAGS_LOW 0x00300FF6
+#define FM10K_TSO_FLAGS_HI 0x00000039
+#define FM10K_DTXTCPFLGL 0x20C5
+#define FM10K_DTXTCPFLGH 0x20C6
+
+#define FM10K_TPH_CTRL 0x20C7
+#define FM10K_MRQC(_n) ((_n) + 0x2100)
+#define FM10K_MRQC_TCP_IPV4 0x00000001
+#define FM10K_MRQC_IPV4 0x00000002
+#define FM10K_MRQC_IPV6 0x00000010
+#define FM10K_MRQC_TCP_IPV6 0x00000020
+#define FM10K_MRQC_UDP_IPV4 0x00000040
+#define FM10K_MRQC_UDP_IPV6 0x00000080
+
+#define FM10K_TQMAP(_n) ((_n) + 0x2800)
+#define FM10K_TQMAP_TABLE_SIZE 2048
+#define FM10K_RQMAP(_n) ((_n) + 0x3000)
+
+/* Hardware Statistics */
+#define FM10K_STATS_TIMEOUT 0x3800
+#define FM10K_STATS_UR 0x3801
+#define FM10K_STATS_CA 0x3802
+#define FM10K_STATS_UM 0x3803
+#define FM10K_STATS_XEC 0x3804
+#define FM10K_STATS_VLAN_DROP 0x3805
+#define FM10K_STATS_LOOPBACK_DROP 0x3806
+#define FM10K_STATS_NODESC_DROP 0x3807
+
+/* Timesync registers */
+#define FM10K_SYSTIME 0x3814
+#define FM10K_SYSTIME_CFG 0x3818
+#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F
+
+/* PCIe state registers */
+#define FM10K_PHYADDR 0x381C
+
+/* Rx ring registers */
+#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000)
+#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001)
+#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002)
+#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003)
+#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020
+#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200
+#define FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000
+#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000
+#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004)
+#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005)
+#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006)
+#define FM10K_RXQCTL_ENABLE 0x00000001
+#define FM10K_RXQCTL_PF 0x000000FC
+#define FM10K_RXQCTL_VF_SHIFT 2
+#define FM10K_RXQCTL_VF 0x00000100
+#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF)
+#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007)
+#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001
+#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200
+#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008)
+#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009)
+#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */
+#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000
+#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000
+
+/* Rx Statistics */
+#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A)
+#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B)
+#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C)
+#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D)
+
+/* Rx GLORT register */
+#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E)
+
+/* Tx ring registers */
+#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000)
+#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001)
+#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002)
+#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003)
+#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020
+#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200
+#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800
+#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000
+#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004)
+#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005)
+#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006)
+#define FM10K_TXDCTL_ENABLE 0x00004000
+#define FM10K_TXDCTL_MAX_TIME_SHIFT 16
+#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007)
+#define FM10K_TXQCTL_PF 0x0000003F
+#define FM10K_TXQCTL_VF 0x00000040
+#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF)
+#define FM10K_TXQCTL_PC_SHIFT 7
+#define FM10K_TXQCTL_PC_MASK 0x00000380
+#define FM10K_TXQCTL_TC_SHIFT 10
+#define FM10K_TXQCTL_VID_SHIFT 16
+#define FM10K_TXQCTL_VID_MASK 0x0FFF0000
+#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000
+#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008)
+
+/* Tx Statistics */
+#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009)
+#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A)
+#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B)
+
+/* Tx Push registers */
+#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 0x800C)
+#define FM10K_TQDLOC_BASE_32_DESC 0x08
+#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000
+
+/* Tx GLORT registers */
+#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D)
+#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E)
+#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001
+
+/* Interrupt moderation and control registers */
+#define FM10K_INT_MAP(_n) ((_n) + 0x10080)
+#define FM10K_INT_MAP_TIMER0 0x00000000
+#define FM10K_INT_MAP_TIMER1 0x00000100
+#define FM10K_INT_MAP_IMMEDIATE 0x00000200
+#define FM10K_INT_MAP_DISABLE 0x00000300
+#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003)
+#define FM10K_INT_CTRL 0x12000
+#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400
+#define FM10K_ITR(_n) ((_n) + 0x12400)
+#define FM10K_ITR_INTERVAL1_SHIFT 12
+#define FM10K_ITR_PENDING2 0x10000000
+#define FM10K_ITR_AUTOMASK 0x20000000
+#define FM10K_ITR_MASK_SET 0x40000000
+#define FM10K_ITR_MASK_CLEAR 0x80000000
+#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800)
+#define FM10K_ITR_REG_COUNT 768
+#define FM10K_ITR_REG_COUNT_PF 256
+
+/* Switch manager interrupt registers */
+#define FM10K_IP 0x13000
+#define FM10K_IP_NOTINRESET 0x00000100
+
+/* VLAN registers */
+#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000)
+#define FM10K_VLAN_TABLE_SIZE 128
+
+/* VLAN specific message offsets */
+#define FM10K_VLAN_TABLE_VID_MAX 4096
+#define FM10K_VLAN_TABLE_VSI_MAX 64
+#define FM10K_VLAN_LENGTH_SHIFT 16
+#define FM10K_VLAN_CLEAR (1 << 15)
+#define FM10K_VLAN_ALL \
+ ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT)
+
+/* VF FLR event notification registers */
+#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844)
+#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846)
+
+/* Defines for size of uncacheable memories */
+#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */
+#define FM10K_UC_ADDR_END 0x100000 /* end of standard regs */
+#define FM10K_UC_ADDR_SIZE (FM10K_UC_ADDR_END - FM10K_UC_ADDR_START)
+
+/* Define timeouts for resets and disables */
+#define FM10K_QUEUE_DISABLE_TIMEOUT 100
+#define FM10K_RESET_TIMEOUT 100
+
+/* VF registers */
+#define FM10K_VFCTRL 0x00000
+#define FM10K_VFCTRL_RST 0x00000008
+#define FM10K_VFINT_MAP 0x00030
+#define FM10K_VFSYSTIME 0x00040
+#define FM10K_VFITR(_n) ((_n) + 0x00060)
+
+/* Registers contained in BAR 4 for Switch management */
+#define FM10K_SW_SYSTIME_ADJUST 0x0224D
+#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF
+#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000
+#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252)
+
+enum fm10k_int_source {
+ fm10k_int_Mailbox = 0,
+ fm10k_int_PCIeFault = 1,
+ fm10k_int_SwitchUpDown = 2,
+ fm10k_int_SwitchEvent = 3,
+ fm10k_int_SRAM = 4,
+ fm10k_int_VFLR = 5,
+ fm10k_int_MaxHoldTime = 6,
+ fm10k_int_sources_max_pf
+};
+
+/* PCIe bus speeds */
+enum fm10k_bus_speed {
+ fm10k_bus_speed_unknown = 0,
+ fm10k_bus_speed_2500 = 2500,
+ fm10k_bus_speed_5000 = 5000,
+ fm10k_bus_speed_8000 = 8000,
+ fm10k_bus_speed_reserved
+};
+
+/* PCIe bus widths */
+enum fm10k_bus_width {
+ fm10k_bus_width_unknown = 0,
+ fm10k_bus_width_pcie_x1 = 1,
+ fm10k_bus_width_pcie_x2 = 2,
+ fm10k_bus_width_pcie_x4 = 4,
+ fm10k_bus_width_pcie_x8 = 8,
+ fm10k_bus_width_reserved
+};
+
+/* PCIe payload sizes */
+enum fm10k_bus_payload {
+ fm10k_bus_payload_unknown = 0,
+ fm10k_bus_payload_128 = 1,
+ fm10k_bus_payload_256 = 2,
+ fm10k_bus_payload_512 = 3,
+ fm10k_bus_payload_reserved
+};
+
+/* Bus parameters */
+struct fm10k_bus_info {
+ enum fm10k_bus_speed speed;
+ enum fm10k_bus_width width;
+ enum fm10k_bus_payload payload;
+};
+
+/* Statistics related declarations */
+struct fm10k_hw_stat {
+ u64 count;
+ u32 base_l;
+ u32 base_h;
+};
+
+struct fm10k_hw_stats_q {
+ struct fm10k_hw_stat tx_bytes;
+ struct fm10k_hw_stat tx_packets;
+#define tx_stats_idx tx_packets.base_h
+ struct fm10k_hw_stat rx_bytes;
+ struct fm10k_hw_stat rx_packets;
+#define rx_stats_idx rx_packets.base_h
+ struct fm10k_hw_stat rx_drops;
+};
+
+struct fm10k_hw_stats {
+ struct fm10k_hw_stat timeout;
+#define stats_idx timeout.base_h
+ struct fm10k_hw_stat ur;
+ struct fm10k_hw_stat ca;
+ struct fm10k_hw_stat um;
+ struct fm10k_hw_stat xec;
+ struct fm10k_hw_stat vlan_drop;
+ struct fm10k_hw_stat loopback_drop;
+ struct fm10k_hw_stat nodesc_drop;
+ struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF];
+};
+
+/* Establish DGLORT feature priority */
+enum fm10k_dglortdec_idx {
+ fm10k_dglort_default = 0,
+ fm10k_dglort_vf_rsvd0 = 1,
+ fm10k_dglort_vf_rss = 2,
+ fm10k_dglort_pf_rsvd0 = 3,
+ fm10k_dglort_pf_queue = 4,
+ fm10k_dglort_pf_vsi = 5,
+ fm10k_dglort_pf_rsvd1 = 6,
+ fm10k_dglort_pf_rss = 7
+};
+
+struct fm10k_dglort_cfg {
+ u16 glort; /* GLORT base */
+ u16 queue_b; /* Base value for queue */
+ u8 vsi_b; /* Base value for VSI */
+ u8 idx; /* index of DGLORTDEC entry */
+ u8 rss_l; /* RSS indices */
+ u8 pc_l; /* Priority Class indices */
+ u8 vsi_l; /* Number of bits from GLORT used to determine VSI */
+ u8 queue_l; /* Number of bits from GLORT used to determine queue */
+ u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */
+ u8 inner_rss; /* Boolean value if inner header is used for RSS */
+};
+
+enum fm10k_pca_fault {
+ PCA_NO_FAULT,
+ PCA_UNMAPPED_ADDR,
+ PCA_BAD_QACCESS_PF,
+ PCA_BAD_QACCESS_VF,
+ PCA_MALICIOUS_REQ,
+ PCA_POISONED_TLP,
+ PCA_TLP_ABORT,
+ __PCA_MAX
+};
+
+enum fm10k_thi_fault {
+ THI_NO_FAULT,
+ THI_MAL_DIS_Q_FAULT,
+ __THI_MAX
+};
+
+enum fm10k_fum_fault {
+ FUM_NO_FAULT,
+ FUM_UNMAPPED_ADDR,
+ FUM_POISONED_TLP,
+ FUM_BAD_VF_QACCESS,
+ FUM_ADD_DECODE_ERR,
+ FUM_RO_ERROR,
+ FUM_QPRC_CRC_ERROR,
+ FUM_CSR_TIMEOUT,
+ FUM_INVALID_TYPE,
+ FUM_INVALID_LENGTH,
+ FUM_INVALID_BE,
+ FUM_INVALID_ALIGN,
+ __FUM_MAX
+};
+
+struct fm10k_fault {
+ u64 address; /* Address at the time fault was detected */
+ u32 specinfo; /* Extra info on this fault (fault dependent) */
+ u8 type; /* Fault value dependent on subunit */
+ u8 func; /* Function number of the fault */
+};
+
+struct fm10k_mac_ops {
+ /* basic bring-up and tear-down */
+ s32 (*reset_hw)(struct fm10k_hw *);
+ s32 (*init_hw)(struct fm10k_hw *);
+ s32 (*start_hw)(struct fm10k_hw *);
+ s32 (*stop_hw)(struct fm10k_hw *);
+ s32 (*get_bus_info)(struct fm10k_hw *);
+ s32 (*get_host_state)(struct fm10k_hw *, bool *);
+ bool (*is_slot_appropriate)(struct fm10k_hw *);
+ s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool);
+ s32 (*read_mac_addr)(struct fm10k_hw *);
+ s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *,
+ u16, bool, u8);
+ s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool);
+ s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8);
+ void (*update_int_moderator)(struct fm10k_hw *);
+ s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool);
+ void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *);
+ void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *);
+ s32 (*configure_dglort_map)(struct fm10k_hw *,
+ struct fm10k_dglort_cfg *);
+ void (*set_dma_mask)(struct fm10k_hw *, u64);
+ s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *);
+ void (*request_lport_map)(struct fm10k_hw *);
+ s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb);
+ u64 (*read_systime)(struct fm10k_hw *);
+};
+
+enum fm10k_mac_type {
+ fm10k_mac_unknown = 0,
+ fm10k_mac_pf,
+ fm10k_mac_vf,
+ fm10k_num_macs
+};
+
+struct fm10k_mac_info {
+ struct fm10k_mac_ops ops;
+ enum fm10k_mac_type type;
+ u8 addr[ETH_ALEN];
+ u8 perm_addr[ETH_ALEN];
+ u16 default_vid;
+ u16 max_msix_vectors;
+ u16 max_queues;
+ bool vlan_override;
+ bool get_host_state;
+ bool tx_ready;
+ u32 dglort_map;
+};
+
+struct fm10k_swapi_table_info {
+ u32 used;
+ u32 avail;
+};
+
+struct fm10k_swapi_info {
+ u32 status;
+ struct fm10k_swapi_table_info mac;
+ struct fm10k_swapi_table_info nexthop;
+ struct fm10k_swapi_table_info ffu;
+};
+
+enum fm10k_xcast_modes {
+ FM10K_XCAST_MODE_ALLMULTI = 0,
+ FM10K_XCAST_MODE_MULTI = 1,
+ FM10K_XCAST_MODE_PROMISC = 2,
+ FM10K_XCAST_MODE_NONE = 3,
+ FM10K_XCAST_MODE_DISABLE = 4
+};
+
+#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */
+#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */
+
+struct fm10k_vf_info {
+ /* mbx must be the first field in the struct unless all default IOV
+ * message handlers are redone, as the assumption is that vf_info starts
+ * at the same offset as the mailbox
+ */
+ struct fm10k_mbx_info mbx; /* PF side of VF mailbox */
+ int rate; /* Tx BW cap as defined by OS */
+ u16 glort; /* resource tag for this VF */
+ u16 sw_vid; /* Switch API assigned VLAN */
+ u16 pf_vid; /* PF assigned Default VLAN */
+ u8 mac[ETH_ALEN]; /* PF Default MAC address */
+ u8 vsi; /* VSI identifier */
+ u8 vf_idx; /* which VF this is */
+ u8 vf_flags; /* flags indicating what modes
+ * are supported for the port
+ */
+};
+
+#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI)
+#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI)
+#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC)
+#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE)
+#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF)
+#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4)
+#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode))
+#define FM10K_VF_FLAG_SET_MODE_NONE \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE)
+#define FM10K_VF_FLAG_MULTI_ENABLED \
+ (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \
+ FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC))
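+
+/* Illustrative example (editorial, not from the original source): the low
+ * nibble of vf_flags records which xcast modes the VF is capable of and the
+ * high nibble records the mode currently in effect. A VF with
+ * vf_flags == 0x2F is capable of all four modes (0x0F) and is currently in
+ * FM10K_XCAST_MODE_MULTI, since
+ * FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) == 0x20.
+ */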
+
+struct fm10k_iov_ops {
+ /* IOV related bring-up and tear-down */
+ s32 (*assign_resources)(struct fm10k_hw *, u16, u16);
+ s32 (*configure_tc)(struct fm10k_hw *, u16, int);
+ s32 (*assign_int_moderator)(struct fm10k_hw *, u16);
+ s32 (*assign_default_mac_vlan)(struct fm10k_hw *,
+ struct fm10k_vf_info *);
+ s32 (*reset_resources)(struct fm10k_hw *,
+ struct fm10k_vf_info *);
+ s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8);
+ void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *);
+ void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16);
+ s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64);
+};
+
+struct fm10k_iov_info {
+ struct fm10k_iov_ops ops;
+ u16 total_vfs;
+ u16 num_vfs;
+ u16 num_pools;
+};
+
+enum fm10k_devices {
+ fm10k_device_pf,
+ fm10k_device_vf,
+};
+
+struct fm10k_info {
+ enum fm10k_mac_type mac;
+ s32 (*get_invariants)(struct fm10k_hw *);
+ struct fm10k_mac_ops *mac_ops;
+ struct fm10k_iov_ops *iov_ops;
+};
+
+struct fm10k_hw {
+ u32 __iomem *hw_addr;
+ u32 __iomem *sw_addr;
+ void *back;
+ struct fm10k_mac_info mac;
+ struct fm10k_bus_info bus;
+ struct fm10k_bus_info bus_caps;
+ struct fm10k_iov_info iov;
+ struct fm10k_mbx_info mbx;
+ struct fm10k_swapi_info swapi;
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+};
+
+/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8
+#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8
+
+/* Transmit Descriptor */
+struct fm10k_tx_desc {
+ __le64 buffer_addr; /* Address of the descriptor's data buffer */
+ __le16 buflen; /* Length of data to be DMAed */
+ __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */
+ __le16 mss; /* MSS for segmentation offload */
+ u8 hdrlen; /* Header size for segmentation offload */
+ u8 flags; /* Status and offload request flags */
+};
+
+/* Transmit Descriptor Cache Structure */
+struct fm10k_tx_desc_cache {
+ struct fm10k_tx_desc tx_desc[256];
+};
+
+#define FM10K_TXD_FLAG_INT 0x01
+#define FM10K_TXD_FLAG_TIME 0x02
+#define FM10K_TXD_FLAG_CSUM 0x04
+#define FM10K_TXD_FLAG_FTAG 0x10
+#define FM10K_TXD_FLAG_RS 0x20
+#define FM10K_TXD_FLAG_LAST 0x40
+#define FM10K_TXD_FLAG_DONE 0x80
+
+/* These macros are meant to enable optimal placement of the RS and INT
+ * bits. It will point us to the last descriptor in the cache for either the
+ * start of the packet, or the end of the packet. If the index is actually
+ * at the start of the FIFO it will point to the offset for the last index
+ * in the FIFO to prevent an unnecessary write.
+ */
+#define FM10K_TXD_WB_FIFO_SIZE 4
+
+/* Receive Descriptor - 32B */
+union fm10k_rx_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 reserved; /* Empty space, RSS hash */
+ __le64 timestamp;
+ } q; /* Read, Writeback, 64b quad-words */
+ struct {
+ __le32 data; /* RSS and header data */
+ __le32 rss; /* RSS Hash */
+ __le32 staterr;
+ __le32 vlan_len;
+ __le32 glort; /* sglort/dglort */
+ } d; /* Writeback, 32b double-words */
+ struct {
+ __le16 pkt_info; /* RSS, Pkt type */
+ __le16 hdr_info; /* Splithdr, hdrlen, xC */
+ __le16 rss_lower;
+ __le16 rss_upper;
+ __le16 status; /* status/error */
+ __le16 csum_err; /* checksum or extended error value */
+ __le16 length; /* Packet length */
+ __le16 vlan; /* VLAN tag */
+ __le16 dglort;
+ __le16 sglort;
+ } w; /* Writeback, 16b words */
+};
+
+#define FM10K_RXD_RSSTYPE_MASK 0x000F
+enum fm10k_rdesc_rss_type {
+ FM10K_RSSTYPE_NONE = 0x0,
+ FM10K_RSSTYPE_IPV4_TCP = 0x1,
+ FM10K_RSSTYPE_IPV4 = 0x2,
+ FM10K_RSSTYPE_IPV6_TCP = 0x3,
+ /* Reserved 0x4 */
+ FM10K_RSSTYPE_IPV6 = 0x5,
+ /* Reserved 0x6 */
+ FM10K_RSSTYPE_IPV4_UDP = 0x7,
+ FM10K_RSSTYPE_IPV6_UDP = 0x8
+ /* Reserved 0x9 - 0xF */
+};
+
+#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006
+enum fm10k_rxdesc_xc {
+ FM10K_XC_UNICAST = 0x0,
+ FM10K_XC_MULTICAST = 0x4,
+ FM10K_XC_BROADCAST = 0x6
+};
+
+#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */
+#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */
+#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */
+#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */
+#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */
+#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */
+#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */
+#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */
+#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */
+
+struct fm10k_ftag {
+ __be16 swpri_type_user;
+ __be16 vlan;
+ __be16 sglort;
+ __be16 dglort;
+};
+
+#endif /* _FM10K_TYPE_H_ */
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
new file mode 100644
index 0000000..f0aa0f9
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
@@ -0,0 +1,578 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "fm10k_vf.h"
+
+/**
+ * fm10k_stop_hw_vf - Stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ **/
+static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
+{
+ u8 *perm_addr = hw->mac.perm_addr;
+ u32 bal = 0, bah = 0;
+ s32 err;
+ u16 i;
+
+ /* we need to disable the queues before taking further steps */
+ err = fm10k_stop_hw_generic(hw);
+ if (err)
+ return err;
+
+ /* If the permanent address is set then we need to restore it */
+ if (is_valid_ether_addr(perm_addr)) {
+ bal = (((u32)perm_addr[3]) << 24) |
+ (((u32)perm_addr[4]) << 16) |
+ (((u32)perm_addr[5]) << 8);
+ bah = (((u32)0xFF) << 24) |
+ (((u32)perm_addr[0]) << 16) |
+ (((u32)perm_addr[1]) << 8) |
+ ((u32)perm_addr[2]);
+ }
+
+ /* The queues have already been disabled so we just need to
+ * update their base address registers
+ */
+ for (i = 0; i < hw->mac.max_queues; i++) {
+ fm10k_write_reg(hw, FM10K_TDBAL(i), bal);
+ fm10k_write_reg(hw, FM10K_TDBAH(i), bah);
+ fm10k_write_reg(hw, FM10K_RDBAL(i), bal);
+ fm10k_write_reg(hw, FM10K_RDBAH(i), bah);
+ }
+
+ return 0;
+}
+
+/**
+ * fm10k_reset_hw_vf - VF hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * This function should return the hardware to a state similar to the
+ * one it is in after just being initialized.
+ **/
+static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
+{
+ s32 err;
+
+ /* shut down queues we own and reset DMA configuration */
+ err = fm10k_stop_hw_vf(hw);
+ if (err)
+ return err;
+
+ /* Initiate VF reset */
+ fm10k_write_reg(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST);
+
+ /* Flush write and allow 100us for reset to complete */
+ fm10k_write_flush(hw);
+ udelay(FM10K_RESET_TIMEOUT);
+
+ /* Clear reset bit and verify it was cleared */
+ fm10k_write_reg(hw, FM10K_VFCTRL, 0);
+ if (fm10k_read_reg(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST)
+ err = FM10K_ERR_RESET_FAILED;
+
+ return err;
+}
+
+/**
+ * fm10k_init_hw_vf - VF hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ **/
+static s32 fm10k_init_hw_vf(struct fm10k_hw *hw)
+{
+ u32 tqdloc, tqdloc0 = ~fm10k_read_reg(hw, FM10K_TQDLOC(0));
+ s32 err;
+ u16 i;
+
+ /* assume we always have at least 1 queue */
+ for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) {
+ /* verify the Descriptor cache offsets are increasing */
+ tqdloc = ~fm10k_read_reg(hw, FM10K_TQDLOC(i));
+ if (!tqdloc || (tqdloc == tqdloc0))
+ break;
+
+ /* check to verify the PF doesn't own any of our queues */
+ if (!~fm10k_read_reg(hw, FM10K_TXQCTL(i)) ||
+ !~fm10k_read_reg(hw, FM10K_RXQCTL(i)))
+ break;
+ }
+
+ /* shut down queues we own and reset DMA configuration */
+ err = fm10k_disable_queues_generic(hw, i);
+ if (err)
+ return err;
+
+ /* record maximum queue count */
+ hw->mac.max_queues = i;
+
+ return 0;
+}
+
+/**
+ * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU
+ * @hw: pointer to hardware structure
+ *
+ * Looks at the PCIe bus info to confirm whether or not this slot can support
+ * the necessary bandwidth for this device. Since the VF has no control over
+ * the "slot" it is in, always indicate that the slot is appropriate.
+ **/
+static bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw)
+{
+ return true;
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = {
+ FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN),
+ FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC),
+ FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vid: VLAN ID to add to table
+ * @vsi: Reserved, should always be 0
+ * @set: Indicates if this is a set or clear operation
+ *
+ * This function adds or removes the corresponding VLAN ID from the VLAN
+ * filter table for this VF.
+ **/
+static s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[4];
+
+ /* verify the index is not set */
+ if (vsi)
+ return FM10K_ERR_PARAM;
+
+ /* verify upper 4 bits of vid and length are 0 */
+ if ((vid << 16 | vid) >> 28)
+ return FM10K_ERR_PARAM;
+
+ /* encode set bit into the VLAN ID */
+ if (!set)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message
+ * @hw: pointer to the HW structure
+ * @results: Attributes for message
+ * @mbx: unused mailbox data
+ *
+ * This handler records the default MAC address and VLAN provided by the PF
+ **/
+s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ u8 perm_addr[ETH_ALEN];
+ u16 vid;
+ s32 err;
+
+ /* record MAC address requested */
+ err = fm10k_tlv_attr_get_mac_vlan(
+ results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC],
+ perm_addr, &vid);
+ if (err)
+ return err;
+
+ ether_addr_copy(hw->mac.perm_addr, perm_addr);
+ hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1);
+ hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR);
+
+ return 0;
+}
+
+/**
+ * fm10k_read_mac_addr_vf - Read device MAC address
+ * @hw: pointer to the HW structure
+ *
+ * This function should determine the MAC address for the VF
+ **/
+static s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw)
+{
+ u8 perm_addr[ETH_ALEN];
+ u32 base_addr;
+
+ base_addr = fm10k_read_reg(hw, FM10K_TDBAL(0));
+
+ /* last byte should be 0 */
+ if (base_addr << 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[3] = (u8)(base_addr >> 24);
+ perm_addr[4] = (u8)(base_addr >> 16);
+ perm_addr[5] = (u8)(base_addr >> 8);
+
+ base_addr = fm10k_read_reg(hw, FM10K_TDBAH(0));
+
+ /* first byte should be all 1's */
+ if ((~base_addr) >> 24)
+ return FM10K_ERR_INVALID_MAC_ADDR;
+
+ perm_addr[0] = (u8)(base_addr >> 16);
+ perm_addr[1] = (u8)(base_addr >> 8);
+ perm_addr[2] = (u8)(base_addr);
+
+ ether_addr_copy(hw->mac.perm_addr, perm_addr);
+ ether_addr_copy(hw->mac.addr, perm_addr);
+
+ return 0;
+}
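fm10k_read_mac_addr_vf() recovers the permanent MAC that fm10k_stop_hw_vf() parked in the first queue's base address registers: TDBAL carries bytes 3-5 in its upper 24 bits with the low byte left zero, and TDBAH carries an 0xFF marker in the top byte followed by bytes 0-2. A hedged sketch of that encoding in one place, mirroring what the two functions write and expect (the helper name is hypothetical):

static void fm10k_example_pack_mac(const u8 *mac, u32 *bal, u32 *bah)
{
	/* bytes 3-5 land in TDBAL, leaving the low byte clear */
	*bal = ((u32)mac[3] << 24) | ((u32)mac[4] << 16) | ((u32)mac[5] << 8);

	/* 0xFF marker plus bytes 0-2 land in TDBAH */
	*bah = (0xFFu << 24) | ((u32)mac[0] << 16) |
	       ((u32)mac[1] << 8) | (u32)mac[2];
}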
+
+/**
+ * fm10k_update_uc_addr_vf - Update device unicast address
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ * @flags: flags field to indicate add and secure - unused
+ *
+ * This function is used to add or remove unicast MAC addresses for
+ * the VF.
+ **/
+static s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add, u8 flags)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[7];
+
+ /* verify VLAN ID is valid */
+ if (vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* verify MAC address is valid */
+ if (!is_valid_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ /* verify we are not locked down on the MAC address */
+ if (is_valid_ether_addr(hw->mac.perm_addr) &&
+ memcmp(hw->mac.perm_addr, mac, ETH_ALEN))
+ return FM10K_ERR_PARAM;
+
+ /* add bit to notify us if this is a set or clear operation */
+ if (!add)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_mc_addr_vf - Update device multicast address
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @mac: MAC address to add/remove from table
+ * @vid: VLAN ID to add/remove from table
+ * @add: Indicates if this is an add or remove operation
+ *
+ * This function is used to add or remove multicast MAC addresses for
+ * the VF.
+ **/
+static s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort,
+ const u8 *mac, u16 vid, bool add)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[7];
+
+ /* verify VLAN ID is valid */
+ if (vid >= FM10K_VLAN_TABLE_VID_MAX)
+ return FM10K_ERR_PARAM;
+
+ /* verify multicast address is valid */
+ if (!is_multicast_ether_addr(mac))
+ return FM10K_ERR_PARAM;
+
+ /* add bit to notify us if this is a set or clear operation */
+ if (!add)
+ vid |= FM10K_VLAN_CLEAR;
+
+ /* generate VLAN request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN);
+ fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST,
+ mac, vid);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_int_moderator_vf - Request update of interrupt moderator list
+ * @hw: pointer to hardware structure
+ *
+ * This function will issue a request to the PF to rescan our MSI-X table
+ * and to update the interrupt moderator linked list.
+ **/
+static void fm10k_update_int_moderator_vf(struct fm10k_hw *hw)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[1];
+
+ /* generate MSI-X request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX);
+
+ /* load onto outgoing mailbox */
+ mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/* This structure defines the attributes to be parsed below */
+const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = {
+ FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE),
+ FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE),
+ FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY),
+ FM10K_TLV_ATTR_LAST
+};
+
+/**
+ * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF
+ * @hw: Pointer to hardware structure
+ * @results: pointer array containing parsed data
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This handler is meant to capture the indication from the PF that we
+ * are ready to bring up the interface.
+ **/
+s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ?
+ FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO;
+
+ return 0;
+}
+
+/**
+ * fm10k_update_lport_state_vf - Update device state in lower device
+ * @hw: pointer to the HW structure
+ * @glort: unused
+ * @count: number of logical ports to enable - unused (always 1)
+ * @enable: boolean value indicating if this is an enable or disable request
+ *
+ * Notify the lower device of a state change. If the lower device is
+ * enabled we can add filters, if it is disabled all filters for this
+ * logical port are flushed.
+ **/
+static s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort,
+ u16 count, bool enable)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[2];
+
+ /* reset the glort map as we have to wait to be enabled */
+ hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
+
+ /* generate port state request */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ if (!enable)
+ fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+/**
+ * fm10k_update_xcast_mode_vf - Request update of multicast mode
+ * @hw: pointer to hardware structure
+ * @glort: unused
+ * @mode: integer value indicating mode being requested
+ *
+ * This function will attempt to request a higher mode for the port
+ * so that it can enable either multicast, multicast promiscuous, or
+ * promiscuous mode of operation.
+ **/
+static s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode)
+{
+ struct fm10k_mbx_info *mbx = &hw->mbx;
+ u32 msg[3];
+
+ if (mode > FM10K_XCAST_MODE_NONE)
+ return FM10K_ERR_PARAM;
+ /* generate message requesting to change xcast mode */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE);
+ fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode);
+
+ /* load onto outgoing mailbox */
+ return mbx->ops.enqueue_tx(hw, mbx, msg);
+}
+
+const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = {
+ FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP),
+ FM10K_TLV_ATTR_LAST
+};
+
+/* currently there is no shared 1588 timestamp handler */
+
+/**
+ * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to statistics structure
+ *
+ * This function collects and aggregates per queue hardware statistics.
+ **/
+static void fm10k_update_hw_stats_vf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues);
+}
+
+/**
+ * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF
+ * @hw: pointer to hardware structure
+ * @stats: pointer to the stats structure to update
+ *
+ * This function resets the base for queue hardware statistics.
+ **/
+static void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats *stats)
+{
+ /* Unbind Queue Statistics */
+ fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues);
+
+ /* Reinitialize bases for all stats */
+ fm10k_update_hw_stats_vf(hw, stats);
+}
+
+/**
+ * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues
+ * @hw: pointer to hardware structure
+ * @dglort: pointer to dglort configuration structure
+ *
+ * Reads the configuration structure contained in dglort_cfg and uses
+ * that information to then populate a DGLORTMAP/DEC entry and the queues
+ * to which it has been assigned.
+ **/
+static s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw,
+ struct fm10k_dglort_cfg *dglort)
+{
+ /* verify the dglort pointer */
+ if (!dglort)
+ return FM10K_ERR_PARAM;
+
+ /* stub for now until we determine correct message for this */
+
+ return 0;
+}
+
+/**
+ * fm10k_adjust_systime_vf - Adjust systime frequency
+ * @hw: pointer to hardware structure
+ * @ppb: adjustment rate in parts per billion
+ *
+ * This function takes an adjustment rate in parts per billion and will
+ * verify that this value is 0 as the VF cannot support adjusting the
+ * systime clock.
+ *
+ * If the ppb value is non-zero the return is ERR_PARAM else success
+ **/
+static s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb)
+{
+ /* The VF cannot adjust the clock frequency, however it should
+ * already have a syntonic clock with whichever host interface is
+ * running as the master for the host interface clock domain so
+ * no frequency adjustment should be necessary.
+ */
+ return ppb ? FM10K_ERR_PARAM : 0;
+}
+
+/**
+ * fm10k_read_systime_vf - Reads value of systime registers
+ * @hw: pointer to the hardware structure
+ *
+ * Function reads the content of 2 registers, combined to represent a 64 bit
+ * value measured in nanoseconds. In order to guarantee the value is accurate
+ * we check the 32 most significant bits both before and after reading the
+ * 32 least significant bits to verify they didn't change as we were reading
+ * the registers.
+ **/
+static u64 fm10k_read_systime_vf(struct fm10k_hw *hw)
+{
+ u32 systime_l, systime_h, systime_tmp;
+
+ systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
+
+ do {
+ systime_tmp = systime_h;
+ systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME);
+ systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
+ } while (systime_tmp != systime_h);
+
+ return ((u64)systime_h << 32) | systime_l;
+}
+
+static const struct fm10k_msg_data fm10k_msg_data_vf[] = {
+ FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf),
+ FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
+ FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+};
+
+static struct fm10k_mac_ops mac_ops_vf = {
+ .get_bus_info = &fm10k_get_bus_info_generic,
+ .reset_hw = &fm10k_reset_hw_vf,
+ .init_hw = &fm10k_init_hw_vf,
+ .start_hw = &fm10k_start_hw_generic,
+ .stop_hw = &fm10k_stop_hw_vf,
+ .is_slot_appropriate = &fm10k_is_slot_appropriate_vf,
+ .update_vlan = &fm10k_update_vlan_vf,
+ .read_mac_addr = &fm10k_read_mac_addr_vf,
+ .update_uc_addr = &fm10k_update_uc_addr_vf,
+ .update_mc_addr = &fm10k_update_mc_addr_vf,
+ .update_xcast_mode = &fm10k_update_xcast_mode_vf,
+ .update_int_moderator = &fm10k_update_int_moderator_vf,
+ .update_lport_state = &fm10k_update_lport_state_vf,
+ .update_hw_stats = &fm10k_update_hw_stats_vf,
+ .rebind_hw_stats = &fm10k_rebind_hw_stats_vf,
+ .configure_dglort_map = &fm10k_configure_dglort_map_vf,
+ .get_host_state = &fm10k_get_host_state_generic,
+ .adjust_systime = &fm10k_adjust_systime_vf,
+ .read_systime = &fm10k_read_systime_vf,
+};
+
+static s32 fm10k_get_invariants_vf(struct fm10k_hw *hw)
+{
+ fm10k_get_invariants_generic(hw);
+
+ return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0);
+}
+
+struct fm10k_info fm10k_vf_info = {
+ .mac = fm10k_mac_vf,
+ .get_invariants = &fm10k_get_invariants_vf,
+ .mac_ops = &mac_ops_vf,
+};
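mac_ops_vf and fm10k_vf_info are consumed through the driver's generic probe path rather than called directly from this file. A hedged sketch of the reset/init/start ordering they imply, assuming the hw->mac.ops indirection used elsewhere in the driver (the wrapper itself is hypothetical):

static s32 fm10k_example_vf_bringup(struct fm10k_hw *hw)
{
	s32 err;

	/* mirror the reset -> init -> start ordering wired up in mac_ops_vf */
	err = hw->mac.ops.reset_hw(hw);
	if (err)
		return err;

	err = hw->mac.ops.init_hw(hw);
	if (err)
		return err;

	return hw->mac.ops.start_hw(hw);
}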
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.h b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
new file mode 100644
index 0000000..06a99d7
--- /dev/null
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
@@ -0,0 +1,78 @@
+/* Intel Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2014 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Contact Information:
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _FM10K_VF_H_
+#define _FM10K_VF_H_
+
+#include "fm10k_type.h"
+#include "fm10k_common.h"
+
+enum fm10k_vf_tlv_msg_id {
+ FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */
+ FM10K_VF_MSG_ID_MSIX,
+ FM10K_VF_MSG_ID_MAC_VLAN,
+ FM10K_VF_MSG_ID_LPORT_STATE,
+ FM10K_VF_MSG_ID_1588,
+ FM10K_VF_MSG_ID_MAX,
+};
+
+enum fm10k_tlv_mac_vlan_attr_id {
+ FM10K_MAC_VLAN_MSG_VLAN,
+ FM10K_MAC_VLAN_MSG_SET,
+ FM10K_MAC_VLAN_MSG_MAC,
+ FM10K_MAC_VLAN_MSG_DEFAULT_MAC,
+ FM10K_MAC_VLAN_MSG_MULTICAST,
+ FM10K_MAC_VLAN_MSG_ID_MAX
+};
+
+enum fm10k_tlv_lport_state_attr_id {
+ FM10K_LPORT_STATE_MSG_DISABLE,
+ FM10K_LPORT_STATE_MSG_XCAST_MODE,
+ FM10K_LPORT_STATE_MSG_READY,
+ FM10K_LPORT_STATE_MSG_MAX
+};
+
+enum fm10k_tlv_1588_attr_id {
+ FM10K_1588_MSG_TIMESTAMP,
+ FM10K_1588_MSG_MAX
+};
+
+#define FM10K_VF_MSG_MSIX_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func)
+
+s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[];
+#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \
+ fm10k_mac_vlan_msg_attr, func)
+
+s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **,
+ struct fm10k_mbx_info *);
+extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[];
+#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \
+ fm10k_lport_state_msg_attr, func)
+
+extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[];
+#define FM10K_VF_MSG_1588_HANDLER(func) \
+ FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func)
+
+extern struct fm10k_info fm10k_vf_info;
+#endif /* _FM10K_VF_H */
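FM10K_VF_MSG_1588_HANDLER has no shared handler yet (see the note in fm10k_vf.c), so it is not wired into fm10k_msg_data_vf. A hedged sketch of how a local handler could be registered through the macro, with both the handler and the table being hypothetical:

static s32 fm10k_example_msg_1588(struct fm10k_hw *hw, u32 **results,
				  struct fm10k_mbx_info *mbx)
{
	/* a real handler would pull FM10K_1588_MSG_TIMESTAMP out of results */
	return 0;
}

static const struct fm10k_msg_data fm10k_example_msg_data[] = {
	FM10K_VF_MSG_1588_HANDLER(fm10k_example_msg_1588),
	FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
};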
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 801da39..f1e33f8 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -144,6 +144,8 @@ enum i40e_state_t {
__I40E_PTP_TX_IN_PROGRESS,
__I40E_BAD_EEPROM,
__I40E_DOWN_REQUESTED,
+ __I40E_FD_FLUSH_REQUESTED,
+ __I40E_RESET_FAILED,
};
enum i40e_interrupt_policy {
@@ -250,6 +252,11 @@ struct i40e_pf {
u16 fdir_pf_active_filters;
u16 fd_sb_cnt_idx;
u16 fd_atr_cnt_idx;
+ unsigned long fd_flush_timestamp;
+ u32 fd_flush_cnt;
+ u32 fd_add_err;
+ u32 fd_atr_cnt;
+ u32 fd_tcp_rule;
#ifdef CONFIG_I40E_VXLAN
__be16 vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
@@ -310,6 +317,7 @@ struct i40e_pf {
u32 tx_timeout_count;
u32 tx_timeout_recovery_level;
unsigned long tx_timeout_last_recovery;
+ u32 tx_sluggish_count;
u32 hw_csum_rx_error;
u32 led_status;
u16 corer_count; /* Core reset count */
@@ -608,6 +616,7 @@ int i40e_add_del_fdir(struct i40e_vsi *vsi,
void i40e_fdir_check_and_reenable(struct i40e_pf *pf);
int i40e_get_current_fd_count(struct i40e_pf *pf);
int i40e_get_cur_guaranteed_fd_count(struct i40e_pf *pf);
+int i40e_get_current_atr_cnt(struct i40e_pf *pf);
bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features);
void i40e_set_ethtool_ops(struct net_device *netdev);
struct i40e_mac_filter *i40e_add_filter(struct i40e_vsi *vsi,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index b29c157..72f5d25 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -840,7 +840,8 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
/* bump the tail */
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
- i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring, buff);
+ i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+ buff, buff_size);
(hw->aq.asq.next_to_use)++;
if (hw->aq.asq.next_to_use == hw->aq.asq.count)
hw->aq.asq.next_to_use = 0;
@@ -891,7 +892,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
"AQTX: desc and buffer writeback:\n");
- i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff);
+ i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
/* update the error if time out occurred */
if ((!cmd_completed) &&
@@ -987,7 +988,8 @@ i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
e->msg_size);
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
- i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf);
+ i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+ hw->aq.arq_buf_size);
/* Restore the original datalen and buffer address in the desc,
* FW updates datalen to indicate the event message
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index df43e7c..30056b2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -75,13 +75,15 @@ static i40e_status i40e_set_mac_type(struct i40e_hw *hw)
* @mask: debug mask
* @desc: pointer to admin queue descriptor
* @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
*
* Dumps debug log about adminq command with descriptor contents.
**/
void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc,
- void *buffer)
+ void *buffer, u16 buf_len)
{
struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc;
+ u16 len = le16_to_cpu(aq_desc->datalen);
u8 *aq_buffer = (u8 *)buffer;
u32 data[4];
u32 i = 0;
@@ -105,7 +107,9 @@ void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc,
if ((buffer != NULL) && (aq_desc->datalen != 0)) {
memset(data, 0, sizeof(data));
i40e_debug(hw, mask, "AQ CMD Buffer:\n");
- for (i = 0; i < le16_to_cpu(aq_desc->datalen); i++) {
+ if (buf_len < len)
+ len = buf_len;
+ for (i = 0; i < len; i++) {
data[((i % 16) / 4)] |=
((u32)aq_buffer[i]) << (8 * (i % 4));
if ((i % 16) == 15) {
@@ -748,6 +752,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
switch (hw->phy.link_info.phy_type) {
case I40E_PHY_TYPE_10GBASE_SR:
case I40E_PHY_TYPE_10GBASE_LR:
+ case I40E_PHY_TYPE_1000BASE_SX:
+ case I40E_PHY_TYPE_1000BASE_LX:
case I40E_PHY_TYPE_40GBASE_SR4:
case I40E_PHY_TYPE_40GBASE_LR4:
media = I40E_MEDIA_TYPE_FIBER;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 5a0cabe..7067f4b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -1356,6 +1356,9 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
"emp reset count: %d\n", pf->empr_count);
dev_info(&pf->pdev->dev,
"pf reset count: %d\n", pf->pfr_count);
+ dev_info(&pf->pdev->dev,
+ "pf tx sluggish count: %d\n",
+ pf->tx_sluggish_count);
} else if (strncmp(&cmd_buf[5], "port", 4) == 0) {
struct i40e_aqc_query_port_ets_config_resp *bw_data;
struct i40e_dcbx_config *cfg =
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index e8ba747..1dda467 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -145,6 +145,7 @@ static struct i40e_stats i40e_gstrings_stats[] = {
I40E_PF_STAT("rx_jabber", stats.rx_jabber),
I40E_PF_STAT("VF_admin_queue_requests", vf_aq_requests),
I40E_PF_STAT("rx_hwtstamp_cleared", rx_hwtstamp_cleared),
+ I40E_PF_STAT("fdir_flush_cnt", fd_flush_cnt),
I40E_PF_STAT("fdir_atr_match", stats.fd_atr_match),
I40E_PF_STAT("fdir_sb_match", stats.fd_sb_match),
@@ -312,7 +313,10 @@ static int i40e_get_settings(struct net_device *netdev,
break;
case I40E_PHY_TYPE_10GBASE_SR:
case I40E_PHY_TYPE_10GBASE_LR:
+ case I40E_PHY_TYPE_1000BASE_SX:
+ case I40E_PHY_TYPE_1000BASE_LX:
ecmd->supported = SUPPORTED_10000baseT_Full;
+ ecmd->supported |= SUPPORTED_1000baseT_Full;
break;
case I40E_PHY_TYPE_10GBASE_CR1_CU:
case I40E_PHY_TYPE_10GBASE_CR1:
@@ -351,7 +355,8 @@ static int i40e_get_settings(struct net_device *netdev,
break;
default:
/* if we got here and link is up something bad is afoot */
- WARN_ON(link_up);
+ netdev_info(netdev, "WARNING: Link is up but PHY type 0x%x is not recognized.\n",
+ hw_link_info->phy_type);
}
no_valid_phy_type:
@@ -461,7 +466,8 @@ static int i40e_set_settings(struct net_device *netdev,
if (hw->phy.media_type != I40E_MEDIA_TYPE_BASET &&
hw->phy.media_type != I40E_MEDIA_TYPE_FIBER &&
- hw->phy.media_type != I40E_MEDIA_TYPE_BACKPLANE)
+ hw->phy.media_type != I40E_MEDIA_TYPE_BACKPLANE &&
+ hw->phy.link_info.link_info & I40E_AQ_LINK_UP)
return -EOPNOTSUPP;
/* get our own copy of the bits to check against */
@@ -492,11 +498,10 @@ static int i40e_set_settings(struct net_device *netdev,
if (status)
return -EAGAIN;
- /* Copy link_speed and abilities to config in case they are not
+ /* Copy abilities to config in case autoneg is not
* set below
*/
memset(&config, 0, sizeof(struct i40e_aq_set_phy_config));
- config.link_speed = abilities.link_speed;
config.abilities = abilities.abilities;
/* Check autoneg */
@@ -533,42 +538,38 @@ static int i40e_set_settings(struct net_device *netdev,
return -EINVAL;
if (advertise & ADVERTISED_100baseT_Full)
- if (!(abilities.link_speed & I40E_LINK_SPEED_100MB)) {
- config.link_speed |= I40E_LINK_SPEED_100MB;
- change = true;
- }
+ config.link_speed |= I40E_LINK_SPEED_100MB;
if (advertise & ADVERTISED_1000baseT_Full ||
advertise & ADVERTISED_1000baseKX_Full)
- if (!(abilities.link_speed & I40E_LINK_SPEED_1GB)) {
- config.link_speed |= I40E_LINK_SPEED_1GB;
- change = true;
- }
+ config.link_speed |= I40E_LINK_SPEED_1GB;
if (advertise & ADVERTISED_10000baseT_Full ||
advertise & ADVERTISED_10000baseKX4_Full ||
advertise & ADVERTISED_10000baseKR_Full)
- if (!(abilities.link_speed & I40E_LINK_SPEED_10GB)) {
- config.link_speed |= I40E_LINK_SPEED_10GB;
- change = true;
- }
+ config.link_speed |= I40E_LINK_SPEED_10GB;
if (advertise & ADVERTISED_40000baseKR4_Full ||
advertise & ADVERTISED_40000baseCR4_Full ||
advertise & ADVERTISED_40000baseSR4_Full ||
advertise & ADVERTISED_40000baseLR4_Full)
- if (!(abilities.link_speed & I40E_LINK_SPEED_40GB)) {
- config.link_speed |= I40E_LINK_SPEED_40GB;
- change = true;
- }
+ config.link_speed |= I40E_LINK_SPEED_40GB;
- if (change) {
+ if (change || (abilities.link_speed != config.link_speed)) {
/* copy over the rest of the abilities */
config.phy_type = abilities.phy_type;
config.eee_capability = abilities.eee_capability;
config.eeer = abilities.eeer_val;
config.low_power_ctrl = abilities.d3_lpan;
- /* If link is up set link and an so changes take effect */
- if (hw->phy.link_info.link_info & I40E_AQ_LINK_UP)
- config.abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
+ /* set link and auto negotiation so changes take effect */
+ config.abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
+ /* If link is up put link down */
+ if (hw->phy.link_info.link_info & I40E_AQ_LINK_UP) {
+ /* Tell the OS link is going down, the link will go
+ * back up when fw says it is ready asynchronously
+ */
+ netdev_info(netdev, "PHY settings change requested, NIC Link is going down.\n");
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+ }
/* make the aq call */
status = i40e_aq_set_phy_config(hw, &config, NULL);
@@ -685,6 +686,13 @@ static int i40e_set_pauseparam(struct net_device *netdev,
else
return -EINVAL;
+ /* Tell the OS link is going down, the link will go back up when fw
+ * says it is ready asynchronously
+ */
+ netdev_info(netdev, "Flow control settings change requested, NIC Link is going down.\n");
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+
/* Set the fc mode and only restart AN if link is up */
status = i40e_set_fc(hw, &aq_failures, link_up);
@@ -1977,6 +1985,13 @@ static int i40e_del_fdir_entry(struct i40e_vsi *vsi,
struct i40e_pf *pf = vsi->back;
int ret = 0;
+ if (test_bit(__I40E_RESET_RECOVERY_PENDING, &pf->state) ||
+ test_bit(__I40E_RESET_INTR_RECEIVED, &pf->state))
+ return -EBUSY;
+
+ if (test_bit(__I40E_FD_FLUSH_REQUESTED, &pf->state))
+ return -EBUSY;
+
ret = i40e_update_ethtool_fdir_entry(vsi, NULL, fsp->location, cmd);
i40e_fdir_check_and_reenable(pf);
@@ -2010,6 +2025,13 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi,
if (pf->auto_disable_flags & I40E_FLAG_FD_SB_ENABLED)
return -ENOSPC;
+ if (test_bit(__I40E_RESET_RECOVERY_PENDING, &pf->state) ||
+ test_bit(__I40E_RESET_INTR_RECEIVED, &pf->state))
+ return -EBUSY;
+
+ if (test_bit(__I40E_FD_FLUSH_REQUESTED, &pf->state))
+ return -EBUSY;
+
fsp = (struct ethtool_rx_flow_spec *)&cmd->fs;
if (fsp->location >= (pf->hw.func_caps.fd_filters_best_effort +
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index eddec6b..ed5f1c1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -37,9 +37,9 @@ static const char i40e_driver_string[] =
#define DRV_KERN "-k"
-#define DRV_VERSION_MAJOR 0
-#define DRV_VERSION_MINOR 4
-#define DRV_VERSION_BUILD 21
+#define DRV_VERSION_MAJOR 1
+#define DRV_VERSION_MINOR 0
+#define DRV_VERSION_BUILD 11
#define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) DRV_KERN
@@ -1239,8 +1239,11 @@ struct i40e_mac_filter *i40e_put_mac_in_vlan(struct i40e_vsi *vsi, u8 *macaddr,
* i40e_rm_default_mac_filter - Remove the default MAC filter set by NVM
* @vsi: the PF Main VSI - inappropriate for any other VSI
* @macaddr: the MAC address
+ *
+ * Some older firmware configurations set up a default promiscuous VLAN
+ * filter that needs to be removed.
**/
-static void i40e_rm_default_mac_filter(struct i40e_vsi *vsi, u8 *macaddr)
+static int i40e_rm_default_mac_filter(struct i40e_vsi *vsi, u8 *macaddr)
{
struct i40e_aqc_remove_macvlan_element_data element;
struct i40e_pf *pf = vsi->back;
@@ -1248,15 +1251,18 @@ static void i40e_rm_default_mac_filter(struct i40e_vsi *vsi, u8 *macaddr)
/* Only appropriate for the PF main VSI */
if (vsi->type != I40E_VSI_MAIN)
- return;
+ return -EINVAL;
+ memset(&element, 0, sizeof(element));
ether_addr_copy(element.mac_addr, macaddr);
element.vlan_tag = 0;
element.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH |
I40E_AQC_MACVLAN_DEL_IGNORE_VLAN;
aq_ret = i40e_aq_remove_macvlan(&pf->hw, vsi->seid, &element, 1, NULL);
if (aq_ret)
- dev_err(&pf->pdev->dev, "Could not remove default MAC-VLAN\n");
+ return -ENOENT;
+
+ return 0;
}
/**
@@ -1385,18 +1391,30 @@ static int i40e_set_mac(struct net_device *netdev, void *p)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
struct sockaddr *addr = p;
struct i40e_mac_filter *f;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
- netdev_info(netdev, "set mac address=%pM\n", addr->sa_data);
+ if (ether_addr_equal(netdev->dev_addr, addr->sa_data)) {
+ netdev_info(netdev, "already using mac address %pM\n",
+ addr->sa_data);
+ return 0;
+ }
if (test_bit(__I40E_DOWN, &vsi->back->state) ||
test_bit(__I40E_RESET_RECOVERY_PENDING, &vsi->back->state))
return -EADDRNOTAVAIL;
+ if (ether_addr_equal(hw->mac.addr, addr->sa_data))
+ netdev_info(netdev, "returning to hw mac address %pM\n",
+ hw->mac.addr);
+ else
+ netdev_info(netdev, "set new mac address %pM\n", addr->sa_data);
+
if (vsi->type == I40E_VSI_MAIN) {
i40e_status ret;
ret = i40e_aq_mac_address_write(&vsi->back->hw,
@@ -1410,25 +1428,34 @@ static int i40e_set_mac(struct net_device *netdev, void *p)
}
}
- f = i40e_find_mac(vsi, addr->sa_data, false, true);
- if (!f) {
- /* In order to be sure to not drop any packets, add the
- * new address first then delete the old one.
- */
- f = i40e_add_filter(vsi, addr->sa_data, I40E_VLAN_ANY,
- false, false);
- if (!f)
- return -ENOMEM;
+ if (ether_addr_equal(netdev->dev_addr, hw->mac.addr)) {
+ struct i40e_aqc_remove_macvlan_element_data element;
- i40e_sync_vsi_filters(vsi);
+ memset(&element, 0, sizeof(element));
+ ether_addr_copy(element.mac_addr, netdev->dev_addr);
+ element.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH;
+ i40e_aq_remove_macvlan(&pf->hw, vsi->seid, &element, 1, NULL);
+ } else {
i40e_del_filter(vsi, netdev->dev_addr, I40E_VLAN_ANY,
false, false);
- i40e_sync_vsi_filters(vsi);
}
- f->is_laa = true;
- if (!ether_addr_equal(netdev->dev_addr, addr->sa_data))
- ether_addr_copy(netdev->dev_addr, addr->sa_data);
+ if (ether_addr_equal(addr->sa_data, hw->mac.addr)) {
+ struct i40e_aqc_add_macvlan_element_data element;
+
+ memset(&element, 0, sizeof(element));
+ ether_addr_copy(element.mac_addr, hw->mac.addr);
+ element.flags = cpu_to_le16(I40E_AQC_MACVLAN_ADD_PERFECT_MATCH);
+ i40e_aq_add_macvlan(&pf->hw, vsi->seid, &element, 1, NULL);
+ } else {
+ f = i40e_add_filter(vsi, addr->sa_data, I40E_VLAN_ANY,
+ false, false);
+ if (f)
+ f->is_laa = true;
+ }
+
+ i40e_sync_vsi_filters(vsi);
+ ether_addr_copy(netdev->dev_addr, addr->sa_data);
return 0;
}
@@ -1796,9 +1823,8 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
kfree(add_list);
add_list = NULL;
- if (add_happened && (!aq_ret)) {
- /* do nothing */;
- } else if (add_happened && (aq_ret)) {
+ if (add_happened && aq_ret &&
+ pf->hw.aq.asq_last_status != I40E_AQ_RC_EINVAL) {
dev_info(&pf->pdev->dev,
"add filter failed, err %d, aq_err %d\n",
aq_ret, pf->hw.aq.asq_last_status);
@@ -4480,11 +4506,26 @@ static int i40e_up_complete(struct i40e_vsi *vsi)
netif_carrier_on(vsi->netdev);
} else if (vsi->netdev) {
i40e_print_link_message(vsi, false);
+ /* need to check for qualified module here */
+ if ((pf->hw.phy.link_info.link_info &
+ I40E_AQ_MEDIA_AVAILABLE) &&
+ (!(pf->hw.phy.link_info.an_info &
+ I40E_AQ_QUALIFIED_MODULE)))
+ netdev_err(vsi->netdev,
+ "The driver failed to link because an unqualified module was detected.\n");
}
/* replay FDIR SB filters */
- if (vsi->type == I40E_VSI_FDIR)
+ if (vsi->type == I40E_VSI_FDIR) {
+ /* reset fd counters */
+ pf->fd_add_err = pf->fd_atr_cnt = 0;
+ if (pf->fd_tcp_rule > 0) {
+ pf->flags &= ~I40E_FLAG_FD_ATR_ENABLED;
+ dev_info(&pf->pdev->dev, "Forcing ATR off, sideband rules for TCP/IPv4 exist\n");
+ pf->fd_tcp_rule = 0;
+ }
i40e_fdir_filter_restore(vsi);
+ }
i40e_service_event_schedule(pf);
return 0;
@@ -5125,6 +5166,7 @@ int i40e_get_current_fd_count(struct i40e_pf *pf)
I40E_PFQF_FDSTAT_BEST_CNT_SHIFT);
return fcnt_prog;
}
+
/**
* i40e_fdir_check_and_reenable - Function to re-enable FD ATR or SB if disabled
* @pf: board private structure
@@ -5133,15 +5175,17 @@ void i40e_fdir_check_and_reenable(struct i40e_pf *pf)
{
u32 fcnt_prog, fcnt_avail;
+ if (test_bit(__I40E_FD_FLUSH_REQUESTED, &pf->state))
+ return;
+
/* Check if FD SB or ATR was auto disabled and if there is enough room
* to re-enable
*/
- if ((pf->flags & I40E_FLAG_FD_ATR_ENABLED) &&
- (pf->flags & I40E_FLAG_FD_SB_ENABLED))
- return;
fcnt_prog = i40e_get_cur_guaranteed_fd_count(pf);
fcnt_avail = pf->fdir_pf_filter_count;
- if (fcnt_prog < (fcnt_avail - I40E_FDIR_BUFFER_HEAD_ROOM)) {
+ if ((fcnt_prog < (fcnt_avail - I40E_FDIR_BUFFER_HEAD_ROOM)) ||
+ (pf->fd_add_err == 0) ||
+ (i40e_get_current_atr_cnt(pf) < pf->fd_atr_cnt)) {
if ((pf->flags & I40E_FLAG_FD_SB_ENABLED) &&
(pf->auto_disable_flags & I40E_FLAG_FD_SB_ENABLED)) {
pf->auto_disable_flags &= ~I40E_FLAG_FD_SB_ENABLED;
@@ -5158,23 +5202,84 @@ void i40e_fdir_check_and_reenable(struct i40e_pf *pf)
}
}
+#define I40E_MIN_FD_FLUSH_INTERVAL 10
+/**
+ * i40e_fdir_flush_and_replay - Function to flush all FD filters and replay SB
+ * @pf: board private structure
+ **/
+static void i40e_fdir_flush_and_replay(struct i40e_pf *pf)
+{
+ int flush_wait_retry = 50;
+ int reg;
+
+ if (time_after(jiffies, pf->fd_flush_timestamp +
+ (I40E_MIN_FD_FLUSH_INTERVAL * HZ))) {
+ set_bit(__I40E_FD_FLUSH_REQUESTED, &pf->state);
+ pf->fd_flush_timestamp = jiffies;
+ pf->auto_disable_flags |= I40E_FLAG_FD_SB_ENABLED;
+ pf->flags &= ~I40E_FLAG_FD_ATR_ENABLED;
+ /* flush all filters */
+ wr32(&pf->hw, I40E_PFQF_CTL_1,
+ I40E_PFQF_CTL_1_CLEARFDTABLE_MASK);
+ i40e_flush(&pf->hw);
+ pf->fd_flush_cnt++;
+ pf->fd_add_err = 0;
+ do {
+ /* Check FD flush status every 5-6msec */
+ usleep_range(5000, 6000);
+ reg = rd32(&pf->hw, I40E_PFQF_CTL_1);
+ if (!(reg & I40E_PFQF_CTL_1_CLEARFDTABLE_MASK))
+ break;
+ } while (flush_wait_retry--);
+ if (reg & I40E_PFQF_CTL_1_CLEARFDTABLE_MASK) {
+ dev_warn(&pf->pdev->dev, "FD table did not flush, needs more time\n");
+ } else {
+ /* replay sideband filters */
+ i40e_fdir_filter_restore(pf->vsi[pf->lan_vsi]);
+
+ pf->flags |= I40E_FLAG_FD_ATR_ENABLED;
+ pf->auto_disable_flags &= ~I40E_FLAG_FD_ATR_ENABLED;
+ pf->auto_disable_flags &= ~I40E_FLAG_FD_SB_ENABLED;
+ clear_bit(__I40E_FD_FLUSH_REQUESTED, &pf->state);
+ dev_info(&pf->pdev->dev, "FD Filter table flushed and FD-SB replayed.\n");
+ }
+ }
+}
+
+/**
+ * i40e_get_current_atr_cnt - Get the count of total FD ATR filters programmed
+ * @pf: board private structure
+ **/
+int i40e_get_current_atr_cnt(struct i40e_pf *pf)
+{
+ return i40e_get_current_fd_count(pf) - pf->fdir_pf_active_filters;
+}
+
+/* We can see up to 256 filter programming descriptors in transit if the
+ * filters are being applied really fast before we see the first filter miss
+ * error on Rx queue 0. Accumulating enough error messages before reacting
+ * will make sure we don't trigger a flush too often.
+ */
+#define I40E_MAX_FD_PROGRAM_ERROR 256
+
/**
* i40e_fdir_reinit_subtask - Worker thread to reinit FDIR filter table
* @pf: board private structure
**/
static void i40e_fdir_reinit_subtask(struct i40e_pf *pf)
{
- if (!(pf->flags & I40E_FLAG_FDIR_REQUIRES_REINIT))
- return;
/* if interface is down do nothing */
if (test_bit(__I40E_DOWN, &pf->state))
return;
+
+ if ((pf->fd_add_err >= I40E_MAX_FD_PROGRAM_ERROR) &&
+ (i40e_get_current_atr_cnt(pf) >= pf->fd_atr_cnt) &&
+ (i40e_get_current_atr_cnt(pf) > pf->fdir_pf_filter_count))
+ i40e_fdir_flush_and_replay(pf);
+
i40e_fdir_check_and_reenable(pf);
- if ((pf->flags & I40E_FLAG_FD_ATR_ENABLED) &&
- (pf->flags & I40E_FLAG_FD_SB_ENABLED))
- pf->flags &= ~I40E_FLAG_FDIR_REQUIRES_REINIT;
}
/**
@@ -5184,7 +5289,7 @@ static void i40e_fdir_reinit_subtask(struct i40e_pf *pf)
**/
static void i40e_vsi_link_event(struct i40e_vsi *vsi, bool link_up)
{
- if (!vsi)
+ if (!vsi || test_bit(__I40E_DOWN, &vsi->state))
return;
switch (vsi->type) {
@@ -5420,6 +5525,13 @@ static void i40e_handle_link_event(struct i40e_pf *pf,
memcpy(&pf->hw.phy.link_info_old, hw_link_info,
sizeof(pf->hw.phy.link_info_old));
+ /* check for unqualified module, if link is down */
+ if ((status->link_info & I40E_AQ_MEDIA_AVAILABLE) &&
+ (!(status->an_info & I40E_AQ_QUALIFIED_MODULE)) &&
+ (!(status->link_info & I40E_AQ_LINK_UP)))
+ dev_err(&pf->pdev->dev,
+ "The driver failed to link because an unqualified module was detected.\n");
+
/* update link status */
hw_link_info->phy_type = (enum i40e_aq_phy_type)status->phy_type;
hw_link_info->link_speed = (enum i40e_aq_link_speed)status->link_speed;
@@ -5456,6 +5568,10 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf)
u32 oldval;
u32 val;
+ /* Do not run clean AQ when PF reset fails */
+ if (test_bit(__I40E_RESET_FAILED, &pf->state))
+ return;
+
/* check for error indications */
val = rd32(&pf->hw, pf->hw.aq.arq.len);
oldval = val;
@@ -5861,19 +5977,20 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit)
ret = i40e_pf_reset(hw);
if (ret) {
dev_info(&pf->pdev->dev, "PF reset failed, %d\n", ret);
- goto end_core_reset;
+ set_bit(__I40E_RESET_FAILED, &pf->state);
+ goto clear_recovery;
}
pf->pfr_count++;
if (test_bit(__I40E_DOWN, &pf->state))
- goto end_core_reset;
+ goto clear_recovery;
dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n");
/* rebuild the basics for the AdminQ, HMC, and initial HW switch */
ret = i40e_init_adminq(&pf->hw);
if (ret) {
dev_info(&pf->pdev->dev, "Rebuild AdminQ failed, %d\n", ret);
- goto end_core_reset;
+ goto clear_recovery;
}
/* re-verify the eeprom if we just had an EMP reset */
@@ -5991,6 +6108,8 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit)
i40e_send_version(pf);
end_core_reset:
+ clear_bit(__I40E_RESET_FAILED, &pf->state);
+clear_recovery:
clear_bit(__I40E_RESET_RECOVERY_PENDING, &pf->state);
}
@@ -6036,9 +6155,9 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
I40E_GL_MDET_TX_EVENT_SHIFT;
u8 queue = (reg & I40E_GL_MDET_TX_QUEUE_MASK) >>
I40E_GL_MDET_TX_QUEUE_SHIFT;
- dev_info(&pf->pdev->dev,
- "Malicious Driver Detection event 0x%02x on TX queue %d pf number 0x%02x vf number 0x%02x\n",
- event, queue, pf_num, vf_num);
+ if (netif_msg_tx_err(pf))
+ dev_info(&pf->pdev->dev, "Malicious Driver Detection event 0x%02x on TX queue %d pf number 0x%02x vf number 0x%02x\n",
+ event, queue, pf_num, vf_num);
wr32(hw, I40E_GL_MDET_TX, 0xffffffff);
mdd_detected = true;
}
@@ -6050,9 +6169,9 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
I40E_GL_MDET_RX_EVENT_SHIFT;
u8 queue = (reg & I40E_GL_MDET_RX_QUEUE_MASK) >>
I40E_GL_MDET_RX_QUEUE_SHIFT;
- dev_info(&pf->pdev->dev,
- "Malicious Driver Detection event 0x%02x on RX queue %d of function 0x%02x\n",
- event, queue, func);
+ if (netif_msg_rx_err(pf))
+ dev_info(&pf->pdev->dev, "Malicious Driver Detection event 0x%02x on RX queue %d of function 0x%02x\n",
+ event, queue, func);
wr32(hw, I40E_GL_MDET_RX, 0xffffffff);
mdd_detected = true;
}
@@ -6061,17 +6180,13 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
reg = rd32(hw, I40E_PF_MDET_TX);
if (reg & I40E_PF_MDET_TX_VALID_MASK) {
wr32(hw, I40E_PF_MDET_TX, 0xFFFF);
- dev_info(&pf->pdev->dev,
- "MDD TX event is for this function 0x%08x, requesting PF reset.\n",
- reg);
+ dev_info(&pf->pdev->dev, "TX driver issue detected, PF reset issued\n");
pf_mdd_detected = true;
}
reg = rd32(hw, I40E_PF_MDET_RX);
if (reg & I40E_PF_MDET_RX_VALID_MASK) {
wr32(hw, I40E_PF_MDET_RX, 0xFFFF);
- dev_info(&pf->pdev->dev,
- "MDD RX event is for this function 0x%08x, requesting PF reset.\n",
- reg);
+ dev_info(&pf->pdev->dev, "RX driver issue detected, PF reset issued\n");
pf_mdd_detected = true;
}
/* Queue belongs to the PF, initiate a reset */
@@ -6088,14 +6203,16 @@ static void i40e_handle_mdd_event(struct i40e_pf *pf)
if (reg & I40E_VP_MDET_TX_VALID_MASK) {
wr32(hw, I40E_VP_MDET_TX(i), 0xFFFF);
vf->num_mdd_events++;
- dev_info(&pf->pdev->dev, "MDD TX event on VF %d\n", i);
+ dev_info(&pf->pdev->dev, "TX driver issue detected on VF %d\n",
+ i);
}
reg = rd32(hw, I40E_VP_MDET_RX(i));
if (reg & I40E_VP_MDET_RX_VALID_MASK) {
wr32(hw, I40E_VP_MDET_RX(i), 0xFFFF);
vf->num_mdd_events++;
- dev_info(&pf->pdev->dev, "MDD RX event on VF %d\n", i);
+ dev_info(&pf->pdev->dev, "RX driver issue detected on VF %d\n",
+ i);
}
if (vf->num_mdd_events > I40E_DEFAULT_NUM_MDD_EVENTS_ALLOWED) {
@@ -7086,6 +7203,11 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
}
pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
pf->auto_disable_flags &= ~I40E_FLAG_FD_SB_ENABLED;
+ /* reset fd counters */
+ pf->fd_add_err = pf->fd_atr_cnt = pf->fd_tcp_rule = 0;
+ pf->fdir_pf_active_filters = 0;
+ pf->flags |= I40E_FLAG_FD_ATR_ENABLED;
+ dev_info(&pf->pdev->dev, "ATR re-enabled.\n");
/* if ATR was auto disabled it can be re-enabled. */
if ((pf->flags & I40E_FLAG_FD_ATR_ENABLED) &&
(pf->auto_disable_flags & I40E_FLAG_FD_ATR_ENABLED))
@@ -7352,7 +7474,7 @@ static const struct net_device_ops i40e_netdev_ops = {
.ndo_set_vf_rate = i40e_ndo_set_vf_bw,
.ndo_get_vf_config = i40e_ndo_get_vf_config,
.ndo_set_vf_link_state = i40e_ndo_set_vf_link_state,
- .ndo_set_vf_spoofchk = i40e_ndo_set_vf_spoofck,
+ .ndo_set_vf_spoofchk = i40e_ndo_set_vf_spoofchk,
#ifdef CONFIG_I40E_VXLAN
.ndo_add_vxlan_port = i40e_add_vxlan_port,
.ndo_del_vxlan_port = i40e_del_vxlan_port,
@@ -7421,14 +7543,14 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
if (vsi->type == I40E_VSI_MAIN) {
SET_NETDEV_DEV(netdev, &pf->pdev->dev);
ether_addr_copy(mac_addr, hw->mac.perm_addr);
- /* The following two steps are necessary to prevent reception
- * of tagged packets - by default the NVM loads a MAC-VLAN
- * filter that will accept any tagged packet. This is to
- * prevent that during normal operations until a specific
- * VLAN tag filter has been set.
+ /* The following steps are necessary to prevent reception
+ * of tagged packets - some older NVM configurations load a
+ * default MAC-VLAN filter that accepts any tagged packet;
+ * this filter must be replaced by a normal filter.
*/
- i40e_rm_default_mac_filter(vsi, mac_addr);
- i40e_add_filter(vsi, mac_addr, I40E_VLAN_ANY, false, true);
+ if (!i40e_rm_default_mac_filter(vsi, mac_addr))
+ i40e_add_filter(vsi, mac_addr,
+ I40E_VLAN_ANY, false, true);
} else {
/* relate the VSI_VMDQ name to the VSI_MAIN name */
snprintf(netdev->name, IFNAMSIZ, "%sv%%d",
@@ -7644,7 +7766,22 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
f_count++;
if (f->is_laa && vsi->type == I40E_VSI_MAIN) {
- i40e_aq_mac_address_write(&vsi->back->hw,
+ struct i40e_aqc_remove_macvlan_element_data element;
+
+ memset(&element, 0, sizeof(element));
+ ether_addr_copy(element.mac_addr, f->macaddr);
+ element.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH;
+ ret = i40e_aq_remove_macvlan(hw, vsi->seid,
+ &element, 1, NULL);
+ if (ret) {
+ /* some older FW has a different default */
+ element.flags |=
+ I40E_AQC_MACVLAN_DEL_IGNORE_VLAN;
+ i40e_aq_remove_macvlan(hw, vsi->seid,
+ &element, 1, NULL);
+ }
+
+ i40e_aq_mac_address_write(hw,
I40E_AQC_WRITE_TYPE_LAA_WOL,
f->macaddr, NULL);
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 949a9a0..0988b5c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -52,10 +52,8 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
/* debug function for adminq */
-void i40e_debug_aq(struct i40e_hw *hw,
- enum i40e_debug_mask mask,
- void *desc,
- void *buffer);
+void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask,
+ void *desc, void *buffer, u16 buf_len);
void i40e_idle_aq(struct i40e_hw *hw);
bool i40e_check_asq_alive(struct i40e_hw *hw);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 369848e..3195d82 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -224,15 +224,19 @@ static int i40e_add_del_fdir_udpv4(struct i40e_vsi *vsi,
ret = i40e_program_fdir_filter(fd_data, raw_packet, pf, add);
if (ret) {
dev_info(&pf->pdev->dev,
- "Filter command send failed for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ "PCTYPE:%d, Filter command send failed for fd_id:%d (ret = %d)\n",
+ fd_data->pctype, fd_data->fd_id, ret);
err = true;
} else {
- dev_info(&pf->pdev->dev,
- "Filter OK for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ if (add)
+ dev_info(&pf->pdev->dev,
+ "Filter OK for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
+ else
+ dev_info(&pf->pdev->dev,
+ "Filter deleted for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
}
-
return err ? -EOPNOTSUPP : 0;
}
@@ -276,10 +280,18 @@ static int i40e_add_del_fdir_tcpv4(struct i40e_vsi *vsi,
tcp->source = fd_data->src_port;
if (add) {
+ pf->fd_tcp_rule++;
if (pf->flags & I40E_FLAG_FD_ATR_ENABLED) {
dev_info(&pf->pdev->dev, "Forcing ATR off, sideband rules for TCP/IPv4 flow being applied\n");
pf->flags &= ~I40E_FLAG_FD_ATR_ENABLED;
}
+ } else {
+ pf->fd_tcp_rule = (pf->fd_tcp_rule > 0) ?
+ (pf->fd_tcp_rule - 1) : 0;
+ if (pf->fd_tcp_rule == 0) {
+ pf->flags |= I40E_FLAG_FD_ATR_ENABLED;
+ dev_info(&pf->pdev->dev, "ATR re-enabled due to no sideband TCP/IPv4 rules\n");
+ }
}
fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
@@ -287,12 +299,17 @@ static int i40e_add_del_fdir_tcpv4(struct i40e_vsi *vsi,
if (ret) {
dev_info(&pf->pdev->dev,
- "Filter command send failed for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ "PCTYPE:%d, Filter command send failed for fd_id:%d (ret = %d)\n",
+ fd_data->pctype, fd_data->fd_id, ret);
err = true;
} else {
- dev_info(&pf->pdev->dev, "Filter OK for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ if (add)
+ dev_info(&pf->pdev->dev, "Filter OK for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
+ else
+ dev_info(&pf->pdev->dev,
+ "Filter deleted for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
}
return err ? -EOPNOTSUPP : 0;
@@ -355,13 +372,18 @@ static int i40e_add_del_fdir_ipv4(struct i40e_vsi *vsi,
if (ret) {
dev_info(&pf->pdev->dev,
- "Filter command send failed for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ "PCTYPE:%d, Filter command send failed for fd_id:%d (ret = %d)\n",
+ fd_data->pctype, fd_data->fd_id, ret);
err = true;
} else {
- dev_info(&pf->pdev->dev,
- "Filter OK for PCTYPE %d (ret = %d)\n",
- fd_data->pctype, ret);
+ if (add)
+ dev_info(&pf->pdev->dev,
+ "Filter OK for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
+ else
+ dev_info(&pf->pdev->dev,
+ "Filter deleted for PCTYPE %d loc = %d\n",
+ fd_data->pctype, fd_data->fd_id);
}
}
@@ -443,8 +465,14 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring,
I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT;
if (error == (0x1 << I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT)) {
- dev_warn(&pdev->dev, "ntuple filter loc = %d, could not be added\n",
- rx_desc->wb.qword0.hi_dword.fd_id);
+ if ((rx_desc->wb.qword0.hi_dword.fd_id != 0) ||
+ (I40E_DEBUG_FD & pf->hw.debug_mask))
+ dev_warn(&pdev->dev, "ntuple filter loc = %d, could not be added\n",
+ rx_desc->wb.qword0.hi_dword.fd_id);
+
+ pf->fd_add_err++;
+ /* store the current atr filter count */
+ pf->fd_atr_cnt = i40e_get_current_atr_cnt(pf);
/* filter programming failed most likely due to table full */
fcnt_prog = i40e_get_cur_guaranteed_fd_count(pf);
@@ -454,29 +482,21 @@ static void i40e_fd_handle_status(struct i40e_ring *rx_ring,
* FD ATR/SB and then re-enable it when there is room.
*/
if (fcnt_prog >= (fcnt_avail - I40E_FDIR_BUFFER_FULL_MARGIN)) {
- /* Turn off ATR first */
- if ((pf->flags & I40E_FLAG_FD_ATR_ENABLED) &&
+ if ((pf->flags & I40E_FLAG_FD_SB_ENABLED) &&
!(pf->auto_disable_flags &
- I40E_FLAG_FD_ATR_ENABLED)) {
- dev_warn(&pdev->dev, "FD filter space full, ATR for further flows will be turned off\n");
- pf->auto_disable_flags |=
- I40E_FLAG_FD_ATR_ENABLED;
- pf->flags |= I40E_FLAG_FDIR_REQUIRES_REINIT;
- } else if ((pf->flags & I40E_FLAG_FD_SB_ENABLED) &&
- !(pf->auto_disable_flags &
I40E_FLAG_FD_SB_ENABLED)) {
dev_warn(&pdev->dev, "FD filter space full, new ntuple rules will not be added\n");
pf->auto_disable_flags |=
I40E_FLAG_FD_SB_ENABLED;
- pf->flags |= I40E_FLAG_FDIR_REQUIRES_REINIT;
}
} else {
- dev_info(&pdev->dev, "FD filter programming error\n");
+ dev_info(&pdev->dev,
+ "FD filter programming failed due to incorrect filter parameters\n");
}
} else if (error ==
(0x1 << I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT)) {
if (I40E_DEBUG_FD & pf->hw.debug_mask)
- dev_info(&pdev->dev, "ntuple filter loc = %d, could not be removed\n",
+ dev_info(&pdev->dev, "ntuple filter fd_id = %d, could not be removed\n",
rx_desc->wb.qword0.hi_dword.fd_id);
}
}
@@ -587,6 +607,7 @@ static u32 i40e_get_tx_pending(struct i40e_ring *ring)
static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)
{
u32 tx_pending = i40e_get_tx_pending(tx_ring);
+ struct i40e_pf *pf = tx_ring->vsi->back;
bool ret = false;
clear_check_for_tx_hang(tx_ring);
@@ -603,10 +624,17 @@ static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)
* pending but without time to complete it yet.
*/
if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
- tx_pending) {
+ (tx_pending >= I40E_MIN_DESC_PENDING)) {
/* make sure it is true for two checks in a row */
ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED,
&tx_ring->state);
+ } else if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
+ (tx_pending < I40E_MIN_DESC_PENDING) &&
+ (tx_pending > 0)) {
+ if (I40E_DEBUG_FLOW & pf->hw.debug_mask)
+ dev_info(tx_ring->dev, "HW needs some more descs to do a cacheline flush. tx_pending %d, queue %d\n",
+ tx_pending, tx_ring->queue_index);
+ pf->tx_sluggish_count++;
} else {
/* update completed stats and disarm the hang check */
tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets;
@@ -674,7 +702,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
total_packets += tx_buf->gso_segs;
/* free the skb */
- dev_kfree_skb_any(tx_buf->skb);
+ dev_consume_skb_any(tx_buf->skb);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -1213,7 +1241,6 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
ipv6_tunnel = (rx_ptype > I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
(rx_ptype < I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
- skb->encapsulation = ipv4_tunnel || ipv6_tunnel;
skb->ip_summed = CHECKSUM_NONE;
/* Rx csum enabled and ip headers found? */
@@ -1287,6 +1314,7 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
}
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb->csum_level = ipv4_tunnel || ipv6_tunnel;
return;
@@ -2025,6 +2053,47 @@ static void i40e_create_tx_ctx(struct i40e_ring *tx_ring,
}
/**
+ * __i40e_maybe_stop_tx - 2nd level check for tx stop conditions
+ * @tx_ring: the ring to be checked
+ * @size: the size buffer we want to assure is available
+ *
+ * Returns -EBUSY if a stop is needed, else 0
+ **/
+static inline int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
+{
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ /* Memory barrier before checking head and tail */
+ smp_mb();
+
+ /* Check again in case another CPU has just made room available. */
+ if (likely(I40E_DESC_UNUSED(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ return 0;
+}
+
+/**
+ * i40e_maybe_stop_tx - 1st level check for tx stop conditions
+ * @tx_ring: the ring to be checked
+ * @size: the size buffer we want to assure is available
+ *
+ * Returns 0 if stop is not needed
+ **/
+#ifdef I40E_FCOE
+int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
+#else
+static int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
+#endif
+{
+ if (likely(I40E_DESC_UNUSED(tx_ring) >= size))
+ return 0;
+ return __i40e_maybe_stop_tx(tx_ring, size);
+}
+
+/**
* i40e_tx_map - Build the Tx descriptor
* @tx_ring: ring to send buffer on
* @skb: send buffer
@@ -2167,8 +2236,12 @@ static void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
tx_ring->next_to_use = i;
+ i40e_maybe_stop_tx(tx_ring, DESC_NEEDED);
/* notify HW of packet */
- writel(i, tx_ring->tail);
+ if (!skb->xmit_more ||
+ netif_xmit_stopped(netdev_get_tx_queue(tx_ring->netdev,
+ tx_ring->queue_index)))
+ writel(i, tx_ring->tail);
return;
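The hunk above is the i40e side of the xmit_more batching described in the merge message: the tail (doorbell) write is skipped while the stack promises another packet, unless the queue has just been stopped and nothing else would trigger the write. A minimal sketch of the same pattern in isolation; struct my_ring and its fields are placeholder names, not i40e structures:

static void example_ring_doorbell(struct my_ring *ring, struct sk_buff *skb,
				  u32 next_to_use)
{
	/* defer the MMIO write while more packets are coming, but always kick
	 * the hardware if the queue was stopped underneath us
	 */
	if (!skb->xmit_more ||
	    netif_xmit_stopped(netdev_get_tx_queue(ring->netdev,
						   ring->queue_index)))
		writel(next_to_use, ring->tail);
}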
@@ -2190,47 +2263,6 @@ dma_error:
}
/**
- * __i40e_maybe_stop_tx - 2nd level check for tx stop conditions
- * @tx_ring: the ring to be checked
- * @size: the size buffer we want to assure is available
- *
- * Returns -EBUSY if a stop is needed, else 0
- **/
-static inline int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
-{
- netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
- /* Memory barrier before checking head and tail */
- smp_mb();
-
- /* Check again in a case another CPU has just made room available. */
- if (likely(I40E_DESC_UNUSED(tx_ring) < size))
- return -EBUSY;
-
- /* A reprieve! - use start_queue because it doesn't call schedule */
- netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
- ++tx_ring->tx_stats.restart_queue;
- return 0;
-}
-
-/**
- * i40e_maybe_stop_tx - 1st level check for tx stop conditions
- * @tx_ring: the ring to be checked
- * @size: the size buffer we want to assure is available
- *
- * Returns 0 if stop is not needed
- **/
-#ifdef I40E_FCOE
-int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
-#else
-static int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
-#endif
-{
- if (likely(I40E_DESC_UNUSED(tx_ring) >= size))
- return 0;
- return __i40e_maybe_stop_tx(tx_ring, size);
-}
-
-/**
* i40e_xmit_descriptor_count - calculate number of tx descriptors needed
* @skb: send buffer
* @tx_ring: ring to send buffer on
@@ -2344,8 +2376,6 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
i40e_tx_map(tx_ring, skb, first, tx_flags, hdr_len,
td_cmd, td_offset);
- i40e_maybe_stop_tx(tx_ring, DESC_NEEDED);
-
return NETDEV_TX_OK;
out_drop:
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 73f4fa4..d7a625a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -121,6 +121,7 @@ enum i40e_dyn_idx_t {
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)
#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#define I40E_MIN_DESC_PENDING 4
#define I40E_TX_FLAGS_CSUM (u32)(1)
#define I40E_TX_FLAGS_HW_VLAN (u32)(1 << 1)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 3ac6a0d..4eeed26 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -73,7 +73,7 @@ static inline bool i40e_vc_isvalid_queue_id(struct i40e_vf *vf, u8 vsi_id,
{
struct i40e_pf *pf = vf->pf;
- return qid < pf->vsi[vsi_id]->num_queue_pairs;
+ return qid < pf->vsi[vsi_id]->alloc_queue_pairs;
}
/**
@@ -350,6 +350,7 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_idx,
rx_ctx.lrxqthresh = 2;
rx_ctx.crcstrip = 1;
rx_ctx.prefena = 1;
+ rx_ctx.l2tsel = 1;
/* clear the context in the HMC */
ret = i40e_clear_lan_rx_queue_context(hw, pf_queue_id);
@@ -468,7 +469,7 @@ static void i40e_enable_vf_mappings(struct i40e_vf *vf)
wr32(hw, I40E_VPLAN_MAPENA(vf->vf_id), reg);
/* map PF queues to VF queues */
- for (j = 0; j < pf->vsi[vf->lan_vsi_index]->num_queue_pairs; j++) {
+ for (j = 0; j < pf->vsi[vf->lan_vsi_index]->alloc_queue_pairs; j++) {
u16 qid = i40e_vc_get_pf_queue_id(vf, vf->lan_vsi_index, j);
reg = (qid & I40E_VPLAN_QTABLE_QINDEX_MASK);
wr32(hw, I40E_VPLAN_QTABLE(total_queue_pairs, vf->vf_id), reg);
@@ -477,7 +478,7 @@ static void i40e_enable_vf_mappings(struct i40e_vf *vf)
/* map PF queues to VSI */
for (j = 0; j < 7; j++) {
- if (j * 2 >= pf->vsi[vf->lan_vsi_index]->num_queue_pairs) {
+ if (j * 2 >= pf->vsi[vf->lan_vsi_index]->alloc_queue_pairs) {
reg = 0x07FF07FF; /* unused */
} else {
u16 qid = i40e_vc_get_pf_queue_id(vf, vf->lan_vsi_index,
@@ -584,7 +585,7 @@ static int i40e_alloc_vf_res(struct i40e_vf *vf)
ret = i40e_alloc_vsi_res(vf, I40E_VSI_SRIOV);
if (ret)
goto error_alloc;
- total_queue_pairs += pf->vsi[vf->lan_vsi_index]->num_queue_pairs;
+ total_queue_pairs += pf->vsi[vf->lan_vsi_index]->alloc_queue_pairs;
set_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
/* store the total qps number for the runtime
@@ -706,35 +707,6 @@ complete_reset:
wr32(hw, I40E_VFGEN_RSTAT1(vf->vf_id), I40E_VFR_VFACTIVE);
i40e_flush(hw);
}
-
-/**
- * i40e_vfs_are_assigned
- * @pf: pointer to the pf structure
- *
- * Determine if any VFs are assigned to VMs
- **/
-static bool i40e_vfs_are_assigned(struct i40e_pf *pf)
-{
- struct pci_dev *pdev = pf->pdev;
- struct pci_dev *vfdev;
-
- /* loop through all the VFs to see if we own any that are assigned */
- vfdev = pci_get_device(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_VF , NULL);
- while (vfdev) {
- /* if we don't own it we don't care */
- if (vfdev->is_virtfn && pci_physfn(vfdev) == pdev) {
- /* if it is assigned we cannot release it */
- if (vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED)
- return true;
- }
-
- vfdev = pci_get_device(PCI_VENDOR_ID_INTEL,
- I40E_DEV_ID_VF,
- vfdev);
- }
-
- return false;
-}
#ifdef CONFIG_PCI_IOV
/**
@@ -842,7 +814,7 @@ void i40e_free_vfs(struct i40e_pf *pf)
* assigned. Setting the number of VFs to 0 through sysfs is caught
* before this function ever gets called.
*/
- if (!i40e_vfs_are_assigned(pf)) {
+ if (!pci_vfs_assigned(pf->pdev)) {
pci_disable_sriov(pf->pdev);
/* Acknowledge VFLR for all VFS. Without this, VFs will fail to
* work correctly when SR-IOV gets re-enabled.
@@ -979,7 +951,7 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
if (num_vfs)
return i40e_pci_sriov_enable(pdev, num_vfs);
- if (!i40e_vfs_are_assigned(pf)) {
+ if (!pci_vfs_assigned(pf->pdev)) {
i40e_free_vfs(pf);
} else {
dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n");
@@ -1123,7 +1095,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf)
vfres->vsi_res[i].vsi_id = vf->lan_vsi_index;
vfres->vsi_res[i].vsi_type = I40E_VSI_SRIOV;
vfres->vsi_res[i].num_queue_pairs =
- pf->vsi[vf->lan_vsi_index]->num_queue_pairs;
+ pf->vsi[vf->lan_vsi_index]->alloc_queue_pairs;
memcpy(vfres->vsi_res[i].default_mac_addr,
vf->default_lan_addr.addr, ETH_ALEN);
i++;
@@ -1209,6 +1181,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
struct i40e_virtchnl_vsi_queue_config_info *qci =
(struct i40e_virtchnl_vsi_queue_config_info *)msg;
struct i40e_virtchnl_queue_pair_info *qpi;
+ struct i40e_pf *pf = vf->pf;
u16 vsi_id, vsi_queue_id;
i40e_status aq_ret = 0;
int i;
@@ -1242,6 +1215,8 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
goto error_param;
}
}
+ /* set vsi num_queue_pairs in use to num configured by vf */
+ pf->vsi[vf->lan_vsi_index]->num_queue_pairs = qci->num_queue_pairs;
error_param:
/* send the response to the vf */
@@ -2094,7 +2069,6 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
/* Force the VF driver stop so it has to reload with new MAC address */
i40e_vc_disable_vf(pf, vf);
dev_info(&pf->pdev->dev, "Reload the VF driver to make this change effective.\n");
- ret = 0;
error_param:
return ret;
@@ -2419,7 +2393,7 @@ error_out:
*
* Enable or disable VF spoof checking
**/
-int i40e_ndo_set_vf_spoofck(struct net_device *netdev, int vf_id, bool enable)
+int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index 63e7e0d..0adc61e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -122,7 +122,7 @@ int i40e_ndo_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate,
int i40e_ndo_get_vf_config(struct net_device *netdev,
int vf_id, struct ifla_vf_info *ivi);
int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link);
-int i40e_ndo_set_vf_spoofck(struct net_device *netdev, int vf_id, bool enable);
+int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable);
void i40e_vc_notify_link_state(struct i40e_pf *pf);
void i40e_vc_notify_reset(struct i40e_pf *pf);
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
index 0030060..f206be9 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.c
@@ -788,7 +788,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
/* bump the tail */
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
- i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring, buff);
+ i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+ buff, buff_size);
(hw->aq.asq.next_to_use)++;
if (hw->aq.asq.next_to_use == hw->aq.asq.count)
hw->aq.asq.next_to_use = 0;
@@ -842,7 +843,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
"AQTX: desc and buffer writeback:\n");
- i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff);
+ i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff,
+ buff_size);
/* update the error if time out occurred */
if ((!cmd_completed) &&
@@ -938,7 +940,8 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
hw->aq.nvm_busy = false;
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
- i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf);
+ i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+ hw->aq.arq_buf_size);
/* Restore the original datalen and buffer address in the desc,
* FW updates datalen to indicate the event message
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_common.c b/drivers/net/ethernet/intel/i40evf/i40e_common.c
index 4ea90bf..9525605 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_common.c
@@ -75,13 +75,15 @@ i40e_status i40e_set_mac_type(struct i40e_hw *hw)
* @mask: debug mask
* @desc: pointer to admin queue descriptor
* @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
*
* Dumps debug log about adminq command with descriptor contents.
**/
void i40evf_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc,
- void *buffer)
+ void *buffer, u16 buf_len)
{
struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc;
+ u16 len = le16_to_cpu(aq_desc->datalen);
u8 *aq_buffer = (u8 *)buffer;
u32 data[4];
u32 i = 0;
@@ -105,7 +107,9 @@ void i40evf_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc,
if ((buffer != NULL) && (aq_desc->datalen != 0)) {
memset(data, 0, sizeof(data));
i40e_debug(hw, mask, "AQ CMD Buffer:\n");
- for (i = 0; i < le16_to_cpu(aq_desc->datalen); i++) {
+ if (buf_len < len)
+ len = buf_len;
+ for (i = 0; i < len; i++) {
data[((i % 16) / 4)] |=
((u32)aq_buffer[i]) << (8 * (i % 4));
if ((i % 16) == 15) {
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_prototype.h b/drivers/net/ethernet/intel/i40evf/i40e_prototype.h
index 849edcc..9173834 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_prototype.h
@@ -53,10 +53,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
bool i40evf_asq_done(struct i40e_hw *hw);
/* debug function for adminq */
-void i40evf_debug_aq(struct i40e_hw *hw,
- enum i40e_debug_mask mask,
- void *desc,
- void *buffer);
+void i40evf_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask,
+ void *desc, void *buffer, u16 buf_len);
void i40e_idle_aq(struct i40e_hw *hw);
void i40evf_resume_aq(struct i40e_hw *hw);
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
index 95a3ec2..04c7c15 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
@@ -163,11 +163,13 @@ static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)
* pending but without time to complete it yet.
*/
if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&
- tx_pending) {
+ (tx_pending >= I40E_MIN_DESC_PENDING)) {
/* make sure it is true for two checks in a row */
ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED,
&tx_ring->state);
- } else {
+ } else if (!(tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) ||
+ !(tx_pending < I40E_MIN_DESC_PENDING) ||
+ !(tx_pending > 0)) {
/* update completed stats and disarm the hang check */
tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets;
clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state);
@@ -744,7 +746,6 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
ipv6_tunnel = (rx_ptype > I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
(rx_ptype < I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
- skb->encapsulation = ipv4_tunnel || ipv6_tunnel;
skb->ip_summed = CHECKSUM_NONE;
/* Rx csum enabled and ip headers found? */
@@ -818,6 +819,7 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
}
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ skb->csum_level = ipv4_tunnel || ipv6_tunnel;
return;
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
index 8bc6858..f6dcf9d 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
@@ -121,6 +121,7 @@ enum i40e_dyn_idx_t {
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)
#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#define I40E_MIN_DESC_PENDING 4
#define I40E_TX_FLAGS_CSUM (u32)(1)
#define I40E_TX_FLAGS_HW_VLAN (u32)(1 << 1)
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
index 38429fa..c51bc7a 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
@@ -36,7 +36,7 @@ char i40evf_driver_name[] = "i40evf";
static const char i40evf_driver_string[] =
"Intel(R) XL710/X710 Virtual Function Network Driver";
-#define DRV_VERSION "0.9.40"
+#define DRV_VERSION "1.0.5"
const char i40evf_driver_version[] = DRV_VERSION;
static const char i40evf_copyright[] =
"Copyright (c) 2013 - 2014 Intel Corporation.";
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
index 236a618..051ea94 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
@@ -2548,11 +2548,13 @@ s32 igb_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data)
/**
* igb_set_eee_i350 - Enable/disable EEE support
* @hw: pointer to the HW structure
+ * @adv1G: boolean flag enabling 1G EEE advertisement
+ * @adv100M: boolean flag enabling 100M EEE advertisement
*
* Enable/disable EEE based on setting in dev_spec structure.
*
**/
-s32 igb_set_eee_i350(struct e1000_hw *hw)
+s32 igb_set_eee_i350(struct e1000_hw *hw, bool adv1G, bool adv100M)
{
u32 ipcnfg, eeer;
@@ -2566,7 +2568,16 @@ s32 igb_set_eee_i350(struct e1000_hw *hw)
if (!(hw->dev_spec._82575.eee_disable)) {
u32 eee_su = rd32(E1000_EEE_SU);
- ipcnfg |= (E1000_IPCNFG_EEE_1G_AN | E1000_IPCNFG_EEE_100M_AN);
+ if (adv100M)
+ ipcnfg |= E1000_IPCNFG_EEE_100M_AN;
+ else
+ ipcnfg &= ~E1000_IPCNFG_EEE_100M_AN;
+
+ if (adv1G)
+ ipcnfg |= E1000_IPCNFG_EEE_1G_AN;
+ else
+ ipcnfg &= ~E1000_IPCNFG_EEE_1G_AN;
+
eeer |= (E1000_EEER_TX_LPI_EN | E1000_EEER_RX_LPI_EN |
E1000_EEER_LPI_FC);
@@ -2593,11 +2604,13 @@ out:
/**
* igb_set_eee_i354 - Enable/disable EEE support
* @hw: pointer to the HW structure
+ * @adv1G: boolean flag enabling 1G EEE advertisement
+ * @adv100M: boolean flag enabling 100M EEE advertisement
*
* Enable/disable EEE legacy mode based on setting in dev_spec structure.
*
**/
-s32 igb_set_eee_i354(struct e1000_hw *hw)
+s32 igb_set_eee_i354(struct e1000_hw *hw, bool adv1G, bool adv100M)
{
struct e1000_phy_info *phy = &hw->phy;
s32 ret_val = 0;
@@ -2636,8 +2649,16 @@ s32 igb_set_eee_i354(struct e1000_hw *hw)
if (ret_val)
goto out;
- phy_data |= E1000_EEE_ADV_100_SUPPORTED |
- E1000_EEE_ADV_1000_SUPPORTED;
+ if (adv100M)
+ phy_data |= E1000_EEE_ADV_100_SUPPORTED;
+ else
+ phy_data &= ~E1000_EEE_ADV_100_SUPPORTED;
+
+ if (adv1G)
+ phy_data |= E1000_EEE_ADV_1000_SUPPORTED;
+ else
+ phy_data &= ~E1000_EEE_ADV_1000_SUPPORTED;
+
ret_val = igb_write_xmdio_reg(hw, E1000_EEE_ADV_ADDR_I354,
E1000_EEE_ADV_DEV_I354,
phy_data);
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.h b/drivers/net/ethernet/intel/igb/e1000_82575.h
index b407c55..2154aea 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.h
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.h
@@ -263,8 +263,8 @@ void igb_vmdq_set_loopback_pf(struct e1000_hw *, bool);
void igb_vmdq_set_replication_pf(struct e1000_hw *, bool);
u16 igb_rxpbs_adjust_82580(u32 data);
s32 igb_read_emi_reg(struct e1000_hw *, u16 addr, u16 *data);
-s32 igb_set_eee_i350(struct e1000_hw *);
-s32 igb_set_eee_i354(struct e1000_hw *);
+s32 igb_set_eee_i350(struct e1000_hw *, bool adv1G, bool adv100M);
+s32 igb_set_eee_i354(struct e1000_hw *, bool adv1G, bool adv100M);
s32 igb_get_eee_status_i354(struct e1000_hw *hw, bool *status);
#define E1000_I2C_THERMAL_SENSOR_ADDR 0xF8
diff --git a/drivers/net/ethernet/intel/igb/e1000_hw.h b/drivers/net/ethernet/intel/igb/e1000_hw.h
index ce55ea5..2003b37 100644
--- a/drivers/net/ethernet/intel/igb/e1000_hw.h
+++ b/drivers/net/ethernet/intel/igb/e1000_hw.h
@@ -265,11 +265,6 @@ struct e1000_hw_stats {
u64 b2ogprc;
};
-struct e1000_phy_stats {
- u32 idle_errors;
- u32 receive_errors;
-};
-
struct e1000_host_mng_dhcp_cookie {
u32 signature;
u8 status;
diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 06102d1..82d891e 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -403,7 +403,6 @@ struct igb_adapter {
struct e1000_hw hw;
struct e1000_hw_stats stats;
struct e1000_phy_info phy_info;
- struct e1000_phy_stats phy_stats;
u32 test_icr;
struct igb_ring test_tx_ring;
diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
index c737d1f..02cfd3b 100644
--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
+++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -2675,6 +2675,7 @@ static int igb_set_eee(struct net_device *netdev,
struct igb_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
struct ethtool_eee eee_curr;
+ bool adv1g_eee = true, adv100m_eee = true;
s32 ret_val;
if ((hw->mac.type < e1000_i350) ||
@@ -2701,12 +2702,14 @@ static int igb_set_eee(struct net_device *netdev,
return -EINVAL;
}
- if (edata->advertised &
- ~(ADVERTISE_100_FULL | ADVERTISE_1000_FULL)) {
+ if (!edata->advertised || (edata->advertised &
+ ~(ADVERTISE_100_FULL | ADVERTISE_1000_FULL))) {
dev_err(&adapter->pdev->dev,
- "EEE Advertisement supports only 100Tx and or 100T full duplex\n");
+ "EEE Advertisement supports only 100Tx and/or 100T full duplex\n");
return -EINVAL;
}
+ adv100m_eee = !!(edata->advertised & ADVERTISE_100_FULL);
+ adv1g_eee = !!(edata->advertised & ADVERTISE_1000_FULL);
} else if (!edata->eee_enabled) {
dev_err(&adapter->pdev->dev,
@@ -2718,10 +2721,6 @@ static int igb_set_eee(struct net_device *netdev,
if (hw->dev_spec._82575.eee_disable != !edata->eee_enabled) {
hw->dev_spec._82575.eee_disable = !edata->eee_enabled;
adapter->flags |= IGB_FLAG_EEE;
- if (hw->mac.type == e1000_i350)
- igb_set_eee_i350(hw);
- else
- igb_set_eee_i354(hw);
/* reset link */
if (netif_running(netdev))
@@ -2730,6 +2729,17 @@ static int igb_set_eee(struct net_device *netdev,
igb_reset(adapter);
}
+ if (hw->mac.type == e1000_i354)
+ ret_val = igb_set_eee_i354(hw, adv1g_eee, adv100m_eee);
+ else
+ ret_val = igb_set_eee_i350(hw, adv1g_eee, adv100m_eee);
+
+ if (ret_val) {
+ dev_err(&adapter->pdev->dev,
+ "Problem setting EEE advertisement options\n");
+ return -EINVAL;
+ }
+
return 0;
}
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index cb14bbd..ae59c0b 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -58,7 +58,7 @@
#define MAJ 5
#define MIN 2
-#define BUILD 13
+#define BUILD 15
#define DRV_VERSION __stringify(MAJ) "." __stringify(MIN) "." \
__stringify(BUILD) "-k"
char igb_driver_name[] = "igb";
@@ -2012,10 +2012,10 @@ void igb_reset(struct igb_adapter *adapter)
case e1000_i350:
case e1000_i210:
case e1000_i211:
- igb_set_eee_i350(hw);
+ igb_set_eee_i350(hw, true, true);
break;
case e1000_i354:
- igb_set_eee_i354(hw);
+ igb_set_eee_i354(hw, true, true);
break;
default:
break;
@@ -2619,7 +2619,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
case e1000_i210:
case e1000_i211:
/* Enable EEE for internal copper PHY devices */
- err = igb_set_eee_i350(hw);
+ err = igb_set_eee_i350(hw, true, true);
if ((!err) &&
(!hw->dev_spec._82575.eee_disable)) {
adapter->eee_advert =
@@ -2630,7 +2630,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
case e1000_i354:
if ((rd32(E1000_CTRL_EXT) &
E1000_CTRL_EXT_LINK_MODE_SGMII)) {
- err = igb_set_eee_i354(hw);
+ err = igb_set_eee_i354(hw, true, true);
if ((!err) &&
(!hw->dev_spec._82575.eee_disable)) {
adapter->eee_advert =
@@ -4813,6 +4813,41 @@ static void igb_tx_olinfo_status(struct igb_ring *tx_ring,
tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
}
+static int __igb_maybe_stop_tx(struct igb_ring *tx_ring, const u16 size)
+{
+ struct net_device *netdev = tx_ring->netdev;
+
+ netif_stop_subqueue(netdev, tx_ring->queue_index);
+
+ /* Herbert's original patch had:
+ * smp_mb__after_netif_stop_queue();
+ * but since that doesn't exist yet, just open code it.
+ */
+ smp_mb();
+
+ /* We need to check again in case another CPU has just
+ * made room available.
+ */
+ if (igb_desc_unused(tx_ring) < size)
+ return -EBUSY;
+
+ /* A reprieve! */
+ netif_wake_subqueue(netdev, tx_ring->queue_index);
+
+ u64_stats_update_begin(&tx_ring->tx_syncp2);
+ tx_ring->tx_stats.restart_queue2++;
+ u64_stats_update_end(&tx_ring->tx_syncp2);
+
+ return 0;
+}
+
+static inline int igb_maybe_stop_tx(struct igb_ring *tx_ring, const u16 size)
+{
+ if (igb_desc_unused(tx_ring) >= size)
+ return 0;
+ return __igb_maybe_stop_tx(tx_ring, size);
+}
+
static void igb_tx_map(struct igb_ring *tx_ring,
struct igb_tx_buffer *first,
const u8 hdr_len)
@@ -4915,13 +4950,17 @@ static void igb_tx_map(struct igb_ring *tx_ring,
tx_ring->next_to_use = i;
- writel(i, tx_ring->tail);
+ /* Make sure there is space in the ring for the next send. */
+ igb_maybe_stop_tx(tx_ring, DESC_NEEDED);
- /* we need this if more than one processor can write to our tail
- * at a time, it synchronizes IO on IA64/Altix systems
- */
- mmiowb();
+ if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
+ writel(i, tx_ring->tail);
+ /* we need this if more than one processor can write to our tail
+ * at a time, it synchronizes IO on IA64/Altix systems
+ */
+ mmiowb();
+ }
return;
dma_error:
@@ -4941,41 +4980,6 @@ dma_error:
tx_ring->next_to_use = i;
}
-static int __igb_maybe_stop_tx(struct igb_ring *tx_ring, const u16 size)
-{
- struct net_device *netdev = tx_ring->netdev;
-
- netif_stop_subqueue(netdev, tx_ring->queue_index);
-
- /* Herbert's original patch had:
- * smp_mb__after_netif_stop_queue();
- * but since that doesn't exist yet, just open code it.
- */
- smp_mb();
-
- /* We need to check again in a case another CPU has just
- * made room available.
- */
- if (igb_desc_unused(tx_ring) < size)
- return -EBUSY;
-
- /* A reprieve! */
- netif_wake_subqueue(netdev, tx_ring->queue_index);
-
- u64_stats_update_begin(&tx_ring->tx_syncp2);
- tx_ring->tx_stats.restart_queue2++;
- u64_stats_update_end(&tx_ring->tx_syncp2);
-
- return 0;
-}
-
-static inline int igb_maybe_stop_tx(struct igb_ring *tx_ring, const u16 size)
-{
- if (igb_desc_unused(tx_ring) >= size)
- return 0;
- return __igb_maybe_stop_tx(tx_ring, size);
-}
-
netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
struct igb_ring *tx_ring)
{
@@ -5046,9 +5050,6 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
igb_tx_map(tx_ring, first, hdr_len);
- /* Make sure there is space in the ring for the next send. */
- igb_maybe_stop_tx(tx_ring, DESC_NEEDED);
-
return NETDEV_TX_OK;
out_drop:
@@ -5205,14 +5206,11 @@ void igb_update_stats(struct igb_adapter *adapter,
struct e1000_hw *hw = &adapter->hw;
struct pci_dev *pdev = adapter->pdev;
u32 reg, mpc;
- u16 phy_tmp;
int i;
u64 bytes, packets;
unsigned int start;
u64 _bytes, _packets;
-#define PHY_IDLE_ERROR_COUNT_MASK 0x00FF
-
/* Prevent stats update while adapter is being reset, or if the pci
* connection is down.
*/
@@ -5373,15 +5371,6 @@ void igb_update_stats(struct igb_adapter *adapter,
/* Tx Dropped needs to be maintained elsewhere */
- /* Phy Stats */
- if (hw->phy.media_type == e1000_media_type_copper) {
- if ((adapter->link_speed == SPEED_1000) &&
- (!igb_read_phy_reg(hw, PHY_1000T_STATUS, &phy_tmp))) {
- phy_tmp &= PHY_IDLE_ERROR_COUNT_MASK;
- adapter->phy_stats.idle_errors += phy_tmp;
- }
- }
-
/* Management Stats */
adapter->stats.mgptc += rd32(E1000_MGTPTC);
adapter->stats.mgprc += rd32(E1000_MGTPRC);
@@ -6385,7 +6374,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector)
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_kfree_skb_any(tx_buffer->skb);
+ dev_consume_skb_any(tx_buffer->skb);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -6768,113 +6757,6 @@ static bool igb_is_non_eop(struct igb_ring *rx_ring,
}
/**
- * igb_get_headlen - determine size of header for LRO/GRO
- * @data: pointer to the start of the headers
- * @max_len: total length of section to find headers in
- *
- * This function is meant to determine the length of headers that will
- * be recognized by hardware for LRO, and GRO offloads. The main
- * motivation of doing this is to only perform one pull for IPv4 TCP
- * packets so that we can do basic things like calculating the gso_size
- * based on the average data per packet.
- **/
-static unsigned int igb_get_headlen(unsigned char *data,
- unsigned int max_len)
-{
- union {
- unsigned char *network;
- /* l2 headers */
- struct ethhdr *eth;
- struct vlan_hdr *vlan;
- /* l3 headers */
- struct iphdr *ipv4;
- struct ipv6hdr *ipv6;
- } hdr;
- __be16 protocol;
- u8 nexthdr = 0; /* default to not TCP */
- u8 hlen;
-
- /* this should never happen, but better safe than sorry */
- if (max_len < ETH_HLEN)
- return max_len;
-
- /* initialize network frame pointer */
- hdr.network = data;
-
- /* set first protocol and move network header forward */
- protocol = hdr.eth->h_proto;
- hdr.network += ETH_HLEN;
-
- /* handle any vlan tag if present */
- if (protocol == htons(ETH_P_8021Q)) {
- if ((hdr.network - data) > (max_len - VLAN_HLEN))
- return max_len;
-
- protocol = hdr.vlan->h_vlan_encapsulated_proto;
- hdr.network += VLAN_HLEN;
- }
-
- /* handle L3 protocols */
- if (protocol == htons(ETH_P_IP)) {
- if ((hdr.network - data) > (max_len - sizeof(struct iphdr)))
- return max_len;
-
- /* access ihl as a u8 to avoid unaligned access on ia64 */
- hlen = (hdr.network[0] & 0x0F) << 2;
-
- /* verify hlen meets minimum size requirements */
- if (hlen < sizeof(struct iphdr))
- return hdr.network - data;
-
- /* record next protocol if header is present */
- if (!(hdr.ipv4->frag_off & htons(IP_OFFSET)))
- nexthdr = hdr.ipv4->protocol;
- } else if (protocol == htons(ETH_P_IPV6)) {
- if ((hdr.network - data) > (max_len - sizeof(struct ipv6hdr)))
- return max_len;
-
- /* record next protocol */
- nexthdr = hdr.ipv6->nexthdr;
- hlen = sizeof(struct ipv6hdr);
- } else {
- return hdr.network - data;
- }
-
- /* relocate pointer to start of L4 header */
- hdr.network += hlen;
-
- /* finally sort out TCP */
- if (nexthdr == IPPROTO_TCP) {
- if ((hdr.network - data) > (max_len - sizeof(struct tcphdr)))
- return max_len;
-
- /* access doff as a u8 to avoid unaligned access on ia64 */
- hlen = (hdr.network[12] & 0xF0) >> 2;
-
- /* verify hlen meets minimum size requirements */
- if (hlen < sizeof(struct tcphdr))
- return hdr.network - data;
-
- hdr.network += hlen;
- } else if (nexthdr == IPPROTO_UDP) {
- if ((hdr.network - data) > (max_len - sizeof(struct udphdr)))
- return max_len;
-
- hdr.network += sizeof(struct udphdr);
- }
-
- /* If everything has gone correctly hdr.network should be the
- * data section of the packet and will be the end of the header.
- * If not then it probably represents the end of the last recognized
- * header.
- */
- if ((hdr.network - data) < max_len)
- return hdr.network - data;
- else
- return max_len;
-}
-
-/**
* igb_pull_tail - igb specific version of skb_pull_tail
* @rx_ring: rx descriptor ring packet is being transacted on
* @rx_desc: pointer to the EOP Rx descriptor
@@ -6918,7 +6800,7 @@ static void igb_pull_tail(struct igb_ring *rx_ring,
/* we need the header to contain the greater of either ETH_HLEN or
* 60 bytes if the skb->len is less than 60 for skb_pad.
*/
- pull_len = igb_get_headlen(va, IGB_RX_HDR_LEN);
+ pull_len = eth_get_headlen(va, IGB_RX_HDR_LEN);
/* align pull length to size of long to optimize memcpy performance */
skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
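
The driver-local header parser deleted above is replaced by the shared eth_get_headlen() helper, whose job is to report how many leading bytes of the buffer are protocol headers so that only those bytes are pulled into the linear skb area. A much-simplified userspace sketch of that computation (untagged Ethernet plus IPv4/TCP only, with made-up constants; the real helper handles VLAN, IPv6 and more) could be:

#include <stdint.h>
#include <stddef.h>

#define ETH_HLEN	14
#define ETHERTYPE_IP	0x0800
#define IPPROTO_TCP_NUM	6

size_t headlen(const uint8_t *data, size_t max_len)
{
	size_t off = ETH_HLEN;

	if (max_len < ETH_HLEN)
		return max_len;

	/* only follow IPv4 in this sketch */
	if (((data[12] << 8) | data[13]) != ETHERTYPE_IP)
		return off;

	if (off + 20 > max_len)
		return max_len;

	off += (data[off] & 0x0F) * 4;		/* IHL, in 32-bit words           */

	if (data[ETH_HLEN + 9] == IPPROTO_TCP_NUM && off + 20 <= max_len)
		off += ((data[off + 12] & 0xF0) >> 4) * 4; /* TCP data offset     */

	return off < max_len ? off : max_len;	/* end of last recognized header  */
}
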
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index ac9f214..5032a60 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -307,7 +307,6 @@ enum ixgbe_ring_f_enum {
#define MAX_RX_QUEUES (IXGBE_MAX_FDIR_INDICES + 1)
#define MAX_TX_QUEUES (IXGBE_MAX_FDIR_INDICES + 1)
#define IXGBE_MAX_L2A_QUEUES 4
-#define IXGBE_MAX_L2A_QUEUES 4
#define IXGBE_BAD_L2A_QUEUE 3
#define IXGBE_MAX_MACVLANS 31
#define IXGBE_MAX_DCBMACVLANS 8
@@ -386,119 +385,87 @@ struct ixgbe_q_vector {
char name[IFNAMSIZ + 9];
#ifdef CONFIG_NET_RX_BUSY_POLL
- unsigned int state;
-#define IXGBE_QV_STATE_IDLE 0
-#define IXGBE_QV_STATE_NAPI 1 /* NAPI owns this QV */
-#define IXGBE_QV_STATE_POLL 2 /* poll owns this QV */
-#define IXGBE_QV_STATE_DISABLED 4 /* QV is disabled */
-#define IXGBE_QV_OWNED (IXGBE_QV_STATE_NAPI | IXGBE_QV_STATE_POLL)
-#define IXGBE_QV_LOCKED (IXGBE_QV_OWNED | IXGBE_QV_STATE_DISABLED)
-#define IXGBE_QV_STATE_NAPI_YIELD 8 /* NAPI yielded this QV */
-#define IXGBE_QV_STATE_POLL_YIELD 16 /* poll yielded this QV */
-#define IXGBE_QV_YIELD (IXGBE_QV_STATE_NAPI_YIELD | IXGBE_QV_STATE_POLL_YIELD)
-#define IXGBE_QV_USER_PEND (IXGBE_QV_STATE_POLL | IXGBE_QV_STATE_POLL_YIELD)
- spinlock_t lock;
+ atomic_t state;
#endif /* CONFIG_NET_RX_BUSY_POLL */
/* for dynamic allocation of rings associated with this q_vector */
struct ixgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
};
+
#ifdef CONFIG_NET_RX_BUSY_POLL
+enum ixgbe_qv_state_t {
+ IXGBE_QV_STATE_IDLE = 0,
+ IXGBE_QV_STATE_NAPI,
+ IXGBE_QV_STATE_POLL,
+ IXGBE_QV_STATE_DISABLE
+};
+
static inline void ixgbe_qv_init_lock(struct ixgbe_q_vector *q_vector)
{
-
- spin_lock_init(&q_vector->lock);
- q_vector->state = IXGBE_QV_STATE_IDLE;
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* called from the device poll routine to get ownership of a q_vector */
static inline bool ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if (q_vector->state & IXGBE_QV_LOCKED) {
- WARN_ON(q_vector->state & IXGBE_QV_STATE_NAPI);
- q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
- rc = false;
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_NAPI);
#ifdef BP_EXTENDED_STATS
+ if (rc != IXGBE_QV_STATE_IDLE)
q_vector->tx.ring->stats.yields++;
#endif
- } else {
- /* we don't care if someone yielded */
- q_vector->state = IXGBE_QV_STATE_NAPI;
- }
- spin_unlock_bh(&q_vector->lock);
- return rc;
+
+ return rc == IXGBE_QV_STATE_IDLE;
}
/* returns true if someone tried to get the qv while napi had it */
-static inline bool ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
+static inline void ixgbe_qv_unlock_napi(struct ixgbe_q_vector *q_vector)
{
- int rc = false;
- spin_lock_bh(&q_vector->lock);
- WARN_ON(q_vector->state & (IXGBE_QV_STATE_POLL |
- IXGBE_QV_STATE_NAPI_YIELD));
-
- if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
- rc = true;
- /* will reset state to idle, unless QV is disabled */
- q_vector->state &= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ WARN_ON(atomic_read(&q_vector->state) != IXGBE_QV_STATE_NAPI);
+
+ /* flush any outstanding Rx frames */
+ if (q_vector->napi.gro_list)
+ napi_gro_flush(&q_vector->napi, false);
+
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* called from ixgbe_low_latency_poll() */
static inline bool ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if ((q_vector->state & IXGBE_QV_LOCKED)) {
- q_vector->state |= IXGBE_QV_STATE_POLL_YIELD;
- rc = false;
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_POLL);
#ifdef BP_EXTENDED_STATS
- q_vector->rx.ring->stats.yields++;
+ if (rc != IXGBE_QV_STATE_IDLE)
+ q_vector->tx.ring->stats.yields++;
#endif
- } else {
- /* preserve yield marks */
- q_vector->state |= IXGBE_QV_STATE_POLL;
- }
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ return rc == IXGBE_QV_STATE_IDLE;
}
/* returns true if someone tried to get the qv while it was locked */
-static inline bool ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
+static inline void ixgbe_qv_unlock_poll(struct ixgbe_q_vector *q_vector)
{
- int rc = false;
- spin_lock_bh(&q_vector->lock);
- WARN_ON(q_vector->state & (IXGBE_QV_STATE_NAPI));
-
- if (q_vector->state & IXGBE_QV_STATE_POLL_YIELD)
- rc = true;
- /* will reset state to idle, unless QV is disabled */
- q_vector->state &= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
- return rc;
+ WARN_ON(atomic_read(&q_vector->state) != IXGBE_QV_STATE_POLL);
+
+ /* reset state to idle */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);
}
/* true if a socket is polling, even if it did not get the lock */
static inline bool ixgbe_qv_busy_polling(struct ixgbe_q_vector *q_vector)
{
- WARN_ON(!(q_vector->state & IXGBE_QV_OWNED));
- return q_vector->state & IXGBE_QV_USER_PEND;
+ return atomic_read(&q_vector->state) == IXGBE_QV_STATE_POLL;
}
/* false if QV is currently owned */
static inline bool ixgbe_qv_disable(struct ixgbe_q_vector *q_vector)
{
- int rc = true;
- spin_lock_bh(&q_vector->lock);
- if (q_vector->state & IXGBE_QV_OWNED)
- rc = false;
- q_vector->state |= IXGBE_QV_STATE_DISABLED;
- spin_unlock_bh(&q_vector->lock);
-
- return rc;
+ int rc = atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
+ IXGBE_QV_STATE_DISABLE);
+
+ return rc == IXGBE_QV_STATE_IDLE;
}
#else /* CONFIG_NET_RX_BUSY_POLL */
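
The conversion above trades the spinlock-and-flag scheme for a single atomic state word: an owner (NAPI or a busy-polling socket) takes the vector with one compare-and-swap against IDLE and gives it back by storing IDLE. A compact userspace model of that discipline, written with C11 atomics instead of the kernel's atomic_t and with stand-in names, might look like:

#include <stdatomic.h>
#include <stdbool.h>

enum qv_state { QV_IDLE, QV_NAPI, QV_POLL, QV_DISABLE };

static _Atomic int state = QV_IDLE;

static bool qv_lock(enum qv_state who)
{
	int expected = QV_IDLE;

	/* succeeds only if nobody currently owns the vector */
	return atomic_compare_exchange_strong(&state, &expected, who);
}

static void qv_unlock(void)
{
	atomic_store(&state, QV_IDLE);	/* hand ownership back */
}

int main(void)
{
	if (qv_lock(QV_NAPI)) {		/* NAPI wins the race for the vector */
		/* ... clean rings ... */
		qv_unlock();
	}
	return 0;
}
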
@@ -643,9 +610,7 @@ struct ixgbe_adapter {
* thus the additional *_CAPABLE flags.
*/
u32 flags;
-#define IXGBE_FLAG_MSI_CAPABLE (u32)(1 << 0)
#define IXGBE_FLAG_MSI_ENABLED (u32)(1 << 1)
-#define IXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 2)
#define IXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3)
#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 4)
#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 5)
@@ -760,8 +725,6 @@ struct ixgbe_adapter {
u8 __iomem *io_addr; /* Mainly for iounmap use */
u32 wol;
- u16 bd_number;
-
u16 eeprom_verh;
u16 eeprom_verl;
u16 eeprom_cap;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index e4100b5..3ce4a25 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1303,7 +1303,7 @@ static const struct ixgbe_reg_test reg_test_82599[] = {
{ IXGBE_RAL(0), 16, TABLE64_TEST_LO, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_RAL(0), 16, TABLE64_TEST_HI, 0x8001FFFF, 0x800CFFFF },
{ IXGBE_MTA(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
/* default 82598 register test */
@@ -1331,7 +1331,7 @@ static const struct ixgbe_reg_test reg_test_82598[] = {
{ IXGBE_RAL(0), 16, TABLE64_TEST_LO, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_RAL(0), 16, TABLE64_TEST_HI, 0x800CFFFF, 0x800CFFFF },
{ IXGBE_MTA(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
static bool reg_pattern_test(struct ixgbe_adapter *adapter, u64 *data, int reg,
@@ -2267,7 +2267,6 @@ static int ixgbe_set_coalesce(struct net_device *netdev,
if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
adapter->tx_itr_setting = adapter->rx_itr_setting;
-#if IS_ENABLED(CONFIG_BQL)
/* detect ITR changes that require update of TXDCTL.WTHRESH */
if ((adapter->tx_itr_setting != 1) &&
(adapter->tx_itr_setting < IXGBE_100K_ITR)) {
@@ -2279,7 +2278,7 @@ static int ixgbe_set_coalesce(struct net_device *netdev,
(tx_itr_prev < IXGBE_100K_ITR))
need_reset = true;
}
-#endif
+
/* check the old value and enable RSC if necessary */
need_reset |= ixgbe_update_rsc(adapter);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 2d9451e..ce40c77 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -696,46 +696,83 @@ static void ixgbe_set_num_queues(struct ixgbe_adapter *adapter)
ixgbe_set_rss_queues(adapter);
}
-static void ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter,
- int vectors)
+/**
+ * ixgbe_acquire_msix_vectors - acquire MSI-X vectors
+ * @adapter: board private structure
+ *
+ * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
+ * return a negative error code if unable to acquire MSI-X vectors for any
+ * reason.
+ */
+static int ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter)
{
- int vector_threshold;
+ struct ixgbe_hw *hw = &adapter->hw;
+ int i, vectors, vector_threshold;
+
+ /* We start by asking for one vector per queue pair */
+ vectors = max(adapter->num_rx_queues, adapter->num_tx_queues);
- /* We'll want at least 2 (vector_threshold):
- * 1) TxQ[0] + RxQ[0] handler
- * 2) Other (Link Status Change, etc.)
+ /* It is easy to be greedy for MSI-X vectors. However, it really
+ * doesn't do much good if we have a lot more vectors than CPUs. We'll
+ * be somewhat conservative and only ask for (roughly) the same number
+ * of vectors as there are CPUs.
*/
- vector_threshold = MIN_MSIX_COUNT;
+ vectors = min_t(int, vectors, num_online_cpus());
- /*
- * The more we get, the more we will assign to Tx/Rx Cleanup
- * for the separate queues...where Rx Cleanup >= Tx Cleanup.
- * Right now, we simply care about how many we'll get; we'll
- * set them up later while requesting irq's.
+ /* Some vectors are necessary for non-queue interrupts */
+ vectors += NON_Q_VECTORS;
+
+ /* Hardware can only support a maximum of hw.mac->max_msix_vectors.
+ * With features such as RSS and VMDq, we can easily surpass the
+ * number of Rx and Tx descriptor queues supported by our device.
+ * Thus, we cap the maximum in the rare cases where the CPU count also
+ * exceeds our vector limit
*/
+ vectors = min_t(int, vectors, hw->mac.max_msix_vectors);
+
+ /* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0]
+ * handler, and (2) an Other (Link Status Change, etc.) handler.
+ */
+ vector_threshold = MIN_MSIX_COUNT;
+
+ adapter->msix_entries = kcalloc(vectors,
+ sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!adapter->msix_entries)
+ return -ENOMEM;
+
+ for (i = 0; i < vectors; i++)
+ adapter->msix_entries[i].entry = i;
+
vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
vector_threshold, vectors);
if (vectors < 0) {
- /* Can't allocate enough MSI-X interrupts? Oh well.
- * This just means we'll go with either a single MSI
- * vector or fall back to legacy interrupts.
+ /* A negative count of allocated vectors indicates an error in
+ * acquiring within the specified range of MSI-X vectors
*/
- netif_printk(adapter, hw, KERN_DEBUG, adapter->netdev,
- "Unable to allocate MSI-X interrupts\n");
+ e_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n",
+ vectors);
+
adapter->flags &= ~IXGBE_FLAG_MSIX_ENABLED;
kfree(adapter->msix_entries);
adapter->msix_entries = NULL;
- } else {
- adapter->flags |= IXGBE_FLAG_MSIX_ENABLED; /* Woot! */
- /*
- * Adjust for only the vectors we'll use, which is minimum
- * of max_msix_q_vectors + NON_Q_VECTORS, or the number of
- * vectors we were allocated.
- */
- vectors -= NON_Q_VECTORS;
- adapter->num_q_vectors = min(vectors, adapter->max_q_vectors);
+
+ return vectors;
}
+
+ /* we successfully allocated some number of vectors within our
+ * requested range.
+ */
+ adapter->flags |= IXGBE_FLAG_MSIX_ENABLED;
+
+ /* Adjust for only the vectors we'll use, which is minimum
+ * of max_q_vectors, or the number of vectors we were allocated.
+ */
+ vectors -= NON_Q_VECTORS;
+ adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors);
+
+ return 0;
}
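
The refactored helper concentrates the old vector budgeting in one place: start from one vector per queue pair, cap at the online CPU count, add the non-queue vectors, then cap at the hardware maximum. A standalone sketch of just that arithmetic (plain C, with the limits passed as parameters instead of read from the hw struct) is:

/* Sketch of the MSI-X budgeting performed above; all inputs are assumed
 * values supplied by the caller rather than fields of the real adapter.
 */
int msix_budget(int rx_queues, int tx_queues,
		int online_cpus, int hw_max, int non_q_vectors)
{
	int vectors = rx_queues > tx_queues ? rx_queues : tx_queues;

	if (vectors > online_cpus)	/* more vectors than CPUs buys nothing */
		vectors = online_cpus;

	vectors += non_q_vectors;	/* link-status change, mailbox, ...    */

	if (vectors > hw_max)		/* the hardware ceiling wins last      */
		vectors = hw_max;

	return vectors;
}
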
static void ixgbe_add_ring(struct ixgbe_ring *ring,
@@ -807,6 +844,11 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
ixgbe_poll, 64);
napi_hash_add(&q_vector->napi);
+#ifdef CONFIG_NET_RX_BUSY_POLL
+ /* initialize busy poll */
+ atomic_set(&q_vector->state, IXGBE_QV_STATE_DISABLE);
+
+#endif
/* tie q_vector and adapter together */
adapter->q_vector[v_idx] = q_vector;
q_vector->adapter = adapter;
@@ -1049,46 +1091,20 @@ static void ixgbe_reset_interrupt_capability(struct ixgbe_adapter *adapter)
**/
static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
{
- struct ixgbe_hw *hw = &adapter->hw;
- int vector, v_budget, err;
+ int err;
- /*
- * It's easy to be greedy for MSI-X vectors, but it really
- * doesn't do us much good if we have a lot more vectors
- * than CPU's. So let's be conservative and only ask for
- * (roughly) the same number of vectors as there are CPU's.
- * The default is to use pairs of vectors.
- */
- v_budget = max(adapter->num_rx_queues, adapter->num_tx_queues);
- v_budget = min_t(int, v_budget, num_online_cpus());
- v_budget += NON_Q_VECTORS;
+ /* We will try to get MSI-X interrupts first */
+ if (!ixgbe_acquire_msix_vectors(adapter))
+ return;
- /*
- * At the same time, hardware can only support a maximum of
- * hw.mac->max_msix_vectors vectors. With features
- * such as RSS and VMDq, we can easily surpass the number of Rx and Tx
- * descriptor queues supported by our device. Thus, we cap it off in
- * those rare cases where the cpu count also exceeds our vector limit.
+ /* At this point, we do not have MSI-X capabilities. We need to
+ * reconfigure or disable various features which require MSI-X
+ * capability.
*/
- v_budget = min_t(int, v_budget, hw->mac.max_msix_vectors);
-
- /* A failure in MSI-X entry allocation isn't fatal, but it does
- * mean we disable MSI-X capabilities of the adapter. */
- adapter->msix_entries = kcalloc(v_budget,
- sizeof(struct msix_entry), GFP_KERNEL);
- if (adapter->msix_entries) {
- for (vector = 0; vector < v_budget; vector++)
- adapter->msix_entries[vector].entry = vector;
-
- ixgbe_acquire_msix_vectors(adapter, v_budget);
-
- if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED)
- return;
- }
- /* disable DCB if number of TCs exceeds 1 */
+ /* Disable DCB unless we only have a single traffic class */
if (netdev_get_num_tc(adapter->netdev) > 1) {
- e_err(probe, "num TCs exceeds number of queues - disabling DCB\n");
+ e_dev_warn("Number of DCB TCs exceeds number of available queues. Disabling DCB support.\n");
netdev_reset_tc(adapter->netdev);
if (adapter->hw.mac.type == ixgbe_mac_82598EB)
@@ -1098,26 +1114,30 @@ static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
adapter->temp_dcb_cfg.pfc_mode_enable = false;
adapter->dcb_cfg.pfc_mode_enable = false;
}
+
adapter->dcb_cfg.num_tcs.pg_tcs = 1;
adapter->dcb_cfg.num_tcs.pfc_tcs = 1;
- /* disable SR-IOV */
+ /* Disable SR-IOV support */
+ e_dev_warn("Disabling SR-IOV support\n");
ixgbe_disable_sriov(adapter);
- /* disable RSS */
+ /* Disable RSS */
+ e_dev_warn("Disabling RSS support\n");
adapter->ring_feature[RING_F_RSS].limit = 1;
+ /* recalculate number of queues now that many features have been
+ * changed or disabled.
+ */
ixgbe_set_num_queues(adapter);
adapter->num_q_vectors = 1;
err = pci_enable_msi(adapter->pdev);
- if (err) {
- netif_printk(adapter, hw, KERN_DEBUG, adapter->netdev,
- "Unable to allocate MSI interrupt, falling back to legacy. Error: %d\n",
- err);
- return;
- }
- adapter->flags |= IXGBE_FLAG_MSI_ENABLED;
+ if (err)
+ e_dev_warn("Failed to allocate MSI interrupt, falling back to legacy. Error: %d\n",
+ err);
+ else
+ adapter->flags |= IXGBE_FLAG_MSI_ENABLED;
}
/**
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 87bd53f..d677b5a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -440,7 +440,7 @@ static const struct ixgbe_reg_info ixgbe_reg_info_tbl[] = {
{IXGBE_TXDCTL(0), "TXDCTL"},
/* List Terminator */
- {}
+ { .name = NULL }
};
@@ -1094,7 +1094,7 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_kfree_skb_any(tx_buffer->skb);
+ dev_consume_skb_any(tx_buffer->skb);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -1521,120 +1521,6 @@ void ixgbe_alloc_rx_buffers(struct ixgbe_ring *rx_ring, u16 cleaned_count)
ixgbe_release_rx_desc(rx_ring, i);
}
-/**
- * ixgbe_get_headlen - determine size of header for RSC/LRO/GRO/FCOE
- * @data: pointer to the start of the headers
- * @max_len: total length of section to find headers in
- *
- * This function is meant to determine the length of headers that will
- * be recognized by hardware for LRO, GRO, and RSC offloads. The main
- * motivation of doing this is to only perform one pull for IPv4 TCP
- * packets so that we can do basic things like calculating the gso_size
- * based on the average data per packet.
- **/
-static unsigned int ixgbe_get_headlen(unsigned char *data,
- unsigned int max_len)
-{
- union {
- unsigned char *network;
- /* l2 headers */
- struct ethhdr *eth;
- struct vlan_hdr *vlan;
- /* l3 headers */
- struct iphdr *ipv4;
- struct ipv6hdr *ipv6;
- } hdr;
- __be16 protocol;
- u8 nexthdr = 0; /* default to not TCP */
- u8 hlen;
-
- /* this should never happen, but better safe than sorry */
- if (max_len < ETH_HLEN)
- return max_len;
-
- /* initialize network frame pointer */
- hdr.network = data;
-
- /* set first protocol and move network header forward */
- protocol = hdr.eth->h_proto;
- hdr.network += ETH_HLEN;
-
- /* handle any vlan tag if present */
- if (protocol == htons(ETH_P_8021Q)) {
- if ((hdr.network - data) > (max_len - VLAN_HLEN))
- return max_len;
-
- protocol = hdr.vlan->h_vlan_encapsulated_proto;
- hdr.network += VLAN_HLEN;
- }
-
- /* handle L3 protocols */
- if (protocol == htons(ETH_P_IP)) {
- if ((hdr.network - data) > (max_len - sizeof(struct iphdr)))
- return max_len;
-
- /* access ihl as a u8 to avoid unaligned access on ia64 */
- hlen = (hdr.network[0] & 0x0F) << 2;
-
- /* verify hlen meets minimum size requirements */
- if (hlen < sizeof(struct iphdr))
- return hdr.network - data;
-
- /* record next protocol if header is present */
- if (!(hdr.ipv4->frag_off & htons(IP_OFFSET)))
- nexthdr = hdr.ipv4->protocol;
- } else if (protocol == htons(ETH_P_IPV6)) {
- if ((hdr.network - data) > (max_len - sizeof(struct ipv6hdr)))
- return max_len;
-
- /* record next protocol */
- nexthdr = hdr.ipv6->nexthdr;
- hlen = sizeof(struct ipv6hdr);
-#ifdef IXGBE_FCOE
- } else if (protocol == htons(ETH_P_FCOE)) {
- if ((hdr.network - data) > (max_len - FCOE_HEADER_LEN))
- return max_len;
- hlen = FCOE_HEADER_LEN;
-#endif
- } else {
- return hdr.network - data;
- }
-
- /* relocate pointer to start of L4 header */
- hdr.network += hlen;
-
- /* finally sort out TCP/UDP */
- if (nexthdr == IPPROTO_TCP) {
- if ((hdr.network - data) > (max_len - sizeof(struct tcphdr)))
- return max_len;
-
- /* access doff as a u8 to avoid unaligned access on ia64 */
- hlen = (hdr.network[12] & 0xF0) >> 2;
-
- /* verify hlen meets minimum size requirements */
- if (hlen < sizeof(struct tcphdr))
- return hdr.network - data;
-
- hdr.network += hlen;
- } else if (nexthdr == IPPROTO_UDP) {
- if ((hdr.network - data) > (max_len - sizeof(struct udphdr)))
- return max_len;
-
- hdr.network += sizeof(struct udphdr);
- }
-
- /*
- * If everything has gone correctly hdr.network should be the
- * data section of the packet and will be the end of the header.
- * If not then it probably represents the end of the last recognized
- * header.
- */
- if ((hdr.network - data) < max_len)
- return hdr.network - data;
- else
- return max_len;
-}
-
static void ixgbe_set_rsc_gso_size(struct ixgbe_ring *ring,
struct sk_buff *skb)
{
@@ -1793,7 +1679,7 @@ static void ixgbe_pull_tail(struct ixgbe_ring *rx_ring,
* we need the header to contain the greater of either ETH_HLEN or
* 60 bytes if the skb->len is less than 60 for skb_pad.
*/
- pull_len = ixgbe_get_headlen(va, IXGBE_RX_HDR_SIZE);
+ pull_len = eth_get_headlen(va, IXGBE_RX_HDR_SIZE);
/* align pull length to size of long to optimize memcpy performance */
skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
@@ -2191,9 +2077,6 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
q_vector->rx.total_packets += total_rx_packets;
q_vector->rx.total_bytes += total_rx_bytes;
- if (cleaned_count)
- ixgbe_alloc_rx_buffers(rx_ring, cleaned_count);
-
return total_rx_packets;
}
@@ -3099,11 +2982,7 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter,
* to or less than the number of on chip descriptors, which is
* currently 40.
*/
-#if IS_ENABLED(CONFIG_BQL)
if (!ring->q_vector || (ring->q_vector->itr < IXGBE_100K_ITR))
-#else
- if (!ring->q_vector || (ring->q_vector->itr < 8))
-#endif
txdctl |= (1 << 16); /* WTHRESH = 1 */
else
txdctl |= (8 << 16); /* WTHRESH = 8 */
@@ -5300,15 +5179,15 @@ int ixgbe_setup_tx_resources(struct ixgbe_ring *tx_ring)
{
struct device *dev = tx_ring->dev;
int orig_node = dev_to_node(dev);
- int numa_node = -1;
+ int ring_node = -1;
int size;
size = sizeof(struct ixgbe_tx_buffer) * tx_ring->count;
if (tx_ring->q_vector)
- numa_node = tx_ring->q_vector->numa_node;
+ ring_node = tx_ring->q_vector->numa_node;
- tx_ring->tx_buffer_info = vzalloc_node(size, numa_node);
+ tx_ring->tx_buffer_info = vzalloc_node(size, ring_node);
if (!tx_ring->tx_buffer_info)
tx_ring->tx_buffer_info = vzalloc(size);
if (!tx_ring->tx_buffer_info)
@@ -5320,7 +5199,7 @@ int ixgbe_setup_tx_resources(struct ixgbe_ring *tx_ring)
tx_ring->size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);
tx_ring->size = ALIGN(tx_ring->size, 4096);
- set_dev_node(dev, numa_node);
+ set_dev_node(dev, ring_node);
tx_ring->desc = dma_alloc_coherent(dev,
tx_ring->size,
&tx_ring->dma,
@@ -5384,15 +5263,15 @@ int ixgbe_setup_rx_resources(struct ixgbe_ring *rx_ring)
{
struct device *dev = rx_ring->dev;
int orig_node = dev_to_node(dev);
- int numa_node = -1;
+ int ring_node = -1;
int size;
size = sizeof(struct ixgbe_rx_buffer) * rx_ring->count;
if (rx_ring->q_vector)
- numa_node = rx_ring->q_vector->numa_node;
+ ring_node = rx_ring->q_vector->numa_node;
- rx_ring->rx_buffer_info = vzalloc_node(size, numa_node);
+ rx_ring->rx_buffer_info = vzalloc_node(size, ring_node);
if (!rx_ring->rx_buffer_info)
rx_ring->rx_buffer_info = vzalloc(size);
if (!rx_ring->rx_buffer_info)
@@ -5404,7 +5283,7 @@ int ixgbe_setup_rx_resources(struct ixgbe_ring *rx_ring)
rx_ring->size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);
rx_ring->size = ALIGN(rx_ring->size, 4096);
- set_dev_node(dev, numa_node);
+ set_dev_node(dev, ring_node);
rx_ring->desc = dma_alloc_coherent(dev,
rx_ring->size,
&rx_ring->dma,
@@ -6319,25 +6198,55 @@ static void ixgbe_watchdog_link_is_down(struct ixgbe_adapter *adapter)
ixgbe_ping_all_vfs(adapter);
}
+static bool ixgbe_ring_tx_pending(struct ixgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct ixgbe_ring *tx_ring = adapter->tx_ring[i];
+
+ if (tx_ring->next_to_use != tx_ring->next_to_clean)
+ return true;
+ }
+
+ return false;
+}
+
+static bool ixgbe_vf_tx_pending(struct ixgbe_adapter *adapter)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+
+ int i, j;
+
+ if (!adapter->num_vfs)
+ return false;
+
+ for (i = 0; i < adapter->num_vfs; i++) {
+ for (j = 0; j < q_per_pool; j++) {
+ u32 h, t;
+
+ h = IXGBE_READ_REG(hw, IXGBE_PVFTDHN(q_per_pool, i, j));
+ t = IXGBE_READ_REG(hw, IXGBE_PVFTDTN(q_per_pool, i, j));
+
+ if (h != t)
+ return true;
+ }
+ }
+
+ return false;
+}
+
/**
* ixgbe_watchdog_flush_tx - flush queues on link down
* @adapter: pointer to the device adapter structure
**/
static void ixgbe_watchdog_flush_tx(struct ixgbe_adapter *adapter)
{
- int i;
- int some_tx_pending = 0;
-
if (!netif_carrier_ok(adapter->netdev)) {
- for (i = 0; i < adapter->num_tx_queues; i++) {
- struct ixgbe_ring *tx_ring = adapter->tx_ring[i];
- if (tx_ring->next_to_use != tx_ring->next_to_clean) {
- some_tx_pending = 1;
- break;
- }
- }
-
- if (some_tx_pending) {
+ if (ixgbe_ring_tx_pending(adapter) ||
+ ixgbe_vf_tx_pending(adapter)) {
/* We've lost link, so the controller stops DMA,
* but we've got queued Tx work that's never going
* to get done, so reset controller to flush Tx.
@@ -6837,6 +6746,36 @@ static void ixgbe_tx_olinfo_status(union ixgbe_adv_tx_desc *tx_desc,
tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
}
+static int __ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
+{
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ /* Herbert's original patch had:
+ * smp_mb__after_netif_stop_queue();
+ * but since that doesn't exist yet, just open code it.
+ */
+ smp_mb();
+
+ /* We need to check again in case another CPU has just
+ * made room available.
+ */
+ if (likely(ixgbe_desc_unused(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ return 0;
+}
+
+static inline int ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
+{
+ if (likely(ixgbe_desc_unused(tx_ring) >= size))
+ return 0;
+
+ return __ixgbe_maybe_stop_tx(tx_ring, size);
+}
+
#define IXGBE_TXD_CMD (IXGBE_TXD_CMD_EOP | \
IXGBE_TXD_CMD_RS)
@@ -6958,8 +6897,12 @@ static void ixgbe_tx_map(struct ixgbe_ring *tx_ring,
tx_ring->next_to_use = i;
- /* notify HW of packet */
- ixgbe_write_tail(tx_ring, i);
+ ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
+ /* notify HW of packet */
+ ixgbe_write_tail(tx_ring, i);
+ }
return;
dma_error:
@@ -7067,32 +7010,6 @@ static void ixgbe_atr(struct ixgbe_ring *ring,
input, common, ring->queue_index);
}
-static int __ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
-{
- netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
- /* Herbert's original patch had:
- * smp_mb__after_netif_stop_queue();
- * but since that doesn't exist yet, just open code it. */
- smp_mb();
-
- /* We need to check again in a case another CPU has just
- * made room available. */
- if (likely(ixgbe_desc_unused(tx_ring) < size))
- return -EBUSY;
-
- /* A reprieve! - use start_queue because it doesn't call schedule */
- netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
- ++tx_ring->tx_stats.restart_queue;
- return 0;
-}
-
-static inline int ixgbe_maybe_stop_tx(struct ixgbe_ring *tx_ring, u16 size)
-{
- if (likely(ixgbe_desc_unused(tx_ring) >= size))
- return 0;
- return __ixgbe_maybe_stop_tx(tx_ring, size);
-}
-
static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb,
void *accel_priv, select_queue_fallback_t fallback)
{
@@ -7187,9 +7104,10 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb,
tx_flags |= IXGBE_TX_FLAGS_SW_VLAN;
}
- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP &&
- !test_and_set_bit_lock(__IXGBE_PTP_TX_IN_PROGRESS,
- &adapter->state))) {
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ adapter->ptp_clock &&
+ !test_and_set_bit_lock(__IXGBE_PTP_TX_IN_PROGRESS,
+ &adapter->state)) {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
tx_flags |= IXGBE_TX_FLAGS_TSTAMP;
@@ -7261,8 +7179,6 @@ xmit_fcoe:
#endif /* IXGBE_FCOE */
ixgbe_tx_map(tx_ring, first, hdr_len);
- ixgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
-
return NETDEV_TX_OK;
out_drop:
@@ -7735,39 +7651,13 @@ static int ixgbe_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
const unsigned char *addr,
u16 flags)
{
- struct ixgbe_adapter *adapter = netdev_priv(dev);
- int err;
-
- if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED))
- return ndo_dflt_fdb_add(ndm, tb, dev, addr, flags);
-
- /* Hardware does not support aging addresses so if a
- * ndm_state is given only allow permanent addresses
- */
- if (ndm->ndm_state && !(ndm->ndm_state & NUD_PERMANENT)) {
- pr_info("%s: FDB only supports static addresses\n",
- ixgbe_driver_name);
- return -EINVAL;
- }
-
+ /* guarantee we can provide a unique filter for the unicast address */
if (is_unicast_ether_addr(addr) || is_link_local_ether_addr(addr)) {
- u32 rar_uc_entries = IXGBE_MAX_PF_MACVLANS;
-
- if (netdev_uc_count(dev) < rar_uc_entries)
- err = dev_uc_add_excl(dev, addr);
- else
- err = -ENOMEM;
- } else if (is_multicast_ether_addr(addr)) {
- err = dev_mc_add_excl(dev, addr);
- } else {
- err = -EINVAL;
+ if (IXGBE_MAX_PF_MACVLANS <= netdev_uc_count(dev))
+ return -ENOMEM;
}
- /* Only return duplicate errors if NLM_F_EXCL is set */
- if (err == -EEXIST && !(flags & NLM_F_EXCL))
- err = 0;
-
- return err;
+ return ndo_dflt_fdb_add(ndm, tb, dev, addr, flags);
}
static int ixgbe_ndo_bridge_setlink(struct net_device *dev,
@@ -7830,9 +7720,17 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
{
struct ixgbe_fwd_adapter *fwd_adapter = NULL;
struct ixgbe_adapter *adapter = netdev_priv(pdev);
+ int used_pools = adapter->num_vfs + adapter->num_rx_pools;
unsigned int limit;
int pool, err;
+ /* Hardware has a limited number of available pools. Each VF and the
+ * PF require a pool. Check to ensure we don't attempt to use more
+ * than the available number of pools.
+ */
+ if (used_pools >= IXGBE_MAX_VF_FUNCTIONS)
+ return ERR_PTR(-EINVAL);
+
#ifdef CONFIG_RPS
if (vdev->num_rx_queues != vdev->num_tx_queues) {
netdev_info(pdev, "%s: Only supports a single queue count for TX and RX\n",
@@ -8080,7 +7978,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct ixgbe_adapter *adapter = NULL;
struct ixgbe_hw *hw;
const struct ixgbe_info *ii = ixgbe_info_tbl[ent->driver_data];
- static int cards_found;
int i, err, pci_using_dac, expected_gts;
unsigned int indices = MAX_TX_QUEUES;
u8 part_str[IXGBE_PBANUM_LENGTH];
@@ -8166,8 +8063,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->watchdog_timeo = 5 * HZ;
strlcpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
- adapter->bd_number = cards_found;
-
/* Setup hw api */
memcpy(&hw->mac.ops, ii->mac_ops, sizeof(hw->mac.ops));
hw->mac.type = ii->mac;
@@ -8451,7 +8346,6 @@ skip_sriov:
ixgbe_add_sanmac_netdev(netdev);
e_dev_info("%s\n", ixgbe_default_device_descr);
- cards_found++;
#ifdef CONFIG_IXGBE_HWMON
if (ixgbe_sysfs_init(adapter))
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index 11f02ea..d47b19f 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -445,8 +445,6 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr,
s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw)
{
s32 status = 0;
- u32 time_out;
- u32 max_time_out = 10;
u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
bool autoneg = false;
ixgbe_link_speed speed;
@@ -514,25 +512,6 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw)
hw->phy.ops.write_reg(hw, MDIO_CTRL1,
MDIO_MMD_AN, autoneg_reg);
- /* Wait for autonegotiation to finish */
- for (time_out = 0; time_out < max_time_out; time_out++) {
- udelay(10);
- /* Restart PHY autonegotiation and wait for completion */
- status = hw->phy.ops.read_reg(hw, MDIO_STAT1,
- MDIO_MMD_AN,
- &autoneg_reg);
-
- autoneg_reg &= MDIO_AN_STAT1_COMPLETE;
- if (autoneg_reg == MDIO_AN_STAT1_COMPLETE) {
- break;
- }
- }
-
- if (time_out == max_time_out) {
- hw_dbg(hw, "ixgbe_setup_phy_link_generic: time out\n");
- return IXGBE_ERR_LINK_SETUP;
- }
-
return status;
}
@@ -657,8 +636,6 @@ s32 ixgbe_check_phy_link_tnx(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw)
{
s32 status;
- u32 time_out;
- u32 max_time_out = 10;
u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
bool autoneg = false;
ixgbe_link_speed speed;
@@ -724,24 +701,6 @@ s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw)
hw->phy.ops.write_reg(hw, MDIO_CTRL1,
MDIO_MMD_AN, autoneg_reg);
- /* Wait for autonegotiation to finish */
- for (time_out = 0; time_out < max_time_out; time_out++) {
- udelay(10);
- /* Restart PHY autonegotiation and wait for completion */
- status = hw->phy.ops.read_reg(hw, MDIO_STAT1,
- MDIO_MMD_AN,
- &autoneg_reg);
-
- autoneg_reg &= MDIO_AN_STAT1_COMPLETE;
- if (autoneg_reg == MDIO_AN_STAT1_COMPLETE)
- break;
- }
-
- if (time_out == max_time_out) {
- hw_dbg(hw, "ixgbe_setup_phy_link_tnx: time out\n");
- return IXGBE_ERR_LINK_SETUP;
- }
-
return status;
}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index c14d4d8..706fc69 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -250,13 +250,15 @@ static int ixgbe_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
if (err)
return err;
- /* While the SR-IOV capability structure reports total VFs to be
- * 64 we limit the actual number that can be allocated to 63 so
- * that some transmit/receive resources can be reserved to the
- * PF. The PCI bus driver already checks for other values out of
- * range.
+ /* While the SR-IOV capability structure reports total VFs to be 64,
+ * we have to limit the actual number allocated based on two factors.
+ * First, we reserve some transmit/receive resources for the PF.
+ * Second, VMDQ also uses the same pools that SR-IOV does. We need to
+ * account for this, so that we don't accidentally allocate more VFs
+ * than we have available pools. The PCI bus driver already checks for
+ * other values out of range.
*/
- if (num_vfs > IXGBE_MAX_VFS_DRV_LIMIT)
+ if ((num_vfs + adapter->num_rx_pools) > IXGBE_MAX_VF_FUNCTIONS)
return -EPERM;
adapter->num_vfs = num_vfs;
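As a worked example of the new bound: assuming IXGBE_MAX_VF_FUNCTIONS is 64 on these parts, a PF that already owns, say, four RX pools for macvlan offload (adapter->num_rx_pools == 4) can now enable at most 60 VFs; asking for more fails with -EPERM instead of silently colliding with the VMDq pools.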
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
index e6b07c2..dfd55d8 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
@@ -2194,6 +2194,8 @@ enum {
#define IXGBE_VFLRE(_i) ((((_i) & 1) ? 0x001C0 : 0x00600))
#define IXGBE_VFLREC(_i) (0x00700 + ((_i) * 4))
/* Translated register #defines */
+#define IXGBE_PVFTDH(P) (0x06010 + (0x40 * (P)))
+#define IXGBE_PVFTDT(P) (0x06018 + (0x40 * (P)))
#define IXGBE_PVFTDWBAL(P) (0x06038 + (0x40 * (P)))
#define IXGBE_PVFTDWBAH(P) (0x0603C + (0x40 * (P)))
@@ -2202,6 +2204,11 @@ enum {
#define IXGBE_PVFTDWBAHn(q_per_pool, vf_number, vf_q_index) \
(IXGBE_PVFTDWBAH((q_per_pool)*(vf_number) + (vf_q_index)))
+#define IXGBE_PVFTDHN(q_per_pool, vf_number, vf_q_index) \
+ (IXGBE_PVFTDH((q_per_pool)*(vf_number) + (vf_q_index)))
+#define IXGBE_PVFTDTN(q_per_pool, vf_number, vf_q_index) \
+ (IXGBE_PVFTDT((q_per_pool)*(vf_number) + (vf_q_index)))
+
enum ixgbe_fdir_pballoc_type {
IXGBE_FDIR_PBALLOC_NONE = 0,
IXGBE_FDIR_PBALLOC_64K = 1,
diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
index d420f12..cc0e5b7 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
@@ -523,7 +523,7 @@ static const struct ixgbevf_reg_test reg_test_vf[] = {
{ IXGBE_VFTDBAL(0), 2, PATTERN_TEST, 0xFFFFFF80, 0xFFFFFFFF },
{ IXGBE_VFTDBAH(0), 2, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
{ IXGBE_VFTDLEN(0), 2, PATTERN_TEST, 0x000FFF80, 0x000FFF80 },
- { 0, 0, 0, 0 }
+ { .reg = 0 }
};
static const u32 register_test_patterns[] = {
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
index a0a1de9..ba96cb5 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
@@ -385,7 +385,6 @@ struct ixgbevf_adapter {
/* structs defined in ixgbe_vf.h */
struct ixgbe_hw hw;
u16 msg_enable;
- u16 bd_number;
/* Interrupt Throttle Rate */
u32 eitr_param;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index c22a00c..030a219 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -3464,7 +3464,6 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct ixgbevf_adapter *adapter = NULL;
struct ixgbe_hw *hw = NULL;
const struct ixgbevf_info *ii = ixgbevf_info_tbl[ent->driver_data];
- static int cards_found;
int err, pci_using_dac;
err = pci_enable_device(pdev);
@@ -3525,8 +3524,6 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
ixgbevf_assign_netdev_ops(netdev);
- adapter->bd_number = cards_found;
-
/* Setup hw api */
memcpy(&hw->mac.ops, ii->mac_ops, sizeof(hw->mac.ops));
hw->mac.type = ii->mac;
@@ -3601,7 +3598,6 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
hw_dbg(hw, "MAC: %d\n", hw->mac.type);
hw_dbg(hw, "Intel(R) 82599 Virtual Function\n");
- cards_found++;
return 0;
err_register:
diff --git a/drivers/net/ethernet/intel/ixgbevf/vf.c b/drivers/net/ethernet/intel/ixgbevf/vf.c
index 4d44d64..9cddd56 100644
--- a/drivers/net/ethernet/intel/ixgbevf/vf.c
+++ b/drivers/net/ethernet/intel/ixgbevf/vf.c
@@ -434,6 +434,21 @@ static s32 ixgbevf_check_mac_link_vf(struct ixgbe_hw *hw,
if (!(links_reg & IXGBE_LINKS_UP))
goto out;
+ /* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
+ * before the link status is correct
+ */
+ if (mac->type == ixgbe_mac_82599_vf) {
+ int i;
+
+ for (i = 0; i < 5; i++) {
+ udelay(100);
+ links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+ if (!(links_reg & IXGBE_LINKS_UP))
+ goto out;
+ }
+ }
+
switch (links_reg & IXGBE_LINKS_SPEED_82599) {
case IXGBE_LINKS_SPEED_10G_82599:
*speed = IXGBE_LINK_SPEED_10GB_FULL;
diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
index fd4b6ae..2dad4d5 100644
--- a/drivers/net/ethernet/lantiq_etop.c
+++ b/drivers/net/ethernet/lantiq_etop.c
@@ -633,7 +633,6 @@ ltq_etop_init(struct net_device *dev)
int err;
bool random_mac = false;
- ether_setup(dev);
dev->watchdog_timeo = 10 * HZ;
err = ltq_etop_hw_init(dev);
if (err)
diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index 1b4fc7c..b3b72ad 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -64,7 +64,7 @@ config MVPP2
config PXA168_ETH
tristate "Marvell pxa168 ethernet support"
- depends on CPU_PXA168
+ depends on (CPU_PXA168 || ARCH_BERLIN || COMPILE_TEST) && HAS_IOMEM
select PHYLIB
---help---
This driver supports the pxa168 Ethernet ports.
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index 8f5aa7c..c3b209c 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -22,27 +22,30 @@
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
-#include <linux/dma-mapping.h>
-#include <linux/in.h>
-#include <linux/ip.h>
-#include <linux/tcp.h>
-#include <linux/udp.h>
-#include <linux/etherdevice.h>
#include <linux/bitops.h>
+#include <linux/clk.h>
#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/etherdevice.h>
#include <linux/ethtool.h>
-#include <linux/platform_device.h>
-#include <linux/module.h>
+#include <linux/in.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/ip.h>
#include <linux/kernel.h>
-#include <linux/workqueue.h>
-#include <linux/clk.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_net.h>
#include <linux/phy.h>
-#include <linux/io.h>
-#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/pxa168_eth.h>
+#include <linux/tcp.h>
#include <linux/types.h>
+#include <linux/udp.h>
+#include <linux/workqueue.h>
+
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
-#include <linux/pxa168_eth.h>
#define DRIVER_NAME "pxa168-eth"
#define DRIVER_VERSION "0.3"
@@ -58,6 +61,8 @@
#define PORT_COMMAND 0x0410
#define PORT_STATUS 0x0418
#define HTPR 0x0428
+#define MAC_ADDR_LOW 0x0430
+#define MAC_ADDR_HIGH 0x0438
#define SDMA_CONFIG 0x0440
#define SDMA_CMD 0x0448
#define INT_CAUSE 0x0450
@@ -161,7 +166,7 @@
/* Bit definitions for Port status */
#define PORT_SPEED_100 (1 << 0)
#define FULL_DUPLEX (1 << 1)
-#define FLOW_CONTROL_ENABLED (1 << 2)
+#define FLOW_CONTROL_DISABLED (1 << 2)
#define LINK_UP (1 << 3)
/* Bit definitions for work to be done */
@@ -191,6 +196,7 @@ struct tx_desc {
struct pxa168_eth_private {
int port_num; /* User Ethernet port number */
+ int phy_addr;
int rx_resource_err; /* Rx ring resource error flag */
@@ -296,7 +302,7 @@ static void abort_dma(struct pxa168_eth_private *pep)
} while (max_retries-- > 0 && delay <= 0);
if (max_retries <= 0)
- printk(KERN_ERR "%s : DMA Stuck\n", __func__);
+ netdev_err(pep->dev, "%s : DMA Stuck\n", __func__);
}
static int ethernet_phy_get(struct pxa168_eth_private *pep)
@@ -507,9 +513,10 @@ static int add_del_hash_entry(struct pxa168_eth_private *pep,
if (i == HOP_NUMBER) {
if (!del) {
- printk(KERN_INFO "%s: table section is full, need to "
- "move to 16kB implementation?\n",
- __FILE__);
+ netdev_info(pep->dev,
+ "%s: table section is full, need to "
+ "move to 16kB implementation?\n",
+ __FILE__);
return -ENOSPC;
} else
return 0;
@@ -600,16 +607,42 @@ static void pxa168_eth_set_rx_mode(struct net_device *dev)
update_hash_table_mac_address(pep, NULL, ha->addr);
}
+static void pxa168_eth_get_mac_address(struct net_device *dev,
+ unsigned char *addr)
+{
+ struct pxa168_eth_private *pep = netdev_priv(dev);
+ unsigned int mac_h = rdl(pep, MAC_ADDR_HIGH);
+ unsigned int mac_l = rdl(pep, MAC_ADDR_LOW);
+
+ addr[0] = (mac_h >> 24) & 0xff;
+ addr[1] = (mac_h >> 16) & 0xff;
+ addr[2] = (mac_h >> 8) & 0xff;
+ addr[3] = mac_h & 0xff;
+ addr[4] = (mac_l >> 8) & 0xff;
+ addr[5] = mac_l & 0xff;
+}
+
static int pxa168_eth_set_mac_address(struct net_device *dev, void *addr)
{
struct sockaddr *sa = addr;
struct pxa168_eth_private *pep = netdev_priv(dev);
unsigned char oldMac[ETH_ALEN];
+ u32 mac_h, mac_l;
if (!is_valid_ether_addr(sa->sa_data))
return -EADDRNOTAVAIL;
memcpy(oldMac, dev->dev_addr, ETH_ALEN);
memcpy(dev->dev_addr, sa->sa_data, ETH_ALEN);
+
+ mac_h = dev->dev_addr[0] << 24;
+ mac_h |= dev->dev_addr[1] << 16;
+ mac_h |= dev->dev_addr[2] << 8;
+ mac_h |= dev->dev_addr[3];
+ mac_l = dev->dev_addr[4] << 8;
+ mac_l |= dev->dev_addr[5];
+ wrl(pep, MAC_ADDR_HIGH, mac_h);
+ wrl(pep, MAC_ADDR_LOW, mac_l);
+
netif_addr_lock_bh(dev);
update_hash_table_mac_address(pep, oldMac, dev->dev_addr);
netif_addr_unlock_bh(dev);
@@ -726,7 +759,7 @@ static int txq_reclaim(struct net_device *dev, int force)
if (cmd_sts & TX_ERROR) {
if (net_ratelimit())
- printk(KERN_ERR "%s: Error in TX\n", dev->name);
+ netdev_err(dev, "Error in TX\n");
dev->stats.tx_errors++;
}
dma_unmap_single(NULL, addr, count, DMA_TO_DEVICE);
@@ -743,8 +776,7 @@ static void pxa168_eth_tx_timeout(struct net_device *dev)
{
struct pxa168_eth_private *pep = netdev_priv(dev);
- printk(KERN_INFO "%s: TX timeout desc_count %d\n",
- dev->name, pep->tx_desc_count);
+ netdev_info(dev, "TX timeout desc_count %d\n", pep->tx_desc_count);
schedule_work(&pep->tx_timeout_task);
}
@@ -814,9 +846,8 @@ static int rxq_process(struct net_device *dev, int budget)
if ((cmd_sts & (RX_FIRST_DESC | RX_LAST_DESC)) !=
(RX_FIRST_DESC | RX_LAST_DESC)) {
if (net_ratelimit())
- printk(KERN_ERR
- "%s: Rx pkt on multiple desc\n",
- dev->name);
+ netdev_err(dev,
+ "Rx pkt on multiple desc\n");
}
if (cmd_sts & RX_ERROR)
stats->rx_errors++;
@@ -871,7 +902,7 @@ static void handle_link_event(struct pxa168_eth_private *pep)
port_status = rdl(pep, PORT_STATUS);
if (!(port_status & LINK_UP)) {
if (netif_carrier_ok(dev)) {
- printk(KERN_INFO "%s: link down\n", dev->name);
+ netdev_info(dev, "link down\n");
netif_carrier_off(dev);
txq_reclaim(dev, 1);
}
@@ -883,10 +914,9 @@ static void handle_link_event(struct pxa168_eth_private *pep)
speed = 10;
duplex = (port_status & FULL_DUPLEX) ? 1 : 0;
- fc = (port_status & FLOW_CONTROL_ENABLED) ? 1 : 0;
- printk(KERN_INFO "%s: link up, %d Mb/s, %s duplex, "
- "flow control %sabled\n", dev->name,
- speed, duplex ? "full" : "half", fc ? "en" : "dis");
+ fc = (port_status & FLOW_CONTROL_DISABLED) ? 0 : 1;
+ netdev_info(dev, "link up, %d Mb/s, %s duplex, flow control %sabled\n",
+ speed, duplex ? "full" : "half", fc ? "en" : "dis");
if (!netif_carrier_ok(dev))
netif_carrier_on(dev);
}
@@ -1039,9 +1069,8 @@ static void rxq_deinit(struct net_device *dev)
}
}
if (pep->rx_desc_count)
- printk(KERN_ERR
- "Error in freeing Rx Ring. %d skb's still\n",
- pep->rx_desc_count);
+ netdev_err(dev, "Error in freeing Rx Ring. %d skb's still\n",
+ pep->rx_desc_count);
/* Free RX ring */
if (pep->p_rx_desc_area)
dma_free_coherent(pep->dev->dev.parent, pep->rx_desc_area_size,
@@ -1280,15 +1309,15 @@ static int pxa168_smi_read(struct mii_bus *bus, int phy_addr, int regnum)
int val;
if (smi_wait_ready(pep)) {
- printk(KERN_WARNING "pxa168_eth: SMI bus busy timeout\n");
+ netdev_warn(pep->dev, "pxa168_eth: SMI bus busy timeout\n");
return -ETIMEDOUT;
}
wrl(pep, SMI, (phy_addr << 16) | (regnum << 21) | SMI_OP_R);
/* now wait for the data to be valid */
for (i = 0; !((val = rdl(pep, SMI)) & SMI_R_VALID); i++) {
if (i == PHY_WAIT_ITERATIONS) {
- printk(KERN_WARNING
- "pxa168_eth: SMI bus read not valid\n");
+ netdev_warn(pep->dev,
+ "pxa168_eth: SMI bus read not valid\n");
return -ENODEV;
}
msleep(10);
@@ -1303,7 +1332,7 @@ static int pxa168_smi_write(struct mii_bus *bus, int phy_addr, int regnum,
struct pxa168_eth_private *pep = bus->priv;
if (smi_wait_ready(pep)) {
- printk(KERN_WARNING "pxa168_eth: SMI bus busy timeout\n");
+ netdev_warn(pep->dev, "pxa168_eth: SMI bus busy timeout\n");
return -ETIMEDOUT;
}
@@ -1311,7 +1340,7 @@ static int pxa168_smi_write(struct mii_bus *bus, int phy_addr, int regnum,
SMI_OP_W | (value & 0xffff));
if (smi_wait_ready(pep)) {
- printk(KERN_ERR "pxa168_eth: SMI bus busy timeout\n");
+ netdev_err(pep->dev, "pxa168_eth: SMI bus busy timeout\n");
return -ETIMEDOUT;
}
@@ -1361,24 +1390,25 @@ static struct phy_device *phy_scan(struct pxa168_eth_private *pep, int phy_addr)
return phydev;
}
-static void phy_init(struct pxa168_eth_private *pep, int speed, int duplex)
+static void phy_init(struct pxa168_eth_private *pep)
{
struct phy_device *phy = pep->phy;
phy_attach(pep->dev, dev_name(&phy->dev), PHY_INTERFACE_MODE_MII);
- if (speed == 0) {
+ if (pep->pd && pep->pd->speed != 0) {
+ phy->autoneg = AUTONEG_DISABLE;
+ phy->advertising = 0;
+ phy->speed = pep->pd->speed;
+ phy->duplex = pep->pd->duplex;
+ } else {
phy->autoneg = AUTONEG_ENABLE;
phy->speed = 0;
phy->duplex = 0;
phy->supported &= PHY_BASIC_FEATURES;
phy->advertising = phy->supported | ADVERTISED_Autoneg;
- } else {
- phy->autoneg = AUTONEG_DISABLE;
- phy->advertising = 0;
- phy->speed = speed;
- phy->duplex = duplex;
}
+
phy_start_aneg(phy);
}
@@ -1386,11 +1416,13 @@ static int ethernet_phy_setup(struct net_device *dev)
{
struct pxa168_eth_private *pep = netdev_priv(dev);
- if (pep->pd->init)
+ if (pep->pd && pep->pd->init)
pep->pd->init();
- pep->phy = phy_scan(pep, pep->pd->phy_addr & 0x1f);
+
+ pep->phy = phy_scan(pep, pep->phy_addr & 0x1f);
if (pep->phy != NULL)
- phy_init(pep, pep->pd->speed, pep->pd->duplex);
+ phy_init(pep);
+
update_hash_table_mac_address(pep, NULL, dev->dev_addr);
return 0;
@@ -1425,23 +1457,23 @@ static void pxa168_get_drvinfo(struct net_device *dev,
}
static const struct ethtool_ops pxa168_ethtool_ops = {
- .get_settings = pxa168_get_settings,
- .set_settings = pxa168_set_settings,
- .get_drvinfo = pxa168_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_ts_info = ethtool_op_get_ts_info,
+ .get_settings = pxa168_get_settings,
+ .set_settings = pxa168_set_settings,
+ .get_drvinfo = pxa168_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_ts_info = ethtool_op_get_ts_info,
};
static const struct net_device_ops pxa168_eth_netdev_ops = {
- .ndo_open = pxa168_eth_open,
- .ndo_stop = pxa168_eth_stop,
- .ndo_start_xmit = pxa168_eth_start_xmit,
- .ndo_set_rx_mode = pxa168_eth_set_rx_mode,
- .ndo_set_mac_address = pxa168_eth_set_mac_address,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_do_ioctl = pxa168_eth_do_ioctl,
- .ndo_change_mtu = pxa168_eth_change_mtu,
- .ndo_tx_timeout = pxa168_eth_tx_timeout,
+ .ndo_open = pxa168_eth_open,
+ .ndo_stop = pxa168_eth_stop,
+ .ndo_start_xmit = pxa168_eth_start_xmit,
+ .ndo_set_rx_mode = pxa168_eth_set_rx_mode,
+ .ndo_set_mac_address = pxa168_eth_set_mac_address,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_do_ioctl = pxa168_eth_do_ioctl,
+ .ndo_change_mtu = pxa168_eth_change_mtu,
+ .ndo_tx_timeout = pxa168_eth_tx_timeout,
};
static int pxa168_eth_probe(struct platform_device *pdev)
@@ -1450,17 +1482,18 @@ static int pxa168_eth_probe(struct platform_device *pdev)
struct net_device *dev = NULL;
struct resource *res;
struct clk *clk;
+ struct device_node *np;
+ const unsigned char *mac_addr = NULL;
int err;
printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
- clk = clk_get(&pdev->dev, "MFUCLK");
+ clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(clk)) {
- printk(KERN_ERR "%s: Fast Ethernet failed to get clock\n",
- DRIVER_NAME);
+ dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
return -ENODEV;
}
- clk_enable(clk);
+ clk_prepare_enable(clk);
dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
if (!dev) {
@@ -1477,8 +1510,8 @@ static int pxa168_eth_probe(struct platform_device *pdev)
err = -ENODEV;
goto err_netdev;
}
- pep->base = ioremap(res->start, resource_size(res));
- if (pep->base == NULL) {
+ pep->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(pep->base)) {
err = -ENOMEM;
goto err_netdev;
}
@@ -1492,19 +1525,42 @@ static int pxa168_eth_probe(struct platform_device *pdev)
INIT_WORK(&pep->tx_timeout_task, pxa168_eth_tx_timeout_task);
- printk(KERN_INFO "%s:Using random mac address\n", DRIVER_NAME);
- eth_hw_addr_random(dev);
+ if (pdev->dev.of_node)
+ mac_addr = of_get_mac_address(pdev->dev.of_node);
- pep->pd = dev_get_platdata(&pdev->dev);
- pep->rx_ring_size = NUM_RX_DESCS;
- if (pep->pd->rx_queue_size)
- pep->rx_ring_size = pep->pd->rx_queue_size;
+ if (mac_addr && is_valid_ether_addr(mac_addr)) {
+ ether_addr_copy(dev->dev_addr, mac_addr);
+ } else {
+ /* try reading the mac address, if set by the bootloader */
+ pxa168_eth_get_mac_address(dev, dev->dev_addr);
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ dev_info(&pdev->dev, "Using random mac address\n");
+ eth_hw_addr_random(dev);
+ }
+ }
+ pep->rx_ring_size = NUM_RX_DESCS;
pep->tx_ring_size = NUM_TX_DESCS;
- if (pep->pd->tx_queue_size)
- pep->tx_ring_size = pep->pd->tx_queue_size;
- pep->port_num = pep->pd->port_number;
+ pep->pd = dev_get_platdata(&pdev->dev);
+ if (pep->pd) {
+ if (pep->pd->rx_queue_size)
+ pep->rx_ring_size = pep->pd->rx_queue_size;
+
+ if (pep->pd->tx_queue_size)
+ pep->tx_ring_size = pep->pd->tx_queue_size;
+
+ pep->port_num = pep->pd->port_number;
+ pep->phy_addr = pep->pd->phy_addr;
+ } else if (pdev->dev.of_node) {
+ of_property_read_u32(pdev->dev.of_node, "port-id",
+ &pep->port_num);
+
+ np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ if (np)
+ of_property_read_u32(np, "reg", &pep->phy_addr);
+ }
+
/* Hardware supports only 3 ports */
BUG_ON(pep->port_num > 2);
netif_napi_add(dev, &pep->napi, pxa168_rx_poll, pep->rx_ring_size);
@@ -1605,6 +1661,12 @@ static int pxa168_eth_suspend(struct platform_device *pdev, pm_message_t state)
#define pxa168_eth_suspend NULL
#endif
+static const struct of_device_id pxa168_eth_of_match[] = {
+ { .compatible = "marvell,pxa168-eth" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, pxa168_eth_of_match);
+
static struct platform_driver pxa168_eth_driver = {
.probe = pxa168_eth_probe,
.remove = pxa168_eth_remove,
@@ -1612,8 +1674,9 @@ static struct platform_driver pxa168_eth_driver = {
.resume = pxa168_eth_resume,
.suspend = pxa168_eth_suspend,
.driver = {
- .name = DRIVER_NAME,
- },
+ .name = DRIVER_NAME,
+ .of_match_table = of_match_ptr(pxa168_eth_of_match),
+ },
};
module_platform_driver(pxa168_eth_driver);
diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c
index 24b2422..264eab7 100644
--- a/drivers/net/ethernet/marvell/skge.c
+++ b/drivers/net/ethernet/marvell/skge.c
@@ -1107,7 +1107,7 @@ static u16 xm_phy_read(struct skge_hw *hw, int port, u16 reg)
{
u16 v = 0;
if (__xm_phy_read(hw, port, reg, &v))
- pr_warning("%s: phy read timed out\n", hw->dev[port]->name);
+ pr_warn("%s: phy read timed out\n", hw->dev[port]->name);
return v;
}
@@ -1903,7 +1903,7 @@ static int gm_phy_write(struct skge_hw *hw, int port, u16 reg, u16 val)
return 0;
}
- pr_warning("%s: phy write timeout\n", hw->dev[port]->name);
+ pr_warn("%s: phy write timeout\n", hw->dev[port]->name);
return -EIO;
}
@@ -1931,7 +1931,7 @@ static u16 gm_phy_read(struct skge_hw *hw, int port, u16 reg)
{
u16 v = 0;
if (__gm_phy_read(hw, port, reg, &v))
- pr_warning("%s: phy read timeout\n", hw->dev[port]->name);
+ pr_warn("%s: phy read timeout\n", hw->dev[port]->name);
return v;
}
diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index dba48a5c..bd33662 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2814,7 +2814,7 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do, u16 idx)
default:
if (net_ratelimit())
- pr_warning("unknown status opcode 0x%x\n", opcode);
+ pr_warn("unknown status opcode 0x%x\n", opcode);
}
} while (hw->st_idx != idx);
diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
index 923c487..b16e1b9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
@@ -580,8 +580,18 @@ static int mlx4_cmd_wait(struct mlx4_dev *dev, u64 in_param, u64 *out_param,
err = context->result;
if (err) {
- mlx4_err(dev, "command 0x%x failed: fw status = 0x%x\n",
- op, context->fw_status);
+ /* Since we do not want to have this error message always
+ * displayed at driver start when there are ConnectX2 HCAs
+ * on the host, we demote the error message for this specific
+ * command/input_mod/opcode_mod/fw-status combination to debug level.
+ */
+ if (op == MLX4_CMD_SET_PORT && in_modifier == 1 &&
+ op_modifier == 0 && context->fw_status == CMD_STAT_BAD_SIZE)
+ mlx4_dbg(dev, "command 0x%x failed: fw status = 0x%x\n",
+ op, context->fw_status);
+ else
+ mlx4_err(dev, "command 0x%x failed: fw status = 0x%x\n",
+ op, context->fw_status);
goto out;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index 35ff292..ae83da9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -112,6 +112,7 @@ static const char main_strings[][ETH_GSTRING_LEN] = {
/* port statistics */
"tso_packets",
+ "xmit_more",
"queue_stopped", "wake_queue", "tx_timeout", "rx_alloc_failed",
"rx_csum_good", "rx_csum_none", "tx_chksum_offload",
@@ -1266,6 +1267,48 @@ static u32 mlx4_en_get_priv_flags(struct net_device *dev)
return priv->pflags;
}
+static int mlx4_en_get_tunable(struct net_device *dev,
+ const struct ethtool_tunable *tuna,
+ void *data)
+{
+ const struct mlx4_en_priv *priv = netdev_priv(dev);
+ int ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_TX_COPYBREAK:
+ *(u32 *)data = priv->prof->inline_thold;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int mlx4_en_set_tunable(struct net_device *dev,
+ const struct ethtool_tunable *tuna,
+ const void *data)
+{
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ int val, ret = 0;
+
+ switch (tuna->id) {
+ case ETHTOOL_TX_COPYBREAK:
+ val = *(u32 *)data;
+ if (val < MIN_PKT_LEN || val > MAX_INLINE)
+ ret = -EINVAL;
+ else
+ priv->prof->inline_thold = val;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
const struct ethtool_ops mlx4_en_ethtool_ops = {
.get_drvinfo = mlx4_en_get_drvinfo,
@@ -1296,6 +1339,8 @@ const struct ethtool_ops mlx4_en_ethtool_ops = {
.get_ts_info = mlx4_en_get_ts_info,
.set_priv_flags = mlx4_en_set_priv_flags,
.get_priv_flags = mlx4_en_get_priv_flags,
+ .get_tunable = mlx4_en_get_tunable,
+ .set_tunable = mlx4_en_set_tunable,
};
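Since mlx4_en now maps the generic tx-copybreak tunable onto its inline threshold, the value can be read and written through the standard ethtool tunable uAPI. An illustrative userspace sketch, assuming kernel headers that already carry ETHTOOL_GTUNABLE and ETHTOOL_TX_COPYBREAK; get_tx_copybreak() is a made-up helper name and fd is an ordinary AF_INET datagram socket:

#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int get_tx_copybreak(int fd, const char *ifname, __u32 *val)
{
	struct ethtool_tunable tuna = {
		.cmd	 = ETHTOOL_GTUNABLE,
		.id	 = ETHTOOL_TX_COPYBREAK,
		.type_id = ETHTOOL_TUNABLE_U32,
		.len	 = sizeof(__u32),
	};
	unsigned char buf[sizeof(tuna) + sizeof(__u32)];
	struct ifreq ifr;

	/* The kernel returns the value right after the request header. */
	memcpy(buf, &tuna, sizeof(tuna));
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)buf;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		return -1;

	memcpy(val, buf + sizeof(tuna), sizeof(__u32));
	return 0;
}

Writing the value back is the symmetric ETHTOOL_STUNABLE call (recent ethtool binaries expose the same knob through --get-tunable/--set-tunable with the tx-copybreak keyword), and the new set_tunable handler rejects values outside the MIN_PKT_LEN..MAX_INLINE range.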
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_main.c b/drivers/net/ethernet/mellanox/mlx4/en_main.c
index 3626fdf..2091ae88 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_main.c
@@ -78,27 +78,24 @@ MLX4_EN_PARM_INT(inline_thold, MAX_INLINE,
#define MAX_PFC_TX 0xff
#define MAX_PFC_RX 0xff
-int en_print(const char *level, const struct mlx4_en_priv *priv,
- const char *format, ...)
+void en_print(const char *level, const struct mlx4_en_priv *priv,
+ const char *format, ...)
{
va_list args;
struct va_format vaf;
- int i;
va_start(args, format);
vaf.fmt = format;
vaf.va = &args;
if (priv->registered)
- i = printk("%s%s: %s: %pV",
- level, DRV_NAME, priv->dev->name, &vaf);
+ printk("%s%s: %s: %pV",
+ level, DRV_NAME, priv->dev->name, &vaf);
else
- i = printk("%s%s: %s: Port %d: %pV",
- level, DRV_NAME, dev_name(&priv->mdev->pdev->dev),
- priv->port, &vaf);
+ printk("%s%s: %s: Port %d: %pV",
+ level, DRV_NAME, dev_name(&priv->mdev->pdev->dev),
+ priv->port, &vaf);
va_end(args);
-
- return i;
}
void mlx4_en_update_loopback_state(struct net_device *dev,
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index abddcf8..f3032fe 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -2459,6 +2459,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
}
priv->rx_ring_num = prof->rx_ring_num;
priv->cqe_factor = (mdev->dev->caps.cqe_size == 64) ? 1 : 0;
+ priv->cqe_size = mdev->dev->caps.cqe_size;
priv->mac_index = -1;
priv->msg_enable = MLX4_EN_MSG_LEVEL;
spin_lock_init(&priv->stats_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_port.c b/drivers/net/ethernet/mellanox/mlx4/en_port.c
index c2cfb05..0a0261d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_port.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_port.c
@@ -150,14 +150,19 @@ int mlx4_en_DUMP_ETH_STATS(struct mlx4_en_dev *mdev, u8 port, u8 reset)
priv->port_stats.tx_chksum_offload = 0;
priv->port_stats.queue_stopped = 0;
priv->port_stats.wake_queue = 0;
+ priv->port_stats.tso_packets = 0;
+ priv->port_stats.xmit_more = 0;
for (i = 0; i < priv->tx_ring_num; i++) {
- stats->tx_packets += priv->tx_ring[i]->packets;
- stats->tx_bytes += priv->tx_ring[i]->bytes;
- priv->port_stats.tx_chksum_offload += priv->tx_ring[i]->tx_csum;
- priv->port_stats.queue_stopped +=
- priv->tx_ring[i]->queue_stopped;
- priv->port_stats.wake_queue += priv->tx_ring[i]->wake_queue;
+ const struct mlx4_en_tx_ring *ring = priv->tx_ring[i];
+
+ stats->tx_packets += ring->packets;
+ stats->tx_bytes += ring->bytes;
+ priv->port_stats.tx_chksum_offload += ring->tx_csum;
+ priv->port_stats.queue_stopped += ring->queue_stopped;
+ priv->port_stats.wake_queue += ring->wake_queue;
+ priv->port_stats.tso_packets += ring->tso_packets;
+ priv->port_stats.xmit_more += ring->xmit_more;
}
stats->rx_errors = be64_to_cpu(mlx4_en_stats->PCS) +
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 9c909d2..a33048e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -588,6 +588,8 @@ static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv,
skb_copy_to_linear_data(skb, va, length);
skb->tail += length;
} else {
+ unsigned int pull_len;
+
/* Move relevant fragments to skb */
used_frags = mlx4_en_complete_rx_desc(priv, rx_desc, frags,
skb, length);
@@ -597,16 +599,17 @@ static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv,
}
skb_shinfo(skb)->nr_frags = used_frags;
+ pull_len = eth_get_headlen(va, SMALL_PACKET_SIZE);
/* Copy headers into the skb linear buffer */
- memcpy(skb->data, va, HEADER_COPY_SIZE);
- skb->tail += HEADER_COPY_SIZE;
+ memcpy(skb->data, va, pull_len);
+ skb->tail += pull_len;
/* Skip headers in first fragment */
- skb_shinfo(skb)->frags[0].page_offset += HEADER_COPY_SIZE;
+ skb_shinfo(skb)->frags[0].page_offset += pull_len;
/* Adjust size of first fragment */
- skb_frag_size_sub(&skb_shinfo(skb)->frags[0], HEADER_COPY_SIZE);
- skb->data_len = length - HEADER_COPY_SIZE;
+ skb_frag_size_sub(&skb_shinfo(skb)->frags[0], pull_len);
+ skb->data_len = length - pull_len;
}
return skb;
}
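With this change the pull size is no longer a fixed HEADER_COPY_SIZE but whatever eth_get_headlen() dissects from the start of the frame, capped at the existing SMALL_PACKET_SIZE limit: for a plain TCPv4 frame without options that would typically be 14 (Ethernet) + 20 (IPv4) + 20 (TCP) = 54 bytes copied into the linear area, with the payload left in the page fragments.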
@@ -668,7 +671,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
* descriptor offset can be deduced from the CQE index instead of
* reading 'cqe->index' */
index = cq->mcq.cons_index & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
/* Process all completed CQEs */
while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
@@ -769,7 +772,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
gro_skb->ip_summed = CHECKSUM_UNNECESSARY;
if (l2_tunnel)
- gro_skb->encapsulation = 1;
+ gro_skb->csum_level = 1;
if ((cqe->vlan_my_qpn &
cpu_to_be32(MLX4_CQE_VLAN_PRESENT_MASK)) &&
(dev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
@@ -823,8 +826,8 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
skb->protocol = eth_type_trans(skb, dev);
skb_record_rx_queue(skb, cq->ring);
- if (l2_tunnel)
- skb->encapsulation = 1;
+ if (l2_tunnel && ip_summed == CHECKSUM_UNNECESSARY)
+ skb->csum_level = 1;
if (dev->features & NETIF_F_RXHASH)
skb_set_hash(skb,
@@ -855,7 +858,7 @@ next:
++cq->mcq.cons_index;
index = (cq->mcq.cons_index) & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
if (++polled == budget)
goto out;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index dae3da6..34c1378 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -37,6 +37,7 @@
#include <linux/mlx4/qp.h>
#include <linux/skbuff.h>
#include <linux/if_vlan.h>
+#include <linux/prefetch.h>
#include <linux/vmalloc.h>
#include <linux/tcp.h>
#include <linux/ip.h>
@@ -65,10 +66,9 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
ring->size = size;
ring->size_mask = size - 1;
ring->stride = stride;
- ring->inline_thold = priv->prof->inline_thold;
tmp = size * sizeof(struct mlx4_en_tx_info);
- ring->tx_info = vmalloc_node(tmp, node);
+ ring->tx_info = kmalloc_node(tmp, GFP_KERNEL | __GFP_NOWARN, node);
if (!ring->tx_info) {
ring->tx_info = vmalloc(tmp);
if (!ring->tx_info) {
@@ -151,7 +151,7 @@ err_bounce:
kfree(ring->bounce_buf);
ring->bounce_buf = NULL;
err_info:
- vfree(ring->tx_info);
+ kvfree(ring->tx_info);
ring->tx_info = NULL;
err_ring:
kfree(ring);
@@ -174,7 +174,7 @@ void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
kfree(ring->bounce_buf);
ring->bounce_buf = NULL;
- vfree(ring->tx_info);
+ kvfree(ring->tx_info);
ring->tx_info = NULL;
kfree(ring);
*pring = NULL;
@@ -191,12 +191,12 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
ring->prod = 0;
ring->cons = 0xffffffff;
ring->last_nr_txbb = 1;
- ring->poll_cnt = 0;
memset(ring->tx_info, 0, ring->size * sizeof(struct mlx4_en_tx_info));
memset(ring->buf, 0, ring->buf_size);
ring->qp_state = MLX4_QP_STATE_RST;
- ring->doorbell_qpn = ring->qp.qpn << 8;
+ ring->doorbell_qpn = cpu_to_be32(ring->qp.qpn << 8);
+ ring->mr_key = cpu_to_be32(mdev->mr.key);
mlx4_en_fill_qp_context(priv, ring->size, ring->stride, 1, 0, ring->qpn,
ring->cqn, user_prio, &ring->context);
@@ -259,38 +259,45 @@ static u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring *ring,
int index, u8 owner, u64 timestamp)
{
- struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_tx_info *tx_info = &ring->tx_info[index];
struct mlx4_en_tx_desc *tx_desc = ring->buf + index * TXBB_SIZE;
struct mlx4_wqe_data_seg *data = (void *) tx_desc + tx_info->data_offset;
- struct sk_buff *skb = tx_info->skb;
- struct skb_frag_struct *frag;
void *end = ring->buf + ring->buf_size;
- int frags = skb_shinfo(skb)->nr_frags;
+ struct sk_buff *skb = tx_info->skb;
+ int nr_maps = tx_info->nr_maps;
int i;
- struct skb_shared_hwtstamps hwts;
- if (timestamp) {
- mlx4_en_fill_hwtstamps(mdev, &hwts, timestamp);
+ /* We do not touch skb here, so prefetch skb->users location
+ * to speed up consume_skb()
+ */
+ prefetchw(&skb->users);
+
+ if (unlikely(timestamp)) {
+ struct skb_shared_hwtstamps hwts;
+
+ mlx4_en_fill_hwtstamps(priv->mdev, &hwts, timestamp);
skb_tstamp_tx(skb, &hwts);
}
/* Optimize the common case when there are no wraparounds */
if (likely((void *) tx_desc + tx_info->nr_txbb * TXBB_SIZE <= end)) {
if (!tx_info->inl) {
- if (tx_info->linear) {
+ if (tx_info->linear)
dma_unmap_single(priv->ddev,
- (dma_addr_t) be64_to_cpu(data->addr),
- be32_to_cpu(data->byte_count),
- PCI_DMA_TODEVICE);
- ++data;
- }
-
- for (i = 0; i < frags; i++) {
- frag = &skb_shinfo(skb)->frags[i];
+ tx_info->map0_dma,
+ tx_info->map0_byte_count,
+ PCI_DMA_TODEVICE);
+ else
dma_unmap_page(priv->ddev,
- (dma_addr_t) be64_to_cpu(data[i].addr),
- skb_frag_size(frag), PCI_DMA_TODEVICE);
+ tx_info->map0_dma,
+ tx_info->map0_byte_count,
+ PCI_DMA_TODEVICE);
+ for (i = 1; i < nr_maps; i++) {
+ data++;
+ dma_unmap_page(priv->ddev,
+ (dma_addr_t)be64_to_cpu(data->addr),
+ be32_to_cpu(data->byte_count),
+ PCI_DMA_TODEVICE);
}
}
} else {
@@ -299,27 +306,29 @@ static u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
data = ring->buf + ((void *)data - end);
}
- if (tx_info->linear) {
+ if (tx_info->linear)
dma_unmap_single(priv->ddev,
- (dma_addr_t) be64_to_cpu(data->addr),
- be32_to_cpu(data->byte_count),
- PCI_DMA_TODEVICE);
- ++data;
- }
-
- for (i = 0; i < frags; i++) {
+ tx_info->map0_dma,
+ tx_info->map0_byte_count,
+ PCI_DMA_TODEVICE);
+ else
+ dma_unmap_page(priv->ddev,
+ tx_info->map0_dma,
+ tx_info->map0_byte_count,
+ PCI_DMA_TODEVICE);
+ for (i = 1; i < nr_maps; i++) {
+ data++;
/* Check for wraparound before unmapping */
if ((void *) data >= end)
data = ring->buf;
- frag = &skb_shinfo(skb)->frags[i];
dma_unmap_page(priv->ddev,
- (dma_addr_t) be64_to_cpu(data->addr),
- skb_frag_size(frag), PCI_DMA_TODEVICE);
- ++data;
+ (dma_addr_t)be64_to_cpu(data->addr),
+ be32_to_cpu(data->byte_count),
+ PCI_DMA_TODEVICE);
}
}
}
- dev_kfree_skb_any(skb);
+ dev_consume_skb_any(skb);
return tx_info->nr_txbb;
}
@@ -377,13 +386,19 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
u64 timestamp = 0;
int done = 0;
int budget = priv->tx_work_limit;
+ u32 last_nr_txbb;
+ u32 ring_cons;
if (!priv->port_up)
return true;
+ netdev_txq_bql_complete_prefetchw(ring->tx_queue);
+
index = cons_index & size_mask;
- cqe = &buf[(index << factor) + factor];
- ring_index = ring->cons & size_mask;
+ cqe = mlx4_en_get_cqe(buf, index, priv->cqe_size) + factor;
+ last_nr_txbb = ACCESS_ONCE(ring->last_nr_txbb);
+ ring_cons = ACCESS_ONCE(ring->cons);
+ ring_index = ring_cons & size_mask;
stamp_index = ring_index;
/* Process all completed CQEs */
@@ -408,19 +423,19 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
new_index = be16_to_cpu(cqe->wqe_index) & size_mask;
do {
- txbbs_skipped += ring->last_nr_txbb;
- ring_index = (ring_index + ring->last_nr_txbb) & size_mask;
+ txbbs_skipped += last_nr_txbb;
+ ring_index = (ring_index + last_nr_txbb) & size_mask;
if (ring->tx_info[ring_index].ts_requested)
timestamp = mlx4_en_get_cqe_ts(cqe);
/* free next descriptor */
- ring->last_nr_txbb = mlx4_en_free_tx_desc(
+ last_nr_txbb = mlx4_en_free_tx_desc(
priv, ring, ring_index,
- !!((ring->cons + txbbs_skipped) &
+ !!((ring_cons + txbbs_skipped) &
ring->size), timestamp);
mlx4_en_stamp_wqe(priv, ring, stamp_index,
- !!((ring->cons + txbbs_stamp) &
+ !!((ring_cons + txbbs_stamp) &
ring->size));
stamp_index = ring_index;
txbbs_stamp = txbbs_skipped;
@@ -430,7 +445,7 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
++cons_index;
index = cons_index & size_mask;
- cqe = &buf[(index << factor) + factor];
+ cqe = mlx4_en_get_cqe(buf, index, priv->cqe_size) + factor;
}
@@ -441,7 +456,11 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
mcq->cons_index = cons_index;
mlx4_cq_set_ci(mcq);
wmb();
- ring->cons += txbbs_skipped;
+
+ /* we want to dirty this cache line once */
+ ACCESS_ONCE(ring->last_nr_txbb) = last_nr_txbb;
+ ACCESS_ONCE(ring->cons) = ring_cons + txbbs_skipped;
+
netdev_tx_completed_queue(ring->tx_queue, packets, bytes);
/*
@@ -512,30 +531,35 @@ static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
return ring->buf + index * TXBB_SIZE;
}
-static int is_inline(int inline_thold, struct sk_buff *skb, void **pfrag)
+/* Decide if skb can be inlined in tx descriptor to avoid dma mapping
+ *
+ * It seems strange we do not simply use skb_copy_bits().
+ * This would allow inlining all skbs iff skb->len <= inline_thold
+ *
+ * Note that caller already checked skb was not a gso packet
+ */
+static bool is_inline(int inline_thold, const struct sk_buff *skb,
+ const struct skb_shared_info *shinfo,
+ void **pfrag)
{
void *ptr;
- if (inline_thold && !skb_is_gso(skb) && skb->len <= inline_thold) {
- if (skb_shinfo(skb)->nr_frags == 1) {
- ptr = skb_frag_address_safe(&skb_shinfo(skb)->frags[0]);
- if (unlikely(!ptr))
- return 0;
-
- if (pfrag)
- *pfrag = ptr;
+ if (skb->len > inline_thold || !inline_thold)
+ return false;
- return 1;
- } else if (unlikely(skb_shinfo(skb)->nr_frags))
- return 0;
- else
- return 1;
+ if (shinfo->nr_frags == 1) {
+ ptr = skb_frag_address_safe(&shinfo->frags[0]);
+ if (unlikely(!ptr))
+ return false;
+ *pfrag = ptr;
+ return true;
}
-
- return 0;
+ if (shinfo->nr_frags)
+ return false;
+ return true;
}
-static int inline_size(struct sk_buff *skb)
+static int inline_size(const struct sk_buff *skb)
{
if (skb->len + CTRL_SIZE + sizeof(struct mlx4_wqe_inline_seg)
<= MLX4_INLINE_ALIGN)
@@ -546,18 +570,23 @@ static int inline_size(struct sk_buff *skb)
sizeof(struct mlx4_wqe_inline_seg), 16);
}
-static int get_real_size(struct sk_buff *skb, struct net_device *dev,
- int *lso_header_size)
+static int get_real_size(const struct sk_buff *skb,
+ const struct skb_shared_info *shinfo,
+ struct net_device *dev,
+ int *lso_header_size,
+ bool *inline_ok,
+ void **pfrag)
{
struct mlx4_en_priv *priv = netdev_priv(dev);
int real_size;
- if (skb_is_gso(skb)) {
+ if (shinfo->gso_size) {
+ *inline_ok = false;
if (skb->encapsulation)
*lso_header_size = (skb_inner_transport_header(skb) - skb->data) + inner_tcp_hdrlen(skb);
else
*lso_header_size = skb_transport_offset(skb) + tcp_hdrlen(skb);
- real_size = CTRL_SIZE + skb_shinfo(skb)->nr_frags * DS_SIZE +
+ real_size = CTRL_SIZE + shinfo->nr_frags * DS_SIZE +
ALIGN(*lso_header_size + 4, DS_SIZE);
if (unlikely(*lso_header_size != skb_headlen(skb))) {
/* We add a segment for the skb linear buffer only if
@@ -572,20 +601,28 @@ static int get_real_size(struct sk_buff *skb, struct net_device *dev,
}
} else {
*lso_header_size = 0;
- if (!is_inline(priv->prof->inline_thold, skb, NULL))
- real_size = CTRL_SIZE + (skb_shinfo(skb)->nr_frags + 1) * DS_SIZE;
- else
+ *inline_ok = is_inline(priv->prof->inline_thold, skb,
+ shinfo, pfrag);
+
+ if (*inline_ok)
real_size = inline_size(skb);
+ else
+ real_size = CTRL_SIZE +
+ (shinfo->nr_frags + 1) * DS_SIZE;
}
return real_size;
}
-static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc, struct sk_buff *skb,
- int real_size, u16 *vlan_tag, int tx_ind, void *fragptr)
+static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc,
+ const struct sk_buff *skb,
+ const struct skb_shared_info *shinfo,
+ int real_size, u16 *vlan_tag,
+ int tx_ind, void *fragptr)
{
struct mlx4_wqe_inline_seg *inl = &tx_desc->inl;
int spc = MLX4_INLINE_ALIGN - CTRL_SIZE - sizeof *inl;
+ unsigned int hlen = skb_headlen(skb);
if (skb->len <= spc) {
if (likely(skb->len >= MIN_PKT_LEN)) {
@@ -595,19 +632,19 @@ static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc, struct sk_buff *sk
memset(((void *)(inl + 1)) + skb->len, 0,
MIN_PKT_LEN - skb->len);
}
- skb_copy_from_linear_data(skb, inl + 1, skb_headlen(skb));
- if (skb_shinfo(skb)->nr_frags)
- memcpy(((void *)(inl + 1)) + skb_headlen(skb), fragptr,
- skb_frag_size(&skb_shinfo(skb)->frags[0]));
+ skb_copy_from_linear_data(skb, inl + 1, hlen);
+ if (shinfo->nr_frags)
+ memcpy(((void *)(inl + 1)) + hlen, fragptr,
+ skb_frag_size(&shinfo->frags[0]));
} else {
inl->byte_count = cpu_to_be32(1 << 31 | spc);
- if (skb_headlen(skb) <= spc) {
- skb_copy_from_linear_data(skb, inl + 1, skb_headlen(skb));
- if (skb_headlen(skb) < spc) {
- memcpy(((void *)(inl + 1)) + skb_headlen(skb),
- fragptr, spc - skb_headlen(skb));
- fragptr += spc - skb_headlen(skb);
+ if (hlen <= spc) {
+ skb_copy_from_linear_data(skb, inl + 1, hlen);
+ if (hlen < spc) {
+ memcpy(((void *)(inl + 1)) + hlen,
+ fragptr, spc - hlen);
+ fragptr += spc - hlen;
}
inl = (void *) (inl + 1) + spc;
memcpy(((void *)(inl + 1)), fragptr, skb->len - spc);
@@ -615,10 +652,11 @@ static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc, struct sk_buff *sk
skb_copy_from_linear_data(skb, inl + 1, spc);
inl = (void *) (inl + 1) + spc;
skb_copy_from_linear_data_offset(skb, spc, inl + 1,
- skb_headlen(skb) - spc);
- if (skb_shinfo(skb)->nr_frags)
- memcpy(((void *)(inl + 1)) + skb_headlen(skb) - spc,
- fragptr, skb_frag_size(&skb_shinfo(skb)->frags[0]));
+ hlen - spc);
+ if (shinfo->nr_frags)
+ memcpy(((void *)(inl + 1)) + hlen - spc,
+ fragptr,
+ skb_frag_size(&shinfo->frags[0]));
}
wmb();
@@ -642,15 +680,16 @@ u16 mlx4_en_select_queue(struct net_device *dev, struct sk_buff *skb,
return fallback(dev, skb) % rings_p_up + up * rings_p_up;
}
-static void mlx4_bf_copy(void __iomem *dst, unsigned long *src, unsigned bytecnt)
+static void mlx4_bf_copy(void __iomem *dst, const void *src,
+ unsigned int bytecnt)
{
__iowrite64_copy(dst, src, bytecnt / 8);
}
netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
{
+ struct skb_shared_info *shinfo = skb_shinfo(skb);
struct mlx4_en_priv *priv = netdev_priv(dev);
- struct mlx4_en_dev *mdev = priv->mdev;
struct device *ddev = priv->ddev;
struct mlx4_en_tx_ring *ring;
struct mlx4_en_tx_desc *tx_desc;
@@ -663,15 +702,26 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
u32 index, bf_index;
__be32 op_own;
u16 vlan_tag = 0;
- int i;
+ int i_frag;
int lso_header_size;
- void *fragptr;
+ void *fragptr = NULL;
bool bounce = false;
+ bool send_doorbell;
+ bool stop_queue;
+ bool inline_ok;
+ u32 ring_cons;
if (!priv->port_up)
goto tx_drop;
- real_size = get_real_size(skb, dev, &lso_header_size);
+ tx_ind = skb_get_queue_mapping(skb);
+ ring = priv->tx_ring[tx_ind];
+
+ /* fetch ring->cons far ahead before needing it to avoid stall */
+ ring_cons = ACCESS_ONCE(ring->cons);
+
+ real_size = get_real_size(skb, shinfo, dev, &lso_header_size,
+ &inline_ok, &fragptr);
if (unlikely(!real_size))
goto tx_drop;
@@ -684,38 +734,15 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
goto tx_drop;
}
- tx_ind = skb->queue_mapping;
- ring = priv->tx_ring[tx_ind];
if (vlan_tx_tag_present(skb))
vlan_tag = vlan_tx_tag_get(skb);
- /* Check available TXBBs And 2K spare for prefetch */
- if (unlikely(((int)(ring->prod - ring->cons)) >
- ring->size - HEADROOM - MAX_DESC_TXBBS)) {
- /* every full Tx ring stops queue */
- netif_tx_stop_queue(ring->tx_queue);
- ring->queue_stopped++;
- /* If queue was emptied after the if, and before the
- * stop_queue - need to wake the queue, or else it will remain
- * stopped forever.
- * Need a memory barrier to make sure ring->cons was not
- * updated before queue was stopped.
- */
- wmb();
-
- if (unlikely(((int)(ring->prod - ring->cons)) <=
- ring->size - HEADROOM - MAX_DESC_TXBBS)) {
- netif_tx_wake_queue(ring->tx_queue);
- ring->wake_queue++;
- } else {
- return NETDEV_TX_BUSY;
- }
- }
+ netdev_txq_bql_enqueue_prefetchw(ring->tx_queue);
/* Track current inflight packets for performance analysis */
AVG_PERF_COUNTER(priv->pstats.inflight_avg,
- (u32) (ring->prod - ring->cons - 1));
+ (u32)(ring->prod - ring_cons - 1));
/* Packet is good - grab an index and transmit it */
index = ring->prod & ring->size_mask;
@@ -735,46 +762,48 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
tx_info->skb = skb;
tx_info->nr_txbb = nr_txbb;
+ data = &tx_desc->data;
if (lso_header_size)
data = ((void *)&tx_desc->lso + ALIGN(lso_header_size + 4,
DS_SIZE));
- else
- data = &tx_desc->data;
/* valid only for none inline segments */
tx_info->data_offset = (void *)data - (void *)tx_desc;
+ tx_info->inl = inline_ok;
+
tx_info->linear = (lso_header_size < skb_headlen(skb) &&
- !is_inline(ring->inline_thold, skb, NULL)) ? 1 : 0;
+ !inline_ok) ? 1 : 0;
- data += skb_shinfo(skb)->nr_frags + tx_info->linear - 1;
+ tx_info->nr_maps = shinfo->nr_frags + tx_info->linear;
+ data += tx_info->nr_maps - 1;
- if (is_inline(ring->inline_thold, skb, &fragptr)) {
- tx_info->inl = 1;
- } else {
- /* Map fragments */
- for (i = skb_shinfo(skb)->nr_frags - 1; i >= 0; i--) {
- struct skb_frag_struct *frag;
- dma_addr_t dma;
+ if (!tx_info->inl) {
+ dma_addr_t dma = 0;
+ u32 byte_count = 0;
+
+ /* Map fragments if any */
+ for (i_frag = shinfo->nr_frags - 1; i_frag >= 0; i_frag--) {
+ const struct skb_frag_struct *frag;
- frag = &skb_shinfo(skb)->frags[i];
+ frag = &shinfo->frags[i_frag];
+ byte_count = skb_frag_size(frag);
dma = skb_frag_dma_map(ddev, frag,
- 0, skb_frag_size(frag),
+ 0, byte_count,
DMA_TO_DEVICE);
if (dma_mapping_error(ddev, dma))
goto tx_drop_unmap;
data->addr = cpu_to_be64(dma);
- data->lkey = cpu_to_be32(mdev->mr.key);
+ data->lkey = ring->mr_key;
wmb();
- data->byte_count = cpu_to_be32(skb_frag_size(frag));
+ data->byte_count = cpu_to_be32(byte_count);
--data;
}
- /* Map linear part */
+ /* Map linear part if needed */
if (tx_info->linear) {
- u32 byte_count = skb_headlen(skb) - lso_header_size;
- dma_addr_t dma;
+ byte_count = skb_headlen(skb) - lso_header_size;
dma = dma_map_single(ddev, skb->data +
lso_header_size, byte_count,
@@ -783,29 +812,28 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
goto tx_drop_unmap;
data->addr = cpu_to_be64(dma);
- data->lkey = cpu_to_be32(mdev->mr.key);
+ data->lkey = ring->mr_key;
wmb();
data->byte_count = cpu_to_be32(byte_count);
}
- tx_info->inl = 0;
+ /* tx completion can avoid cache line miss for common cases */
+ tx_info->map0_dma = dma;
+ tx_info->map0_byte_count = byte_count;
}
/*
* For timestamping add flag to skb_shinfo and
* set flag for further reference
*/
- if (ring->hwtstamp_tx_type == HWTSTAMP_TX_ON &&
- skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
- skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ tx_info->ts_requested = 0;
+ if (unlikely(ring->hwtstamp_tx_type == HWTSTAMP_TX_ON &&
+ shinfo->tx_flags & SKBTX_HW_TSTAMP)) {
+ shinfo->tx_flags |= SKBTX_IN_PROGRESS;
tx_info->ts_requested = 1;
}
/* Prepare ctrl segment apart from opcode+ownership, which depends on
* whether LSO is used */
- tx_desc->ctrl.vlan_tag = cpu_to_be16(vlan_tag);
- tx_desc->ctrl.ins_vlan = MLX4_WQE_CTRL_INS_VLAN *
- !!vlan_tx_tag_present(skb);
- tx_desc->ctrl.fence_size = (real_size / 16) & 0x3f;
tx_desc->ctrl.srcrb_flags = priv->ctrl_flags;
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
tx_desc->ctrl.srcrb_flags |= cpu_to_be32(MLX4_WQE_CTRL_IP_CSUM |
@@ -826,6 +854,8 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
/* Handle LSO (TSO) packets */
if (lso_header_size) {
+ int i;
+
/* Mark opcode as LSO */
op_own = cpu_to_be32(MLX4_OPCODE_LSO | (1 << 6)) |
((ring->prod & ring->size) ?
@@ -833,15 +863,16 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
/* Fill in the LSO prefix */
tx_desc->lso.mss_hdr_size = cpu_to_be32(
- skb_shinfo(skb)->gso_size << 16 | lso_header_size);
+ shinfo->gso_size << 16 | lso_header_size);
/* Copy headers;
* note that we already verified that it is linear */
memcpy(tx_desc->lso.header, skb->data, lso_header_size);
- priv->port_stats.tso_packets++;
- i = ((skb->len - lso_header_size) / skb_shinfo(skb)->gso_size) +
- !!((skb->len - lso_header_size) % skb_shinfo(skb)->gso_size);
+ ring->tso_packets++;
+
+ i = ((skb->len - lso_header_size) / shinfo->gso_size) +
+ !!((skb->len - lso_header_size) % shinfo->gso_size);
tx_info->nr_bytes = skb->len + (i - 1) * lso_header_size;
ring->packets += i;
} else {
@@ -851,16 +882,14 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
cpu_to_be32(MLX4_EN_BIT_DESC_OWN) : 0);
tx_info->nr_bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
ring->packets++;
-
}
ring->bytes += tx_info->nr_bytes;
netdev_tx_sent_queue(ring->tx_queue, tx_info->nr_bytes);
AVG_PERF_COUNTER(priv->pstats.tx_pktsz_avg, skb->len);
- if (tx_info->inl) {
- build_inline_wqe(tx_desc, skb, real_size, &vlan_tag, tx_ind, fragptr);
- tx_info->inl = 1;
- }
+ if (tx_info->inl)
+ build_inline_wqe(tx_desc, skb, shinfo, real_size, &vlan_tag,
+ tx_ind, fragptr);
if (skb->encapsulation) {
struct iphdr *ipv4 = (struct iphdr *)skb_inner_network_header(skb);
@@ -873,44 +902,85 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
ring->prod += nr_txbb;
/* If we used a bounce buffer then copy descriptor back into place */
- if (bounce)
+ if (unlikely(bounce))
tx_desc = mlx4_en_bounce_to_desc(priv, ring, index, desc_size);
skb_tx_timestamp(skb);
- if (ring->bf_enabled && desc_size <= MAX_BF && !bounce && !vlan_tx_tag_present(skb)) {
- tx_desc->ctrl.bf_qpn |= cpu_to_be32(ring->doorbell_qpn);
+ /* Check available TXBBs and 2K spare for prefetch */
+ stop_queue = (int)(ring->prod - ring_cons) >
+ ring->size - HEADROOM - MAX_DESC_TXBBS;
+ if (unlikely(stop_queue)) {
+ netif_tx_stop_queue(ring->tx_queue);
+ ring->queue_stopped++;
+ }
+ send_doorbell = !skb->xmit_more || netif_xmit_stopped(ring->tx_queue);
+
+ real_size = (real_size / 16) & 0x3f;
+
+ if (ring->bf_enabled && desc_size <= MAX_BF && !bounce &&
+ !vlan_tx_tag_present(skb) && send_doorbell) {
+ tx_desc->ctrl.bf_qpn = ring->doorbell_qpn |
+ cpu_to_be32(real_size);
op_own |= htonl((bf_index & 0xffff) << 8);
- /* Ensure new descirptor hits memory
- * before setting ownership of this descriptor to HW */
+ /* Ensure new descriptor hits memory
+ * before setting ownership of this descriptor to HW
+ */
wmb();
tx_desc->ctrl.owner_opcode = op_own;
wmb();
- mlx4_bf_copy(ring->bf.reg + ring->bf.offset, (unsigned long *) &tx_desc->ctrl,
- desc_size);
+ mlx4_bf_copy(ring->bf.reg + ring->bf.offset, &tx_desc->ctrl,
+ desc_size);
wmb();
ring->bf.offset ^= ring->bf.buf_size;
} else {
- /* Ensure new descirptor hits memory
- * before setting ownership of this descriptor to HW */
+ tx_desc->ctrl.vlan_tag = cpu_to_be16(vlan_tag);
+ tx_desc->ctrl.ins_vlan = MLX4_WQE_CTRL_INS_VLAN *
+ !!vlan_tx_tag_present(skb);
+ tx_desc->ctrl.fence_size = real_size;
+
+ /* Ensure new descriptor hits memory
+ * before setting ownership of this descriptor to HW
+ */
wmb();
tx_desc->ctrl.owner_opcode = op_own;
- wmb();
- iowrite32be(ring->doorbell_qpn, ring->bf.uar->map + MLX4_SEND_DOORBELL);
+ if (send_doorbell) {
+ wmb();
+ iowrite32(ring->doorbell_qpn,
+ ring->bf.uar->map + MLX4_SEND_DOORBELL);
+ } else {
+ ring->xmit_more++;
+ }
}
+ if (unlikely(stop_queue)) {
+ /* If queue was emptied after the if (stop_queue), and before
+ * the netif_tx_stop_queue() - need to wake the queue,
+ * or else it will remain stopped forever.
+ * Need a memory barrier to make sure ring->cons was not
+ * updated before queue was stopped.
+ */
+ smp_rmb();
+
+ ring_cons = ACCESS_ONCE(ring->cons);
+ if (unlikely(((int)(ring->prod - ring_cons)) <=
+ ring->size - HEADROOM - MAX_DESC_TXBBS)) {
+ netif_tx_wake_queue(ring->tx_queue);
+ ring->wake_queue++;
+ }
+ }
return NETDEV_TX_OK;
tx_drop_unmap:
en_err(priv, "DMA mapping error\n");
- for (i++; i < skb_shinfo(skb)->nr_frags; i++) {
- data++;
+ while (++i_frag < shinfo->nr_frags) {
+ ++data;
dma_unmap_page(ddev, (dma_addr_t) be64_to_cpu(data->addr),
be32_to_cpu(data->byte_count),
PCI_DMA_TODEVICE);
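The net effect of the send_doorbell logic above: if the qdisc layer hands the driver a burst of, say, eight skbs with xmit_more set on the first seven, only the last one triggers the UAR write, and ring->xmit_more is incremented seven times, which is what the new per-port "xmit_more" ethtool counter reports. The netif_xmit_stopped() check acts as a safety valve: if the queue stops mid-burst, the doorbell is rung immediately so the already posted descriptors are not stranded.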
diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
index 2a004b3..a49c9d1 100644
--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
@@ -101,21 +101,24 @@ static void eq_set_ci(struct mlx4_eq *eq, int req_not)
mb();
}
-static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor)
+static struct mlx4_eqe *get_eqe(struct mlx4_eq *eq, u32 entry, u8 eqe_factor,
+ u8 eqe_size)
{
/* (entry & (eq->nent - 1)) gives us a cyclic array */
- unsigned long offset = (entry & (eq->nent - 1)) * (MLX4_EQ_ENTRY_SIZE << eqe_factor);
- /* CX3 is capable of extending the EQE from 32 to 64 bytes.
- * When this feature is enabled, the first (in the lower addresses)
+ unsigned long offset = (entry & (eq->nent - 1)) * eqe_size;
+ /* CX3 is capable of extending the EQE from 32 to 64 bytes with
+ * strides of 64B, 128B and 256B.
+ * When 64B EQE is used, the first (in the lower addresses)
* 32 bytes in the 64 byte EQE are reserved and the next 32 bytes
* contain the legacy EQE information.
+ * In all other cases, the first 32B contains the legacy EQE info.
*/
return eq->page_list[offset / PAGE_SIZE].buf + (offset + (eqe_factor ? MLX4_EQ_ENTRY_SIZE : 0)) % PAGE_SIZE;
}
-static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor)
+static struct mlx4_eqe *next_eqe_sw(struct mlx4_eq *eq, u8 eqe_factor, u8 size)
{
- struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor);
+ struct mlx4_eqe *eqe = get_eqe(eq, eq->cons_index, eqe_factor, size);
return !!(eqe->owner & 0x80) ^ !!(eq->cons_index & eq->nent) ? NULL : eqe;
}
@@ -459,8 +462,9 @@ static int mlx4_eq_int(struct mlx4_dev *dev, struct mlx4_eq *eq)
enum slave_port_gen_event gen_event;
unsigned long flags;
struct mlx4_vport_state *s_info;
+ int eqe_size = dev->caps.eqe_size;
- while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor))) {
+ while ((eqe = next_eqe_sw(eq, dev->caps.eqe_factor, eqe_size))) {
/*
* Make sure we read EQ entry contents after we've
* checked the ownership bit.
@@ -894,8 +898,10 @@ static int mlx4_create_eq(struct mlx4_dev *dev, int nent,
eq->dev = dev;
eq->nent = roundup_pow_of_two(max(nent, 2));
- /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes */
- npages = PAGE_ALIGN(eq->nent * (MLX4_EQ_ENTRY_SIZE << dev->caps.eqe_factor)) / PAGE_SIZE;
+ /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
+ * strides of 64B, 128B and 256B.
+ */
+ npages = PAGE_ALIGN(eq->nent * dev->caps.eqe_size) / PAGE_SIZE;
eq->page_list = kmalloc(npages * sizeof *eq->page_list,
GFP_KERNEL);
@@ -997,8 +1003,10 @@ static void mlx4_free_eq(struct mlx4_dev *dev,
struct mlx4_cmd_mailbox *mailbox;
int err;
int i;
- /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes */
- int npages = PAGE_ALIGN((MLX4_EQ_ENTRY_SIZE << dev->caps.eqe_factor) * eq->nent) / PAGE_SIZE;
+ /* CX3 is capable of extending the CQE/EQE from 32 to 64 bytes, with
+ * strides of 64B, 128B and 256B
+ */
+ int npages = PAGE_ALIGN(dev->caps.eqe_size * eq->nent) / PAGE_SIZE;
mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox))
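For reference, the get_eqe() change in this eq.c diff boils down to one piece of index arithmetic: the entry offset is now scaled by the runtime EQE stride instead of the fixed MLX4_EQ_ENTRY_SIZE << eqe_factor, with the legacy 32-byte reserved prefix still skipped when eqe_factor is set. A small sketch of the within-page offset computation, with assumed page and entry sizes:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ		4096	/* stand-in for PAGE_SIZE */
#define LEGACY_EQE_SZ	32	/* MLX4_EQ_ENTRY_SIZE */

/* Byte offset (within its page) of EQE 'entry' for a queue of 'nent'
 * entries (power of two) and a per-entry stride of 'eqe_size' bytes.
 * When eqe_factor is set the device writes 64B EQEs whose first 32 bytes
 * are reserved, so the useful data starts LEGACY_EQE_SZ further in.
 */
static unsigned long eqe_offset(uint32_t entry, uint32_t nent,
				uint8_t eqe_factor, uint8_t eqe_size)
{
	unsigned long off = (entry & (nent - 1)) * (unsigned long)eqe_size;

	return (off + (eqe_factor ? LEGACY_EQE_SZ : 0)) % PAGE_SZ;
}

int main(void)
{
	/* 128-byte stride (128B cache line machine): entry 3 sits at 384. */
	printf("stride 128, entry 3 -> %lu\n", eqe_offset(3, 512, 0, 128));
	/* classic 64B EQE: the 32B reserved prefix is skipped. */
	printf("stride 64,  entry 3 -> %lu\n", eqe_offset(3, 512, 1, 64));
	return 0;
}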
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 494753e..2e88a23 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -137,7 +137,9 @@ static void dump_dev_cap_flags2(struct mlx4_dev *dev, u64 flags)
[8] = "Dynamic QP updates support",
[9] = "Device managed flow steering IPoIB support",
[10] = "TCP/IP offloads/flow-steering for VXLAN support",
- [11] = "MAD DEMUX (Secure-Host) support"
+ [11] = "MAD DEMUX (Secure-Host) support",
+ [12] = "Large cache line (>64B) CQE stride support",
+ [13] = "Large cache line (>64B) EQE stride support"
};
int i;
@@ -557,6 +559,7 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
#define QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET 0x74
#define QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET 0x76
#define QUERY_DEV_CAP_FLOW_STEERING_MAX_QP_OFFSET 0x77
+#define QUERY_DEV_CAP_CQ_EQ_CACHE_LINE_STRIDE 0x7a
#define QUERY_DEV_CAP_RDMARC_ENTRY_SZ_OFFSET 0x80
#define QUERY_DEV_CAP_QPC_ENTRY_SZ_OFFSET 0x82
#define QUERY_DEV_CAP_AUX_ENTRY_SZ_OFFSET 0x84
@@ -733,6 +736,11 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
dev_cap->max_rq_sg = field;
MLX4_GET(size, outbox, QUERY_DEV_CAP_MAX_DESC_SZ_RQ_OFFSET);
dev_cap->max_rq_desc_sz = size;
+ MLX4_GET(field, outbox, QUERY_DEV_CAP_CQ_EQ_CACHE_LINE_STRIDE);
+ if (field & (1 << 6))
+ dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ if (field & (1 << 7))
+ dev_cap->flags2 |= MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
MLX4_GET(dev_cap->bmme_flags, outbox,
QUERY_DEV_CAP_BMME_FLAGS_OFFSET);
@@ -974,8 +982,13 @@ int mlx4_QUERY_PORT_wrapper(struct mlx4_dev *dev, int slave,
if (port < 0)
return -EINVAL;
- vhcr->in_modifier = (vhcr->in_modifier & ~0xFF) |
- (port & 0xFF);
+ /* Protect against untrusted guests: enforce that this is the
+ * QUERY_PORT general query.
+ */
+ if (vhcr->op_modifier || vhcr->in_modifier & ~0xFF)
+ return -EINVAL;
+
+ vhcr->in_modifier = port;
err = mlx4_cmd_box(dev, 0, outbox->dma, vhcr->in_modifier, 0,
MLX4_CMD_QUERY_PORT, MLX4_CMD_TIME_CLASS_B,
@@ -1376,6 +1389,7 @@ int mlx4_INIT_HCA(struct mlx4_dev *dev, struct mlx4_init_hca_param *param)
#define INIT_HCA_CQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x30)
#define INIT_HCA_LOG_CQ_OFFSET (INIT_HCA_QPC_OFFSET + 0x37)
#define INIT_HCA_EQE_CQE_OFFSETS (INIT_HCA_QPC_OFFSET + 0x38)
+#define INIT_HCA_EQE_CQE_STRIDE_OFFSET (INIT_HCA_QPC_OFFSET + 0x3b)
#define INIT_HCA_ALTC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x40)
#define INIT_HCA_AUXC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x50)
#define INIT_HCA_EQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x60)
@@ -1452,11 +1466,25 @@ int mlx4_INIT_HCA(struct mlx4_dev *dev, struct mlx4_init_hca_param *param)
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_64B_CQE) {
*(inbox + INIT_HCA_EQE_CQE_OFFSETS / 4) |= cpu_to_be32(1 << 30);
dev->caps.cqe_size = 64;
- dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_64B_CQE;
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
} else {
dev->caps.cqe_size = 32;
}
+ /* CX3 is capable of extending CQEs/EQEs to strides larger than 64B */
+ if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_EQE_STRIDE) &&
+ (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_CQE_STRIDE)) {
+ dev->caps.eqe_size = cache_line_size();
+ dev->caps.cqe_size = cache_line_size();
+ dev->caps.eqe_factor = 0;
+ MLX4_PUT(inbox, (u8)((ilog2(dev->caps.eqe_size) - 5) << 4 |
+ (ilog2(dev->caps.eqe_size) - 5)),
+ INIT_HCA_EQE_CQE_STRIDE_OFFSET);
+
+ /* User still needs to know to support CQE > 32B */
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
+ }
+
/* QPC/EEC/CQC/EQC/RDMARC attributes */
MLX4_PUT(inbox, param->qpc_base, INIT_HCA_QPC_BASE_OFFSET);
@@ -1616,6 +1644,17 @@ int mlx4_QUERY_HCA(struct mlx4_dev *dev,
if (byte_field & 0x40) /* 64-bytes cqe enabled */
param->dev_cap_enabled |= MLX4_DEV_CAP_64B_CQE_ENABLED;
+ /* CX3 is capable of extending CQEs/EQEs to strides larger than 64B */
+ MLX4_GET(byte_field, outbox, INIT_HCA_EQE_CQE_STRIDE_OFFSET);
+ if (byte_field) {
+ param->dev_cap_enabled |= MLX4_DEV_CAP_64B_EQE_ENABLED;
+ param->dev_cap_enabled |= MLX4_DEV_CAP_64B_CQE_ENABLED;
+ param->cqe_size = 1 << ((byte_field &
+ MLX4_CQE_SIZE_MASK_STRIDE) + 5);
+ param->eqe_size = 1 << (((byte_field &
+ MLX4_EQE_SIZE_MASK_STRIDE) >> 4) + 5);
+ }
+
/* TPT attributes */
MLX4_GET(param->dmpt_base, outbox, INIT_HCA_DMPT_BASE_OFFSET);
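The INIT_HCA/QUERY_HCA hunks above encode the two strides into a single byte (EQE exponent in the high nibble, CQE exponent in the low nibble, each stored as log2(size) - 5) and decode them back with the new MLX4_*_SIZE_MASK_STRIDE masks. A worked round-trip, assuming a 128-byte cache line:

#include <stdint.h>
#include <stdio.h>

#define CQE_SIZE_MASK_STRIDE	0x03	/* MLX4_CQE_SIZE_MASK_STRIDE */
#define EQE_SIZE_MASK_STRIDE	0x30	/* MLX4_EQE_SIZE_MASK_STRIDE */

static unsigned int ilog2u(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Pack both strides into one byte: 32B encodes as 0, 64B as 1,
 * 128B as 2, 256B as 3.
 */
static uint8_t encode_stride(unsigned int eqe_size, unsigned int cqe_size)
{
	return (uint8_t)(((ilog2u(eqe_size) - 5) << 4) |
			  (ilog2u(cqe_size) - 5));
}

static void decode_stride(uint8_t byte_field,
			  unsigned int *eqe_size, unsigned int *cqe_size)
{
	*cqe_size = 1u << ((byte_field & CQE_SIZE_MASK_STRIDE) + 5);
	*eqe_size = 1u << (((byte_field & EQE_SIZE_MASK_STRIDE) >> 4) + 5);
}

int main(void)
{
	unsigned int eqe, cqe;
	uint8_t field = encode_stride(128, 128);	/* 128B cache line */

	decode_stride(field, &eqe, &cqe);
	printf("field 0x%02x -> eqe %uB, cqe %uB\n", field, eqe, cqe);
	return 0;
}

For a 128-byte cache line the field comes out as 0x22, which QUERY_HCA decodes straight back into 128-byte EQE and CQE strides for the slave.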
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
index 1fce03e..9b835ae 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
@@ -178,6 +178,8 @@ struct mlx4_init_hca_param {
u8 uar_page_sz; /* log pg sz in 4k chunks */
u8 steering_mode; /* for QUERY_HCA */
u64 dev_cap_enabled;
+ u16 cqe_size; /* For use only when CQE stride feature enabled */
+ u16 eqe_size; /* For use only when EQE stride feature enabled */
};
struct mlx4_init_ib_param {
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 871e3a5..90de6e1 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -104,7 +104,8 @@ module_param(enable_64b_cqe_eqe, bool, 0444);
MODULE_PARM_DESC(enable_64b_cqe_eqe,
"Enable 64 byte CQEs/EQEs when the FW supports this (default: True)");
-#define PF_CONTEXT_BEHAVIOUR_MASK MLX4_FUNC_CAP_64B_EQE_CQE
+#define PF_CONTEXT_BEHAVIOUR_MASK (MLX4_FUNC_CAP_64B_EQE_CQE | \
+ MLX4_FUNC_CAP_EQE_CQE_STRIDE)
static char mlx4_version[] =
DRV_NAME ": Mellanox ConnectX core driver v"
@@ -196,6 +197,40 @@ static void mlx4_set_port_mask(struct mlx4_dev *dev)
dev->caps.port_mask[i] = dev->caps.port_type[i];
}
+static void mlx4_enable_cqe_eqe_stride(struct mlx4_dev *dev)
+{
+ struct mlx4_caps *dev_cap = &dev->caps;
+
+ /* Not supported by FW, or disabled by the user */
+ if (!(dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_EQE_STRIDE) ||
+ !(dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_CQE_STRIDE))
+ return;
+
+ /* Must have 64B CQE/EQE enabled by FW to use a bigger stride
+ * When FW has NCSI it may decide not to report 64B CQE/EQEs
+ */
+ if (!(dev_cap->flags & MLX4_DEV_CAP_FLAG_64B_EQE) ||
+ !(dev_cap->flags & MLX4_DEV_CAP_FLAG_64B_CQE)) {
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ return;
+ }
+
+ if (cache_line_size() == 128 || cache_line_size() == 256) {
+ mlx4_dbg(dev, "Enabling CQE stride cacheLine supported\n");
+ /* Change the size of the real data inside the CQE to 32B */
+ dev_cap->flags &= ~MLX4_DEV_CAP_FLAG_64B_CQE;
+ dev_cap->flags &= ~MLX4_DEV_CAP_FLAG_64B_EQE;
+
+ if (mlx4_is_master(dev))
+ dev_cap->function_caps |= MLX4_FUNC_CAP_EQE_CQE_STRIDE;
+ } else {
+ mlx4_dbg(dev, "Disabling CQE stride cacheLine unsupported\n");
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ }
+}
+
static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
{
int err;
@@ -390,6 +425,14 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_CQE;
dev->caps.flags &= ~MLX4_DEV_CAP_FLAG_64B_EQE;
}
+
+ if (dev_cap->flags2 &
+ (MLX4_DEV_CAP_FLAG2_CQE_STRIDE |
+ MLX4_DEV_CAP_FLAG2_EQE_STRIDE)) {
+ mlx4_warn(dev, "Disabling EQE/CQE stride per user request\n");
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_CQE_STRIDE;
+ dev_cap->flags2 &= ~MLX4_DEV_CAP_FLAG2_EQE_STRIDE;
+ }
}
if ((dev->caps.flags &
@@ -397,6 +440,9 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
mlx4_is_master(dev))
dev->caps.function_caps |= MLX4_FUNC_CAP_64B_EQE_CQE;
+ if (!mlx4_is_slave(dev))
+ mlx4_enable_cqe_eqe_stride(dev);
+
return 0;
}
@@ -724,11 +770,22 @@ static int mlx4_slave_cap(struct mlx4_dev *dev)
if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_64B_CQE_ENABLED) {
dev->caps.cqe_size = 64;
- dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_64B_CQE;
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
} else {
dev->caps.cqe_size = 32;
}
+ if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_EQE_STRIDE_ENABLED) {
+ dev->caps.eqe_size = hca_param.eqe_size;
+ dev->caps.eqe_factor = 0;
+ }
+
+ if (hca_param.dev_cap_enabled & MLX4_DEV_CAP_CQE_STRIDE_ENABLED) {
+ dev->caps.cqe_size = hca_param.cqe_size;
+ /* User still needs to know when CQE > 32B */
+ dev->caps.userspace_caps |= MLX4_USER_DEV_CAP_LARGE_CQE;
+ }
+
dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS;
mlx4_warn(dev, "Timestamping is not supported in slave mode\n");
@@ -2202,115 +2259,18 @@ static void mlx4_free_ownership(struct mlx4_dev *dev)
iounmap(owner);
}
-static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
+static int mlx4_load_one(struct pci_dev *pdev, int pci_dev_data,
+ int total_vfs, int *nvfs, struct mlx4_priv *priv)
{
- struct mlx4_priv *priv;
struct mlx4_dev *dev;
+ unsigned sum = 0;
int err;
int port;
- int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0};
- int prb_vf[MLX4_MAX_PORTS + 1] = {0, 0, 0};
- const int param_map[MLX4_MAX_PORTS + 1][MLX4_MAX_PORTS + 1] = {
- {2, 0, 0}, {0, 1, 2}, {0, 1, 2} };
- unsigned total_vfs = 0;
- int sriov_initialized = 0;
- unsigned int i;
-
- pr_info(DRV_NAME ": Initializing %s\n", pci_name(pdev));
-
- err = pci_enable_device(pdev);
- if (err) {
- dev_err(&pdev->dev, "Cannot enable PCI device, aborting\n");
- return err;
- }
-
- /* Due to requirement that all VFs and the PF are *guaranteed* 2 MACS
- * per port, we must limit the number of VFs to 63 (since their are
- * 128 MACs)
- */
- for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]) && i < num_vfs_argc;
- total_vfs += nvfs[param_map[num_vfs_argc - 1][i]], i++) {
- nvfs[param_map[num_vfs_argc - 1][i]] = num_vfs[i];
- if (nvfs[i] < 0) {
- dev_err(&pdev->dev, "num_vfs module parameter cannot be negative\n");
- return -EINVAL;
- }
- }
- for (i = 0; i < sizeof(prb_vf)/sizeof(prb_vf[0]) && i < probe_vfs_argc;
- i++) {
- prb_vf[param_map[probe_vfs_argc - 1][i]] = probe_vf[i];
- if (prb_vf[i] < 0 || prb_vf[i] > nvfs[i]) {
- dev_err(&pdev->dev, "probe_vf module parameter cannot be negative or greater than num_vfs\n");
- return -EINVAL;
- }
- }
- if (total_vfs >= MLX4_MAX_NUM_VF) {
- dev_err(&pdev->dev,
- "Requested more VF's (%d) than allowed (%d)\n",
- total_vfs, MLX4_MAX_NUM_VF - 1);
- return -EINVAL;
- }
-
- for (i = 0; i < MLX4_MAX_PORTS; i++) {
- if (nvfs[i] + nvfs[2] >= MLX4_MAX_NUM_VF_P_PORT) {
- dev_err(&pdev->dev,
- "Requested more VF's (%d) for port (%d) than allowed (%d)\n",
- nvfs[i] + nvfs[2], i + 1,
- MLX4_MAX_NUM_VF_P_PORT - 1);
- return -EINVAL;
- }
- }
-
-
- /*
- * Check for BARs.
- */
- if (!(pci_dev_data & MLX4_PCI_DEV_IS_VF) &&
- !(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
- dev_err(&pdev->dev, "Missing DCS, aborting (driver_data: 0x%x, pci_resource_flags(pdev, 0):0x%lx)\n",
- pci_dev_data, pci_resource_flags(pdev, 0));
- err = -ENODEV;
- goto err_disable_pdev;
- }
- if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
- dev_err(&pdev->dev, "Missing UAR, aborting\n");
- err = -ENODEV;
- goto err_disable_pdev;
- }
-
- err = pci_request_regions(pdev, DRV_NAME);
- if (err) {
- dev_err(&pdev->dev, "Couldn't get PCI resources, aborting\n");
- goto err_disable_pdev;
- }
-
- pci_set_master(pdev);
-
- err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
- if (err) {
- dev_warn(&pdev->dev, "Warning: couldn't set 64-bit PCI DMA mask\n");
- err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
- if (err) {
- dev_err(&pdev->dev, "Can't set PCI DMA mask, aborting\n");
- goto err_release_regions;
- }
- }
- err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
- if (err) {
- dev_warn(&pdev->dev, "Warning: couldn't set 64-bit consistent PCI DMA mask\n");
- err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
- if (err) {
- dev_err(&pdev->dev, "Can't set consistent PCI DMA mask, aborting\n");
- goto err_release_regions;
- }
- }
+ int i;
+ int existing_vfs = 0;
- /* Allow large DMA segments, up to the firmware limit of 1 GB */
- dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+ dev = &priv->dev;
- dev = pci_get_drvdata(pdev);
- priv = mlx4_priv(dev);
- dev->pdev = pdev;
INIT_LIST_HEAD(&priv->ctx_list);
spin_lock_init(&priv->ctx_lock);
@@ -2324,28 +2284,9 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
dev->rev_id = pdev->revision;
dev->numa_node = dev_to_node(&pdev->dev);
+
/* Detect if this device is a virtual function */
if (pci_dev_data & MLX4_PCI_DEV_IS_VF) {
- /* When acting as pf, we normally skip vfs unless explicitly
- * requested to probe them. */
- if (total_vfs) {
- unsigned vfs_offset = 0;
- for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]) &&
- vfs_offset + nvfs[i] < extended_func_num(pdev);
- vfs_offset += nvfs[i], i++)
- ;
- if (i == sizeof(nvfs)/sizeof(nvfs[0])) {
- err = -ENODEV;
- goto err_free_dev;
- }
- if ((extended_func_num(pdev) - vfs_offset)
- > prb_vf[i]) {
- mlx4_warn(dev, "Skipping virtual function:%d\n",
- extended_func_num(pdev));
- err = -ENODEV;
- goto err_free_dev;
- }
- }
mlx4_warn(dev, "Detected virtual function - running in slave mode\n");
dev->flags |= MLX4_FLAG_SLAVE;
} else {
@@ -2355,11 +2296,10 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
err = mlx4_get_ownership(dev);
if (err) {
if (err < 0)
- goto err_free_dev;
+ return err;
else {
mlx4_warn(dev, "Multiple PFs not yet supported - Skipping PF\n");
- err = -EINVAL;
- goto err_free_dev;
+ return -EINVAL;
}
}
@@ -2371,21 +2311,28 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
GFP_KERNEL);
if (NULL == dev->dev_vfs) {
mlx4_err(dev, "Failed to allocate memory for VFs\n");
- err = 0;
+ err = -ENOMEM;
+ goto err_free_own;
} else {
atomic_inc(&pf_loading);
- err = pci_enable_sriov(pdev, total_vfs);
+ existing_vfs = pci_num_vf(pdev);
+ if (existing_vfs) {
+ err = 0;
+ if (existing_vfs != total_vfs)
+ mlx4_err(dev, "SR-IOV was already enabled, but with num_vfs (%d) different than requested (%d)\n",
+ existing_vfs, total_vfs);
+ } else {
+ err = pci_enable_sriov(pdev, total_vfs);
+ }
if (err) {
mlx4_err(dev, "Failed to enable SR-IOV, continuing without SR-IOV (err = %d)\n",
err);
atomic_dec(&pf_loading);
- err = 0;
} else {
mlx4_warn(dev, "Running in master mode\n");
dev->flags |= MLX4_FLAG_SRIOV |
MLX4_FLAG_MASTER;
dev->num_vfs = total_vfs;
- sriov_initialized = 1;
}
}
}
@@ -2401,7 +2348,7 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)
err = mlx4_reset(dev);
if (err) {
mlx4_err(dev, "Failed to reset HCA, aborting\n");
- goto err_rel_own;
+ goto err_sriov;
}
}
@@ -2451,34 +2398,46 @@ slave_start:
/* In master functions, the communication channel must be initialized
* after obtaining its address from fw */
if (mlx4_is_master(dev)) {
- unsigned sum = 0;
- err = mlx4_multi_func_init(dev);
- if (err) {
- mlx4_err(dev, "Failed to init master mfunc interface, aborting\n");
+ int ib_ports = 0;
+
+ mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
+ ib_ports++;
+
+ if (ib_ports &&
+ (num_vfs_argc > 1 || probe_vfs_argc > 1)) {
+ mlx4_err(dev,
+ "Invalid syntax of num_vfs/probe_vfs with IB port - single port VFs syntax is only supported when all ports are configured as ethernet\n");
+ err = -EINVAL;
+ goto err_close;
+ }
+ if (dev->caps.num_ports < 2 &&
+ num_vfs_argc > 1) {
+ err = -EINVAL;
+ mlx4_err(dev,
+ "Error: Trying to configure VFs on port 2, but HCA has only %d physical ports\n",
+ dev->caps.num_ports);
goto err_close;
}
- if (sriov_initialized) {
- int ib_ports = 0;
- mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
- ib_ports++;
+ memcpy(dev->nvfs, nvfs, sizeof(dev->nvfs));
- if (ib_ports &&
- (num_vfs_argc > 1 || probe_vfs_argc > 1)) {
- mlx4_err(dev,
- "Invalid syntax of num_vfs/probe_vfs with IB port - single port VFs syntax is only supported when all ports are configured as ethernet\n");
- err = -EINVAL;
- goto err_master_mfunc;
- }
- for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]); i++) {
- unsigned j;
- for (j = 0; j < nvfs[i]; ++sum, ++j) {
- dev->dev_vfs[sum].min_port =
- i < 2 ? i + 1 : 1;
- dev->dev_vfs[sum].n_ports = i < 2 ? 1 :
- dev->caps.num_ports;
- }
+ for (i = 0; i < sizeof(dev->nvfs)/sizeof(dev->nvfs[0]); i++) {
+ unsigned j;
+
+ for (j = 0; j < dev->nvfs[i]; ++sum, ++j) {
+ dev->dev_vfs[sum].min_port = i < 2 ? i + 1 : 1;
+ dev->dev_vfs[sum].n_ports = i < 2 ? 1 :
+ dev->caps.num_ports;
}
}
+
+ /* In master functions, the communication channel
+ * must be initialized after obtaining its address from fw
+ */
+ err = mlx4_multi_func_init(dev);
+ if (err) {
+ mlx4_err(dev, "Failed to init master mfunc interface, aborting.\n");
+ goto err_close;
+ }
}
err = mlx4_alloc_eq_table(dev);
@@ -2499,7 +2458,7 @@ slave_start:
if (!mlx4_is_slave(dev)) {
err = mlx4_init_steering(dev);
if (err)
- goto err_free_eq;
+ goto err_disable_msix;
}
err = mlx4_setup_hca(dev);
@@ -2559,6 +2518,10 @@ err_steer:
if (!mlx4_is_slave(dev))
mlx4_clear_steering(dev);
+err_disable_msix:
+ if (dev->flags & MLX4_FLAG_MSI_X)
+ pci_disable_msix(pdev);
+
err_free_eq:
mlx4_free_eq_table(dev);
@@ -2575,9 +2538,6 @@ err_master_mfunc:
}
err_close:
- if (dev->flags & MLX4_FLAG_MSI_X)
- pci_disable_msix(pdev);
-
mlx4_close_hca(dev);
err_mfunc:
@@ -2588,20 +2548,154 @@ err_cmd:
mlx4_cmd_cleanup(dev);
err_sriov:
- if (dev->flags & MLX4_FLAG_SRIOV)
+ if (dev->flags & MLX4_FLAG_SRIOV && !existing_vfs)
pci_disable_sriov(pdev);
-err_rel_own:
- if (!mlx4_is_slave(dev))
- mlx4_free_ownership(dev);
-
if (mlx4_is_master(dev) && dev->num_vfs)
atomic_dec(&pf_loading);
kfree(priv->dev.dev_vfs);
-err_free_dev:
- kfree(priv);
+err_free_own:
+ if (!mlx4_is_slave(dev))
+ mlx4_free_ownership(dev);
+
+ return err;
+}
+
+static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data,
+ struct mlx4_priv *priv)
+{
+ int err;
+ int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0};
+ int prb_vf[MLX4_MAX_PORTS + 1] = {0, 0, 0};
+ const int param_map[MLX4_MAX_PORTS + 1][MLX4_MAX_PORTS + 1] = {
+ {2, 0, 0}, {0, 1, 2}, {0, 1, 2} };
+ unsigned total_vfs = 0;
+ unsigned int i;
+
+ pr_info(DRV_NAME ": Initializing %s\n", pci_name(pdev));
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "Cannot enable PCI device, aborting\n");
+ return err;
+ }
+
+ /* Due to requirement that all VFs and the PF are *guaranteed* 2 MACS
+ * per port, we must limit the number of VFs to 63 (since there are
+ * 128 MACs)
+ */
+ for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]) && i < num_vfs_argc;
+ total_vfs += nvfs[param_map[num_vfs_argc - 1][i]], i++) {
+ nvfs[param_map[num_vfs_argc - 1][i]] = num_vfs[i];
+ if (nvfs[i] < 0) {
+ dev_err(&pdev->dev, "num_vfs module parameter cannot be negative\n");
+ err = -EINVAL;
+ goto err_disable_pdev;
+ }
+ }
+ for (i = 0; i < sizeof(prb_vf)/sizeof(prb_vf[0]) && i < probe_vfs_argc;
+ i++) {
+ prb_vf[param_map[probe_vfs_argc - 1][i]] = probe_vf[i];
+ if (prb_vf[i] < 0 || prb_vf[i] > nvfs[i]) {
+ dev_err(&pdev->dev, "probe_vf module parameter cannot be negative or greater than num_vfs\n");
+ err = -EINVAL;
+ goto err_disable_pdev;
+ }
+ }
+ if (total_vfs >= MLX4_MAX_NUM_VF) {
+ dev_err(&pdev->dev,
+ "Requested more VF's (%d) than allowed (%d)\n",
+ total_vfs, MLX4_MAX_NUM_VF - 1);
+ err = -EINVAL;
+ goto err_disable_pdev;
+ }
+
+ for (i = 0; i < MLX4_MAX_PORTS; i++) {
+ if (nvfs[i] + nvfs[2] >= MLX4_MAX_NUM_VF_P_PORT) {
+ dev_err(&pdev->dev,
+ "Requested more VF's (%d) for port (%d) than allowed (%d)\n",
+ nvfs[i] + nvfs[2], i + 1,
+ MLX4_MAX_NUM_VF_P_PORT - 1);
+ err = -EINVAL;
+ goto err_disable_pdev;
+ }
+ }
+
+ /* Check for BARs. */
+ if (!(pci_dev_data & MLX4_PCI_DEV_IS_VF) &&
+ !(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) {
+ dev_err(&pdev->dev, "Missing DCS, aborting (driver_data: 0x%x, pci_resource_flags(pdev, 0):0x%lx)\n",
+ pci_dev_data, pci_resource_flags(pdev, 0));
+ err = -ENODEV;
+ goto err_disable_pdev;
+ }
+ if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
+ dev_err(&pdev->dev, "Missing UAR, aborting\n");
+ err = -ENODEV;
+ goto err_disable_pdev;
+ }
+
+ err = pci_request_regions(pdev, DRV_NAME);
+ if (err) {
+ dev_err(&pdev->dev, "Couldn't get PCI resources, aborting\n");
+ goto err_disable_pdev;
+ }
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_warn(&pdev->dev, "Warning: couldn't set 64-bit PCI DMA mask\n");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(&pdev->dev, "Can't set PCI DMA mask, aborting\n");
+ goto err_release_regions;
+ }
+ }
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_warn(&pdev->dev, "Warning: couldn't set 64-bit consistent PCI DMA mask\n");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(&pdev->dev, "Can't set consistent PCI DMA mask, aborting\n");
+ goto err_release_regions;
+ }
+ }
+
+ /* Allow large DMA segments, up to the firmware limit of 1 GB */
+ dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+ /* Detect if this device is a virtual function */
+ if (pci_dev_data & MLX4_PCI_DEV_IS_VF) {
+ /* When acting as pf, we normally skip vfs unless explicitly
+ * requested to probe them.
+ */
+ if (total_vfs) {
+ unsigned vfs_offset = 0;
+
+ for (i = 0; i < sizeof(nvfs)/sizeof(nvfs[0]) &&
+ vfs_offset + nvfs[i] < extended_func_num(pdev);
+ vfs_offset += nvfs[i], i++)
+ ;
+ if (i == sizeof(nvfs)/sizeof(nvfs[0])) {
+ err = -ENODEV;
+ goto err_release_regions;
+ }
+ if ((extended_func_num(pdev) - vfs_offset)
+ > prb_vf[i]) {
+ dev_warn(&pdev->dev, "Skipping virtual function:%d\n",
+ extended_func_num(pdev));
+ err = -ENODEV;
+ goto err_release_regions;
+ }
+ }
+ }
+
+ err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv);
+ if (err)
+ goto err_release_regions;
+ return 0;
err_release_regions:
pci_release_regions(pdev);
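The relocated num_vfs/probe_vfs parsing above relies on param_map to translate the positional module arguments into per-port VF counts: a single value means dual-port VFs, while two or three values address port 1, port 2 and the dual-port slot. A small sketch of that mapping with made-up inputs:

#include <stdio.h>

#define MAX_PORTS 2

/* Same table as in __mlx4_init_one(): row = number of comma-separated
 * values passed to num_vfs, column = argument position, value = slot in
 * nvfs[] (0 = port1-only VFs, 1 = port2-only VFs, 2 = dual-port VFs).
 */
static const int param_map[MAX_PORTS + 1][MAX_PORTS + 1] = {
	{2, 0, 0}, {0, 1, 2}, {0, 1, 2}
};

static unsigned int map_num_vfs(const int *num_vfs, int argc,
				int nvfs[MAX_PORTS + 1])
{
	unsigned int total = 0;

	for (int i = 0; i < MAX_PORTS + 1 && i < argc; i++) {
		nvfs[param_map[argc - 1][i]] = num_vfs[i];
		total += num_vfs[i];
	}
	return total;
}

int main(void)
{
	int nvfs[MAX_PORTS + 1] = {0, 0, 0};
	int single[] = {4};		/* num_vfs=4     */
	int triple[] = {1, 2, 3};	/* num_vfs=1,2,3 */
	unsigned int total;

	total = map_num_vfs(single, 1, nvfs);
	printf("num_vfs=4     -> port1:%d port2:%d dual:%d total:%u\n",
	       nvfs[0], nvfs[1], nvfs[2], total);

	nvfs[0] = nvfs[1] = nvfs[2] = 0;
	total = map_num_vfs(triple, 3, nvfs);
	printf("num_vfs=1,2,3 -> port1:%d port2:%d dual:%d total:%u\n",
	       nvfs[0], nvfs[1], nvfs[2], total);
	return 0;
}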
@@ -2616,6 +2710,7 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct mlx4_priv *priv;
struct mlx4_dev *dev;
+ int ret;
printk_once(KERN_INFO "%s", mlx4_version);
@@ -2624,28 +2719,38 @@ static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
return -ENOMEM;
dev = &priv->dev;
+ dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
priv->pci_dev_data = id->driver_data;
- return __mlx4_init_one(pdev, id->driver_data);
+ ret = __mlx4_init_one(pdev, id->driver_data, priv);
+ if (ret)
+ kfree(priv);
+
+ return ret;
}
-static void __mlx4_remove_one(struct pci_dev *pdev)
+static void mlx4_unload_one(struct pci_dev *pdev)
{
struct mlx4_dev *dev = pci_get_drvdata(pdev);
struct mlx4_priv *priv = mlx4_priv(dev);
int pci_dev_data;
int p;
+ int active_vfs = 0;
if (priv->removed)
return;
pci_dev_data = priv->pci_dev_data;
- /* in SRIOV it is not allowed to unload the pf's
- * driver while there are alive vf's */
- if (mlx4_is_master(dev) && mlx4_how_many_lives_vf(dev))
- pr_warn("Removing PF when there are assigned VF's !!!\n");
+ /* Disabling SR-IOV is not allowed while there are active VFs */
+ if (mlx4_is_master(dev)) {
+ active_vfs = mlx4_how_many_lives_vf(dev);
+ if (active_vfs) {
+ pr_warn("Removing PF when there are active VF's !!\n");
+ pr_warn("Will not disable SR-IOV.\n");
+ }
+ }
mlx4_stop_sense(dev);
mlx4_unregister_device(dev);
@@ -2688,7 +2793,7 @@ static void __mlx4_remove_one(struct pci_dev *pdev)
if (dev->flags & MLX4_FLAG_MSI_X)
pci_disable_msix(pdev);
- if (dev->flags & MLX4_FLAG_SRIOV) {
+ if (dev->flags & MLX4_FLAG_SRIOV && !active_vfs) {
mlx4_warn(dev, "Disabling SR-IOV\n");
pci_disable_sriov(pdev);
dev->num_vfs = 0;
@@ -2704,8 +2809,6 @@ static void __mlx4_remove_one(struct pci_dev *pdev)
kfree(dev->caps.qp1_proxy);
kfree(dev->dev_vfs);
- pci_release_regions(pdev);
- pci_disable_device(pdev);
memset(priv, 0, sizeof(*priv));
priv->pci_dev_data = pci_dev_data;
priv->removed = 1;
@@ -2716,7 +2819,9 @@ static void mlx4_remove_one(struct pci_dev *pdev)
struct mlx4_dev *dev = pci_get_drvdata(pdev);
struct mlx4_priv *priv = mlx4_priv(dev);
- __mlx4_remove_one(pdev);
+ mlx4_unload_one(pdev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
kfree(priv);
pci_set_drvdata(pdev, NULL);
}
@@ -2725,11 +2830,22 @@ int mlx4_restart_one(struct pci_dev *pdev)
{
struct mlx4_dev *dev = pci_get_drvdata(pdev);
struct mlx4_priv *priv = mlx4_priv(dev);
- int pci_dev_data;
+ int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0};
+ int pci_dev_data, err, total_vfs;
pci_dev_data = priv->pci_dev_data;
- __mlx4_remove_one(pdev);
- return __mlx4_init_one(pdev, pci_dev_data);
+ total_vfs = dev->num_vfs;
+ memcpy(nvfs, dev->nvfs, sizeof(dev->nvfs));
+
+ mlx4_unload_one(pdev);
+ err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv);
+ if (err) {
+ mlx4_err(dev, "%s: ERROR: mlx4_load_one failed, pci_name=%s, err=%d\n",
+ __func__, pci_name(pdev), err);
+ return err;
+ }
+
+ return err;
}
static const struct pci_device_id mlx4_pci_table[] = {
@@ -2783,7 +2899,7 @@ MODULE_DEVICE_TABLE(pci, mlx4_pci_table);
static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
- __mlx4_remove_one(pdev);
+ mlx4_unload_one(pdev);
return state == pci_channel_io_perm_failure ?
PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
@@ -2795,7 +2911,7 @@ static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev)
struct mlx4_priv *priv = mlx4_priv(dev);
int ret;
- ret = __mlx4_init_one(pdev, priv->pci_dev_data);
+ ret = __mlx4_init_one(pdev, priv->pci_dev_data, priv);
return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
@@ -2809,7 +2925,7 @@ static struct pci_driver mlx4_driver = {
.name = DRV_NAME,
.id_table = mlx4_pci_table,
.probe = mlx4_init_one,
- .shutdown = __mlx4_remove_one,
+ .shutdown = mlx4_unload_one,
.remove = mlx4_remove_one,
.err_handler = &mlx4_err_handler,
};
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
index b508c78..de10dbb 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h
@@ -285,6 +285,9 @@ struct mlx4_icm_table {
#define MLX4_MPT_STATUS_SW 0xF0
#define MLX4_MPT_STATUS_HW 0x00
+#define MLX4_CQE_SIZE_MASK_STRIDE 0x3
+#define MLX4_EQE_SIZE_MASK_STRIDE 0x30
+
/*
* Must be packed because mtt_seg is 64 bits but only aligned to 32 bits.
*/
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index 3de41be..8fef658 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -216,13 +216,16 @@ enum cq_type {
struct mlx4_en_tx_info {
struct sk_buff *skb;
- u32 nr_txbb;
- u32 nr_bytes;
- u8 linear;
- u8 data_offset;
- u8 inl;
- u8 ts_requested;
-};
+ dma_addr_t map0_dma;
+ u32 map0_byte_count;
+ u32 nr_txbb;
+ u32 nr_bytes;
+ u8 linear;
+ u8 data_offset;
+ u8 inl;
+ u8 ts_requested;
+ u8 nr_maps;
+} ____cacheline_aligned_in_smp;
#define MLX4_EN_BIT_DESC_OWN 0x80000000
@@ -253,39 +256,46 @@ struct mlx4_en_rx_alloc {
};
struct mlx4_en_tx_ring {
+ /* cache line used and dirtied in tx completion
+ * (mlx4_en_free_tx_buf())
+ */
+ u32 last_nr_txbb;
+ u32 cons;
+ unsigned long wake_queue;
+
+ /* cache line used and dirtied in mlx4_en_xmit() */
+ u32 prod ____cacheline_aligned_in_smp;
+ unsigned long bytes;
+ unsigned long packets;
+ unsigned long tx_csum;
+ unsigned long tso_packets;
+ unsigned long xmit_more;
+ struct mlx4_bf bf;
+ unsigned long queue_stopped;
+
+ /* Following part should be mostly read */
+ cpumask_t affinity_mask;
+ struct mlx4_qp qp;
struct mlx4_hwq_resources wqres;
- u32 size ; /* number of TXBBs */
- u32 size_mask;
- u16 stride;
- u16 cqn; /* index of port CQ associated with this ring */
- u32 prod;
- u32 cons;
- u32 buf_size;
- u32 doorbell_qpn;
- void *buf;
- u16 poll_cnt;
- struct mlx4_en_tx_info *tx_info;
- u8 *bounce_buf;
- u8 queue_index;
- cpumask_t affinity_mask;
- u32 last_nr_txbb;
- struct mlx4_qp qp;
- struct mlx4_qp_context context;
- int qpn;
- enum mlx4_qp_state qp_state;
- struct mlx4_srq dummy;
- unsigned long bytes;
- unsigned long packets;
- unsigned long tx_csum;
- unsigned long queue_stopped;
- unsigned long wake_queue;
- struct mlx4_bf bf;
- bool bf_enabled;
- bool bf_alloced;
- struct netdev_queue *tx_queue;
- int hwtstamp_tx_type;
- int inline_thold;
-};
+ u32 size; /* number of TXBBs */
+ u32 size_mask;
+ u16 stride;
+ u16 cqn; /* index of port CQ associated with this ring */
+ u32 buf_size;
+ __be32 doorbell_qpn;
+ __be32 mr_key;
+ void *buf;
+ struct mlx4_en_tx_info *tx_info;
+ u8 *bounce_buf;
+ struct mlx4_qp_context context;
+ int qpn;
+ enum mlx4_qp_state qp_state;
+ u8 queue_index;
+ bool bf_enabled;
+ bool bf_alloced;
+ struct netdev_queue *tx_queue;
+ int hwtstamp_tx_type;
+} ____cacheline_aligned_in_smp;
struct mlx4_en_rx_desc {
/* actual number of entries depends on rx ring stride */
@@ -426,6 +436,7 @@ struct mlx4_en_pkt_stats {
struct mlx4_en_port_stats {
unsigned long tso_packets;
+ unsigned long xmit_more;
unsigned long queue_stopped;
unsigned long wake_queue;
unsigned long tx_timeout;
@@ -433,7 +444,7 @@ struct mlx4_en_port_stats {
unsigned long rx_chksum_good;
unsigned long rx_chksum_none;
unsigned long tx_chksum_offload;
-#define NUM_PORT_STATS 8
+#define NUM_PORT_STATS 9
};
struct mlx4_en_perf_stats {
@@ -542,6 +553,7 @@ struct mlx4_en_priv {
unsigned max_mtu;
int base_qpn;
int cqe_factor;
+ int cqe_size;
struct mlx4_en_rss_map rss_map;
__be32 ctrl_flags;
@@ -612,6 +624,11 @@ struct mlx4_mac_entry {
struct rcu_head rcu;
};
+static inline struct mlx4_cqe *mlx4_en_get_cqe(void *buf, int idx, int cqe_sz)
+{
+ return buf + idx * cqe_sz;
+}
+
#ifdef CONFIG_NET_RX_BUSY_POLL
static inline void mlx4_en_cq_init_lock(struct mlx4_en_cq *cq)
{
@@ -836,8 +853,8 @@ extern const struct ethtool_ops mlx4_en_ethtool_ops;
*/
__printf(3, 4)
-int en_print(const char *level, const struct mlx4_en_priv *priv,
- const char *format, ...);
+void en_print(const char *level, const struct mlx4_en_priv *priv,
+ const char *format, ...);
#define en_dbg(mlevel, priv, format, ...) \
do { \
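With the CQE stride now variable, the new mlx4_en_get_cqe() helper in mlx4_en.h stops treating the CQ buffer as an array of fixed-size structs and indexes it by byte offset instead, which is why priv now carries cqe_size next to cqe_factor. A trivial sketch of the same idea:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Mirror of the new mlx4_en_get_cqe(): the n-th CQE simply starts
 * idx * cqe_sz bytes into the buffer, whatever the stride is.
 */
static inline void *get_cqe(void *buf, int idx, int cqe_sz)
{
	return (uint8_t *)buf + (size_t)idx * cqe_sz;
}

int main(void)
{
	int cqe_sz = 128;			/* e.g. a 128B cache line */
	void *buf = calloc(1024, cqe_sz);	/* toy CQ of 1024 entries */

	printf("CQE 5 offset: %td bytes\n",
	       (uint8_t *)get_cqe(buf, 5, cqe_sz) - (uint8_t *)buf);
	free(buf);
	return 0;
}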
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 65a7da6..368c6c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -357,60 +357,24 @@ const char *mlx5_command_str(int command)
case MLX5_CMD_OP_2ERR_QP:
return "2ERR_QP";
- case MLX5_CMD_OP_RTS2SQD_QP:
- return "RTS2SQD_QP";
-
- case MLX5_CMD_OP_SQD2RTS_QP:
- return "SQD2RTS_QP";
-
case MLX5_CMD_OP_2RST_QP:
return "2RST_QP";
case MLX5_CMD_OP_QUERY_QP:
return "QUERY_QP";
- case MLX5_CMD_OP_CONF_SQP:
- return "CONF_SQP";
-
case MLX5_CMD_OP_MAD_IFC:
return "MAD_IFC";
case MLX5_CMD_OP_INIT2INIT_QP:
return "INIT2INIT_QP";
- case MLX5_CMD_OP_SUSPEND_QP:
- return "SUSPEND_QP";
-
- case MLX5_CMD_OP_UNSUSPEND_QP:
- return "UNSUSPEND_QP";
-
- case MLX5_CMD_OP_SQD2SQD_QP:
- return "SQD2SQD_QP";
-
- case MLX5_CMD_OP_ALLOC_QP_COUNTER_SET:
- return "ALLOC_QP_COUNTER_SET";
-
- case MLX5_CMD_OP_DEALLOC_QP_COUNTER_SET:
- return "DEALLOC_QP_COUNTER_SET";
-
- case MLX5_CMD_OP_QUERY_QP_COUNTER_SET:
- return "QUERY_QP_COUNTER_SET";
-
case MLX5_CMD_OP_CREATE_PSV:
return "CREATE_PSV";
case MLX5_CMD_OP_DESTROY_PSV:
return "DESTROY_PSV";
- case MLX5_CMD_OP_QUERY_PSV:
- return "QUERY_PSV";
-
- case MLX5_CMD_OP_QUERY_SIG_RULE_TABLE:
- return "QUERY_SIG_RULE_TABLE";
-
- case MLX5_CMD_OP_QUERY_BLOCK_SIZE_TABLE:
- return "QUERY_BLOCK_SIZE_TABLE";
-
case MLX5_CMD_OP_CREATE_SRQ:
return "CREATE_SRQ";
@@ -1538,16 +1502,9 @@ static const char *cmd_status_str(u8 status)
}
}
-int mlx5_cmd_status_to_err(struct mlx5_outbox_hdr *hdr)
+static int cmd_status_to_err(u8 status)
{
- if (!hdr->status)
- return 0;
-
- pr_warn("command failed, status %s(0x%x), syndrome 0x%x\n",
- cmd_status_str(hdr->status), hdr->status,
- be32_to_cpu(hdr->syndrome));
-
- switch (hdr->status) {
+ switch (status) {
case MLX5_CMD_STAT_OK: return 0;
case MLX5_CMD_STAT_INT_ERR: return -EIO;
case MLX5_CMD_STAT_BAD_OP_ERR: return -EINVAL;
@@ -1567,3 +1524,33 @@ int mlx5_cmd_status_to_err(struct mlx5_outbox_hdr *hdr)
default: return -EIO;
}
}
+
+/* this will be available until all the commands use the set/get macros */
+int mlx5_cmd_status_to_err(struct mlx5_outbox_hdr *hdr)
+{
+ if (!hdr->status)
+ return 0;
+
+ pr_warn("command failed, status %s(0x%x), syndrome 0x%x\n",
+ cmd_status_str(hdr->status), hdr->status,
+ be32_to_cpu(hdr->syndrome));
+
+ return cmd_status_to_err(hdr->status);
+}
+
+int mlx5_cmd_status_to_err_v2(void *ptr)
+{
+ u32 syndrome;
+ u8 status;
+
+ status = be32_to_cpu(*(__be32 *)ptr) >> 24;
+ if (!status)
+ return 0;
+
+ syndrome = be32_to_cpu(*(__be32 *)(ptr + 4));
+
+ pr_warn("command failed, status %s(0x%x), syndrome 0x%x\n",
+ cmd_status_str(status), status, syndrome);
+
+ return cmd_status_to_err(status);
+}
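The new mlx5_cmd_status_to_err_v2() above assumes the outbox layout used by the MLX5_SET/MLX5_GET style commands: the status sits in the first byte of the mailbox and the syndrome in the following big-endian dword. A userspace sketch of that parsing; the -1 returned here is only a placeholder for the driver's cmd_status_to_err() mapping:

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* htonl/ntohl as stand-ins for cpu_to_be32 */

/* Layout assumed by mlx5_cmd_status_to_err_v2(): byte 0 is the status,
 * bytes 4..7 are the big-endian syndrome.
 */
static int status_to_err_v2(const void *out)
{
	uint8_t  status   = ntohl(*(const uint32_t *)out) >> 24;
	uint32_t syndrome = ntohl(*(const uint32_t *)((const char *)out + 4));

	if (!status)
		return 0;

	fprintf(stderr, "command failed, status 0x%x, syndrome 0x%x\n",
		status, syndrome);
	return -1;	/* placeholder for cmd_status_to_err(status) */
}

int main(void)
{
	uint32_t outbox[4] = {
		htonl(0x03000000),	/* some non-zero status in byte 0 */
		htonl(0x0000abcd),	/* syndrome */
	};

	printf("err = %d\n", status_to_err_v2(outbox));
	return 0;
}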
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 4e8bd0b..ed53291 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -198,7 +198,7 @@ static int mlx5_eq_int(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
int eqes_found = 0;
int set_ci = 0;
u32 cqn;
- u32 srqn;
+ u32 rsn;
u8 port;
while ((eqe = next_eqe_sw(eq))) {
@@ -224,18 +224,18 @@ static int mlx5_eq_int(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
case MLX5_EVENT_TYPE_PATH_MIG_FAILED:
case MLX5_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
case MLX5_EVENT_TYPE_WQ_ACCESS_ERROR:
+ rsn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff;
mlx5_core_dbg(dev, "event %s(%d) arrived\n",
eqe_type_str(eqe->type), eqe->type);
- mlx5_qp_event(dev, be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff,
- eqe->type);
+ mlx5_rsc_event(dev, rsn, eqe->type);
break;
case MLX5_EVENT_TYPE_SRQ_RQ_LIMIT:
case MLX5_EVENT_TYPE_SRQ_CATAS_ERROR:
- srqn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff;
+ rsn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff;
mlx5_core_dbg(dev, "SRQ event %s(%d): srqn 0x%x\n",
- eqe_type_str(eqe->type), eqe->type, srqn);
- mlx5_srq_event(dev, srqn, eqe->type);
+ eqe_type_str(eqe->type), eqe->type, rsn);
+ mlx5_srq_event(dev, rsn, eqe->type);
break;
case MLX5_EVENT_TYPE_CMD:
@@ -468,7 +468,7 @@ int mlx5_start_eqs(struct mlx5_core_dev *dev)
err = mlx5_create_map_eq(dev, &table->pages_eq,
MLX5_EQ_VEC_PAGES,
- dev->caps.max_vf + 1,
+ dev->caps.gen.max_vf + 1,
1 << MLX5_EVENT_TYPE_PAGE_REQUEST, "mlx5_pages_eq",
&dev->priv.uuari.uars[0]);
if (err) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
index f012658..087c4c7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -64,86 +64,9 @@ out_out:
return err;
}
-int mlx5_cmd_query_hca_cap(struct mlx5_core_dev *dev,
- struct mlx5_caps *caps)
+int mlx5_cmd_query_hca_cap(struct mlx5_core_dev *dev, struct mlx5_caps *caps)
{
- struct mlx5_cmd_query_hca_cap_mbox_out *out;
- struct mlx5_cmd_query_hca_cap_mbox_in in;
- struct mlx5_query_special_ctxs_mbox_out ctx_out;
- struct mlx5_query_special_ctxs_mbox_in ctx_in;
- int err;
- u16 t16;
-
- out = kzalloc(sizeof(*out), GFP_KERNEL);
- if (!out)
- return -ENOMEM;
-
- memset(&in, 0, sizeof(in));
- in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_QUERY_HCA_CAP);
- in.hdr.opmod = cpu_to_be16(0x1);
- err = mlx5_cmd_exec(dev, &in, sizeof(in), out, sizeof(*out));
- if (err)
- goto out_out;
-
- if (out->hdr.status) {
- err = mlx5_cmd_status_to_err(&out->hdr);
- goto out_out;
- }
-
-
- caps->log_max_eq = out->hca_cap.log_max_eq & 0xf;
- caps->max_cqes = 1 << out->hca_cap.log_max_cq_sz;
- caps->max_wqes = 1 << out->hca_cap.log_max_qp_sz;
- caps->max_sq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_sq);
- caps->max_rq_desc_sz = be16_to_cpu(out->hca_cap.max_desc_sz_rq);
- caps->flags = be64_to_cpu(out->hca_cap.flags);
- caps->stat_rate_support = be16_to_cpu(out->hca_cap.stat_rate_support);
- caps->log_max_msg = out->hca_cap.log_max_msg & 0x1f;
- caps->num_ports = out->hca_cap.num_ports & 0xf;
- caps->log_max_cq = out->hca_cap.log_max_cq & 0x1f;
- if (caps->num_ports > MLX5_MAX_PORTS) {
- mlx5_core_err(dev, "device has %d ports while the driver supports max %d ports\n",
- caps->num_ports, MLX5_MAX_PORTS);
- err = -EINVAL;
- goto out_out;
- }
- caps->log_max_qp = out->hca_cap.log_max_qp & 0x1f;
- caps->log_max_mkey = out->hca_cap.log_max_mkey & 0x3f;
- caps->log_max_pd = out->hca_cap.log_max_pd & 0x1f;
- caps->log_max_srq = out->hca_cap.log_max_srqs & 0x1f;
- caps->local_ca_ack_delay = out->hca_cap.local_ca_ack_delay & 0x1f;
- caps->log_max_mcg = out->hca_cap.log_max_mcg;
- caps->max_qp_mcg = be32_to_cpu(out->hca_cap.max_qp_mcg) & 0xffffff;
- caps->max_ra_res_qp = 1 << (out->hca_cap.log_max_ra_res_qp & 0x3f);
- caps->max_ra_req_qp = 1 << (out->hca_cap.log_max_ra_req_qp & 0x3f);
- caps->max_srq_wqes = 1 << out->hca_cap.log_max_srq_sz;
- t16 = be16_to_cpu(out->hca_cap.bf_log_bf_reg_size);
- if (t16 & 0x8000) {
- caps->bf_reg_size = 1 << (t16 & 0x1f);
- caps->bf_regs_per_page = MLX5_BF_REGS_PER_PAGE;
- } else {
- caps->bf_reg_size = 0;
- caps->bf_regs_per_page = 0;
- }
- caps->min_page_sz = ~(u32)((1 << out->hca_cap.log_pg_sz) - 1);
-
- memset(&ctx_in, 0, sizeof(ctx_in));
- memset(&ctx_out, 0, sizeof(ctx_out));
- ctx_in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS);
- err = mlx5_cmd_exec(dev, &ctx_in, sizeof(ctx_in),
- &ctx_out, sizeof(ctx_out));
- if (err)
- goto out_out;
-
- if (ctx_out.hdr.status)
- err = mlx5_cmd_status_to_err(&ctx_out.hdr);
-
- caps->reserved_lkey = be32_to_cpu(ctx_out.reserved_lkey);
-
-out_out:
- kfree(out);
-
- return err;
+ return mlx5_core_get_caps(dev, caps, HCA_CAP_OPMOD_GET_CUR);
}
int mlx5_cmd_init_hca(struct mlx5_core_dev *dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index f2716cc..3d8e8e4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -43,6 +43,7 @@
#include <linux/mlx5/qp.h>
#include <linux/mlx5/srq.h>
#include <linux/debugfs.h>
+#include <linux/mlx5/mlx5_ifc.h>
#include "mlx5_core.h"
#define DRIVER_NAME "mlx5_core"
@@ -207,11 +208,11 @@ static void release_bar(struct pci_dev *pdev)
static int mlx5_enable_msix(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = &dev->priv.eq_table;
- int num_eqs = 1 << dev->caps.log_max_eq;
+ int num_eqs = 1 << dev->caps.gen.log_max_eq;
int nvec;
int i;
- nvec = dev->caps.num_ports * num_online_cpus() + MLX5_EQ_VEC_COMP_BASE;
+ nvec = dev->caps.gen.num_ports * num_online_cpus() + MLX5_EQ_VEC_COMP_BASE;
nvec = min_t(int, nvec, num_eqs);
if (nvec <= MLX5_EQ_VEC_COMP_BASE)
return -ENOMEM;
@@ -250,91 +251,205 @@ struct mlx5_reg_host_endianess {
#define CAP_MASK(pos, size) ((u64)((1 << (size)) - 1) << (pos))
enum {
- MLX5_CAP_BITS_RW_MASK = CAP_MASK(MLX5_CAP_OFF_CMDIF_CSUM, 2) |
- CAP_MASK(MLX5_CAP_OFF_DCT, 1),
+ MLX5_CAP_BITS_RW_MASK = CAP_MASK(MLX5_CAP_OFF_CMDIF_CSUM, 2) |
+ MLX5_DEV_CAP_FLAG_DCT,
};
+static u16 to_fw_pkey_sz(u32 size)
+{
+ switch (size) {
+ case 128:
+ return 0;
+ case 256:
+ return 1;
+ case 512:
+ return 2;
+ case 1024:
+ return 3;
+ case 2048:
+ return 4;
+ case 4096:
+ return 5;
+ default:
+ pr_warn("invalid pkey table size %d\n", size);
+ return 0;
+ }
+}
+
/* selectively copy writable fields clearing any reserved area
*/
-static void copy_rw_fields(struct mlx5_hca_cap *to, struct mlx5_hca_cap *from)
+static void copy_rw_fields(void *to, struct mlx5_caps *from)
{
+ __be64 *flags_off = (__be64 *)MLX5_ADDR_OF(cmd_hca_cap, to, reserved_22);
u64 v64;
- to->log_max_qp = from->log_max_qp & 0x1f;
- to->log_max_ra_req_dc = from->log_max_ra_req_dc & 0x3f;
- to->log_max_ra_res_dc = from->log_max_ra_res_dc & 0x3f;
- to->log_max_ra_req_qp = from->log_max_ra_req_qp & 0x3f;
- to->log_max_ra_res_qp = from->log_max_ra_res_qp & 0x3f;
- to->log_max_atomic_size_qp = from->log_max_atomic_size_qp;
- to->log_max_atomic_size_dc = from->log_max_atomic_size_dc;
- v64 = be64_to_cpu(from->flags) & MLX5_CAP_BITS_RW_MASK;
- to->flags = cpu_to_be64(v64);
+ MLX5_SET(cmd_hca_cap, to, log_max_qp, from->gen.log_max_qp);
+ MLX5_SET(cmd_hca_cap, to, log_max_ra_req_qp, from->gen.log_max_ra_req_qp);
+ MLX5_SET(cmd_hca_cap, to, log_max_ra_res_qp, from->gen.log_max_ra_res_qp);
+ MLX5_SET(cmd_hca_cap, to, pkey_table_size, from->gen.pkey_table_size);
+ MLX5_SET(cmd_hca_cap, to, log_max_ra_req_dc, from->gen.log_max_ra_req_dc);
+ MLX5_SET(cmd_hca_cap, to, log_max_ra_res_dc, from->gen.log_max_ra_res_dc);
+ MLX5_SET(cmd_hca_cap, to, pkey_table_size, to_fw_pkey_sz(from->gen.pkey_table_size));
+ v64 = from->gen.flags & MLX5_CAP_BITS_RW_MASK;
+ *flags_off = cpu_to_be64(v64);
}
-enum {
- HCA_CAP_OPMOD_GET_MAX = 0,
- HCA_CAP_OPMOD_GET_CUR = 1,
-};
+static u16 get_pkey_table_size(int pkey)
+{
+ if (pkey > MLX5_MAX_LOG_PKEY_TABLE)
+ return 0;
-static int handle_hca_cap(struct mlx5_core_dev *dev)
+ return MLX5_MIN_PKEY_TABLE_SIZE << pkey;
+}
+
+static void fw2drv_caps(struct mlx5_caps *caps, void *out)
+{
+ struct mlx5_general_caps *gen = &caps->gen;
+
+ gen->max_srq_wqes = 1 << MLX5_GET_PR(cmd_hca_cap, out, log_max_srq_sz);
+ gen->max_wqes = 1 << MLX5_GET_PR(cmd_hca_cap, out, log_max_qp_sz);
+ gen->log_max_qp = MLX5_GET_PR(cmd_hca_cap, out, log_max_qp);
+ gen->log_max_strq = MLX5_GET_PR(cmd_hca_cap, out, log_max_strq_sz);
+ gen->log_max_srq = MLX5_GET_PR(cmd_hca_cap, out, log_max_srqs);
+ gen->max_cqes = 1 << MLX5_GET_PR(cmd_hca_cap, out, log_max_cq_sz);
+ gen->log_max_cq = MLX5_GET_PR(cmd_hca_cap, out, log_max_cq);
+ gen->max_eqes = 1 << MLX5_GET_PR(cmd_hca_cap, out, log_max_eq_sz);
+ gen->log_max_mkey = MLX5_GET_PR(cmd_hca_cap, out, log_max_mkey);
+ gen->log_max_eq = MLX5_GET_PR(cmd_hca_cap, out, log_max_eq);
+ gen->max_indirection = MLX5_GET_PR(cmd_hca_cap, out, max_indirection);
+ gen->log_max_mrw_sz = MLX5_GET_PR(cmd_hca_cap, out, log_max_mrw_sz);
+ gen->log_max_bsf_list_size = MLX5_GET_PR(cmd_hca_cap, out, log_max_bsf_list_size);
+ gen->log_max_klm_list_size = MLX5_GET_PR(cmd_hca_cap, out, log_max_klm_list_size);
+ gen->log_max_ra_req_dc = MLX5_GET_PR(cmd_hca_cap, out, log_max_ra_req_dc);
+ gen->log_max_ra_res_dc = MLX5_GET_PR(cmd_hca_cap, out, log_max_ra_res_dc);
+ gen->log_max_ra_req_qp = MLX5_GET_PR(cmd_hca_cap, out, log_max_ra_req_qp);
+ gen->log_max_ra_res_qp = MLX5_GET_PR(cmd_hca_cap, out, log_max_ra_res_qp);
+ gen->max_qp_counters = MLX5_GET_PR(cmd_hca_cap, out, max_qp_cnt);
+ gen->pkey_table_size = get_pkey_table_size(MLX5_GET_PR(cmd_hca_cap, out, pkey_table_size));
+ gen->local_ca_ack_delay = MLX5_GET_PR(cmd_hca_cap, out, local_ca_ack_delay);
+ gen->num_ports = MLX5_GET_PR(cmd_hca_cap, out, num_ports);
+ gen->log_max_msg = MLX5_GET_PR(cmd_hca_cap, out, log_max_msg);
+ gen->stat_rate_support = MLX5_GET_PR(cmd_hca_cap, out, stat_rate_support);
+ gen->flags = be64_to_cpu(*(__be64 *)MLX5_ADDR_OF(cmd_hca_cap, out, reserved_22));
+ pr_debug("flags = 0x%llx\n", gen->flags);
+ gen->uar_sz = MLX5_GET_PR(cmd_hca_cap, out, uar_sz);
+ gen->min_log_pg_sz = MLX5_GET_PR(cmd_hca_cap, out, log_pg_sz);
+ gen->bf_reg_size = MLX5_GET_PR(cmd_hca_cap, out, bf);
+ gen->bf_reg_size = 1 << MLX5_GET_PR(cmd_hca_cap, out, log_bf_reg_size);
+ gen->max_sq_desc_sz = MLX5_GET_PR(cmd_hca_cap, out, max_wqe_sz_sq);
+ gen->max_rq_desc_sz = MLX5_GET_PR(cmd_hca_cap, out, max_wqe_sz_rq);
+ gen->max_dc_sq_desc_sz = MLX5_GET_PR(cmd_hca_cap, out, max_wqe_sz_sq_dc);
+ gen->max_qp_mcg = MLX5_GET_PR(cmd_hca_cap, out, max_qp_mcg);
+ gen->log_max_pd = MLX5_GET_PR(cmd_hca_cap, out, log_max_pd);
+ gen->log_max_xrcd = MLX5_GET_PR(cmd_hca_cap, out, log_max_xrcd);
+ gen->log_uar_page_sz = MLX5_GET_PR(cmd_hca_cap, out, log_uar_page_sz);
+}
+
+static const char *caps_opmod_str(u16 opmod)
{
- struct mlx5_cmd_query_hca_cap_mbox_out *query_out = NULL;
- struct mlx5_cmd_set_hca_cap_mbox_in *set_ctx = NULL;
- struct mlx5_cmd_query_hca_cap_mbox_in query_ctx;
- struct mlx5_cmd_set_hca_cap_mbox_out set_out;
- u64 flags;
+ switch (opmod) {
+ case HCA_CAP_OPMOD_GET_MAX:
+ return "GET_MAX";
+ case HCA_CAP_OPMOD_GET_CUR:
+ return "GET_CUR";
+ default:
+ return "Invalid";
+ }
+}
+
+int mlx5_core_get_caps(struct mlx5_core_dev *dev, struct mlx5_caps *caps,
+ u16 opmod)
+{
+ u8 in[MLX5_ST_SZ_BYTES(query_hca_cap_in)];
+ int out_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
+ void *out;
int err;
- memset(&query_ctx, 0, sizeof(query_ctx));
- query_out = kzalloc(sizeof(*query_out), GFP_KERNEL);
- if (!query_out)
+ memset(in, 0, sizeof(in));
+ out = kzalloc(out_sz, GFP_KERNEL);
+ if (!out)
return -ENOMEM;
+ MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP);
+ MLX5_SET(query_hca_cap_in, in, op_mod, opmod);
+ err = mlx5_cmd_exec(dev, in, sizeof(in), out, out_sz);
+ if (err)
+ goto query_ex;
- set_ctx = kzalloc(sizeof(*set_ctx), GFP_KERNEL);
- if (!set_ctx) {
- err = -ENOMEM;
+ err = mlx5_cmd_status_to_err_v2(out);
+ if (err) {
+ mlx5_core_warn(dev, "query max hca cap failed, %d\n", err);
goto query_ex;
}
+ mlx5_core_dbg(dev, "%s\n", caps_opmod_str(opmod));
+ fw2drv_caps(caps, MLX5_ADDR_OF(query_hca_cap_out, out, capability_struct));
+
+query_ex:
+ kfree(out);
+ return err;
+}
- query_ctx.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_QUERY_HCA_CAP);
- query_ctx.hdr.opmod = cpu_to_be16(HCA_CAP_OPMOD_GET_CUR);
- err = mlx5_cmd_exec(dev, &query_ctx, sizeof(query_ctx),
- query_out, sizeof(*query_out));
+static int set_caps(struct mlx5_core_dev *dev, void *in, int in_sz)
+{
+ u32 out[MLX5_ST_SZ_DW(set_hca_cap_out)];
+ int err;
+
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(set_hca_cap_in, in, opcode, MLX5_CMD_OP_SET_HCA_CAP);
+ err = mlx5_cmd_exec(dev, in, in_sz, out, sizeof(out));
if (err)
- goto query_ex;
+ return err;
- err = mlx5_cmd_status_to_err(&query_out->hdr);
- if (err) {
- mlx5_core_warn(dev, "query hca cap failed, %d\n", err);
+ err = mlx5_cmd_status_to_err_v2(out);
+
+ return err;
+}
+
+static int handle_hca_cap(struct mlx5_core_dev *dev)
+{
+ void *set_ctx = NULL;
+ struct mlx5_profile *prof = dev->profile;
+ struct mlx5_caps *cur_caps = NULL;
+ struct mlx5_caps *max_caps = NULL;
+ int err = -ENOMEM;
+ int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
+
+ set_ctx = kzalloc(set_sz, GFP_KERNEL);
+ if (!set_ctx)
goto query_ex;
- }
- copy_rw_fields(&set_ctx->hca_cap, &query_out->hca_cap);
+ max_caps = kzalloc(sizeof(*max_caps), GFP_KERNEL);
+ if (!max_caps)
+ goto query_ex;
- if (dev->profile && dev->profile->mask & MLX5_PROF_MASK_QP_SIZE)
- set_ctx->hca_cap.log_max_qp = dev->profile->log_max_qp;
+ cur_caps = kzalloc(sizeof(*cur_caps), GFP_KERNEL);
+ if (!cur_caps)
+ goto query_ex;
- flags = be64_to_cpu(query_out->hca_cap.flags);
- /* disable checksum */
- flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM;
-
- set_ctx->hca_cap.flags = cpu_to_be64(flags);
- memset(&set_out, 0, sizeof(set_out));
- set_ctx->hca_cap.log_uar_page_sz = cpu_to_be16(PAGE_SHIFT - 12);
- set_ctx->hdr.opcode = cpu_to_be16(MLX5_CMD_OP_SET_HCA_CAP);
- err = mlx5_cmd_exec(dev, set_ctx, sizeof(*set_ctx),
- &set_out, sizeof(set_out));
- if (err) {
- mlx5_core_warn(dev, "set hca cap failed, %d\n", err);
+ err = mlx5_core_get_caps(dev, max_caps, HCA_CAP_OPMOD_GET_MAX);
+ if (err)
goto query_ex;
- }
- err = mlx5_cmd_status_to_err(&set_out.hdr);
+ err = mlx5_core_get_caps(dev, cur_caps, HCA_CAP_OPMOD_GET_CUR);
if (err)
goto query_ex;
+ /* we limit the size of the pkey table to 128 entries for now */
+ cur_caps->gen.pkey_table_size = 128;
+
+ if (prof->mask & MLX5_PROF_MASK_QP_SIZE)
+ cur_caps->gen.log_max_qp = prof->log_max_qp;
+
+ /* disable checksum */
+ cur_caps->gen.flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM;
+
+ copy_rw_fields(MLX5_ADDR_OF(set_hca_cap_in, set_ctx, hca_capability_struct),
+ cur_caps);
+ err = set_caps(dev, set_ctx, set_sz);
+
query_ex:
- kfree(query_out);
+ kfree(cur_caps);
+ kfree(max_caps);
kfree(set_ctx);
return err;
@@ -782,6 +897,7 @@ static void remove_one(struct pci_dev *pdev)
static const struct pci_device_id mlx5_core_pci_table[] = {
{ PCI_VDEVICE(MELLANOX, 4113) }, /* MT4113 Connect-IB */
+ { PCI_VDEVICE(MELLANOX, 4115) }, /* ConnectX-4 */
{ 0, }
};
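to_fw_pkey_sz() and get_pkey_table_size() in the main.c hunk above are inverse encodings of the pkey table size, stored by firmware as log2(size / minimum). A sketch of the round trip, assuming the minimum table size is 128 entries as the switch table suggests:

#include <stdint.h>
#include <stdio.h>

#define MIN_PKEY_TABLE_SIZE	128	/* assumed MLX5_MIN_PKEY_TABLE_SIZE */
#define MAX_LOG_PKEY_TABLE	5	/* assumed MLX5_MAX_LOG_PKEY_TABLE */

/* Encode a table size as the firmware exponent: 128 -> 0 ... 4096 -> 5. */
static uint16_t to_fw_pkey_sz(uint32_t size)
{
	uint16_t fw = 0;

	while (size > MIN_PKEY_TABLE_SIZE && fw < MAX_LOG_PKEY_TABLE) {
		size >>= 1;
		fw++;
	}
	return fw;
}

/* Decode the firmware exponent back into a table size. */
static uint16_t from_fw_pkey_sz(int fw)
{
	if (fw > MAX_LOG_PKEY_TABLE)
		return 0;
	return MIN_PKEY_TABLE_SIZE << fw;
}

int main(void)
{
	for (uint32_t size = 128; size <= 4096; size <<= 1)
		printf("size %4u -> fw %u -> size %u\n",
		       size, to_fw_pkey_sz(size),
		       from_fw_pkey_sz(to_fw_pkey_sz(size)));
	return 0;
}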
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
index 8145b46..5261a2b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
@@ -39,28 +39,53 @@
#include "mlx5_core.h"
-void mlx5_qp_event(struct mlx5_core_dev *dev, u32 qpn, int event_type)
+static struct mlx5_core_rsc_common *mlx5_get_rsc(struct mlx5_core_dev *dev,
+ u32 rsn)
{
struct mlx5_qp_table *table = &dev->priv.qp_table;
- struct mlx5_core_qp *qp;
+ struct mlx5_core_rsc_common *common;
spin_lock(&table->lock);
- qp = radix_tree_lookup(&table->tree, qpn);
- if (qp)
- atomic_inc(&qp->refcount);
+ common = radix_tree_lookup(&table->tree, rsn);
+ if (common)
+ atomic_inc(&common->refcount);
spin_unlock(&table->lock);
- if (!qp) {
- mlx5_core_warn(dev, "Async event for bogus QP 0x%x\n", qpn);
- return;
+ if (!common) {
+ mlx5_core_warn(dev, "Async event for bogus resource 0x%x\n",
+ rsn);
+ return NULL;
}
+ return common;
+}
- qp->event(qp, event_type);
+void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common)
+{
+ if (atomic_dec_and_test(&common->refcount))
+ complete(&common->free);
+}
+
+void mlx5_rsc_event(struct mlx5_core_dev *dev, u32 rsn, int event_type)
+{
+ struct mlx5_core_rsc_common *common = mlx5_get_rsc(dev, rsn);
+ struct mlx5_core_qp *qp;
+
+ if (!common)
+ return;
+
+ switch (common->res) {
+ case MLX5_RES_QP:
+ qp = (struct mlx5_core_qp *)common;
+ qp->event(qp, event_type);
+ break;
+
+ default:
+ mlx5_core_warn(dev, "invalid resource type for 0x%x\n", rsn);
+ }
- if (atomic_dec_and_test(&qp->refcount))
- complete(&qp->free);
+ mlx5_core_put_rsc(common);
}
int mlx5_core_create_qp(struct mlx5_core_dev *dev,
@@ -92,6 +117,7 @@ int mlx5_core_create_qp(struct mlx5_core_dev *dev,
qp->qpn = be32_to_cpu(out.qpn) & 0xffffff;
mlx5_core_dbg(dev, "qpn = 0x%x\n", qp->qpn);
+ qp->common.res = MLX5_RES_QP;
spin_lock_irq(&table->lock);
err = radix_tree_insert(&table->tree, qp->qpn, qp);
spin_unlock_irq(&table->lock);
@@ -106,9 +132,9 @@ int mlx5_core_create_qp(struct mlx5_core_dev *dev,
qp->qpn);
qp->pid = current->pid;
- atomic_set(&qp->refcount, 1);
+ atomic_set(&qp->common.refcount, 1);
atomic_inc(&dev->num_qps);
- init_completion(&qp->free);
+ init_completion(&qp->common.free);
return 0;
@@ -138,9 +164,8 @@ int mlx5_core_destroy_qp(struct mlx5_core_dev *dev,
radix_tree_delete(&table->tree, qp->qpn);
spin_unlock_irqrestore(&table->lock, flags);
- if (atomic_dec_and_test(&qp->refcount))
- complete(&qp->free);
- wait_for_completion(&qp->free);
+ mlx5_core_put_rsc((struct mlx5_core_rsc_common *)qp);
+ wait_for_completion(&qp->common.free);
memset(&in, 0, sizeof(in));
memset(&out, 0, sizeof(out));
@@ -184,13 +209,10 @@ int mlx5_core_qp_modify(struct mlx5_core_dev *dev, enum mlx5_qp_state cur_state,
[MLX5_QP_STATE_RST] = MLX5_CMD_OP_2RST_QP,
[MLX5_QP_STATE_ERR] = MLX5_CMD_OP_2ERR_QP,
[MLX5_QP_STATE_RTS] = MLX5_CMD_OP_RTS2RTS_QP,
- [MLX5_QP_STATE_SQD] = MLX5_CMD_OP_RTS2SQD_QP,
},
[MLX5_QP_STATE_SQD] = {
[MLX5_QP_STATE_RST] = MLX5_CMD_OP_2RST_QP,
[MLX5_QP_STATE_ERR] = MLX5_CMD_OP_2ERR_QP,
- [MLX5_QP_STATE_RTS] = MLX5_CMD_OP_SQD2RTS_QP,
- [MLX5_QP_STATE_SQD] = MLX5_CMD_OP_SQD2SQD_QP,
},
[MLX5_QP_STATE_SQER] = {
[MLX5_QP_STATE_RST] = MLX5_CMD_OP_2RST_QP,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/uar.c b/drivers/net/ethernet/mellanox/mlx5/core/uar.c
index 68f5d9c..0a6348c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/uar.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/uar.c
@@ -174,11 +174,11 @@ int mlx5_alloc_uuars(struct mlx5_core_dev *dev, struct mlx5_uuar_info *uuari)
for (i = 0; i < tot_uuars; i++) {
bf = &uuari->bfs[i];
- bf->buf_size = dev->caps.bf_reg_size / 2;
+ bf->buf_size = dev->caps.gen.bf_reg_size / 2;
bf->uar = &uuari->uars[i / MLX5_BF_REGS_PER_PAGE];
bf->regreg = uuari->uars[i / MLX5_BF_REGS_PER_PAGE].map;
bf->reg = NULL; /* Add WC support */
- bf->offset = (i % MLX5_BF_REGS_PER_PAGE) * dev->caps.bf_reg_size +
+ bf->offset = (i % MLX5_BF_REGS_PER_PAGE) * dev->caps.gen.bf_reg_size +
MLX5_BF_OFFSET;
bf->need_lock = need_uuar_lock(i);
spin_lock_init(&bf->lock);
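The uar.c change only swaps dev->caps.bf_reg_size for dev->caps.gen.bf_reg_size, but the blue-flame carving it feeds is worth a worked example: each UAR page holds a handful of BF registers, and each register is split into two alternating half-size buffers, hence bf->buf_size = bf_reg_size / 2. The constants below are illustrative stand-ins, not the driver's real values:

#include <stdint.h>
#include <stdio.h>

/* Illustrative values; the real constants live in the mlx5 headers. */
#define BF_REGS_PER_PAGE	4
#define BF_OFFSET		0x800
#define BF_REG_SIZE		512	/* dev->caps.gen.bf_reg_size */

int main(void)
{
	for (int i = 0; i < 8; i++) {
		int uar_index = i / BF_REGS_PER_PAGE;
		unsigned long offset = (i % BF_REGS_PER_PAGE) * BF_REG_SIZE +
				       BF_OFFSET;

		printf("bf[%d]: uar %d, offset 0x%lx, buf_size %d\n",
		       i, uar_index, offset, BF_REG_SIZE / 2);
	}
	return 0;
}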
diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
index 2f12c88..bde1b70 100644
--- a/drivers/net/ethernet/moxa/moxart_ether.c
+++ b/drivers/net/ethernet/moxa/moxart_ether.c
@@ -511,7 +511,6 @@ static int moxart_mac_probe(struct platform_device *pdev)
goto init_fail;
}
- ether_setup(ndev);
ndev->netdev_ops = &moxart_netdev_ops;
netif_napi_add(ndev, &priv->napi, moxart_rx_poll, RX_DESC_NUM);
ndev->priv_flags |= IFF_UNICAST_FLT;
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
index 4f40d7b..cc0485e 100644
--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
+++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
@@ -3537,7 +3537,7 @@ static void vxge_device_unregister(struct __vxge_hw_device *hldev)
vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d", vdev->ndev->name,
__func__, __LINE__);
- strncpy(buf, dev->name, IFNAMSIZ);
+ strlcpy(buf, dev->name, IFNAMSIZ);
flush_work(&vdev->reset_task);
diff --git a/drivers/net/ethernet/netx-eth.c b/drivers/net/ethernet/netx-eth.c
index 31eb911..8176c8a 100644
--- a/drivers/net/ethernet/netx-eth.c
+++ b/drivers/net/ethernet/netx-eth.c
@@ -315,8 +315,6 @@ static int netx_eth_enable(struct net_device *ndev)
unsigned int mac4321, mac65;
int running, i;
- ether_setup(ndev);
-
ndev->netdev_ops = &netx_eth_netdev_ops;
ndev->watchdog_timeo = msecs_to_jiffies(5000);
diff --git a/drivers/net/ethernet/nuvoton/w90p910_ether.c b/drivers/net/ethernet/nuvoton/w90p910_ether.c
index 79645f7..379b7fb 100644
--- a/drivers/net/ethernet/nuvoton/w90p910_ether.c
+++ b/drivers/net/ethernet/nuvoton/w90p910_ether.c
@@ -943,7 +943,6 @@ static int w90p910_ether_setup(struct net_device *dev)
{
struct w90p910_ether *ether = netdev_priv(dev);
- ether_setup(dev);
dev->netdev_ops = &w90p910_ether_netdev_ops;
dev->ethtool_ops = &w90p910_ether_ethtool_ops;
diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 925b296..f39cae6 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -1481,7 +1481,7 @@ static int phy_init(struct net_device *dev)
}
/* phy vendor specific configuration */
- if ((np->phy_oui == PHY_OUI_CICADA)) {
+ if (np->phy_oui == PHY_OUI_CICADA) {
if (init_cicada(dev, np, phyinterface)) {
netdev_info(dev, "%s: phy init failed\n",
pci_name(np->pci_dev));
diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
index a44a03c..66fd868 100644
--- a/drivers/net/ethernet/nxp/lpc_eth.c
+++ b/drivers/net/ethernet/nxp/lpc_eth.c
@@ -1377,9 +1377,6 @@ static int lpc_eth_drv_probe(struct platform_device *pdev)
goto err_out_iounmap;
}
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(ndev);
-
/* Setup driver functions */
ndev->netdev_ops = &lpc_netdev_ops;
ndev->ethtool_ops = &lpc_eth_ethtool_ops;
diff --git a/drivers/net/ethernet/packetengines/yellowfin.c b/drivers/net/ethernet/packetengines/yellowfin.c
index 2d6b148..fa2db41 100644
--- a/drivers/net/ethernet/packetengines/yellowfin.c
+++ b/drivers/net/ethernet/packetengines/yellowfin.c
@@ -693,11 +693,11 @@ static void yellowfin_tx_timeout(struct net_device *dev)
/* Note: these should be KERN_DEBUG. */
if (yellowfin_debug) {
int i;
- pr_warning(" Rx ring %p: ", yp->rx_ring);
+ pr_warn(" Rx ring %p: ", yp->rx_ring);
for (i = 0; i < RX_RING_SIZE; i++)
pr_cont(" %08x", yp->rx_ring[i].result_status);
pr_cont("\n");
- pr_warning(" Tx ring %p: ", yp->tx_ring);
+ pr_warn(" Tx ring %p: ", yp->tx_ring);
for (i = 0; i < TX_RING_SIZE; i++)
pr_cont(" %04x /%08x",
yp->tx_status[i].tx_errs,
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
index 5ec5a2b..0b2a1cc 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
@@ -1475,9 +1475,8 @@ netxen_nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
u32 val;
if (pdev->revision >= NX_P3_A0 && pdev->revision <= NX_P3_B1) {
- pr_warning("%s: chip revisions between 0x%x-0x%x "
- "will not be enabled.\n",
- module_name(THIS_MODULE), NX_P3_A0, NX_P3_B1);
+ pr_warn("%s: chip revisions between 0x%x-0x%x will not be enabled\n",
+ module_name(THIS_MODULE), NX_P3_A0, NX_P3_B1);
return -ENODEV;
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
index b84f5ea..e56c1bb 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
@@ -39,8 +39,8 @@
#define _QLCNIC_LINUX_MAJOR 5
#define _QLCNIC_LINUX_MINOR 3
-#define _QLCNIC_LINUX_SUBVERSION 61
-#define QLCNIC_LINUX_VERSIONID "5.3.61"
+#define _QLCNIC_LINUX_SUBVERSION 62
+#define QLCNIC_LINUX_VERSIONID "5.3.62"
#define QLCNIC_DRV_IDC_VER 0x01
#define QLCNIC_DRIVER_VERSION ((_QLCNIC_LINUX_MAJOR << 16) |\
(_QLCNIC_LINUX_MINOR << 8) | (_QLCNIC_LINUX_SUBVERSION))
@@ -540,6 +540,8 @@ struct qlcnic_hardware_context {
u8 lb_mode;
u16 vxlan_port;
struct device *hwmon_dev;
+ u32 post_mode;
+ bool run_post;
};
struct qlcnic_adapter_stats {
@@ -2283,6 +2285,7 @@ extern const struct ethtool_ops qlcnic_ethtool_failed_ops;
#define PCI_DEVICE_ID_QLOGIC_QLE824X 0x8020
#define PCI_DEVICE_ID_QLOGIC_QLE834X 0x8030
+#define PCI_DEVICE_ID_QLOGIC_QLE8830 0x8830
#define PCI_DEVICE_ID_QLOGIC_VF_QLE834X 0x8430
#define PCI_DEVICE_ID_QLOGIC_QLE844X 0x8040
#define PCI_DEVICE_ID_QLOGIC_VF_QLE844X 0x8440
@@ -2307,6 +2310,7 @@ static inline bool qlcnic_83xx_check(struct qlcnic_adapter *adapter)
bool status;
status = ((device == PCI_DEVICE_ID_QLOGIC_QLE834X) ||
+ (device == PCI_DEVICE_ID_QLOGIC_QLE8830) ||
(device == PCI_DEVICE_ID_QLOGIC_QLE844X) ||
(device == PCI_DEVICE_ID_QLOGIC_VF_QLE844X) ||
(device == PCI_DEVICE_ID_QLOGIC_VF_QLE834X)) ? true : false;
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
index 476e499..840bf36 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
@@ -35,6 +35,35 @@ static void qlcnic_83xx_get_beacon_state(struct qlcnic_adapter *);
#define QLC_SKIP_INACTIVE_PCI_REGS 7
#define QLC_MAX_LEGACY_FUNC_SUPP 8
+/* 83xx Module type */
+#define QLC_83XX_MODULE_FIBRE_10GBASE_LRM 0x1 /* 10GBase-LRM */
+#define QLC_83XX_MODULE_FIBRE_10GBASE_LR 0x2 /* 10GBase-LR */
+#define QLC_83XX_MODULE_FIBRE_10GBASE_SR 0x3 /* 10GBase-SR */
+#define QLC_83XX_MODULE_DA_10GE_PASSIVE_CP 0x4 /* 10GE passive
+ * copper (compliant)
+ */
+#define QLC_83XX_MODULE_DA_10GE_ACTIVE_CP 0x5 /* 10GE active limiting
+ * copper (compliant)
+ */
+#define QLC_83XX_MODULE_DA_10GE_LEGACY_CP 0x6 /* 10GE passive copper
+ * (legacy, best effort)
+ */
+#define QLC_83XX_MODULE_FIBRE_1000BASE_SX 0x7 /* 1000Base-SX */
+#define QLC_83XX_MODULE_FIBRE_1000BASE_LX 0x8 /* 1000Base-LX */
+#define QLC_83XX_MODULE_FIBRE_1000BASE_CX 0x9 /* 1000Base-CX */
+#define QLC_83XX_MODULE_TP_1000BASE_T 0xa /* 1000Base-T */
+#define QLC_83XX_MODULE_DA_1GE_PASSIVE_CP 0xb /* 1GE passive copper
+ * (legacy, best effort)
+ */
+#define QLC_83XX_MODULE_UNKNOWN 0xf /* Unknown module type */
+
+/* Port types */
+#define QLC_83XX_10_CAPABLE BIT_8
+#define QLC_83XX_100_CAPABLE BIT_9
+#define QLC_83XX_1G_CAPABLE BIT_10
+#define QLC_83XX_10G_CAPABLE BIT_11
+#define QLC_83XX_AUTONEG_ENABLE BIT_15
+
static const struct qlcnic_mailbox_metadata qlcnic_83xx_mbx_tbl[] = {
{QLCNIC_CMD_CONFIGURE_IP_ADDR, 6, 1},
{QLCNIC_CMD_CONFIG_INTRPT, 18, 34},
@@ -667,6 +696,7 @@ void qlcnic_83xx_write_crb(struct qlcnic_adapter *adapter, char *buf,
int qlcnic_83xx_get_port_info(struct qlcnic_adapter *adapter)
{
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
int status;
status = qlcnic_83xx_get_port_config(adapter);
@@ -674,13 +704,20 @@ int qlcnic_83xx_get_port_info(struct qlcnic_adapter *adapter)
dev_err(&adapter->pdev->dev,
"Get Port Info failed\n");
} else {
- if (QLC_83XX_SFP_10G_CAPABLE(adapter->ahw->port_config))
- adapter->ahw->port_type = QLCNIC_XGBE;
- else
- adapter->ahw->port_type = QLCNIC_GBE;
- if (QLC_83XX_AUTONEG(adapter->ahw->port_config))
- adapter->ahw->link_autoneg = AUTONEG_ENABLE;
+ if (ahw->port_config & QLC_83XX_10G_CAPABLE) {
+ ahw->port_type = QLCNIC_XGBE;
+ } else if (ahw->port_config & QLC_83XX_10_CAPABLE ||
+ ahw->port_config & QLC_83XX_100_CAPABLE ||
+ ahw->port_config & QLC_83XX_1G_CAPABLE) {
+ ahw->port_type = QLCNIC_GBE;
+ } else {
+ ahw->port_type = QLCNIC_XGBE;
+ }
+
+ if (QLC_83XX_AUTONEG(ahw->port_config))
+ ahw->link_autoneg = AUTONEG_ENABLE;
+
}
return status;
}
@@ -2664,7 +2701,7 @@ static int qlcnic_83xx_poll_flash_status_reg(struct qlcnic_adapter *adapter)
QLC_83XX_FLASH_STATUS_READY)
break;
- msleep(QLC_83XX_FLASH_STATUS_REG_POLL_DELAY);
+ usleep_range(1000, 1100);
} while (--retries);
if (!retries)
@@ -3176,22 +3213,33 @@ int qlcnic_83xx_test_link(struct qlcnic_adapter *adapter)
break;
}
config = cmd.rsp.arg[3];
- if (QLC_83XX_SFP_PRESENT(config)) {
- switch (ahw->module_type) {
- case LINKEVENT_MODULE_OPTICAL_UNKNOWN:
- case LINKEVENT_MODULE_OPTICAL_SRLR:
- case LINKEVENT_MODULE_OPTICAL_LRM:
- case LINKEVENT_MODULE_OPTICAL_SFP_1G:
- ahw->supported_type = PORT_FIBRE;
- break;
- case LINKEVENT_MODULE_TWINAX_UNSUPPORTED_CABLE:
- case LINKEVENT_MODULE_TWINAX_UNSUPPORTED_CABLELEN:
- case LINKEVENT_MODULE_TWINAX:
- ahw->supported_type = PORT_TP;
- break;
- default:
- ahw->supported_type = PORT_OTHER;
- }
+ switch (QLC_83XX_SFP_MODULE_TYPE(config)) {
+ case QLC_83XX_MODULE_FIBRE_10GBASE_LRM:
+ case QLC_83XX_MODULE_FIBRE_10GBASE_LR:
+ case QLC_83XX_MODULE_FIBRE_10GBASE_SR:
+ ahw->supported_type = PORT_FIBRE;
+ ahw->port_type = QLCNIC_XGBE;
+ break;
+ case QLC_83XX_MODULE_FIBRE_1000BASE_SX:
+ case QLC_83XX_MODULE_FIBRE_1000BASE_LX:
+ case QLC_83XX_MODULE_FIBRE_1000BASE_CX:
+ ahw->supported_type = PORT_FIBRE;
+ ahw->port_type = QLCNIC_GBE;
+ break;
+ case QLC_83XX_MODULE_TP_1000BASE_T:
+ ahw->supported_type = PORT_TP;
+ ahw->port_type = QLCNIC_GBE;
+ break;
+ case QLC_83XX_MODULE_DA_10GE_PASSIVE_CP:
+ case QLC_83XX_MODULE_DA_10GE_ACTIVE_CP:
+ case QLC_83XX_MODULE_DA_10GE_LEGACY_CP:
+ case QLC_83XX_MODULE_DA_1GE_PASSIVE_CP:
+ ahw->supported_type = PORT_DA;
+ ahw->port_type = QLCNIC_XGBE;
+ break;
+ default:
+ ahw->supported_type = PORT_OTHER;
+ ahw->port_type = QLCNIC_XGBE;
}
if (config & 1)
err = 1;
@@ -3204,9 +3252,9 @@ out:
int qlcnic_83xx_get_settings(struct qlcnic_adapter *adapter,
struct ethtool_cmd *ecmd)
{
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
u32 config = 0;
int status = 0;
- struct qlcnic_hardware_context *ahw = adapter->ahw;
if (!test_bit(__QLCNIC_MAINTENANCE_MODE, &adapter->state)) {
/* Get port configuration info */
@@ -3229,20 +3277,41 @@ int qlcnic_83xx_get_settings(struct qlcnic_adapter *adapter,
ecmd->autoneg = AUTONEG_DISABLE;
}
- if (ahw->port_type == QLCNIC_XGBE) {
- ecmd->supported = SUPPORTED_10000baseT_Full;
- ecmd->advertising = ADVERTISED_10000baseT_Full;
+ ecmd->supported = (SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_1000baseT_Full |
+ SUPPORTED_10000baseT_Full |
+ SUPPORTED_Autoneg);
+
+ if (ecmd->autoneg == AUTONEG_ENABLE) {
+ if (ahw->port_config & QLC_83XX_10_CAPABLE)
+ ecmd->advertising |= SUPPORTED_10baseT_Full;
+ if (ahw->port_config & QLC_83XX_100_CAPABLE)
+ ecmd->advertising |= SUPPORTED_100baseT_Full;
+ if (ahw->port_config & QLC_83XX_1G_CAPABLE)
+ ecmd->advertising |= SUPPORTED_1000baseT_Full;
+ if (ahw->port_config & QLC_83XX_10G_CAPABLE)
+ ecmd->advertising |= SUPPORTED_10000baseT_Full;
+ if (ahw->port_config & QLC_83XX_AUTONEG_ENABLE)
+ ecmd->advertising |= ADVERTISED_Autoneg;
} else {
- ecmd->supported = (SUPPORTED_10baseT_Half |
- SUPPORTED_10baseT_Full |
- SUPPORTED_100baseT_Half |
- SUPPORTED_100baseT_Full |
- SUPPORTED_1000baseT_Half |
- SUPPORTED_1000baseT_Full);
- ecmd->advertising = (ADVERTISED_100baseT_Half |
- ADVERTISED_100baseT_Full |
- ADVERTISED_1000baseT_Half |
- ADVERTISED_1000baseT_Full);
+ switch (ahw->link_speed) {
+ case SPEED_10:
+ ecmd->advertising = SUPPORTED_10baseT_Full;
+ break;
+ case SPEED_100:
+ ecmd->advertising = SUPPORTED_100baseT_Full;
+ break;
+ case SPEED_1000:
+ ecmd->advertising = SUPPORTED_1000baseT_Full;
+ break;
+ case SPEED_10000:
+ ecmd->advertising = SUPPORTED_10000baseT_Full;
+ break;
+ default:
+ break;
+ }
+
}
switch (ahw->supported_type) {
@@ -3258,6 +3327,12 @@ int qlcnic_83xx_get_settings(struct qlcnic_adapter *adapter,
ecmd->port = PORT_TP;
ecmd->transceiver = XCVR_INTERNAL;
break;
+ case PORT_DA:
+ ecmd->supported |= SUPPORTED_FIBRE;
+ ecmd->advertising |= ADVERTISED_FIBRE;
+ ecmd->port = PORT_DA;
+ ecmd->transceiver = XCVR_EXTERNAL;
+ break;
default:
ecmd->supported |= SUPPORTED_FIBRE;
ecmd->advertising |= ADVERTISED_FIBRE;
@@ -3272,35 +3347,60 @@ int qlcnic_83xx_get_settings(struct qlcnic_adapter *adapter,
int qlcnic_83xx_set_settings(struct qlcnic_adapter *adapter,
struct ethtool_cmd *ecmd)
{
- int status = 0;
+ struct qlcnic_hardware_context *ahw = adapter->ahw;
u32 config = adapter->ahw->port_config;
+ int status = 0;
- if (ecmd->autoneg)
- adapter->ahw->port_config |= BIT_15;
-
- switch (ethtool_cmd_speed(ecmd)) {
- case SPEED_10:
- adapter->ahw->port_config |= BIT_8;
- break;
- case SPEED_100:
- adapter->ahw->port_config |= BIT_9;
- break;
- case SPEED_1000:
- adapter->ahw->port_config |= BIT_10;
- break;
- case SPEED_10000:
- adapter->ahw->port_config |= BIT_11;
- break;
- default:
- return -EINVAL;
+ /* 83xx devices do not support Half duplex */
+ if (ecmd->duplex == DUPLEX_HALF) {
+ netdev_info(adapter->netdev,
+ "Half duplex mode not supported\n");
+ return -EINVAL;
}
+ if (ecmd->autoneg) {
+ ahw->port_config |= QLC_83XX_AUTONEG_ENABLE;
+ ahw->port_config |= (QLC_83XX_100_CAPABLE |
+ QLC_83XX_1G_CAPABLE |
+ QLC_83XX_10G_CAPABLE);
+ } else { /* force speed */
+ ahw->port_config &= ~QLC_83XX_AUTONEG_ENABLE;
+ switch (ethtool_cmd_speed(ecmd)) {
+ case SPEED_10:
+ ahw->port_config &= ~(QLC_83XX_100_CAPABLE |
+ QLC_83XX_1G_CAPABLE |
+ QLC_83XX_10G_CAPABLE);
+ ahw->port_config |= QLC_83XX_10_CAPABLE;
+ break;
+ case SPEED_100:
+ ahw->port_config &= ~(QLC_83XX_10_CAPABLE |
+ QLC_83XX_1G_CAPABLE |
+ QLC_83XX_10G_CAPABLE);
+ ahw->port_config |= QLC_83XX_100_CAPABLE;
+ break;
+ case SPEED_1000:
+ ahw->port_config &= ~(QLC_83XX_10_CAPABLE |
+ QLC_83XX_100_CAPABLE |
+ QLC_83XX_10G_CAPABLE);
+ ahw->port_config |= QLC_83XX_1G_CAPABLE;
+ break;
+ case SPEED_10000:
+ ahw->port_config &= ~(QLC_83XX_10_CAPABLE |
+ QLC_83XX_100_CAPABLE |
+ QLC_83XX_1G_CAPABLE);
+ ahw->port_config |= QLC_83XX_10G_CAPABLE;
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
status = qlcnic_83xx_set_port_config(adapter);
if (status) {
- dev_info(&adapter->pdev->dev,
- "Failed to Set Link Speed and autoneg.\n");
- adapter->ahw->port_config = config;
+ netdev_info(adapter->netdev,
+ "Failed to Set Link Speed and autoneg.\n");
+ ahw->port_config = config;
}
+
return status;
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
index 2bf101a..f3346a3 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
@@ -83,6 +83,7 @@
/* Firmware image definitions */
#define QLC_83XX_BOOTLOADER_FLASH_ADDR 0x10000
#define QLC_83XX_FW_FILE_NAME "83xx_fw.bin"
+#define QLC_83XX_POST_FW_FILE_NAME "83xx_post_fw.bin"
#define QLC_84XX_FW_FILE_NAME "84xx_fw.bin"
#define QLC_83XX_BOOT_FROM_FLASH 0
#define QLC_83XX_BOOT_FROM_FILE 0x12345678
@@ -360,7 +361,6 @@ enum qlcnic_83xx_states {
#define QLC_83XX_SFP_MODULE_TYPE(data) (((data) >> 4) & 0x1F)
#define QLC_83XX_SFP_CU_LENGTH(data) (LSB((data) >> 16))
#define QLC_83XX_SFP_TX_FAULT(data) ((data) & BIT_10)
-#define QLC_83XX_SFP_10G_CAPABLE(data) ((data) & BIT_11)
#define QLC_83XX_LINK_STATS(data) ((data) & BIT_0)
#define QLC_83XX_CURRENT_LINK_SPEED(data) (((data) >> 3) & 7)
#define QLC_83XX_LINK_PAUSE(data) (((data) >> 6) & 3)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
index 3172cdf..2bb48d5 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
@@ -2074,6 +2074,121 @@ static void qlcnic_83xx_init_hw(struct qlcnic_adapter *p_dev)
dev_err(&p_dev->pdev->dev, "%s: failed\n", __func__);
}
+/* POST FW related definitions */
+#define QLC_83XX_POST_SIGNATURE_REG 0x41602014
+#define QLC_83XX_POST_MODE_REG 0x41602018
+#define QLC_83XX_POST_FAST_MODE 0
+#define QLC_83XX_POST_MEDIUM_MODE 1
+#define QLC_83XX_POST_SLOW_MODE 2
+
+/* POST Timeout values in milliseconds */
+#define QLC_83XX_POST_FAST_MODE_TIMEOUT 690
+#define QLC_83XX_POST_MED_MODE_TIMEOUT 2930
+#define QLC_83XX_POST_SLOW_MODE_TIMEOUT 7500
+
+/* POST result values */
+#define QLC_83XX_POST_PASS 0xfffffff0
+#define QLC_83XX_POST_ASIC_STRESS_TEST_FAIL 0xffffffff
+#define QLC_83XX_POST_DDR_TEST_FAIL 0xfffffffe
+#define QLC_83XX_POST_ASIC_MEMORY_TEST_FAIL 0xfffffffc
+#define QLC_83XX_POST_FLASH_TEST_FAIL 0xfffffff8
+
+static int qlcnic_83xx_run_post(struct qlcnic_adapter *adapter)
+{
+ struct qlc_83xx_fw_info *fw_info = adapter->ahw->fw_info;
+ struct device *dev = &adapter->pdev->dev;
+ int timeout, count, ret = 0;
+ u32 signature;
+
+ /* Set timeout values with extra 2 seconds of buffer */
+ switch (adapter->ahw->post_mode) {
+ case QLC_83XX_POST_FAST_MODE:
+ timeout = QLC_83XX_POST_FAST_MODE_TIMEOUT + 2000;
+ break;
+ case QLC_83XX_POST_MEDIUM_MODE:
+ timeout = QLC_83XX_POST_MED_MODE_TIMEOUT + 2000;
+ break;
+ case QLC_83XX_POST_SLOW_MODE:
+ timeout = QLC_83XX_POST_SLOW_MODE_TIMEOUT + 2000;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ strncpy(fw_info->fw_file_name, QLC_83XX_POST_FW_FILE_NAME,
+ QLC_FW_FILE_NAME_LEN);
+
+ ret = request_firmware(&fw_info->fw, fw_info->fw_file_name, dev);
+ if (ret) {
+ dev_err(dev, "POST firmware can not be loaded, skipping POST\n");
+ return 0;
+ }
+
+ ret = qlcnic_83xx_copy_fw_file(adapter);
+ if (ret)
+ return ret;
+
+ /* clear QLC_83XX_POST_SIGNATURE_REG register */
+ qlcnic_ind_wr(adapter, QLC_83XX_POST_SIGNATURE_REG, 0);
+
+ /* Set POST mode */
+ qlcnic_ind_wr(adapter, QLC_83XX_POST_MODE_REG,
+ adapter->ahw->post_mode);
+
+ QLC_SHARED_REG_WR32(adapter, QLCNIC_FW_IMG_VALID,
+ QLC_83XX_BOOT_FROM_FILE);
+
+ qlcnic_83xx_start_hw(adapter);
+
+ count = 0;
+ do {
+ msleep(100);
+ count += 100;
+
+ signature = qlcnic_ind_rd(adapter, QLC_83XX_POST_SIGNATURE_REG);
+ if (signature == QLC_83XX_POST_PASS)
+ break;
+ } while (timeout > count);
+
+ if (timeout <= count) {
+ dev_err(dev, "POST timed out, signature = 0x%08x\n", signature);
+ return -EIO;
+ }
+
+ switch (signature) {
+ case QLC_83XX_POST_PASS:
+ dev_info(dev, "POST passed, Signature = 0x%08x\n", signature);
+ break;
+ case QLC_83XX_POST_ASIC_STRESS_TEST_FAIL:
+ dev_err(dev, "POST failed, Test case : ASIC STRESS TEST, Signature = 0x%08x\n",
+ signature);
+ ret = -EIO;
+ break;
+ case QLC_83XX_POST_DDR_TEST_FAIL:
+ dev_err(dev, "POST failed, Test case : DDT TEST, Signature = 0x%08x\n",
+ signature);
+ ret = -EIO;
+ break;
+ case QLC_83XX_POST_ASIC_MEMORY_TEST_FAIL:
+ dev_err(dev, "POST failed, Test case : ASIC MEMORY TEST, Signature = 0x%08x\n",
+ signature);
+ ret = -EIO;
+ break;
+ case QLC_83XX_POST_FLASH_TEST_FAIL:
+ dev_err(dev, "POST failed, Test case : FLASH TEST, Signature = 0x%08x\n",
+ signature);
+ ret = -EIO;
+ break;
+ default:
+ dev_err(dev, "POST failed, Test case : INVALID, Signature = 0x%08x\n",
+ signature);
+ ret = -EIO;
+ break;
+ }
+
+ return ret;
+}
+
static int qlcnic_83xx_load_fw_image_from_host(struct qlcnic_adapter *adapter)
{
struct qlc_83xx_fw_info *fw_info = adapter->ahw->fw_info;
@@ -2118,8 +2233,27 @@ static int qlcnic_83xx_restart_hw(struct qlcnic_adapter *adapter)
if (qlcnic_83xx_copy_bootloader(adapter))
return err;
+
+ /* Check if POST needs to be run */
+ if (adapter->ahw->run_post) {
+ err = qlcnic_83xx_run_post(adapter);
+ if (err)
+ return err;
+
+ /* No need to run POST in next reset sequence */
+ adapter->ahw->run_post = false;
+
+ /* Again reset the adapter to load regular firmware */
+ qlcnic_83xx_stop_hw(adapter);
+ qlcnic_83xx_init_hw(adapter);
+
+ err = qlcnic_83xx_copy_bootloader(adapter);
+ if (err)
+ return err;
+ }
+
/* Boot either flash image or firmware image from host file system */
- if (qlcnic_load_fw_file) {
+ if (qlcnic_load_fw_file == 1) {
if (qlcnic_83xx_load_fw_image_from_host(adapter))
return err;
} else {
@@ -2283,6 +2417,7 @@ static int qlcnic_83xx_get_fw_info(struct qlcnic_adapter *adapter)
fw_info = ahw->fw_info;
switch (pdev->device) {
case PCI_DEVICE_ID_QLOGIC_QLE834X:
+ case PCI_DEVICE_ID_QLOGIC_QLE8830:
strncpy(fw_info->fw_file_name, QLC_83XX_FW_FILE_NAME,
QLC_FW_FILE_NAME_LEN);
break;
@@ -2327,6 +2462,25 @@ int qlcnic_83xx_init(struct qlcnic_adapter *adapter, int pci_using_dac)
adapter->rx_mac_learn = false;
ahw->msix_supported = !!qlcnic_use_msi_x;
+ /* Check if POST needs to be run */
+ switch (qlcnic_load_fw_file) {
+ case 2:
+ ahw->post_mode = QLC_83XX_POST_FAST_MODE;
+ ahw->run_post = true;
+ break;
+ case 3:
+ ahw->post_mode = QLC_83XX_POST_MEDIUM_MODE;
+ ahw->run_post = true;
+ break;
+ case 4:
+ ahw->post_mode = QLC_83XX_POST_SLOW_MODE;
+ ahw->run_post = true;
+ break;
+ default:
+ ahw->run_post = false;
+ break;
+ }
+
qlcnic_83xx_init_rings(adapter);
err = qlcnic_83xx_init_mailbox_work(adapter);
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
index 03cd4c3..69b46c0 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
@@ -341,7 +341,7 @@ qlcnic_pcie_sem_lock(struct qlcnic_adapter *adapter, int sem, u32 id_reg)
}
return -EIO;
}
- msleep(1);
+ usleep_range(1000, 1500);
}
if (id_reg)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
index c4262c2..be41e4c 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
@@ -537,7 +537,7 @@ int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter)
QLCWR32(adapter, QLCNIC_CRB_PEG_NET_3 + 0xc, 0);
QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x8, 0);
QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0xc, 0);
- msleep(1);
+ usleep_range(1000, 1500);
QLC_SHARED_REG_WR32(adapter, QLCNIC_PEG_HALT_STATUS1, 0);
QLC_SHARED_REG_WR32(adapter, QLCNIC_PEG_HALT_STATUS2, 0);
@@ -1198,7 +1198,7 @@ qlcnic_load_firmware(struct qlcnic_adapter *adapter)
flashaddr += 8;
}
}
- msleep(1);
+ usleep_range(1000, 1500);
QLCWR32(adapter, QLCNIC_CRB_PEG_NET_0 + 0x18, 0x1020);
QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0x80001e);
@@ -1295,7 +1295,7 @@ next:
rc = qlcnic_validate_firmware(adapter);
if (rc != 0) {
release_firmware(adapter->fw);
- msleep(1);
+ usleep_range(1000, 1500);
goto next;
}
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
index e45bf09..18e5de7 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
@@ -1753,7 +1753,7 @@ qlcnic_83xx_process_rcv(struct qlcnic_adapter *adapter,
if (qlcnic_encap_length(sts_data[1]) &&
skb->ip_summed == CHECKSUM_UNNECESSARY) {
- skb->encapsulation = 1;
+ skb->csum_level = 1;
adapter->stats.encap_rx_csummed++;
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index cf08b2d..f5e29f7 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -52,7 +52,7 @@ MODULE_PARM_DESC(auto_fw_reset, "Auto firmware reset (0=disabled, 1=enabled)");
module_param_named(auto_fw_reset, qlcnic_auto_fw_reset, int, 0644);
int qlcnic_load_fw_file;
-MODULE_PARM_DESC(load_fw_file, "Load firmware from (0=flash, 1=file)");
+MODULE_PARM_DESC(load_fw_file, "Load firmware from (0=flash, 1=file, 2=POST in fast mode, 3=POST in medium mode, 4=POST in slow mode)");
module_param_named(load_fw_file, qlcnic_load_fw_file, int, 0444);
static int qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
@@ -111,6 +111,7 @@ static u32 qlcnic_vlan_tx_check(struct qlcnic_adapter *adapter)
static const struct pci_device_id qlcnic_pci_tbl[] = {
ENTRY(PCI_DEVICE_ID_QLOGIC_QLE824X),
ENTRY(PCI_DEVICE_ID_QLOGIC_QLE834X),
+ ENTRY(PCI_DEVICE_ID_QLOGIC_QLE8830),
ENTRY(PCI_DEVICE_ID_QLOGIC_VF_QLE834X),
ENTRY(PCI_DEVICE_ID_QLOGIC_QLE844X),
ENTRY(PCI_DEVICE_ID_QLOGIC_VF_QLE844X),
@@ -228,6 +229,11 @@ static const struct qlcnic_board_info qlcnic_boards[] = {
PCI_DEVICE_ID_QLOGIC_QLE834X,
0x0, 0x0, "8300 Series 1/10GbE Controller" },
{ PCI_VENDOR_ID_QLOGIC,
+ PCI_DEVICE_ID_QLOGIC_QLE8830,
+ 0x0,
+ 0x0,
+ "8830 Series 1/10GbE Controller" },
+ { PCI_VENDOR_ID_QLOGIC,
PCI_DEVICE_ID_QLOGIC_QLE824X,
PCI_VENDOR_ID_QLOGIC,
0x203,
@@ -1131,6 +1137,7 @@ static void qlcnic_get_bar_length(u32 dev_id, ulong *bar)
*bar = QLCNIC_82XX_BAR0_LENGTH;
break;
case PCI_DEVICE_ID_QLOGIC_QLE834X:
+ case PCI_DEVICE_ID_QLOGIC_QLE8830:
case PCI_DEVICE_ID_QLOGIC_QLE844X:
case PCI_DEVICE_ID_QLOGIC_VF_QLE834X:
case PCI_DEVICE_ID_QLOGIC_VF_QLE844X:
@@ -2474,6 +2481,7 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
ahw->reg_tbl = (u32 *) qlcnic_reg_tbl;
break;
case PCI_DEVICE_ID_QLOGIC_QLE834X:
+ case PCI_DEVICE_ID_QLOGIC_QLE8830:
case PCI_DEVICE_ID_QLOGIC_QLE844X:
qlcnic_83xx_register_map(ahw);
break;
diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
index 3e96f26..6c904a6 100644
--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
@@ -1922,7 +1922,7 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter *qdev,
sbq_desc->p.skb = NULL;
skb_reserve(skb, NET_IP_ALIGN);
}
- while (length > 0) {
+ do {
lbq_desc = ql_get_curr_lchunk(qdev, rx_ring);
size = (length < rx_ring->lbq_buf_size) ? length :
rx_ring->lbq_buf_size;
@@ -1939,7 +1939,7 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter *qdev,
skb->truesize += size;
length -= size;
i++;
- }
+ } while (length > 0);
ql_update_mac_hdr_len(qdev, ib_mac_rsp, lbq_desc->p.pg_chunk.va,
&hlen);
__pskb_pull_tail(skb, hlen);
diff --git a/drivers/net/ethernet/qualcomm/Kconfig b/drivers/net/ethernet/qualcomm/Kconfig
new file mode 100644
index 0000000..f3a4714
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/Kconfig
@@ -0,0 +1,30 @@
+#
+# Qualcomm network device configuration
+#
+
+config NET_VENDOR_QUALCOMM
+ bool "Qualcomm devices"
+ default y
+ depends on SPI_MASTER && OF_GPIO
+ ---help---
+ If you have a network (Ethernet) card belonging to this class, say Y
+ and read the Ethernet-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Qualcomm cards. If you say Y, you will be asked
+ for your specific card in the following questions.
+
+if NET_VENDOR_QUALCOMM
+
+config QCA7000
+ tristate "Qualcomm Atheros QCA7000 support"
+ depends on SPI_MASTER && OF_GPIO
+ ---help---
+ This SPI protocol driver supports the Qualcomm Atheros QCA7000.
+
+ To compile this driver as a module, choose M here. The module
+ will be called qcaspi.
+
+endif # NET_VENDOR_QUALCOMM
diff --git a/drivers/net/ethernet/qualcomm/Makefile b/drivers/net/ethernet/qualcomm/Makefile
new file mode 100644
index 0000000..9da2d75
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/Makefile
@@ -0,0 +1,6 @@
+#
+# Makefile for the Qualcomm network device drivers.
+#
+
+obj-$(CONFIG_QCA7000) += qcaspi.o
+qcaspi-objs := qca_spi.o qca_framing.o qca_7k.o qca_debug.o
diff --git a/drivers/net/ethernet/qualcomm/qca_7k.c b/drivers/net/ethernet/qualcomm/qca_7k.c
new file mode 100644
index 0000000..f0066fb
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_7k.c
@@ -0,0 +1,149 @@
+/*
+ *
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+/* This module implements the Qualcomm Atheros SPI protocol for
+ * a kernel-based SPI device.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spi/spi.h>
+#include <linux/version.h>
+
+#include "qca_7k.h"
+
+void
+qcaspi_spi_error(struct qcaspi *qca)
+{
+ if (qca->sync != QCASPI_SYNC_READY)
+ return;
+
+ netdev_err(qca->net_dev, "spi error\n");
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qca->stats.spi_err++;
+}
+
+int
+qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
+{
+ __be16 rx_data;
+ __be16 tx_data;
+ struct spi_transfer *transfer;
+ struct spi_message *msg;
+ int ret;
+
+ tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
+
+ if (qca->legacy_mode) {
+ msg = &qca->spi_msg1;
+ transfer = &qca->spi_xfer1;
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ spi_sync(qca->spi_dev, msg);
+ } else {
+ msg = &qca->spi_msg2;
+ transfer = &qca->spi_xfer2[0];
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ }
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = &rx_data;
+ transfer->len = QCASPI_CMD_LEN;
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+ else
+ *result = be16_to_cpu(rx_data);
+
+ return ret;
+}
+
+int
+qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
+{
+ __be16 tx_data[2];
+ struct spi_transfer *transfer;
+ struct spi_message *msg;
+ int ret;
+
+ tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
+ tx_data[1] = cpu_to_be16(value);
+
+ if (qca->legacy_mode) {
+ msg = &qca->spi_msg1;
+ transfer = &qca->spi_xfer1;
+ transfer->tx_buf = &tx_data[0];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ spi_sync(qca->spi_dev, msg);
+ } else {
+ msg = &qca->spi_msg2;
+ transfer = &qca->spi_xfer2[0];
+ transfer->tx_buf = &tx_data[0];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ }
+ transfer->tx_buf = &tx_data[1];
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+
+ return ret;
+}
+
+int
+qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
+{
+ __be16 tx_data;
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ tx_data = cpu_to_be16(cmd);
+ transfer->len = sizeof(tx_data);
+ transfer->tx_buf = &tx_data;
+ transfer->rx_buf = NULL;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (!ret)
+ ret = msg->status;
+
+ if (ret)
+ qcaspi_spi_error(qca);
+
+ return ret;
+}
diff --git a/drivers/net/ethernet/qualcomm/qca_7k.h b/drivers/net/ethernet/qualcomm/qca_7k.h
new file mode 100644
index 0000000..1cad851
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_7k.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+/* Qualcomm Atheros SPI register definition.
+ *
+ * This module is designed to define the Qualcomm Atheros SPI
+ * register placeholders.
+ */
+
+#ifndef _QCA_7K_H
+#define _QCA_7K_H
+
+#include <linux/types.h>
+
+#include "qca_spi.h"
+
+#define QCA7K_SPI_READ (1 << 15)
+#define QCA7K_SPI_WRITE (0 << 15)
+#define QCA7K_SPI_INTERNAL (1 << 14)
+#define QCA7K_SPI_EXTERNAL (0 << 14)
+
+#define QCASPI_CMD_LEN 2
+#define QCASPI_HW_PKT_LEN 4
+#define QCASPI_HW_BUF_LEN 0xC5B
+
+/* SPI registers */
+#define SPI_REG_BFR_SIZE 0x0100
+#define SPI_REG_WRBUF_SPC_AVA 0x0200
+#define SPI_REG_RDBUF_BYTE_AVA 0x0300
+#define SPI_REG_SPI_CONFIG 0x0400
+#define SPI_REG_SPI_STATUS 0x0500
+#define SPI_REG_INTR_CAUSE 0x0C00
+#define SPI_REG_INTR_ENABLE 0x0D00
+#define SPI_REG_RDBUF_WATERMARK 0x1200
+#define SPI_REG_WRBUF_WATERMARK 0x1300
+#define SPI_REG_SIGNATURE 0x1A00
+#define SPI_REG_ACTION_CTRL 0x1B00
+
+/* SPI_CONFIG register definition */
+#define QCASPI_SLAVE_RESET_BIT (1 << 6)
+
+/* INTR_CAUSE/ENABLE register definition. */
+#define SPI_INT_WRBUF_BELOW_WM (1 << 10)
+#define SPI_INT_CPU_ON (1 << 6)
+#define SPI_INT_ADDR_ERR (1 << 3)
+#define SPI_INT_WRBUF_ERR (1 << 2)
+#define SPI_INT_RDBUF_ERR (1 << 1)
+#define SPI_INT_PKT_AVLBL (1 << 0)
+
+void qcaspi_spi_error(struct qcaspi *qca);
+int qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result);
+int qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value);
+int qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd);
+
+#endif /* _QCA_7K_H */
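
For readers unfamiliar with the QCA7K SPI protocol, the small fragment below (illustrative only, not part of the patch) shows how the 16-bit command word used by qcaspi_read_register()/qcaspi_write_register() is composed from the masks and register offsets defined above, and then sent big-endian on the wire.

	/* direction, target and register offset OR'ed into one word */
	u16 cmd = QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | SPI_REG_SIGNATURE;
	__be16 wire = cpu_to_be16(cmd);	/* 0x8000 | 0x4000 | 0x1A00 == 0xda00 */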
diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c
new file mode 100644
index 0000000..8e28234
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_debug.c
@@ -0,0 +1,311 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This file contains debugging routines for use in the QCA7K driver.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/ethtool.h>
+#include <linux/seq_file.h>
+#include <linux/types.h>
+
+#include "qca_7k.h"
+#include "qca_debug.h"
+
+#define QCASPI_MAX_REGS 0x20
+
+static const u16 qcaspi_spi_regs[] = {
+ SPI_REG_BFR_SIZE,
+ SPI_REG_WRBUF_SPC_AVA,
+ SPI_REG_RDBUF_BYTE_AVA,
+ SPI_REG_SPI_CONFIG,
+ SPI_REG_SPI_STATUS,
+ SPI_REG_INTR_CAUSE,
+ SPI_REG_INTR_ENABLE,
+ SPI_REG_RDBUF_WATERMARK,
+ SPI_REG_WRBUF_WATERMARK,
+ SPI_REG_SIGNATURE,
+ SPI_REG_ACTION_CTRL
+};
+
+/* The order of these strings must match the order of the fields in
+ * struct qcaspi_stats
+ * See qca_spi.h
+ */
+static const char qcaspi_gstrings_stats[][ETH_GSTRING_LEN] = {
+ "Triggered resets",
+ "Device resets",
+ "Reset timeouts",
+ "Read errors",
+ "Write errors",
+ "Read buffer errors",
+ "Write buffer errors",
+ "Out of memory",
+ "Write buffer misses",
+ "Transmit ring full",
+ "SPI errors",
+};
+
+#ifdef CONFIG_DEBUG_FS
+
+static int
+qcaspi_info_show(struct seq_file *s, void *what)
+{
+ struct qcaspi *qca = s->private;
+
+ seq_printf(s, "RX buffer size : %lu\n",
+ (unsigned long)qca->buffer_size);
+
+ seq_puts(s, "TX ring state : ");
+
+ if (qca->txr.skb[qca->txr.head] == NULL)
+ seq_puts(s, "empty");
+ else if (qca->txr.skb[qca->txr.tail])
+ seq_puts(s, "full");
+ else
+ seq_puts(s, "in use");
+
+ seq_puts(s, "\n");
+
+ seq_printf(s, "TX ring size : %u\n",
+ qca->txr.size);
+
+ seq_printf(s, "Sync state : %u (",
+ (unsigned int)qca->sync);
+ switch (qca->sync) {
+ case QCASPI_SYNC_UNKNOWN:
+ seq_puts(s, "QCASPI_SYNC_UNKNOWN");
+ break;
+ case QCASPI_SYNC_RESET:
+ seq_puts(s, "QCASPI_SYNC_RESET");
+ break;
+ case QCASPI_SYNC_READY:
+ seq_puts(s, "QCASPI_SYNC_READY");
+ break;
+ default:
+ seq_puts(s, "INVALID");
+ break;
+ }
+ seq_puts(s, ")\n");
+
+ seq_printf(s, "IRQ : %d\n",
+ qca->spi_dev->irq);
+ seq_printf(s, "INTR REQ : %u\n",
+ qca->intr_req);
+ seq_printf(s, "INTR SVC : %u\n",
+ qca->intr_svc);
+
+ seq_printf(s, "SPI max speed : %lu\n",
+ (unsigned long)qca->spi_dev->max_speed_hz);
+ seq_printf(s, "SPI mode : %x\n",
+ qca->spi_dev->mode);
+ seq_printf(s, "SPI chip select : %u\n",
+ (unsigned int)qca->spi_dev->chip_select);
+ seq_printf(s, "SPI legacy mode : %u\n",
+ (unsigned int)qca->legacy_mode);
+ seq_printf(s, "SPI burst length : %u\n",
+ (unsigned int)qca->burst_len);
+
+ return 0;
+}
+
+static int
+qcaspi_info_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, qcaspi_info_show, inode->i_private);
+}
+
+static const struct file_operations qcaspi_info_ops = {
+ .open = qcaspi_info_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+void
+qcaspi_init_device_debugfs(struct qcaspi *qca)
+{
+ struct dentry *device_root;
+
+ device_root = debugfs_create_dir(dev_name(&qca->net_dev->dev), NULL);
+ qca->device_root = device_root;
+
+ if (IS_ERR(device_root) || !device_root) {
+ pr_warn("failed to create debugfs directory for %s\n",
+ dev_name(&qca->net_dev->dev));
+ return;
+ }
+ debugfs_create_file("info", S_IFREG | S_IRUGO, device_root, qca,
+ &qcaspi_info_ops);
+}
+
+void
+qcaspi_remove_device_debugfs(struct qcaspi *qca)
+{
+ debugfs_remove_recursive(qca->device_root);
+}
+
+#else /* CONFIG_DEBUG_FS */
+
+void
+qcaspi_init_device_debugfs(struct qcaspi *qca)
+{
+}
+
+void
+qcaspi_remove_device_debugfs(struct qcaspi *qca)
+{
+}
+
+#endif
+
+static void
+qcaspi_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *p)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ strlcpy(p->driver, QCASPI_DRV_NAME, sizeof(p->driver));
+ strlcpy(p->version, QCASPI_DRV_VERSION, sizeof(p->version));
+ strlcpy(p->fw_version, "QCA7000", sizeof(p->fw_version));
+ strlcpy(p->bus_info, dev_name(&qca->spi_dev->dev),
+ sizeof(p->bus_info));
+}
+
+static int
+qcaspi_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ cmd->transceiver = XCVR_INTERNAL;
+ cmd->supported = SUPPORTED_10baseT_Half;
+ ethtool_cmd_speed_set(cmd, SPEED_10);
+ cmd->duplex = DUPLEX_HALF;
+ cmd->port = PORT_OTHER;
+ cmd->autoneg = AUTONEG_DISABLE;
+
+ return 0;
+}
+
+static void
+qcaspi_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *data)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ struct qcaspi_stats *st = &qca->stats;
+
+ memcpy(data, st, ARRAY_SIZE(qcaspi_gstrings_stats) * sizeof(u64));
+}
+
+static void
+qcaspi_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+{
+ switch (stringset) {
+ case ETH_SS_STATS:
+ memcpy(buf, &qcaspi_gstrings_stats,
+ sizeof(qcaspi_gstrings_stats));
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
+}
+
+static int
+qcaspi_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(qcaspi_gstrings_stats);
+ default:
+ return -EINVAL;
+ }
+}
+
+static int
+qcaspi_get_regs_len(struct net_device *dev)
+{
+ return sizeof(u32) * QCASPI_MAX_REGS;
+}
+
+static void
+qcaspi_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *p)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ u32 *regs_buff = p;
+ unsigned int i;
+
+ regs->version = 1;
+ memset(regs_buff, 0, sizeof(u32) * QCASPI_MAX_REGS);
+
+ for (i = 0; i < ARRAY_SIZE(qcaspi_spi_regs); i++) {
+ u16 offset, value;
+
+ qcaspi_read_register(qca, qcaspi_spi_regs[i], &value);
+ offset = qcaspi_spi_regs[i] >> 8;
+ regs_buff[offset] = value;
+ }
+}
+
+static void
+qcaspi_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ ring->rx_max_pending = 4;
+ ring->tx_max_pending = TX_RING_MAX_LEN;
+ ring->rx_pending = 4;
+ ring->tx_pending = qca->txr.count;
+}
+
+static int
+qcaspi_set_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ if ((ring->rx_pending) ||
+ (ring->rx_mini_pending) ||
+ (ring->rx_jumbo_pending))
+ return -EINVAL;
+
+ if (netif_running(dev))
+ qcaspi_netdev_close(dev);
+
+ qca->txr.count = max_t(u32, ring->tx_pending, TX_RING_MIN_LEN);
+ qca->txr.count = min_t(u16, qca->txr.count, TX_RING_MAX_LEN);
+
+ if (netif_running(dev))
+ qcaspi_netdev_open(dev);
+
+ return 0;
+}
+
+static const struct ethtool_ops qcaspi_ethtool_ops = {
+ .get_drvinfo = qcaspi_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_settings = qcaspi_get_settings,
+ .get_ethtool_stats = qcaspi_get_ethtool_stats,
+ .get_strings = qcaspi_get_strings,
+ .get_sset_count = qcaspi_get_sset_count,
+ .get_regs_len = qcaspi_get_regs_len,
+ .get_regs = qcaspi_get_regs,
+ .get_ringparam = qcaspi_get_ringparam,
+ .set_ringparam = qcaspi_set_ringparam,
+};
+
+void qcaspi_set_ethtool_ops(struct net_device *dev)
+{
+ dev->ethtool_ops = &qcaspi_ethtool_ops;
+}
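
One non-obvious detail in qcaspi_get_regs() above is the index derivation: each register value is stored in the dump slot given by the high byte of its SPI address. A tiny sketch of that mapping follows; the value is made up for illustration, the driver reads it from the chip.

	u32 regs_buff[QCASPI_MAX_REGS] = { 0 };
	u16 value = 0x1234;	/* illustrative value only */

	/* SPI_REG_SIGNATURE is 0x1A00, so its value lands in slot 0x1A */
	regs_buff[SPI_REG_SIGNATURE >> 8] = value;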
diff --git a/drivers/net/ethernet/qualcomm/qca_debug.h b/drivers/net/ethernet/qualcomm/qca_debug.h
new file mode 100644
index 0000000..46a7858
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_debug.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This file contains debugging routines for use in the QCA7K driver.
+ */
+
+#ifndef _QCA_DEBUG_H
+#define _QCA_DEBUG_H
+
+#include "qca_spi.h"
+
+void qcaspi_init_device_debugfs(struct qcaspi *qca);
+
+void qcaspi_remove_device_debugfs(struct qcaspi *qca);
+
+void qcaspi_set_ethtool_ops(struct net_device *dev);
+
+#endif /* _QCA_DEBUG_H */
diff --git a/drivers/net/ethernet/qualcomm/qca_framing.c b/drivers/net/ethernet/qualcomm/qca_framing.c
new file mode 100644
index 0000000..faa924c
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_framing.c
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) 2011, 2012, Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Atheros Ethernet framing. Every Ethernet frame is surrounded
+ * by an Atheros frame while transmitted over a serial channel.
+ */
+
+#include <linux/kernel.h>
+
+#include "qca_framing.h"
+
+u16
+qcafrm_create_header(u8 *buf, u16 length)
+{
+ __le16 len;
+
+ if (!buf)
+ return 0;
+
+ len = cpu_to_le16(length);
+
+ buf[0] = 0xAA;
+ buf[1] = 0xAA;
+ buf[2] = 0xAA;
+ buf[3] = 0xAA;
+ buf[4] = len & 0xff;
+ buf[5] = (len >> 8) & 0xff;
+ buf[6] = 0;
+ buf[7] = 0;
+
+ return QCAFRM_HEADER_LEN;
+}
+
+u16
+qcafrm_create_footer(u8 *buf)
+{
+ if (!buf)
+ return 0;
+
+ buf[0] = 0x55;
+ buf[1] = 0x55;
+ return QCAFRM_FOOTER_LEN;
+}
+
+/* Gather received bytes and try to extract a full Ethernet frame by
+ * following a simple state machine.
+ *
+ * Return: QCAFRM_GATHER No Ethernet frame fully received yet.
+ * QCAFRM_NOHEAD Header expected but not found.
+ * QCAFRM_INVLEN Atheros frame length is invalid
+ * QCAFRM_NOTAIL Footer expected but not found.
+ * > 0 Number of bytes in the fully received
+ * Ethernet frame
+ */
+
+s32
+qcafrm_fsm_decode(struct qcafrm_handle *handle, u8 *buf, u16 buf_len, u8 recv_byte)
+{
+ s32 ret = QCAFRM_GATHER;
+ u16 len;
+
+ switch (handle->state) {
+ case QCAFRM_HW_LEN0:
+ case QCAFRM_HW_LEN1:
+ /* by default, just go to next state */
+ handle->state--;
+
+ if (recv_byte != 0x00) {
+ /* first two bytes of length must be 0 */
+ handle->state = QCAFRM_HW_LEN0;
+ }
+ break;
+ case QCAFRM_HW_LEN2:
+ case QCAFRM_HW_LEN3:
+ handle->state--;
+ break;
+ /* 4 bytes header pattern */
+ case QCAFRM_WAIT_AA1:
+ case QCAFRM_WAIT_AA2:
+ case QCAFRM_WAIT_AA3:
+ case QCAFRM_WAIT_AA4:
+ if (recv_byte != 0xAA) {
+ ret = QCAFRM_NOHEAD;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state--;
+ }
+ break;
+ /* 2 bytes length. */
+ /* Borrow offset field to hold length for now. */
+ case QCAFRM_WAIT_LEN_BYTE0:
+ handle->offset = recv_byte;
+ handle->state = QCAFRM_WAIT_LEN_BYTE1;
+ break;
+ case QCAFRM_WAIT_LEN_BYTE1:
+ handle->offset = handle->offset | (recv_byte << 8);
+ handle->state = QCAFRM_WAIT_RSVD_BYTE1;
+ break;
+ case QCAFRM_WAIT_RSVD_BYTE1:
+ handle->state = QCAFRM_WAIT_RSVD_BYTE2;
+ break;
+ case QCAFRM_WAIT_RSVD_BYTE2:
+ len = handle->offset;
+ if (len > buf_len || len < QCAFRM_ETHMINLEN) {
+ ret = QCAFRM_INVLEN;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state = (enum qcafrm_state)(len + 1);
+ /* Remaining number of bytes. */
+ handle->offset = 0;
+ }
+ break;
+ default:
+ /* Receiving Ethernet frame itself. */
+ buf[handle->offset] = recv_byte;
+ handle->offset++;
+ handle->state--;
+ break;
+ case QCAFRM_WAIT_551:
+ if (recv_byte != 0x55) {
+ ret = QCAFRM_NOTAIL;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ handle->state = QCAFRM_WAIT_552;
+ }
+ break;
+ case QCAFRM_WAIT_552:
+ if (recv_byte != 0x55) {
+ ret = QCAFRM_NOTAIL;
+ handle->state = QCAFRM_HW_LEN0;
+ } else {
+ ret = handle->offset;
+ /* Frame is fully received. */
+ handle->state = QCAFRM_HW_LEN0;
+ }
+ break;
+ }
+
+ return ret;
+}
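
To make the framing concrete, here is a small worked example (not part of the patch) of what qcafrm_create_header() and qcafrm_create_footer() produce for a 60-byte Ethernet frame.

	u8 hdr[QCAFRM_HEADER_LEN];
	u8 ftr[QCAFRM_FOOTER_LEN];

	/* header: AA AA AA AA 3C 00 00 00
	 * (four 0xAA signature bytes, length 0x003C == 60 little endian,
	 * two reserved zero bytes)
	 */
	qcafrm_create_header(hdr, 60);

	/* footer: 55 55 */
	qcafrm_create_footer(ftr);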
diff --git a/drivers/net/ethernet/qualcomm/qca_framing.h b/drivers/net/ethernet/qualcomm/qca_framing.h
new file mode 100644
index 0000000..5d96595
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_framing.h
@@ -0,0 +1,134 @@
+/*
+ * Copyright (c) 2011, 2012, Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Atheros Ethernet framing. Every Ethernet frame is surrounded by an Atheros
+ * frame while transmitted over a serial channel.
+ */
+
+#ifndef _QCA_FRAMING_H
+#define _QCA_FRAMING_H
+
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <linux/types.h>
+
+/* Frame is currently being received */
+#define QCAFRM_GATHER 0
+
+/* No header byte while expecting it */
+#define QCAFRM_NOHEAD (QCAFRM_ERR_BASE - 1)
+
+/* No tailer byte while expecting it */
+#define QCAFRM_NOTAIL (QCAFRM_ERR_BASE - 2)
+
+/* Frame length is invalid */
+#define QCAFRM_INVLEN (QCAFRM_ERR_BASE - 3)
+
+/* Frame is invalid */
+#define QCAFRM_INVFRAME (QCAFRM_ERR_BASE - 4)
+
+/* Min/Max Ethernet MTU */
+#define QCAFRM_ETHMINMTU 46
+#define QCAFRM_ETHMAXMTU 1500
+
+/* Min/Max frame lengths */
+#define QCAFRM_ETHMINLEN (QCAFRM_ETHMINMTU + ETH_HLEN)
+#define QCAFRM_ETHMAXLEN (QCAFRM_ETHMAXMTU + VLAN_ETH_HLEN)
+
+/* QCA7K header len */
+#define QCAFRM_HEADER_LEN 8
+
+/* QCA7K footer len */
+#define QCAFRM_FOOTER_LEN 2
+
+/* QCA7K Framing. */
+#define QCAFRM_ERR_BASE -1000
+
+enum qcafrm_state {
+ QCAFRM_HW_LEN0 = 0x8000,
+ QCAFRM_HW_LEN1 = QCAFRM_HW_LEN0 - 1,
+ QCAFRM_HW_LEN2 = QCAFRM_HW_LEN1 - 1,
+ QCAFRM_HW_LEN3 = QCAFRM_HW_LEN2 - 1,
+
+ /* Waiting first 0xAA of header */
+ QCAFRM_WAIT_AA1 = QCAFRM_HW_LEN3 - 1,
+
+ /* Waiting second 0xAA of header */
+ QCAFRM_WAIT_AA2 = QCAFRM_WAIT_AA1 - 1,
+
+ /* Waiting third 0xAA of header */
+ QCAFRM_WAIT_AA3 = QCAFRM_WAIT_AA2 - 1,
+
+ /* Waiting fourth 0xAA of header */
+ QCAFRM_WAIT_AA4 = QCAFRM_WAIT_AA3 - 1,
+
+ /* Waiting Byte 0-1 of length (little endian) */
+ QCAFRM_WAIT_LEN_BYTE0 = QCAFRM_WAIT_AA4 - 1,
+ QCAFRM_WAIT_LEN_BYTE1 = QCAFRM_WAIT_AA4 - 2,
+
+ /* Reserved bytes */
+ QCAFRM_WAIT_RSVD_BYTE1 = QCAFRM_WAIT_AA4 - 3,
+ QCAFRM_WAIT_RSVD_BYTE2 = QCAFRM_WAIT_AA4 - 4,
+
+ /* The frame length is used as the state until
+ * the end of the Ethernet frame.
+ * Waiting for first 0x55 of footer.
+ */
+ QCAFRM_WAIT_551 = 1,
+
+ /* Waiting for second 0x55 of footer */
+ QCAFRM_WAIT_552 = QCAFRM_WAIT_551 - 1
+};
+
+/* Structure to maintain the frame decoding during reception. */
+
+struct qcafrm_handle {
+ /* Current decoding state */
+ enum qcafrm_state state;
+
+ /* Offset in buffer (borrowed for length too) */
+ s16 offset;
+
+ /* Frame length as kept by this module */
+ u16 len;
+};
+
+u16 qcafrm_create_header(u8 *buf, u16 len);
+
+u16 qcafrm_create_footer(u8 *buf);
+
+static inline void qcafrm_fsm_init(struct qcafrm_handle *handle)
+{
+ handle->state = QCAFRM_HW_LEN0;
+}
+
+/* Gather received bytes and try to extract a full Ethernet frame
+ * by following a simple state machine.
+ *
+ * Return: QCAFRM_GATHER No Ethernet frame fully received yet.
+ * QCAFRM_NOHEAD Header expected but not found.
+ * QCAFRM_INVLEN QCA7K frame length is invalid
+ * QCAFRM_NOTAIL Footer expected but not found.
+ * > 0 Number of bytes in the fully received
+ * Ethernet frame
+ */
+
+s32 qcafrm_fsm_decode(struct qcafrm_handle *handle, u8 *buf, u16 buf_len, u8 recv_byte);
+
+#endif /* _QCA_FRAMING_H */
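
The decoder is meant to be fed one received byte at a time. The sketch below shows a minimal caller under the assumption that have_rx_byte(), next_rx_byte(), deliver_frame() and count_rx_error() are hypothetical helpers; the real driver does the equivalent inside qcaspi_receive() in qca_spi.c below.

	struct qcafrm_handle frm;
	u8 frame[QCAFRM_ETHMAXLEN];

	qcafrm_fsm_init(&frm);

	while (have_rx_byte()) {
		s32 ret = qcafrm_fsm_decode(&frm, frame, sizeof(frame),
					    next_rx_byte());

		if (ret > 0)
			deliver_frame(frame, ret);	/* complete Ethernet frame */
		else if (ret == QCAFRM_NOTAIL || ret == QCAFRM_INVLEN)
			count_rx_error();		/* framing error, frame dropped */
		/* QCAFRM_GATHER / QCAFRM_NOHEAD: keep feeding bytes */
	}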
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
new file mode 100644
index 0000000..2c811f6
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_spi.c
@@ -0,0 +1,991 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* This module implements the Qualcomm Atheros SPI protocol for
+ * a kernel-based SPI device; it is essentially an Ethernet-to-SPI
+ * serial converter.
+ */
+
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_arp.h>
+#include <linux/if_ether.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/netdevice.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_net.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spi/spi.h>
+#include <linux/types.h>
+#include <linux/version.h>
+
+#include "qca_7k.h"
+#include "qca_debug.h"
+#include "qca_framing.h"
+#include "qca_spi.h"
+
+#define MAX_DMA_BURST_LEN 5000
+
+/* Module parameters */
+#define QCASPI_CLK_SPEED_MIN 1000000
+#define QCASPI_CLK_SPEED_MAX 16000000
+#define QCASPI_CLK_SPEED 8000000
+static int qcaspi_clkspeed;
+module_param(qcaspi_clkspeed, int, 0);
+MODULE_PARM_DESC(qcaspi_clkspeed, "SPI bus clock speed (Hz). Use 1000000-16000000.");
+
+#define QCASPI_BURST_LEN_MIN 1
+#define QCASPI_BURST_LEN_MAX MAX_DMA_BURST_LEN
+static int qcaspi_burst_len = MAX_DMA_BURST_LEN;
+module_param(qcaspi_burst_len, int, 0);
+MODULE_PARM_DESC(qcaspi_burst_len, "Number of data bytes per burst. Use 1-5000.");
+
+#define QCASPI_PLUGGABLE_MIN 0
+#define QCASPI_PLUGGABLE_MAX 1
+static int qcaspi_pluggable = QCASPI_PLUGGABLE_MIN;
+module_param(qcaspi_pluggable, int, 0);
+MODULE_PARM_DESC(qcaspi_pluggable, "Pluggable SPI connection (yes/no).");
+
+#define QCASPI_MTU QCAFRM_ETHMAXMTU
+#define QCASPI_TX_TIMEOUT (1 * HZ)
+#define QCASPI_QCA7K_REBOOT_TIME_MS 1000
+
+static void
+start_spi_intr_handling(struct qcaspi *qca, u16 *intr_cause)
+{
+ *intr_cause = 0;
+
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, 0);
+ qcaspi_read_register(qca, SPI_REG_INTR_CAUSE, intr_cause);
+ netdev_dbg(qca->net_dev, "interrupts: 0x%04x\n", *intr_cause);
+}
+
+static void
+end_spi_intr_handling(struct qcaspi *qca, u16 intr_cause)
+{
+ u16 intr_enable = (SPI_INT_CPU_ON |
+ SPI_INT_PKT_AVLBL |
+ SPI_INT_RDBUF_ERR |
+ SPI_INT_WRBUF_ERR);
+
+ qcaspi_write_register(qca, SPI_REG_INTR_CAUSE, intr_cause);
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, intr_enable);
+ netdev_dbg(qca->net_dev, "acking int: 0x%04x\n", intr_cause);
+}
+
+static u32
+qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+{
+ __be16 cmd;
+ struct spi_message *msg = &qca->spi_msg2;
+ struct spi_transfer *transfer = &qca->spi_xfer2[0];
+ int ret;
+
+ cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+ transfer->tx_buf = &cmd;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ transfer->tx_buf = src;
+ transfer->rx_buf = NULL;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ transfer->tx_buf = src;
+ transfer->rx_buf = NULL;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg2;
+ __be16 cmd;
+ struct spi_transfer *transfer = &qca->spi_xfer2[0];
+ int ret;
+
+ cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+ transfer->tx_buf = &cmd;
+ transfer->rx_buf = NULL;
+ transfer->len = QCASPI_CMD_LEN;
+ transfer = &qca->spi_xfer2[1];
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = dst;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static u32
+qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
+{
+ struct spi_message *msg = &qca->spi_msg1;
+ struct spi_transfer *transfer = &qca->spi_xfer1;
+ int ret;
+
+ transfer->tx_buf = NULL;
+ transfer->rx_buf = dst;
+ transfer->len = len;
+
+ ret = spi_sync(qca->spi_dev, msg);
+
+ if (ret || (msg->actual_length != len)) {
+ qcaspi_spi_error(qca);
+ return 0;
+ }
+
+ return len;
+}
+
+static int
+qcaspi_tx_frame(struct qcaspi *qca, struct sk_buff *skb)
+{
+ u32 count;
+ u32 written;
+ u32 offset;
+ u32 len;
+
+ len = skb->len;
+
+ qcaspi_write_register(qca, SPI_REG_BFR_SIZE, len);
+ if (qca->legacy_mode)
+ qcaspi_tx_cmd(qca, QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+
+ offset = 0;
+ while (len) {
+ count = len;
+ if (count > qca->burst_len)
+ count = qca->burst_len;
+
+ if (qca->legacy_mode) {
+ written = qcaspi_write_legacy(qca,
+ skb->data + offset,
+ count);
+ } else {
+ written = qcaspi_write_burst(qca,
+ skb->data + offset,
+ count);
+ }
+
+ if (written != count)
+ return -1;
+
+ offset += count;
+ len -= count;
+ }
+
+ return 0;
+}
+
+static int
+qcaspi_transmit(struct qcaspi *qca)
+{
+ struct net_device_stats *n_stats = &qca->net_dev->stats;
+ u16 available = 0;
+ u32 pkt_len;
+ u16 new_head;
+ u16 packets = 0;
+
+ if (qca->txr.skb[qca->txr.head] == NULL)
+ return 0;
+
+ qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA, &available);
+
+ while (qca->txr.skb[qca->txr.head]) {
+ pkt_len = qca->txr.skb[qca->txr.head]->len + QCASPI_HW_PKT_LEN;
+
+ if (available < pkt_len) {
+ if (packets == 0)
+ qca->stats.write_buf_miss++;
+ break;
+ }
+
+ if (qcaspi_tx_frame(qca, qca->txr.skb[qca->txr.head]) == -1) {
+ qca->stats.write_err++;
+ return -1;
+ }
+
+ packets++;
+ n_stats->tx_packets++;
+ n_stats->tx_bytes += qca->txr.skb[qca->txr.head]->len;
+ available -= pkt_len;
+
+ /* remove the skb from the queue */
+ /* XXX After inconsistent lock states netif_tx_lock()
+ * has been replaced by netif_tx_lock_bh() and so on.
+ */
+ netif_tx_lock_bh(qca->net_dev);
+ dev_kfree_skb(qca->txr.skb[qca->txr.head]);
+ qca->txr.skb[qca->txr.head] = NULL;
+ qca->txr.size -= pkt_len;
+ new_head = qca->txr.head + 1;
+ if (new_head >= qca->txr.count)
+ new_head = 0;
+ qca->txr.head = new_head;
+ if (netif_queue_stopped(qca->net_dev))
+ netif_wake_queue(qca->net_dev);
+ netif_tx_unlock_bh(qca->net_dev);
+ }
+
+ return 0;
+}
+
+static int
+qcaspi_receive(struct qcaspi *qca)
+{
+ struct net_device *net_dev = qca->net_dev;
+ struct net_device_stats *n_stats = &net_dev->stats;
+ u16 available = 0;
+ u32 bytes_read;
+ u8 *cp;
+
+ /* Allocate rx SKB if we don't have one available. */
+ if (!qca->rx_skb) {
+ qca->rx_skb = netdev_alloc_skb(net_dev,
+ net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ netdev_dbg(net_dev, "out of RX resources\n");
+ qca->stats.out_of_mem++;
+ return -1;
+ }
+ }
+
+ /* Read the packet size. */
+ qcaspi_read_register(qca, SPI_REG_RDBUF_BYTE_AVA, &available);
+ netdev_dbg(net_dev, "qcaspi_receive: SPI_REG_RDBUF_BYTE_AVA: Value: %08x\n",
+ available);
+
+ if (available == 0) {
+ netdev_dbg(net_dev, "qcaspi_receive called without any data being available!\n");
+ return -1;
+ }
+
+ qcaspi_write_register(qca, SPI_REG_BFR_SIZE, available);
+
+ if (qca->legacy_mode)
+ qcaspi_tx_cmd(qca, QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+
+ while (available) {
+ u32 count = available;
+
+ if (count > qca->burst_len)
+ count = qca->burst_len;
+
+ if (qca->legacy_mode) {
+ bytes_read = qcaspi_read_legacy(qca, qca->rx_buffer,
+ count);
+ } else {
+ bytes_read = qcaspi_read_burst(qca, qca->rx_buffer,
+ count);
+ }
+
+ netdev_dbg(net_dev, "available: %d, byte read: %d\n",
+ available, bytes_read);
+
+ if (bytes_read) {
+ available -= bytes_read;
+ } else {
+ qca->stats.read_err++;
+ return -1;
+ }
+
+ cp = qca->rx_buffer;
+
+ while ((bytes_read--) && (qca->rx_skb)) {
+ s32 retcode;
+
+ retcode = qcafrm_fsm_decode(&qca->frm_handle,
+ qca->rx_skb->data,
+ skb_tailroom(qca->rx_skb),
+ *cp);
+ cp++;
+ switch (retcode) {
+ case QCAFRM_GATHER:
+ case QCAFRM_NOHEAD:
+ break;
+ case QCAFRM_NOTAIL:
+ netdev_dbg(net_dev, "no RX tail\n");
+ n_stats->rx_errors++;
+ n_stats->rx_dropped++;
+ break;
+ case QCAFRM_INVLEN:
+ netdev_dbg(net_dev, "invalid RX length\n");
+ n_stats->rx_errors++;
+ n_stats->rx_dropped++;
+ break;
+ default:
+ qca->rx_skb->dev = qca->net_dev;
+ n_stats->rx_packets++;
+ n_stats->rx_bytes += retcode;
+ skb_put(qca->rx_skb, retcode);
+ qca->rx_skb->protocol = eth_type_trans(
+ qca->rx_skb, qca->rx_skb->dev);
+ qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
+ netif_rx_ni(qca->rx_skb);
+ qca->rx_skb = netdev_alloc_skb(net_dev,
+ net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ netdev_dbg(net_dev, "out of RX resources\n");
+ n_stats->rx_errors++;
+ qca->stats.out_of_mem++;
+ break;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+/* Check that the tx ring stores only as many bytes
+ * as fit into the internal QCA buffer.
+ */
+
+static int
+qcaspi_tx_ring_has_space(struct tx_ring *txr)
+{
+ if (txr->skb[txr->tail])
+ return 0;
+
+ return (txr->size + QCAFRM_ETHMAXLEN < QCASPI_HW_BUF_LEN) ? 1 : 0;
+}
+
+/* Flush the tx ring. This function is only safe to
+ * call from the qcaspi_spi_thread.
+ */
+
+static void
+qcaspi_flush_tx_ring(struct qcaspi *qca)
+{
+ int i;
+
+ /* XXX: Due to inconsistent lock states, netif_tx_lock()
+ * has been replaced by netif_tx_lock_bh() and so on.
+ */
+ netif_tx_lock_bh(qca->net_dev);
+ for (i = 0; i < TX_RING_MAX_LEN; i++) {
+ if (qca->txr.skb[i]) {
+ dev_kfree_skb(qca->txr.skb[i]);
+ qca->txr.skb[i] = NULL;
+ qca->net_dev->stats.tx_dropped++;
+ }
+ }
+ qca->txr.tail = 0;
+ qca->txr.head = 0;
+ qca->txr.size = 0;
+ netif_tx_unlock_bh(qca->net_dev);
+}
+
+static void
+qcaspi_qca7k_sync(struct qcaspi *qca, int event)
+{
+ u16 signature = 0;
+ u16 spi_config;
+ u16 wrbuf_space = 0;
+ static u16 reset_count;
+
+ if (event == QCASPI_EVENT_CPUON) {
+ /* Read the signature twice; if it is not valid,
+ * go back to the unknown state.
+ */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ netdev_dbg(qca->net_dev, "sync: got CPU on, but signature was invalid, restart\n");
+ } else {
+ /* ensure that the WRBUF is empty */
+ qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA,
+ &wrbuf_space);
+ if (wrbuf_space != QCASPI_HW_BUF_LEN) {
+ netdev_dbg(qca->net_dev, "sync: got CPU on, but wrbuf not empty. reset!\n");
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ } else {
+ netdev_dbg(qca->net_dev, "sync: got CPU on, now in sync\n");
+ qca->sync = QCASPI_SYNC_READY;
+ return;
+ }
+ }
+ }
+
+ switch (qca->sync) {
+ case QCASPI_SYNC_READY:
+ /* Read the signature; if it is not valid, go to the unknown state. */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ netdev_dbg(qca->net_dev, "sync: bad signature, restart\n");
+ /* don't reset right away */
+ return;
+ }
+ break;
+ case QCASPI_SYNC_UNKNOWN:
+ /* Read the signature; if it is not valid, stay in the unknown state. */
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ netdev_dbg(qca->net_dev, "sync: could not read signature to reset device, retry.\n");
+ return;
+ }
+
+ /* TODO: use GPIO to reset QCA7000 in legacy mode */
+ netdev_dbg(qca->net_dev, "sync: resetting device.\n");
+ qcaspi_read_register(qca, SPI_REG_SPI_CONFIG, &spi_config);
+ spi_config |= QCASPI_SLAVE_RESET_BIT;
+ qcaspi_write_register(qca, SPI_REG_SPI_CONFIG, spi_config);
+
+ qca->sync = QCASPI_SYNC_RESET;
+ qca->stats.trig_reset++;
+ reset_count = 0;
+ break;
+ case QCASPI_SYNC_RESET:
+ reset_count++;
+ netdev_dbg(qca->net_dev, "sync: waiting for CPU on, count %u.\n",
+ reset_count);
+ if (reset_count >= QCASPI_RESET_TIMEOUT) {
+ /* reset did not seem to take place, try again */
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qca->stats.reset_timeout++;
+ netdev_dbg(qca->net_dev, "sync: reset timeout, restarting process.\n");
+ }
+ break;
+ }
+}
+
+static int
+qcaspi_spi_thread(void *data)
+{
+ struct qcaspi *qca = data;
+ u16 intr_cause = 0;
+
+ netdev_info(qca->net_dev, "SPI thread created\n");
+ while (!kthread_should_stop()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if ((qca->intr_req == qca->intr_svc) &&
+ (qca->txr.skb[qca->txr.head] == NULL) &&
+ (qca->sync == QCASPI_SYNC_READY))
+ schedule();
+
+ set_current_state(TASK_RUNNING);
+
+ netdev_dbg(qca->net_dev, "have work to do. int: %d, tx_skb: %p\n",
+ qca->intr_req - qca->intr_svc,
+ qca->txr.skb[qca->txr.head]);
+
+ qcaspi_qca7k_sync(qca, QCASPI_EVENT_UPDATE);
+
+ if (qca->sync != QCASPI_SYNC_READY) {
+ netdev_dbg(qca->net_dev, "sync: not ready %u, turn off carrier and flush\n",
+ (unsigned int)qca->sync);
+ netif_stop_queue(qca->net_dev);
+ netif_carrier_off(qca->net_dev);
+ qcaspi_flush_tx_ring(qca);
+ msleep(QCASPI_QCA7K_REBOOT_TIME_MS);
+ }
+
+ if (qca->intr_svc != qca->intr_req) {
+ qca->intr_svc = qca->intr_req;
+ start_spi_intr_handling(qca, &intr_cause);
+
+ if (intr_cause & SPI_INT_CPU_ON) {
+ qcaspi_qca7k_sync(qca, QCASPI_EVENT_CPUON);
+
+ /* not synced. */
+ if (qca->sync != QCASPI_SYNC_READY)
+ continue;
+
+ qca->stats.device_reset++;
+ netif_wake_queue(qca->net_dev);
+ netif_carrier_on(qca->net_dev);
+ }
+
+ if (intr_cause & SPI_INT_RDBUF_ERR) {
+ /* restart sync */
+ netdev_dbg(qca->net_dev, "===> rdbuf error!\n");
+ qca->stats.read_buf_err++;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ continue;
+ }
+
+ if (intr_cause & SPI_INT_WRBUF_ERR) {
+ /* restart sync */
+ netdev_dbg(qca->net_dev, "===> wrbuf error!\n");
+ qca->stats.write_buf_err++;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ continue;
+ }
+
+ /* can only handle other interrupts
+ * if sync has occurred
+ */
+ if (qca->sync == QCASPI_SYNC_READY) {
+ if (intr_cause & SPI_INT_PKT_AVLBL)
+ qcaspi_receive(qca);
+ }
+
+ end_spi_intr_handling(qca, intr_cause);
+ }
+
+ if (qca->sync == QCASPI_SYNC_READY)
+ qcaspi_transmit(qca);
+ }
+ set_current_state(TASK_RUNNING);
+ netdev_info(qca->net_dev, "SPI thread exit\n");
+
+ return 0;
+}
+
+static irqreturn_t
+qcaspi_intr_handler(int irq, void *data)
+{
+ struct qcaspi *qca = data;
+
+ qca->intr_req++;
+ if (qca->spi_thread &&
+ qca->spi_thread->state != TASK_RUNNING)
+ wake_up_process(qca->spi_thread);
+
+ return IRQ_HANDLED;
+}
+
+int
+qcaspi_netdev_open(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+ int ret = 0;
+
+ if (!qca)
+ return -EINVAL;
+
+ qca->intr_req = 1;
+ qca->intr_svc = 0;
+ qca->sync = QCASPI_SYNC_UNKNOWN;
+ qcafrm_fsm_init(&qca->frm_handle);
+
+ qca->spi_thread = kthread_run((void *)qcaspi_spi_thread,
+ qca, "%s", dev->name);
+
+ if (IS_ERR(qca->spi_thread)) {
+ netdev_err(dev, "%s: unable to start kernel thread.\n",
+ QCASPI_DRV_NAME);
+ return PTR_ERR(qca->spi_thread);
+ }
+
+ ret = request_irq(qca->spi_dev->irq, qcaspi_intr_handler, 0,
+ dev->name, qca);
+ if (ret) {
+ netdev_err(dev, "%s: unable to get IRQ %d (irqval=%d).\n",
+ QCASPI_DRV_NAME, qca->spi_dev->irq, ret);
+ kthread_stop(qca->spi_thread);
+ return ret;
+ }
+
+ netif_start_queue(qca->net_dev);
+
+ return 0;
+}
+
+int
+qcaspi_netdev_close(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ netif_stop_queue(dev);
+
+ qcaspi_write_register(qca, SPI_REG_INTR_ENABLE, 0);
+ free_irq(qca->spi_dev->irq, qca);
+
+ kthread_stop(qca->spi_thread);
+ qca->spi_thread = NULL;
+ qcaspi_flush_tx_ring(qca);
+
+ return 0;
+}
+
+static netdev_tx_t
+qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ u32 frame_len;
+ u8 *ptmp;
+ struct qcaspi *qca = netdev_priv(dev);
+ u16 new_tail;
+ struct sk_buff *tskb;
+ u8 pad_len = 0;
+
+ if (skb->len < QCAFRM_ETHMINLEN)
+ pad_len = QCAFRM_ETHMINLEN - skb->len;
+
+ if (qca->txr.skb[qca->txr.tail]) {
+ netdev_warn(qca->net_dev, "queue was unexpectedly full!\n");
+ netif_stop_queue(qca->net_dev);
+ qca->stats.ring_full++;
+ return NETDEV_TX_BUSY;
+ }
+
+ if ((skb_headroom(skb) < QCAFRM_HEADER_LEN) ||
+ (skb_tailroom(skb) < QCAFRM_FOOTER_LEN + pad_len)) {
+ tskb = skb_copy_expand(skb, QCAFRM_HEADER_LEN,
+ QCAFRM_FOOTER_LEN + pad_len, GFP_ATOMIC);
+ if (!tskb) {
+ netdev_dbg(qca->net_dev, "could not allocate tx_buff\n");
+ qca->stats.out_of_mem++;
+ return NETDEV_TX_BUSY;
+ }
+ dev_kfree_skb(skb);
+ skb = tskb;
+ }
+
+ frame_len = skb->len + pad_len;
+
+ ptmp = skb_push(skb, QCAFRM_HEADER_LEN);
+ qcafrm_create_header(ptmp, frame_len);
+
+ if (pad_len) {
+ ptmp = skb_put(skb, pad_len);
+ memset(ptmp, 0, pad_len);
+ }
+
+ ptmp = skb_put(skb, QCAFRM_FOOTER_LEN);
+ qcafrm_create_footer(ptmp);
+
+ netdev_dbg(qca->net_dev, "Tx-ing packet: Size: 0x%08x\n",
+ skb->len);
+
+ qca->txr.size += skb->len + QCASPI_HW_PKT_LEN;
+
+ new_tail = qca->txr.tail + 1;
+ if (new_tail >= qca->txr.count)
+ new_tail = 0;
+
+ qca->txr.skb[qca->txr.tail] = skb;
+ qca->txr.tail = new_tail;
+
+ if (!qcaspi_tx_ring_has_space(&qca->txr)) {
+ netif_stop_queue(qca->net_dev);
+ qca->stats.ring_full++;
+ }
+
+ dev->trans_start = jiffies;
+
+ if (qca->spi_thread &&
+ qca->spi_thread->state != TASK_RUNNING)
+ wake_up_process(qca->spi_thread);
+
+ return NETDEV_TX_OK;
+}
+
+static void
+qcaspi_netdev_tx_timeout(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
+ jiffies, jiffies - dev->trans_start);
+ qca->net_dev->stats.tx_errors++;
+ /* wake the queue if there is room */
+ if (qcaspi_tx_ring_has_space(&qca->txr))
+ netif_wake_queue(dev);
+}
+
+static int
+qcaspi_netdev_init(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ dev->mtu = QCASPI_MTU;
+ dev->type = ARPHRD_ETHER;
+ qca->clkspeed = qcaspi_clkspeed;
+ qca->burst_len = qcaspi_burst_len;
+ qca->spi_thread = NULL;
+ qca->buffer_size = (dev->mtu + VLAN_ETH_HLEN + QCAFRM_HEADER_LEN +
+ QCAFRM_FOOTER_LEN + 4) * 4;
+
+ memset(&qca->stats, 0, sizeof(struct qcaspi_stats));
+
+ qca->rx_buffer = kmalloc(qca->buffer_size, GFP_KERNEL);
+ if (!qca->rx_buffer)
+ return -ENOBUFS;
+
+ qca->rx_skb = netdev_alloc_skb(dev, qca->net_dev->mtu + VLAN_ETH_HLEN);
+ if (!qca->rx_skb) {
+ kfree(qca->rx_buffer);
+ netdev_info(qca->net_dev, "Failed to allocate RX sk_buff.\n");
+ return -ENOBUFS;
+ }
+
+ return 0;
+}
+
+static void
+qcaspi_netdev_uninit(struct net_device *dev)
+{
+ struct qcaspi *qca = netdev_priv(dev);
+
+ kfree(qca->rx_buffer);
+ qca->buffer_size = 0;
+ if (qca->rx_skb)
+ dev_kfree_skb(qca->rx_skb);
+}
+
+static int
+qcaspi_netdev_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if ((new_mtu < QCAFRM_ETHMINMTU) || (new_mtu > QCAFRM_ETHMAXMTU))
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+static const struct net_device_ops qcaspi_netdev_ops = {
+ .ndo_init = qcaspi_netdev_init,
+ .ndo_uninit = qcaspi_netdev_uninit,
+ .ndo_open = qcaspi_netdev_open,
+ .ndo_stop = qcaspi_netdev_close,
+ .ndo_start_xmit = qcaspi_netdev_xmit,
+ .ndo_change_mtu = qcaspi_netdev_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_tx_timeout = qcaspi_netdev_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+static void
+qcaspi_netdev_setup(struct net_device *dev)
+{
+ struct qcaspi *qca = NULL;
+
+ dev->netdev_ops = &qcaspi_netdev_ops;
+ qcaspi_set_ethtool_ops(dev);
+ dev->watchdog_timeo = QCASPI_TX_TIMEOUT;
+ dev->flags = IFF_MULTICAST;
+ dev->tx_queue_len = 100;
+
+ qca = netdev_priv(dev);
+ memset(qca, 0, sizeof(struct qcaspi));
+
+ memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
+ memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
+
+ spi_message_init(&qca->spi_msg1);
+ spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
+
+ spi_message_init(&qca->spi_msg2);
+ spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
+ spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
+
+ memset(&qca->txr, 0, sizeof(qca->txr));
+ qca->txr.count = TX_RING_MAX_LEN;
+}
+
+static const struct of_device_id qca_spi_of_match[] = {
+ { .compatible = "qca,qca7000" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, qca_spi_of_match);
+
+static int
+qca_spi_probe(struct spi_device *spi_device)
+{
+ struct qcaspi *qca = NULL;
+ struct net_device *qcaspi_devs = NULL;
+ u8 legacy_mode = 0;
+ u16 signature;
+ const char *mac;
+
+ if (!spi_device->dev.of_node) {
+ dev_err(&spi_device->dev, "Missing device tree\n");
+ return -EINVAL;
+ }
+
+ legacy_mode = of_property_read_bool(spi_device->dev.of_node,
+ "qca,legacy-mode");
+
+ if (qcaspi_clkspeed == 0) {
+ if (spi_device->max_speed_hz)
+ qcaspi_clkspeed = spi_device->max_speed_hz;
+ else
+ qcaspi_clkspeed = QCASPI_CLK_SPEED;
+ }
+
+ if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) ||
+ (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) {
+ dev_info(&spi_device->dev, "Invalid clkspeed: %d\n",
+ qcaspi_clkspeed);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_burst_len < QCASPI_BURST_LEN_MIN) ||
+ (qcaspi_burst_len > QCASPI_BURST_LEN_MAX)) {
+ dev_info(&spi_device->dev, "Invalid burst len: %d\n",
+ qcaspi_burst_len);
+ return -EINVAL;
+ }
+
+ if ((qcaspi_pluggable < QCASPI_PLUGGABLE_MIN) ||
+ (qcaspi_pluggable > QCASPI_PLUGGABLE_MAX)) {
+ dev_info(&spi_device->dev, "Invalid pluggable: %d\n",
+ qcaspi_pluggable);
+ return -EINVAL;
+ }
+
+ dev_info(&spi_device->dev, "ver=%s, clkspeed=%d, burst_len=%d, pluggable=%d\n",
+ QCASPI_DRV_VERSION,
+ qcaspi_clkspeed,
+ qcaspi_burst_len,
+ qcaspi_pluggable);
+
+ spi_device->mode = SPI_MODE_3;
+ spi_device->max_speed_hz = qcaspi_clkspeed;
+ if (spi_setup(spi_device) < 0) {
+ dev_err(&spi_device->dev, "Unable to setup SPI device\n");
+ return -EFAULT;
+ }
+
+ qcaspi_devs = alloc_etherdev(sizeof(struct qcaspi));
+ if (!qcaspi_devs)
+ return -ENOMEM;
+
+ qcaspi_netdev_setup(qcaspi_devs);
+
+ qca = netdev_priv(qcaspi_devs);
+ if (!qca) {
+ free_netdev(qcaspi_devs);
+ dev_err(&spi_device->dev, "Fail to retrieve private structure\n");
+ return -ENOMEM;
+ }
+ qca->net_dev = qcaspi_devs;
+ qca->spi_dev = spi_device;
+ qca->legacy_mode = legacy_mode;
+
+ mac = of_get_mac_address(spi_device->dev.of_node);
+
+ if (mac)
+ ether_addr_copy(qca->net_dev->dev_addr, mac);
+
+ if (!is_valid_ether_addr(qca->net_dev->dev_addr)) {
+ eth_hw_addr_random(qca->net_dev);
+ dev_info(&spi_device->dev, "Using random MAC address: %pM\n",
+ qca->net_dev->dev_addr);
+ }
+
+ netif_carrier_off(qca->net_dev);
+
+ if (!qcaspi_pluggable) {
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+ qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+
+ if (signature != QCASPI_GOOD_SIGNATURE) {
+ dev_err(&spi_device->dev, "Invalid signature (0x%04X)\n",
+ signature);
+ free_netdev(qcaspi_devs);
+ return -EFAULT;
+ }
+ }
+
+ if (register_netdev(qcaspi_devs)) {
+ dev_info(&spi_device->dev, "Unable to register net device %s\n",
+ qcaspi_devs->name);
+ free_netdev(qcaspi_devs);
+ return -EFAULT;
+ }
+
+ spi_set_drvdata(spi_device, qcaspi_devs);
+
+ qcaspi_init_device_debugfs(qca);
+
+ return 0;
+}
+
+static int
+qca_spi_remove(struct spi_device *spi_device)
+{
+ struct net_device *qcaspi_devs = spi_get_drvdata(spi_device);
+ struct qcaspi *qca = netdev_priv(qcaspi_devs);
+
+ qcaspi_remove_device_debugfs(qca);
+
+ unregister_netdev(qcaspi_devs);
+ free_netdev(qcaspi_devs);
+
+ return 0;
+}
+
+static const struct spi_device_id qca_spi_id[] = {
+ { "qca7000", 0 },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(spi, qca_spi_id);
+
+static struct spi_driver qca_spi_driver = {
+ .driver = {
+ .name = QCASPI_DRV_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = qca_spi_of_match,
+ },
+ .id_table = qca_spi_id,
+ .probe = qca_spi_probe,
+ .remove = qca_spi_remove,
+};
+module_spi_driver(qca_spi_driver);
+
+MODULE_DESCRIPTION("Qualcomm Atheros SPI Driver");
+MODULE_AUTHOR("Qualcomm Atheros Communications");
+MODULE_AUTHOR("Stefan Wahren <stefan.wahren@i2se.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(QCASPI_DRV_VERSION);
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
new file mode 100644
index 0000000..6e31a0e
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/qca_spi.h
@@ -0,0 +1,114 @@
+/*
+ * Copyright (c) 2011, 2012, Qualcomm Atheros Communications Inc.
+ * Copyright (c) 2014, I2SE GmbH
+ *
+ * Permission to use, copy, modify, and/or distribute this software
+ * for any purpose with or without fee is hereby granted, provided
+ * that the above copyright notice and this permission notice appear
+ * in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
+ * WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+ * THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+ * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+ * NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/* Qualcomm Atheros SPI register definition.
+ *
+ * This module defines the Qualcomm Atheros SPI register
+ * placeholders.
+ */
+
+#ifndef _QCA_SPI_H
+#define _QCA_SPI_H
+
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spi/spi.h>
+#include <linux/types.h>
+
+#include "qca_framing.h"
+
+#define QCASPI_DRV_VERSION "0.2.7-i"
+#define QCASPI_DRV_NAME "qcaspi"
+
+#define QCASPI_GOOD_SIGNATURE 0xAA55
+
+#define TX_RING_MAX_LEN 10
+#define TX_RING_MIN_LEN 2
+
+/* sync related constants */
+#define QCASPI_SYNC_UNKNOWN 0
+#define QCASPI_SYNC_RESET 1
+#define QCASPI_SYNC_READY 2
+
+#define QCASPI_RESET_TIMEOUT 10
+
+/* sync events */
+#define QCASPI_EVENT_UPDATE 0
+#define QCASPI_EVENT_CPUON 1
+
+struct tx_ring {
+ struct sk_buff *skb[TX_RING_MAX_LEN];
+ u16 head;
+ u16 tail;
+ u16 size;
+ u16 count;
+};
+
+struct qcaspi_stats {
+ u64 trig_reset;
+ u64 device_reset;
+ u64 reset_timeout;
+ u64 read_err;
+ u64 write_err;
+ u64 read_buf_err;
+ u64 write_buf_err;
+ u64 out_of_mem;
+ u64 write_buf_miss;
+ u64 ring_full;
+ u64 spi_err;
+};
+
+struct qcaspi {
+ struct net_device *net_dev;
+ struct spi_device *spi_dev;
+ struct task_struct *spi_thread;
+
+ struct tx_ring txr;
+ struct qcaspi_stats stats;
+
+ struct spi_message spi_msg1;
+ struct spi_message spi_msg2;
+ struct spi_transfer spi_xfer1;
+ struct spi_transfer spi_xfer2[2];
+
+ u8 *rx_buffer;
+ u32 buffer_size;
+ u8 sync;
+
+ struct qcafrm_handle frm_handle;
+ struct sk_buff *rx_skb;
+
+ unsigned int intr_req;
+ unsigned int intr_svc;
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *device_root;
+#endif
+
+ /* user configurable options */
+ u32 clkspeed;
+ u8 legacy_mode;
+ u16 burst_len;
+};
+
+int qcaspi_netdev_open(struct net_device *dev);
+int qcaspi_netdev_close(struct net_device *dev);
+
+#endif /* _QCA_SPI_H */
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 0921302..cf154f7 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -52,6 +52,10 @@
#define FIRMWARE_8106E_2 "rtl_nic/rtl8106e-2.fw"
#define FIRMWARE_8168G_2 "rtl_nic/rtl8168g-2.fw"
#define FIRMWARE_8168G_3 "rtl_nic/rtl8168g-3.fw"
+#define FIRMWARE_8168H_1 "rtl_nic/rtl8168h-1.fw"
+#define FIRMWARE_8168H_2 "rtl_nic/rtl8168h-2.fw"
+#define FIRMWARE_8107E_1 "rtl_nic/rtl8107e-1.fw"
+#define FIRMWARE_8107E_2 "rtl_nic/rtl8107e-2.fw"
#ifdef RTL8169_DEBUG
#define assert(expr) \
@@ -147,6 +151,13 @@ enum mac_version {
RTL_GIGA_MAC_VER_42,
RTL_GIGA_MAC_VER_43,
RTL_GIGA_MAC_VER_44,
+ RTL_GIGA_MAC_VER_45,
+ RTL_GIGA_MAC_VER_46,
+ RTL_GIGA_MAC_VER_47,
+ RTL_GIGA_MAC_VER_48,
+ RTL_GIGA_MAC_VER_49,
+ RTL_GIGA_MAC_VER_50,
+ RTL_GIGA_MAC_VER_51,
RTL_GIGA_MAC_NONE = 0xff,
};
@@ -282,6 +293,27 @@ static const struct {
[RTL_GIGA_MAC_VER_44] =
_R("RTL8411", RTL_TD_1, FIRMWARE_8411_2,
JUMBO_9K, false),
+ [RTL_GIGA_MAC_VER_45] =
+ _R("RTL8168h/8111h", RTL_TD_1, FIRMWARE_8168H_1,
+ JUMBO_9K, false),
+ [RTL_GIGA_MAC_VER_46] =
+ _R("RTL8168h/8111h", RTL_TD_1, FIRMWARE_8168H_2,
+ JUMBO_9K, false),
+ [RTL_GIGA_MAC_VER_47] =
+ _R("RTL8107e", RTL_TD_1, FIRMWARE_8107E_1,
+ JUMBO_1K, false),
+ [RTL_GIGA_MAC_VER_48] =
+ _R("RTL8107e", RTL_TD_1, FIRMWARE_8107E_2,
+ JUMBO_1K, false),
+ [RTL_GIGA_MAC_VER_49] =
+ _R("RTL8168ep/8111ep", RTL_TD_1, NULL,
+ JUMBO_9K, false),
+ [RTL_GIGA_MAC_VER_50] =
+ _R("RTL8168ep/8111ep", RTL_TD_1, NULL,
+ JUMBO_9K, false),
+ [RTL_GIGA_MAC_VER_51] =
+ _R("RTL8168ep/8111ep", RTL_TD_1, NULL,
+ JUMBO_9K, false),
};
#undef _R
@@ -380,6 +412,10 @@ enum rtl_registers {
FuncEvent = 0xf0,
FuncEventMask = 0xf4,
FuncPresetState = 0xf8,
+ IBCR0 = 0xf8,
+ IBCR2 = 0xf9,
+ IBIMR0 = 0xfa,
+ IBISR0 = 0xfb,
FuncForceEvent = 0xfc,
};
@@ -410,6 +446,7 @@ enum rtl8168_8101_registers {
#define EPHYAR_DATA_MASK 0xffff
DLLPR = 0xd0,
#define PFM_EN (1 << 6)
+#define TX_10M_PS_EN (1 << 7)
DBG_REG = 0xd1,
#define FIX_NAK_1 (1 << 4)
#define FIX_NAK_2 (1 << 3)
@@ -429,6 +466,8 @@ enum rtl8168_8101_registers {
#define EFUSEAR_REG_MASK 0x03ff
#define EFUSEAR_REG_SHIFT 8
#define EFUSEAR_DATA_MASK 0xff
+ MISC_1 = 0xf2,
+#define PFM_D3COLD_EN (1 << 6)
};
enum rtl8168_registers {
@@ -444,9 +483,11 @@ enum rtl8168_registers {
#define ERIAR_EXGMAC (0x00 << ERIAR_TYPE_SHIFT)
#define ERIAR_MSIX (0x01 << ERIAR_TYPE_SHIFT)
#define ERIAR_ASF (0x02 << ERIAR_TYPE_SHIFT)
+#define ERIAR_OOB (0x02 << ERIAR_TYPE_SHIFT)
#define ERIAR_MASK_SHIFT 12
#define ERIAR_MASK_0001 (0x1 << ERIAR_MASK_SHIFT)
#define ERIAR_MASK_0011 (0x3 << ERIAR_MASK_SHIFT)
+#define ERIAR_MASK_0100 (0x4 << ERIAR_MASK_SHIFT)
#define ERIAR_MASK_0101 (0x5 << ERIAR_MASK_SHIFT)
#define ERIAR_MASK_1111 (0xf << ERIAR_MASK_SHIFT)
EPHY_RXER_NUM = 0x7c,
@@ -598,6 +639,9 @@ enum rtl_register_content {
/* DumpCounterCommand */
CounterDump = 0x8,
+
+ /* magic enable v2 */
+ MagicPacket_v2 = (1 << 16), /* Wake up when receiving a Magic Packet */
};
enum rtl_desc_bit {
@@ -823,6 +867,10 @@ MODULE_FIRMWARE(FIRMWARE_8106E_1);
MODULE_FIRMWARE(FIRMWARE_8106E_2);
MODULE_FIRMWARE(FIRMWARE_8168G_2);
MODULE_FIRMWARE(FIRMWARE_8168G_3);
+MODULE_FIRMWARE(FIRMWARE_8168H_1);
+MODULE_FIRMWARE(FIRMWARE_8168H_2);
+MODULE_FIRMWARE(FIRMWARE_8107E_1);
+MODULE_FIRMWARE(FIRMWARE_8107E_2);
static void rtl_lock_work(struct rtl8169_private *tp)
{
@@ -904,93 +952,6 @@ static const struct rtl_cond name = { \
\
static bool name ## _check(struct rtl8169_private *tp)
-DECLARE_RTL_COND(rtl_ocpar_cond)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- return RTL_R32(OCPAR) & OCPAR_FLAG;
-}
-
-static u32 ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- RTL_W32(OCPAR, ((u32)mask & 0x0f) << 12 | (reg & 0x0fff));
-
- return rtl_udelay_loop_wait_high(tp, &rtl_ocpar_cond, 100, 20) ?
- RTL_R32(OCPDR) : ~0;
-}
-
-static void ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, u32 data)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- RTL_W32(OCPDR, data);
- RTL_W32(OCPAR, OCPAR_FLAG | ((u32)mask & 0x0f) << 12 | (reg & 0x0fff));
-
- rtl_udelay_loop_wait_low(tp, &rtl_ocpar_cond, 100, 20);
-}
-
-DECLARE_RTL_COND(rtl_eriar_cond)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- return RTL_R32(ERIAR) & ERIAR_FLAG;
-}
-
-static void rtl8168_oob_notify(struct rtl8169_private *tp, u8 cmd)
-{
- void __iomem *ioaddr = tp->mmio_addr;
-
- RTL_W8(ERIDR, cmd);
- RTL_W32(ERIAR, 0x800010e8);
- msleep(2);
-
- if (!rtl_udelay_loop_wait_low(tp, &rtl_eriar_cond, 100, 5))
- return;
-
- ocp_write(tp, 0x1, 0x30, 0x00000001);
-}
-
-#define OOB_CMD_RESET 0x00
-#define OOB_CMD_DRIVER_START 0x05
-#define OOB_CMD_DRIVER_STOP 0x06
-
-static u16 rtl8168_get_ocp_reg(struct rtl8169_private *tp)
-{
- return (tp->mac_version == RTL_GIGA_MAC_VER_31) ? 0xb8 : 0x10;
-}
-
-DECLARE_RTL_COND(rtl_ocp_read_cond)
-{
- u16 reg;
-
- reg = rtl8168_get_ocp_reg(tp);
-
- return ocp_read(tp, 0x0f, reg) & 0x00000800;
-}
-
-static void rtl8168_driver_start(struct rtl8169_private *tp)
-{
- rtl8168_oob_notify(tp, OOB_CMD_DRIVER_START);
-
- rtl_msleep_loop_wait_high(tp, &rtl_ocp_read_cond, 10, 10);
-}
-
-static void rtl8168_driver_stop(struct rtl8169_private *tp)
-{
- rtl8168_oob_notify(tp, OOB_CMD_DRIVER_STOP);
-
- rtl_msleep_loop_wait_low(tp, &rtl_ocp_read_cond, 10, 10);
-}
-
-static int r8168dp_check_dash(struct rtl8169_private *tp)
-{
- u16 reg = rtl8168_get_ocp_reg(tp);
-
- return (ocp_read(tp, 0x0f, reg) & 0x00008000) ? 1 : 0;
-}
-
static bool rtl_ocp_reg_failure(struct rtl8169_private *tp, u32 reg)
{
if (reg & 0xffff0001) {
@@ -1132,6 +1093,13 @@ static int r8169_mdio_read(struct rtl8169_private *tp, int reg)
return value;
}
+DECLARE_RTL_COND(rtl_ocpar_cond)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ return RTL_R32(OCPAR) & OCPAR_FLAG;
+}
+
static void r8168dp_1_mdio_access(struct rtl8169_private *tp, int reg, u32 data)
{
void __iomem *ioaddr = tp->mmio_addr;
@@ -1215,12 +1183,12 @@ static void rtl_patchphy(struct rtl8169_private *tp, int reg_addr, int value)
rtl_writephy(tp, reg_addr, rtl_readphy(tp, reg_addr) | value);
}
-static void rtl_w1w0_phy(struct rtl8169_private *tp, int reg_addr, int p, int m)
+static void rtl_w0w1_phy(struct rtl8169_private *tp, int reg_addr, int p, int m)
{
int val;
val = rtl_readphy(tp, reg_addr);
- rtl_writephy(tp, reg_addr, (val | p) & ~m);
+ rtl_writephy(tp, reg_addr, (val & ~m) | p);
}
static void rtl_mdio_write(struct net_device *dev, int phy_id, int location,
@@ -1267,6 +1235,13 @@ static u16 rtl_ephy_read(struct rtl8169_private *tp, int reg_addr)
RTL_R32(EPHYAR) & EPHYAR_DATA_MASK : ~0;
}
+DECLARE_RTL_COND(rtl_eriar_cond)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ return RTL_R32(ERIAR) & ERIAR_FLAG;
+}
+
static void rtl_eri_write(struct rtl8169_private *tp, int addr, u32 mask,
u32 val, int type)
{
@@ -1289,7 +1264,7 @@ static u32 rtl_eri_read(struct rtl8169_private *tp, int addr, int type)
RTL_R32(ERIDR) : ~0;
}
-static void rtl_w1w0_eri(struct rtl8169_private *tp, int addr, u32 mask, u32 p,
+static void rtl_w0w1_eri(struct rtl8169_private *tp, int addr, u32 mask, u32 p,
u32 m, int type)
{
u32 val;
@@ -1298,6 +1273,208 @@ static void rtl_w1w0_eri(struct rtl8169_private *tp, int addr, u32 mask, u32 p,
rtl_eri_write(tp, addr, mask, (val & ~m) | p, type);
}
+static u32 r8168dp_ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ RTL_W32(OCPAR, ((u32)mask & 0x0f) << 12 | (reg & 0x0fff));
+ return rtl_udelay_loop_wait_high(tp, &rtl_ocpar_cond, 100, 20) ?
+ RTL_R32(OCPDR) : ~0;
+}
+
+static u32 r8168ep_ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg)
+{
+ return rtl_eri_read(tp, reg, ERIAR_OOB);
+}
+
+static u32 ocp_read(struct rtl8169_private *tp, u8 mask, u16 reg)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ return r8168dp_ocp_read(tp, mask, reg);
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ return r8168ep_ocp_read(tp, mask, reg);
+ default:
+ BUG();
+ return ~0;
+ }
+}
+
+static void r8168dp_ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg,
+ u32 data)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ RTL_W32(OCPDR, data);
+ RTL_W32(OCPAR, OCPAR_FLAG | ((u32)mask & 0x0f) << 12 | (reg & 0x0fff));
+ rtl_udelay_loop_wait_low(tp, &rtl_ocpar_cond, 100, 20);
+}
+
+static void r8168ep_ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg,
+ u32 data)
+{
+ rtl_eri_write(tp, reg, ((u32)mask & 0x0f) << ERIAR_MASK_SHIFT,
+ data, ERIAR_OOB);
+}
+
+static void ocp_write(struct rtl8169_private *tp, u8 mask, u16 reg, u32 data)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ r8168dp_ocp_write(tp, mask, reg, data);
+ break;
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ r8168ep_ocp_write(tp, mask, reg, data);
+ break;
+ default:
+ BUG();
+ break;
+ }
+}
+
+static void rtl8168_oob_notify(struct rtl8169_private *tp, u8 cmd)
+{
+ rtl_eri_write(tp, 0xe8, ERIAR_MASK_0001, cmd, ERIAR_EXGMAC);
+
+ ocp_write(tp, 0x1, 0x30, 0x00000001);
+}
+
+#define OOB_CMD_RESET 0x00
+#define OOB_CMD_DRIVER_START 0x05
+#define OOB_CMD_DRIVER_STOP 0x06
+
+static u16 rtl8168_get_ocp_reg(struct rtl8169_private *tp)
+{
+ return (tp->mac_version == RTL_GIGA_MAC_VER_31) ? 0xb8 : 0x10;
+}
+
+DECLARE_RTL_COND(rtl_ocp_read_cond)
+{
+ u16 reg;
+
+ reg = rtl8168_get_ocp_reg(tp);
+
+ return ocp_read(tp, 0x0f, reg) & 0x00000800;
+}
+
+DECLARE_RTL_COND(rtl_ep_ocp_read_cond)
+{
+ return ocp_read(tp, 0x0f, 0x124) & 0x00000001;
+}
+
+DECLARE_RTL_COND(rtl_ocp_tx_cond)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ return RTL_R8(IBISR0) & 0x02;
+}
+
+static void rtl8168dp_driver_start(struct rtl8169_private *tp)
+{
+ rtl8168_oob_notify(tp, OOB_CMD_DRIVER_START);
+ rtl_msleep_loop_wait_high(tp, &rtl_ocp_read_cond, 10, 10);
+}
+
+static void rtl8168ep_driver_start(struct rtl8169_private *tp)
+{
+ ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_START);
+ ocp_write(tp, 0x01, 0x30, ocp_read(tp, 0x01, 0x30) | 0x01);
+ rtl_msleep_loop_wait_high(tp, &rtl_ep_ocp_read_cond, 10, 10);
+}
+
+static void rtl8168_driver_start(struct rtl8169_private *tp)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ rtl8168dp_driver_start(tp);
+ break;
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ rtl8168ep_driver_start(tp);
+ break;
+ default:
+ BUG();
+ break;
+ }
+}
+
+static void rtl8168dp_driver_stop(struct rtl8169_private *tp)
+{
+ rtl8168_oob_notify(tp, OOB_CMD_DRIVER_STOP);
+ rtl_msleep_loop_wait_low(tp, &rtl_ocp_read_cond, 10, 10);
+}
+
+static void rtl8168ep_driver_stop(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ RTL_W8(IBCR2, RTL_R8(IBCR2) & ~0x01);
+ rtl_msleep_loop_wait_low(tp, &rtl_ocp_tx_cond, 50, 2000);
+ RTL_W8(IBISR0, RTL_R8(IBISR0) | 0x20);
+ RTL_W8(IBCR0, RTL_R8(IBCR0) & ~0x01);
+ ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_STOP);
+ ocp_write(tp, 0x01, 0x30, ocp_read(tp, 0x01, 0x30) | 0x01);
+ rtl_msleep_loop_wait_low(tp, &rtl_ep_ocp_read_cond, 10, 10);
+}
+
+static void rtl8168_driver_stop(struct rtl8169_private *tp)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ rtl8168dp_driver_stop(tp);
+ break;
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ rtl8168ep_driver_stop(tp);
+ break;
+ default:
+ BUG();
+ break;
+ }
+}
+
+static int r8168dp_check_dash(struct rtl8169_private *tp)
+{
+ u16 reg = rtl8168_get_ocp_reg(tp);
+
+ return (ocp_read(tp, 0x0f, reg) & 0x00008000) ? 1 : 0;
+}
+
+static int r8168ep_check_dash(struct rtl8169_private *tp)
+{
+ return (ocp_read(tp, 0x0f, 0x128) & 0x00000001) ? 1 : 0;
+}
+
+static int r8168_check_dash(struct rtl8169_private *tp)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ return r8168dp_check_dash(tp);
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ return r8168ep_check_dash(tp);
+ default:
+ return 0;
+ }
+}
+
struct exgmac_reg {
u16 addr;
u16 mask;
@@ -1442,9 +1619,9 @@ static void rtl_link_chg_patch(struct rtl8169_private *tp)
ERIAR_EXGMAC);
}
/* Reset packet filter */
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01,
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01,
ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00,
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00,
ERIAR_EXGMAC);
} else if (tp->mac_version == RTL_GIGA_MAC_VER_35 ||
tp->mac_version == RTL_GIGA_MAC_VER_36) {
@@ -1514,8 +1691,32 @@ static u32 __rtl8169_get_wol(struct rtl8169_private *tp)
options = RTL_R8(Config3);
if (options & LinkUp)
wolopts |= WAKE_PHY;
- if (options & MagicPacket)
- wolopts |= WAKE_MAGIC;
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+ case RTL_GIGA_MAC_VER_35:
+ case RTL_GIGA_MAC_VER_36:
+ case RTL_GIGA_MAC_VER_37:
+ case RTL_GIGA_MAC_VER_38:
+ case RTL_GIGA_MAC_VER_40:
+ case RTL_GIGA_MAC_VER_41:
+ case RTL_GIGA_MAC_VER_42:
+ case RTL_GIGA_MAC_VER_43:
+ case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ if (rtl_eri_read(tp, 0xdc, ERIAR_EXGMAC) & MagicPacket_v2)
+ wolopts |= WAKE_MAGIC;
+ break;
+ default:
+ if (options & MagicPacket)
+ wolopts |= WAKE_MAGIC;
+ break;
+ }
options = RTL_R8(Config5);
if (options & UWF)
@@ -1543,24 +1744,63 @@ static void rtl8169_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
{
void __iomem *ioaddr = tp->mmio_addr;
- unsigned int i;
+ unsigned int i, tmp;
static const struct {
u32 opt;
u16 reg;
u8 mask;
} cfg[] = {
{ WAKE_PHY, Config3, LinkUp },
- { WAKE_MAGIC, Config3, MagicPacket },
{ WAKE_UCAST, Config5, UWF },
{ WAKE_BCAST, Config5, BWF },
{ WAKE_MCAST, Config5, MWF },
- { WAKE_ANY, Config5, LanWake }
+ { WAKE_ANY, Config5, LanWake },
+ { WAKE_MAGIC, Config3, MagicPacket }
};
u8 options;
RTL_W8(Cfg9346, Cfg9346_Unlock);
- for (i = 0; i < ARRAY_SIZE(cfg); i++) {
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+ case RTL_GIGA_MAC_VER_35:
+ case RTL_GIGA_MAC_VER_36:
+ case RTL_GIGA_MAC_VER_37:
+ case RTL_GIGA_MAC_VER_38:
+ case RTL_GIGA_MAC_VER_40:
+ case RTL_GIGA_MAC_VER_41:
+ case RTL_GIGA_MAC_VER_42:
+ case RTL_GIGA_MAC_VER_43:
+ case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ tmp = ARRAY_SIZE(cfg) - 1;
+ if (wolopts & WAKE_MAGIC)
+ rtl_w0w1_eri(tp,
+ 0x0dc,
+ ERIAR_MASK_0100,
+ MagicPacket_v2,
+ 0x0000,
+ ERIAR_EXGMAC);
+ else
+ rtl_w0w1_eri(tp,
+ 0x0dc,
+ ERIAR_MASK_0100,
+ 0x0000,
+ MagicPacket_v2,
+ ERIAR_EXGMAC);
+ break;
+ default:
+ tmp = ARRAY_SIZE(cfg);
+ break;
+ }
+
+ for (i = 0; i < tmp; i++) {
options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
if (wolopts & cfg[i].opt)
options |= cfg[i].mask;
@@ -2045,6 +2285,15 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp,
u32 val;
int mac_version;
} mac_info[] = {
+ /* 8168EP family. */
+ { 0x7cf00000, 0x50200000, RTL_GIGA_MAC_VER_51 },
+ { 0x7cf00000, 0x50100000, RTL_GIGA_MAC_VER_50 },
+ { 0x7cf00000, 0x50000000, RTL_GIGA_MAC_VER_49 },
+
+ /* 8168H family. */
+ { 0x7cf00000, 0x54100000, RTL_GIGA_MAC_VER_46 },
+ { 0x7cf00000, 0x54000000, RTL_GIGA_MAC_VER_45 },
+
/* 8168G family. */
{ 0x7cf00000, 0x5c800000, RTL_GIGA_MAC_VER_44 },
{ 0x7cf00000, 0x50900000, RTL_GIGA_MAC_VER_42 },
@@ -2140,6 +2389,14 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp,
tp->mac_version = tp->mii.supports_gmii ?
RTL_GIGA_MAC_VER_42 :
RTL_GIGA_MAC_VER_43;
+ } else if (tp->mac_version == RTL_GIGA_MAC_VER_45) {
+ tp->mac_version = tp->mii.supports_gmii ?
+ RTL_GIGA_MAC_VER_45 :
+ RTL_GIGA_MAC_VER_47;
+ } else if (tp->mac_version == RTL_GIGA_MAC_VER_46) {
+ tp->mac_version = tp->mii.supports_gmii ?
+ RTL_GIGA_MAC_VER_46 :
+ RTL_GIGA_MAC_VER_48;
}
}
@@ -2801,8 +3058,8 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
* Fine Tune Switching regulator parameter
*/
rtl_writephy(tp, 0x1f, 0x0002);
- rtl_w1w0_phy(tp, 0x0b, 0x0010, 0x00ef);
- rtl_w1w0_phy(tp, 0x0c, 0xa200, 0x5d00);
+ rtl_w0w1_phy(tp, 0x0b, 0x0010, 0x00ef);
+ rtl_w0w1_phy(tp, 0x0c, 0xa200, 0x5d00);
if (rtl8168d_efuse_read(tp, 0x01) == 0xb1) {
static const struct phy_reg phy_reg_init[] = {
@@ -2851,8 +3108,8 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
/* Fine tune PLL performance */
rtl_writephy(tp, 0x1f, 0x0002);
- rtl_w1w0_phy(tp, 0x02, 0x0100, 0x0600);
- rtl_w1w0_phy(tp, 0x03, 0x0000, 0xe000);
+ rtl_w0w1_phy(tp, 0x02, 0x0100, 0x0600);
+ rtl_w0w1_phy(tp, 0x03, 0x0000, 0xe000);
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x001b);
@@ -2949,8 +3206,8 @@ static void rtl8168d_2_hw_phy_config(struct rtl8169_private *tp)
/* Fine tune PLL performance */
rtl_writephy(tp, 0x1f, 0x0002);
- rtl_w1w0_phy(tp, 0x02, 0x0100, 0x0600);
- rtl_w1w0_phy(tp, 0x03, 0x0000, 0xe000);
+ rtl_w0w1_phy(tp, 0x02, 0x0100, 0x0600);
+ rtl_w0w1_phy(tp, 0x03, 0x0000, 0xe000);
/* Switching regulator Slew rate */
rtl_writephy(tp, 0x1f, 0x0002);
@@ -3078,32 +3335,32 @@ static void rtl8168e_1_hw_phy_config(struct rtl8169_private *tp)
/* DCO enable for 10M IDLE Power */
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x0023);
- rtl_w1w0_phy(tp, 0x17, 0x0006, 0x0000);
+ rtl_w0w1_phy(tp, 0x17, 0x0006, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* For impedance matching */
rtl_writephy(tp, 0x1f, 0x0002);
- rtl_w1w0_phy(tp, 0x08, 0x8000, 0x7f00);
+ rtl_w0w1_phy(tp, 0x08, 0x8000, 0x7f00);
rtl_writephy(tp, 0x1f, 0x0000);
/* PHY auto speed down */
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x002d);
- rtl_w1w0_phy(tp, 0x18, 0x0050, 0x0000);
+ rtl_w0w1_phy(tp, 0x18, 0x0050, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
- rtl_w1w0_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b86);
- rtl_w1w0_phy(tp, 0x06, 0x0001, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x0001, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x2000);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x2000);
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x0020);
- rtl_w1w0_phy(tp, 0x15, 0x0000, 0x1100);
+ rtl_w0w1_phy(tp, 0x15, 0x0000, 0x1100);
rtl_writephy(tp, 0x1f, 0x0006);
rtl_writephy(tp, 0x00, 0x5a00);
rtl_writephy(tp, 0x1f, 0x0000);
@@ -3167,39 +3424,39 @@ static void rtl8168e_2_hw_phy_config(struct rtl8169_private *tp)
/* For 4-corner performance improve */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b80);
- rtl_w1w0_phy(tp, 0x17, 0x0006, 0x0000);
+ rtl_w0w1_phy(tp, 0x17, 0x0006, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* PHY auto speed down */
rtl_writephy(tp, 0x1f, 0x0004);
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x002d);
- rtl_w1w0_phy(tp, 0x18, 0x0010, 0x0000);
+ rtl_w0w1_phy(tp, 0x18, 0x0010, 0x0000);
rtl_writephy(tp, 0x1f, 0x0002);
rtl_writephy(tp, 0x1f, 0x0000);
- rtl_w1w0_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
/* improve 10M EEE waveform */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b86);
- rtl_w1w0_phy(tp, 0x06, 0x0001, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x0001, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* Improve 2-pair detection performance */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x4000, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x4000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* EEE setting */
- rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_1111, 0x0000, 0x0003, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_1111, 0x0000, 0x0003, ERIAR_EXGMAC);
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x2000);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x2000);
rtl_writephy(tp, 0x1f, 0x0004);
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x0020);
- rtl_w1w0_phy(tp, 0x15, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x15, 0x0000, 0x0100);
rtl_writephy(tp, 0x1f, 0x0002);
rtl_writephy(tp, 0x1f, 0x0000);
rtl_writephy(tp, 0x0d, 0x0007);
@@ -3210,8 +3467,8 @@ static void rtl8168e_2_hw_phy_config(struct rtl8169_private *tp)
/* Green feature */
rtl_writephy(tp, 0x1f, 0x0003);
- rtl_w1w0_phy(tp, 0x19, 0x0000, 0x0001);
- rtl_w1w0_phy(tp, 0x10, 0x0000, 0x0400);
+ rtl_w0w1_phy(tp, 0x19, 0x0000, 0x0001);
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0400);
rtl_writephy(tp, 0x1f, 0x0000);
/* Broken BIOS workaround: feed GigaMAC registers with MAC address. */
@@ -3223,20 +3480,20 @@ static void rtl8168f_hw_phy_config(struct rtl8169_private *tp)
/* For 4-corner performance improve */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b80);
- rtl_w1w0_phy(tp, 0x06, 0x0006, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x0006, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* PHY auto speed down */
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x002d);
- rtl_w1w0_phy(tp, 0x18, 0x0010, 0x0000);
+ rtl_w0w1_phy(tp, 0x18, 0x0010, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
- rtl_w1w0_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
/* Improve 10M EEE waveform */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b86);
- rtl_w1w0_phy(tp, 0x06, 0x0001, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x0001, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
}
@@ -3286,7 +3543,7 @@ static void rtl8168f_1_hw_phy_config(struct rtl8169_private *tp)
/* Improve 2-pair detection performance */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x4000, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x4000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
}
@@ -3342,7 +3599,7 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp)
/* Improve 2-pair detection performance */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x4000, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x4000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
rtl_writephy_batch(tp, phy_reg_init, ARRAY_SIZE(phy_reg_init));
@@ -3350,36 +3607,36 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp)
/* Modify green table for giga */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b54);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0800);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0800);
rtl_writephy(tp, 0x05, 0x8b5d);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0800);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0800);
rtl_writephy(tp, 0x05, 0x8a7c);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0100);
rtl_writephy(tp, 0x05, 0x8a7f);
- rtl_w1w0_phy(tp, 0x06, 0x0100, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x0100, 0x0000);
rtl_writephy(tp, 0x05, 0x8a82);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0100);
rtl_writephy(tp, 0x05, 0x8a85);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0100);
rtl_writephy(tp, 0x05, 0x8a88);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x0100);
rtl_writephy(tp, 0x1f, 0x0000);
/* uc same-seed solution */
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x06, 0x8000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0000);
/* eee setting */
- rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x00, 0x03, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x00, 0x03, ERIAR_EXGMAC);
rtl_writephy(tp, 0x1f, 0x0005);
rtl_writephy(tp, 0x05, 0x8b85);
- rtl_w1w0_phy(tp, 0x06, 0x0000, 0x2000);
+ rtl_w0w1_phy(tp, 0x06, 0x0000, 0x2000);
rtl_writephy(tp, 0x1f, 0x0004);
rtl_writephy(tp, 0x1f, 0x0007);
rtl_writephy(tp, 0x1e, 0x0020);
- rtl_w1w0_phy(tp, 0x15, 0x0000, 0x0100);
+ rtl_w0w1_phy(tp, 0x15, 0x0000, 0x0100);
rtl_writephy(tp, 0x1f, 0x0000);
rtl_writephy(tp, 0x0d, 0x0007);
rtl_writephy(tp, 0x0e, 0x003c);
@@ -3389,8 +3646,8 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp)
/* Green feature */
rtl_writephy(tp, 0x1f, 0x0003);
- rtl_w1w0_phy(tp, 0x19, 0x0000, 0x0001);
- rtl_w1w0_phy(tp, 0x10, 0x0000, 0x0400);
+ rtl_w0w1_phy(tp, 0x19, 0x0000, 0x0001);
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0400);
rtl_writephy(tp, 0x1f, 0x0000);
}
@@ -3401,45 +3658,45 @@ static void rtl8168g_1_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0a46);
if (rtl_readphy(tp, 0x10) & 0x0100) {
rtl_writephy(tp, 0x1f, 0x0bcc);
- rtl_w1w0_phy(tp, 0x12, 0x0000, 0x8000);
+ rtl_w0w1_phy(tp, 0x12, 0x0000, 0x8000);
} else {
rtl_writephy(tp, 0x1f, 0x0bcc);
- rtl_w1w0_phy(tp, 0x12, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x12, 0x8000, 0x0000);
}
rtl_writephy(tp, 0x1f, 0x0a46);
if (rtl_readphy(tp, 0x13) & 0x0100) {
rtl_writephy(tp, 0x1f, 0x0c41);
- rtl_w1w0_phy(tp, 0x15, 0x0002, 0x0000);
+ rtl_w0w1_phy(tp, 0x15, 0x0002, 0x0000);
} else {
rtl_writephy(tp, 0x1f, 0x0c41);
- rtl_w1w0_phy(tp, 0x15, 0x0000, 0x0002);
+ rtl_w0w1_phy(tp, 0x15, 0x0000, 0x0002);
}
/* Enable PHY auto speed down */
rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w1w0_phy(tp, 0x11, 0x000c, 0x0000);
+ rtl_w0w1_phy(tp, 0x11, 0x000c, 0x0000);
rtl_writephy(tp, 0x1f, 0x0bcc);
- rtl_w1w0_phy(tp, 0x14, 0x0100, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x0100, 0x0000);
rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w1w0_phy(tp, 0x11, 0x00c0, 0x0000);
+ rtl_w0w1_phy(tp, 0x11, 0x00c0, 0x0000);
rtl_writephy(tp, 0x1f, 0x0a43);
rtl_writephy(tp, 0x13, 0x8084);
- rtl_w1w0_phy(tp, 0x14, 0x0000, 0x6000);
- rtl_w1w0_phy(tp, 0x10, 0x1003, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x6000);
+ rtl_w0w1_phy(tp, 0x10, 0x1003, 0x0000);
/* EEE auto-fallback function */
rtl_writephy(tp, 0x1f, 0x0a4b);
- rtl_w1w0_phy(tp, 0x11, 0x0004, 0x0000);
+ rtl_w0w1_phy(tp, 0x11, 0x0004, 0x0000);
/* Enable UC LPF tune function */
rtl_writephy(tp, 0x1f, 0x0a43);
rtl_writephy(tp, 0x13, 0x8012);
- rtl_w1w0_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
rtl_writephy(tp, 0x1f, 0x0c42);
- rtl_w1w0_phy(tp, 0x11, 0x4000, 0x2000);
+ rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
/* Improve SWR Efficiency */
rtl_writephy(tp, 0x1f, 0x0bcd);
@@ -3455,7 +3712,7 @@ static void rtl8168g_1_hw_phy_config(struct rtl8169_private *tp)
/* Check ALDPS bit, disable it if enabled */
rtl_writephy(tp, 0x1f, 0x0a43);
if (rtl_readphy(tp, 0x10) & 0x0004)
- rtl_w1w0_phy(tp, 0x10, 0x0000, 0x0004);
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0004);
rtl_writephy(tp, 0x1f, 0x0000);
}
@@ -3465,6 +3722,322 @@ static void rtl8168g_2_hw_phy_config(struct rtl8169_private *tp)
rtl_apply_firmware(tp);
}
+static void rtl8168h_1_hw_phy_config(struct rtl8169_private *tp)
+{
+ u16 dout_tapbin;
+ u32 data;
+
+ rtl_apply_firmware(tp);
+
+ /* CHN EST parameters adjust - giga master */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x809b);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0xf800);
+ rtl_writephy(tp, 0x13, 0x80a2);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0xff00);
+ rtl_writephy(tp, 0x13, 0x80a4);
+ rtl_w0w1_phy(tp, 0x14, 0x8500, 0xff00);
+ rtl_writephy(tp, 0x13, 0x809c);
+ rtl_w0w1_phy(tp, 0x14, 0xbd00, 0xff00);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* CHN EST parameters adjust - giga slave */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x80ad);
+ rtl_w0w1_phy(tp, 0x14, 0x7000, 0xf800);
+ rtl_writephy(tp, 0x13, 0x80b4);
+ rtl_w0w1_phy(tp, 0x14, 0x5000, 0xff00);
+ rtl_writephy(tp, 0x13, 0x80ac);
+ rtl_w0w1_phy(tp, 0x14, 0x4000, 0xff00);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* CHN EST parameters adjust - fnet */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x808e);
+ rtl_w0w1_phy(tp, 0x14, 0x1200, 0xff00);
+ rtl_writephy(tp, 0x13, 0x8090);
+ rtl_w0w1_phy(tp, 0x14, 0xe500, 0xff00);
+ rtl_writephy(tp, 0x13, 0x8092);
+ rtl_w0w1_phy(tp, 0x14, 0x9f00, 0xff00);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* enable R-tune & PGA-retune function */
+ dout_tapbin = 0;
+ rtl_writephy(tp, 0x1f, 0x0a46);
+ data = rtl_readphy(tp, 0x13);
+ data &= 3;
+ data <<= 2;
+ dout_tapbin |= data;
+ data = rtl_readphy(tp, 0x12);
+ data &= 0xc000;
+ data >>= 14;
+ dout_tapbin |= data;
+ dout_tapbin = ~(dout_tapbin^0x08);
+ dout_tapbin <<= 12;
+ dout_tapbin &= 0xf000;
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x827a);
+ rtl_w0w1_phy(tp, 0x14, dout_tapbin, 0xf000);
+ rtl_writephy(tp, 0x13, 0x827b);
+ rtl_w0w1_phy(tp, 0x14, dout_tapbin, 0xf000);
+ rtl_writephy(tp, 0x13, 0x827c);
+ rtl_w0w1_phy(tp, 0x14, dout_tapbin, 0xf000);
+ rtl_writephy(tp, 0x13, 0x827d);
+ rtl_w0w1_phy(tp, 0x14, dout_tapbin, 0xf000);
+
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x0811);
+ rtl_w0w1_phy(tp, 0x14, 0x0800, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0a42);
+ rtl_w0w1_phy(tp, 0x16, 0x0002, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* enable GPHY 10M */
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x11, 0x0800, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* SAR ADC performance */
+ rtl_writephy(tp, 0x1f, 0x0bca);
+ rtl_w0w1_phy(tp, 0x17, 0x4000, 0x3000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x803f);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x8047);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x804f);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x8057);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x805f);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x8067);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x13, 0x806f);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x3000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* disable phy pfm mode */
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x0080);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0004);
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+}
+
+static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp)
+{
+ u16 ioffset_p3, ioffset_p2, ioffset_p1, ioffset_p0;
+ u16 rlen;
+ u32 data;
+
+ rtl_apply_firmware(tp);
+
+ /* CHIN EST parameter update */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x808a);
+ rtl_w0w1_phy(tp, 0x14, 0x000a, 0x003f);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* enable R-tune & PGA-retune function */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x0811);
+ rtl_w0w1_phy(tp, 0x14, 0x0800, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0a42);
+ rtl_w0w1_phy(tp, 0x16, 0x0002, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* enable GPHY 10M */
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x11, 0x0800, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ r8168_mac_ocp_write(tp, 0xdd02, 0x807d);
+ data = r8168_mac_ocp_read(tp, 0xdd02);
+ ioffset_p3 = ((data & 0x80)>>7);
+ ioffset_p3 <<= 3;
+
+ data = r8168_mac_ocp_read(tp, 0xdd00);
+ ioffset_p3 |= ((data & (0xe000))>>13);
+ ioffset_p2 = ((data & (0x1e00))>>9);
+ ioffset_p1 = ((data & (0x01e0))>>5);
+ ioffset_p0 = ((data & 0x0010)>>4);
+ ioffset_p0 <<= 3;
+ ioffset_p0 |= (data & (0x07));
+ data = (ioffset_p3<<12)|(ioffset_p2<<8)|(ioffset_p1<<4)|(ioffset_p0);
+
+ if ((ioffset_p3 != 0x0f) || (ioffset_p2 != 0x0f) ||
+ (ioffset_p1 != 0x0f) || (ioffset_p0 == 0x0f)) {
+ rtl_writephy(tp, 0x1f, 0x0bcf);
+ rtl_writephy(tp, 0x16, data);
+ rtl_writephy(tp, 0x1f, 0x0000);
+ }
+
+ /* Modify rlen (TX LPF corner frequency) level */
+ rtl_writephy(tp, 0x1f, 0x0bcd);
+ data = rtl_readphy(tp, 0x16);
+ data &= 0x000f;
+ rlen = 0;
+ if (data > 3)
+ rlen = data - 3;
+ data = rlen | (rlen<<4) | (rlen<<8) | (rlen<<12);
+ rtl_writephy(tp, 0x17, data);
+ rtl_writephy(tp, 0x1f, 0x0bcd);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* disable phy pfm mode */
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x0080);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0004);
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+}
+
+static void rtl8168ep_1_hw_phy_config(struct rtl8169_private *tp)
+{
+ /* Enable PHY auto speed down */
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x11, 0x000c, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* patch 10M & ALDPS */
+ rtl_writephy(tp, 0x1f, 0x0bcc);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x0100);
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x11, 0x00c0, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x8084);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x6000);
+ rtl_w0w1_phy(tp, 0x10, 0x1003, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Enable EEE auto-fallback function */
+ rtl_writephy(tp, 0x1f, 0x0a4b);
+ rtl_w0w1_phy(tp, 0x11, 0x0004, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Enable UC LPF tune function */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x8012);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* set rg_sel_sdm_rate */
+ rtl_writephy(tp, 0x1f, 0x0c42);
+ rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0004);
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+}
+
+static void rtl8168ep_2_hw_phy_config(struct rtl8169_private *tp)
+{
+ /* patch 10M & ALDPS */
+ rtl_writephy(tp, 0x1f, 0x0bcc);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x0100);
+ rtl_writephy(tp, 0x1f, 0x0a44);
+ rtl_w0w1_phy(tp, 0x11, 0x00c0, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x8084);
+ rtl_w0w1_phy(tp, 0x14, 0x0000, 0x6000);
+ rtl_w0w1_phy(tp, 0x10, 0x1003, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Enable UC LPF tune function */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x8012);
+ rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Set rg_sel_sdm_rate */
+ rtl_writephy(tp, 0x1f, 0x0c42);
+ rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Channel estimation parameters */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x80f3);
+ rtl_w0w1_phy(tp, 0x14, 0x8b00, ~0x8bff);
+ rtl_writephy(tp, 0x13, 0x80f0);
+ rtl_w0w1_phy(tp, 0x14, 0x3a00, ~0x3aff);
+ rtl_writephy(tp, 0x13, 0x80ef);
+ rtl_w0w1_phy(tp, 0x14, 0x0500, ~0x05ff);
+ rtl_writephy(tp, 0x13, 0x80f6);
+ rtl_w0w1_phy(tp, 0x14, 0x6e00, ~0x6eff);
+ rtl_writephy(tp, 0x13, 0x80ec);
+ rtl_w0w1_phy(tp, 0x14, 0x6800, ~0x68ff);
+ rtl_writephy(tp, 0x13, 0x80ed);
+ rtl_w0w1_phy(tp, 0x14, 0x7c00, ~0x7cff);
+ rtl_writephy(tp, 0x13, 0x80f2);
+ rtl_w0w1_phy(tp, 0x14, 0xf400, ~0xf4ff);
+ rtl_writephy(tp, 0x13, 0x80f4);
+ rtl_w0w1_phy(tp, 0x14, 0x8500, ~0x85ff);
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x8110);
+ rtl_w0w1_phy(tp, 0x14, 0xa800, ~0xa8ff);
+ rtl_writephy(tp, 0x13, 0x810f);
+ rtl_w0w1_phy(tp, 0x14, 0x1d00, ~0x1dff);
+ rtl_writephy(tp, 0x13, 0x8111);
+ rtl_w0w1_phy(tp, 0x14, 0xf500, ~0xf5ff);
+ rtl_writephy(tp, 0x13, 0x8113);
+ rtl_w0w1_phy(tp, 0x14, 0x6100, ~0x61ff);
+ rtl_writephy(tp, 0x13, 0x8115);
+ rtl_w0w1_phy(tp, 0x14, 0x9200, ~0x92ff);
+ rtl_writephy(tp, 0x13, 0x810e);
+ rtl_w0w1_phy(tp, 0x14, 0x0400, ~0x04ff);
+ rtl_writephy(tp, 0x13, 0x810c);
+ rtl_w0w1_phy(tp, 0x14, 0x7c00, ~0x7cff);
+ rtl_writephy(tp, 0x13, 0x810b);
+ rtl_w0w1_phy(tp, 0x14, 0x5a00, ~0x5aff);
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ rtl_writephy(tp, 0x13, 0x80d1);
+ rtl_w0w1_phy(tp, 0x14, 0xff00, ~0xffff);
+ rtl_writephy(tp, 0x13, 0x80cd);
+ rtl_w0w1_phy(tp, 0x14, 0x9e00, ~0x9eff);
+ rtl_writephy(tp, 0x13, 0x80d3);
+ rtl_w0w1_phy(tp, 0x14, 0x0e00, ~0x0eff);
+ rtl_writephy(tp, 0x13, 0x80d5);
+ rtl_w0w1_phy(tp, 0x14, 0xca00, ~0xcaff);
+ rtl_writephy(tp, 0x13, 0x80d7);
+ rtl_w0w1_phy(tp, 0x14, 0x8400, ~0x84ff);
+
+ /* Force PWM-mode */
+ rtl_writephy(tp, 0x1f, 0x0bcd);
+ rtl_writephy(tp, 0x14, 0x5065);
+ rtl_writephy(tp, 0x14, 0xd065);
+ rtl_writephy(tp, 0x1f, 0x0bc8);
+ rtl_writephy(tp, 0x12, 0x00ed);
+ rtl_writephy(tp, 0x1f, 0x0bcd);
+ rtl_writephy(tp, 0x14, 0x1065);
+ rtl_writephy(tp, 0x14, 0x9065);
+ rtl_writephy(tp, 0x14, 0x1065);
+ rtl_writephy(tp, 0x1f, 0x0000);
+
+ /* Check ALDPS bit, disable it if enabled */
+ rtl_writephy(tp, 0x1f, 0x0a43);
+ if (rtl_readphy(tp, 0x10) & 0x0004)
+ rtl_w0w1_phy(tp, 0x10, 0x0000, 0x0004);
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+}
+
static void rtl8102e_hw_phy_config(struct rtl8169_private *tp)
{
static const struct phy_reg phy_reg_init[] = {
@@ -3655,6 +4228,22 @@ static void rtl_hw_phy_config(struct net_device *dev)
case RTL_GIGA_MAC_VER_44:
rtl8168g_2_hw_phy_config(tp);
break;
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_47:
+ rtl8168h_1_hw_phy_config(tp);
+ break;
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_48:
+ rtl8168h_2_hw_phy_config(tp);
+ break;
+
+ case RTL_GIGA_MAC_VER_49:
+ rtl8168ep_1_hw_phy_config(tp);
+ break;
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ rtl8168ep_2_hw_phy_config(tp);
+ break;
case RTL_GIGA_MAC_VER_41:
default:
@@ -3866,6 +4455,13 @@ static void rtl_init_mdio_ops(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_43:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
ops->write = r8168g_mdio_write;
ops->read = r8168g_mdio_read;
break;
@@ -3920,6 +4516,13 @@ static void rtl_wol_suspend_quirk(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_43:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
RTL_W32(RxConfig, RTL_R32(RxConfig) |
AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
break;
@@ -3988,6 +4591,10 @@ static void r810x_pll_power_up(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_13:
case RTL_GIGA_MAC_VER_16:
break;
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ RTL_W8(PMCH, RTL_R8(PMCH) | 0xc0);
+ break;
default:
RTL_W8(PMCH, RTL_R8(PMCH) | 0x80);
break;
@@ -4060,8 +4667,11 @@ static void r8168_pll_power_down(struct rtl8169_private *tp)
if ((tp->mac_version == RTL_GIGA_MAC_VER_27 ||
tp->mac_version == RTL_GIGA_MAC_VER_28 ||
- tp->mac_version == RTL_GIGA_MAC_VER_31) &&
- r8168dp_check_dash(tp)) {
+ tp->mac_version == RTL_GIGA_MAC_VER_31 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_49 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_50 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_51) &&
+ r8168_check_dash(tp)) {
return;
}
@@ -4088,12 +4698,19 @@ static void r8168_pll_power_down(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_31:
case RTL_GIGA_MAC_VER_32:
case RTL_GIGA_MAC_VER_33:
+ case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
RTL_W8(PMCH, RTL_R8(PMCH) & ~0x80);
break;
case RTL_GIGA_MAC_VER_40:
case RTL_GIGA_MAC_VER_41:
- rtl_w1w0_eri(tp, 0x1a8, ERIAR_MASK_1111, 0x00000000,
+ case RTL_GIGA_MAC_VER_49:
+ rtl_w0w1_eri(tp, 0x1a8, ERIAR_MASK_1111, 0x00000000,
0xfc000000, ERIAR_EXGMAC);
+ RTL_W8(PMCH, RTL_R8(PMCH) & ~0x80);
break;
}
}
@@ -4112,9 +4729,18 @@ static void r8168_pll_power_up(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_33:
RTL_W8(PMCH, RTL_R8(PMCH) | 0x80);
break;
+ case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ RTL_W8(PMCH, RTL_R8(PMCH) | 0xc0);
+ break;
case RTL_GIGA_MAC_VER_40:
case RTL_GIGA_MAC_VER_41:
- rtl_w1w0_eri(tp, 0x1a8, ERIAR_MASK_1111, 0xfc000000,
+ case RTL_GIGA_MAC_VER_49:
+ RTL_W8(PMCH, RTL_R8(PMCH) | 0xc0);
+ rtl_w0w1_eri(tp, 0x1a8, ERIAR_MASK_1111, 0xfc000000,
0x00000000, ERIAR_EXGMAC);
break;
}
@@ -4154,6 +4780,8 @@ static void rtl_init_pll_power_ops(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_37:
case RTL_GIGA_MAC_VER_39:
case RTL_GIGA_MAC_VER_43:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
ops->down = r810x_pll_power_down;
ops->up = r810x_pll_power_up;
break;
@@ -4183,6 +4811,11 @@ static void rtl_init_pll_power_ops(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_41:
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
ops->down = r8168_pll_power_down;
ops->up = r8168_pll_power_up;
break;
@@ -4233,6 +4866,13 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_43:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
RTL_W32(RxConfig, RX128_INT_EN | RX_DMA_BURST | RX_EARLY_OFF);
break;
default:
@@ -4394,6 +5034,13 @@ static void rtl_init_jumbo_ops(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_43:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
default:
ops->disable = NULL;
ops->enable = NULL;
@@ -4415,6 +5062,8 @@ static void rtl_hw_reset(struct rtl8169_private *tp)
RTL_W8(ChipCmd, CmdReset);
rtl_udelay_loop_wait_low(tp, &rtl_chipcmd_cond, 100, 100);
+
+ netdev_reset_queue(tp->dev);
}
static void rtl_request_uncached_firmware(struct rtl8169_private *tp)
@@ -4496,15 +5145,22 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
tp->mac_version == RTL_GIGA_MAC_VER_31) {
rtl_udelay_loop_wait_low(tp, &rtl_npq_cond, 20, 42*42);
} else if (tp->mac_version == RTL_GIGA_MAC_VER_34 ||
- tp->mac_version == RTL_GIGA_MAC_VER_35 ||
- tp->mac_version == RTL_GIGA_MAC_VER_36 ||
- tp->mac_version == RTL_GIGA_MAC_VER_37 ||
- tp->mac_version == RTL_GIGA_MAC_VER_40 ||
- tp->mac_version == RTL_GIGA_MAC_VER_41 ||
- tp->mac_version == RTL_GIGA_MAC_VER_42 ||
- tp->mac_version == RTL_GIGA_MAC_VER_43 ||
- tp->mac_version == RTL_GIGA_MAC_VER_44 ||
- tp->mac_version == RTL_GIGA_MAC_VER_38) {
+ tp->mac_version == RTL_GIGA_MAC_VER_35 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_36 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_37 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_38 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_40 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_41 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_42 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_43 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_44 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_45 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_46 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_47 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_48 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_49 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_50 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_51) {
RTL_W8(ChipCmd, RTL_R8(ChipCmd) | StopReq);
rtl_udelay_loop_wait_high(tp, &rtl_txcfg_empty_cond, 100, 666);
} else {
@@ -4674,7 +5330,7 @@ static void rtl_hw_start_8169(struct net_device *dev)
if (tp->mac_version == RTL_GIGA_MAC_VER_02 ||
tp->mac_version == RTL_GIGA_MAC_VER_03) {
- dprintk("Set MAC Reg C+CR Offset 0xE0. "
+ dprintk("Set MAC Reg C+CR Offset 0xe0. "
"Bit-3 and bit-14 MUST be 1\n");
tp->cp_cmd |= (1 << 14);
}
@@ -4709,7 +5365,7 @@ static void rtl_hw_start_8169(struct net_device *dev)
rtl_set_rx_mode(dev);
/* no early-rx interrupts */
- RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
}
static void rtl_csi_write(struct rtl8169_private *tp, int addr, int value)
@@ -5172,8 +5828,8 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xd0, ERIAR_MASK_1111, 0x07ff0060, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC);
RTL_W8(MaxTxPacketSize, EarlySize);
@@ -5203,10 +5859,10 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp)
rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xc8, ERIAR_MASK_1111, 0x00100002, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x1d0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1d0, ERIAR_MASK_0001, 0x10, 0x00, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xcc, ERIAR_MASK_1111, 0x00000050, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xd0, ERIAR_MASK_1111, 0x00000060, ERIAR_EXGMAC);
@@ -5235,7 +5891,7 @@ static void rtl_hw_start_8168f_1(struct rtl8169_private *tp)
rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1));
- rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00, ERIAR_EXGMAC);
/* Adjust EEE LED frequency */
RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07);
@@ -5255,7 +5911,7 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp)
rtl_ephy_init(tp, e_info_8168f_1, ARRAY_SIZE(e_info_8168f_1));
- rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0x0000, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0x0000, ERIAR_EXGMAC);
}
static void rtl_hw_start_8168g_1(struct rtl8169_private *tp)
@@ -5274,8 +5930,8 @@ static void rtl_hw_start_8168g_1(struct rtl8169_private *tp)
rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
rtl_eri_write(tp, 0x2f8, ERIAR_MASK_0011, 0x1d8f, ERIAR_EXGMAC);
RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
@@ -5288,8 +5944,8 @@ static void rtl_hw_start_8168g_1(struct rtl8169_private *tp)
/* Adjust EEE LED frequency */
RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07);
- rtl_w1w0_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC);
rtl_pcie_state_l2l3_enable(tp, false);
}
@@ -5331,6 +5987,219 @@ static void rtl_hw_start_8411_2(struct rtl8169_private *tp)
rtl_ephy_init(tp, e_info_8411_2, ARRAY_SIZE(e_info_8411_2));
}
+static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+ u16 rg_saw_cnt;
+ u32 data;
+ static const struct ephy_info e_info_8168h_1[] = {
+ { 0x1e, 0x0800, 0x0001 },
+ { 0x1d, 0x0000, 0x0800 },
+ { 0x05, 0xffff, 0x2089 },
+ { 0x06, 0xffff, 0x5881 },
+ { 0x04, 0xffff, 0x154a },
+ { 0x01, 0xffff, 0x068b }
+ };
+
+ /* disable ASPM and clock request before accessing ephy */
+ RTL_W8(Config2, RTL_R8(Config2) & ~ClkReqEn);
+ RTL_W8(Config5, RTL_R8(Config5) & ~ASPM_en);
+ rtl_ephy_init(tp, e_info_8168h_1, ARRAY_SIZE(e_info_8168h_1));
+
+ RTL_W32(TxConfig, RTL_R32(TxConfig) | TXCFG_AUTO_FIFO);
+
+ rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC);
+
+ rtl_csi_access_enable_1(tp);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
+
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_1111, 0x0010, 0x00, ERIAR_EXGMAC);
+
+ rtl_w0w1_eri(tp, 0xd4, ERIAR_MASK_1111, 0x1f00, 0x00, ERIAR_EXGMAC);
+
+ rtl_eri_write(tp, 0x5f0, ERIAR_MASK_0011, 0x4f87, ERIAR_EXGMAC);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ RTL_W32(MISC, RTL_R32(MISC) & ~RXDV_GATED_EN);
+ RTL_W8(MaxTxPacketSize, EarlySize);
+
+ rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
+
+ /* Adjust EEE LED frequency */
+ RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07);
+
+ RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN);
+ RTL_W8(DLLPR, RTL_R8(MISC_1) & ~PFM_D3COLD_EN);
+
+ RTL_W8(DLLPR, RTL_R8(DLLPR) & ~TX_10M_PS_EN);
+
+ rtl_w0w1_eri(tp, 0x1b0, ERIAR_MASK_0011, 0x0000, 0x1000, ERIAR_EXGMAC);
+
+ rtl_pcie_state_l2l3_enable(tp, false);
+
+ rtl_writephy(tp, 0x1f, 0x0c42);
+ rg_saw_cnt = rtl_readphy(tp, 0x13);
+ rtl_writephy(tp, 0x1f, 0x0000);
+ if (rg_saw_cnt > 0) {
+ u16 sw_cnt_1ms_ini;
+
+ sw_cnt_1ms_ini = 16000000/rg_saw_cnt;
+ sw_cnt_1ms_ini &= 0x0fff;
+ data = r8168_mac_ocp_read(tp, 0xd412);
+ data &= 0x0fff;
+ data |= sw_cnt_1ms_ini;
+ r8168_mac_ocp_write(tp, 0xd412, data);
+ }
+
+ data = r8168_mac_ocp_read(tp, 0xe056);
+ data &= 0xf0;
+ data |= 0x07;
+ r8168_mac_ocp_write(tp, 0xe056, data);
+
+ data = r8168_mac_ocp_read(tp, 0xe052);
+ data &= 0x8008;
+ data |= 0x6000;
+ r8168_mac_ocp_write(tp, 0xe052, data);
+
+ data = r8168_mac_ocp_read(tp, 0xe0d6);
+ data &= 0x01ff;
+ data |= 0x017f;
+ r8168_mac_ocp_write(tp, 0xe0d6, data);
+
+ data = r8168_mac_ocp_read(tp, 0xd420);
+ data &= 0x0fff;
+ data |= 0x047f;
+ r8168_mac_ocp_write(tp, 0xd420, data);
+
+ r8168_mac_ocp_write(tp, 0xe63e, 0x0001);
+ r8168_mac_ocp_write(tp, 0xe63e, 0x0000);
+ r8168_mac_ocp_write(tp, 0xc094, 0x0000);
+ r8168_mac_ocp_write(tp, 0xc09e, 0x0000);
+}
+
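As a worked example of the 1 ms software counter setup above: rg_saw_cnt is read from PHY page 0x0c42, register 0x13. If it reported, say, 8000 (an illustrative reading, not a documented value), then 16000000 / 8000 = 2000 = 0x7d0, which already fits in the 12 bits kept by the & 0x0fff truncation; that value is then OR-ed into MAC OCP register 0xd412 together with the mask applied just before it.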
+static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ struct pci_dev *pdev = tp->pci_dev;
+
+ RTL_W32(TxConfig, RTL_R32(TxConfig) | TXCFG_AUTO_FIFO);
+
+ rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x2f, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x5f, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00100006, ERIAR_EXGMAC);
+
+ rtl_csi_access_enable_1(tp);
+
+ rtl_tx_performance_tweak(pdev, 0x5 << MAX_READ_REQUEST_SHIFT);
+
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
+
+ rtl_w0w1_eri(tp, 0xd4, ERIAR_MASK_1111, 0x1f80, 0x00, ERIAR_EXGMAC);
+
+ rtl_eri_write(tp, 0x5f0, ERIAR_MASK_0011, 0x4f87, ERIAR_EXGMAC);
+
+ RTL_W8(ChipCmd, CmdTxEnb | CmdRxEnb);
+ RTL_W32(MISC, RTL_R32(MISC) & ~RXDV_GATED_EN);
+ RTL_W8(MaxTxPacketSize, EarlySize);
+
+ rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
+ rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
+
+ /* Adjust EEE LED frequency */
+ RTL_W8(EEE_LED, RTL_R8(EEE_LED) & ~0x07);
+
+ rtl_w0w1_eri(tp, 0x2fc, ERIAR_MASK_0001, 0x01, 0x06, ERIAR_EXGMAC);
+
+ RTL_W8(DLLPR, RTL_R8(DLLPR) & ~TX_10M_PS_EN);
+
+ rtl_pcie_state_l2l3_enable(tp, false);
+}
+
+static void rtl_hw_start_8168ep_1(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ static const struct ephy_info e_info_8168ep_1[] = {
+ { 0x00, 0xffff, 0x10ab },
+ { 0x06, 0xffff, 0xf030 },
+ { 0x08, 0xffff, 0x2006 },
+ { 0x0d, 0xffff, 0x1666 },
+ { 0x0c, 0x3ff0, 0x0000 }
+ };
+
+ /* disable ASPM and clock request before accessing ephy */
+ RTL_W8(Config2, RTL_R8(Config2) & ~ClkReqEn);
+ RTL_W8(Config5, RTL_R8(Config5) & ~ASPM_en);
+ rtl_ephy_init(tp, e_info_8168ep_1, ARRAY_SIZE(e_info_8168ep_1));
+
+ rtl_hw_start_8168ep(tp);
+}
+
+static void rtl_hw_start_8168ep_2(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ static const struct ephy_info e_info_8168ep_2[] = {
+ { 0x00, 0xffff, 0x10a3 },
+ { 0x19, 0xffff, 0xfc00 },
+ { 0x1e, 0xffff, 0x20ea }
+ };
+
+ /* disable ASPM and clock request before accessing ephy */
+ RTL_W8(Config2, RTL_R8(Config2) & ~ClkReqEn);
+ RTL_W8(Config5, RTL_R8(Config5) & ~ASPM_en);
+ rtl_ephy_init(tp, e_info_8168ep_2, ARRAY_SIZE(e_info_8168ep_2));
+
+ rtl_hw_start_8168ep(tp);
+
+ RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN);
+ RTL_W8(DLLPR, RTL_R8(MISC_1) & ~PFM_D3COLD_EN);
+}
+
+static void rtl_hw_start_8168ep_3(struct rtl8169_private *tp)
+{
+ void __iomem *ioaddr = tp->mmio_addr;
+ u32 data;
+ static const struct ephy_info e_info_8168ep_3[] = {
+ { 0x00, 0xffff, 0x10a3 },
+ { 0x19, 0xffff, 0x7c00 },
+ { 0x1e, 0xffff, 0x20eb },
+ { 0x0d, 0xffff, 0x1666 }
+ };
+
+ /* disable ASPM and clock request before accessing ephy */
+ RTL_W8(Config2, RTL_R8(Config2) & ~ClkReqEn);
+ RTL_W8(Config5, RTL_R8(Config5) & ~ASPM_en);
+ rtl_ephy_init(tp, e_info_8168ep_3, ARRAY_SIZE(e_info_8168ep_3));
+
+ rtl_hw_start_8168ep(tp);
+
+ RTL_W8(DLLPR, RTL_R8(DLLPR) & ~PFM_EN);
+ RTL_W8(DLLPR, RTL_R8(MISC_1) & ~PFM_D3COLD_EN);
+
+ data = r8168_mac_ocp_read(tp, 0xd3e2);
+ data &= 0xf000;
+ data |= 0x0271;
+ r8168_mac_ocp_write(tp, 0xd3e2, data);
+
+ data = r8168_mac_ocp_read(tp, 0xd3e4);
+ data &= 0xff00;
+ r8168_mac_ocp_write(tp, 0xd3e4, data);
+
+ data = r8168_mac_ocp_read(tp, 0xe860);
+ data |= 0x0080;
+ r8168_mac_ocp_write(tp, 0xe860, data);
+}
+
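The three MAC OCP updates at the end of rtl_hw_start_8168ep_3() are plain mask-and-merge sequences; a small self-contained sketch of that pattern (the helper below is illustrative, not a driver API):

/* Illustrative only: keep the bits selected by "keep", merge in "set".
 * For OCP register 0xd3e2 above this is keep = 0xf000, set = 0x0271.
 */
static unsigned int ocp_mask_merge(unsigned int old, unsigned int keep,
				   unsigned int set)
{
	return (old & keep) | set;
}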
static void rtl_hw_start_8168(struct net_device *dev)
{
struct rtl8169_private *tp = netdev_priv(dev);
@@ -5441,6 +6310,23 @@ static void rtl_hw_start_8168(struct net_device *dev)
rtl_hw_start_8411_2(tp);
break;
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ rtl_hw_start_8168h_1(tp);
+ break;
+
+ case RTL_GIGA_MAC_VER_49:
+ rtl_hw_start_8168ep_1(tp);
+ break;
+
+ case RTL_GIGA_MAC_VER_50:
+ rtl_hw_start_8168ep_2(tp);
+ break;
+
+ case RTL_GIGA_MAC_VER_51:
+ rtl_hw_start_8168ep_3(tp);
+ break;
+
default:
printk(KERN_ERR PFX "%s: unknown chipset (mac_version = %d).\n",
dev->name, tp->mac_version);
@@ -5453,7 +6339,7 @@ static void rtl_hw_start_8168(struct net_device *dev)
rtl_set_rx_mode(dev);
- RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xF000);
+ RTL_W16(MultiIntr, RTL_R16(MultiIntr) & 0xf000);
}
#define R810X_CPCMD_QUIRK_MASK (\
@@ -5576,11 +6462,11 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp)
rtl_eri_write(tp, 0xc8, ERIAR_MASK_1111, 0x00000002, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xe8, ERIAR_MASK_1111, 0x00000006, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x00, 0x01, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0xdc, ERIAR_MASK_0001, 0x01, 0x00, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000, ERIAR_EXGMAC);
- rtl_w1w0_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC);
+ rtl_w0w1_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0e00, 0xff00, ERIAR_EXGMAC);
rtl_pcie_state_l2l3_enable(tp, false);
}
@@ -5656,6 +6542,10 @@ static void rtl_hw_start_8101(struct net_device *dev)
case RTL_GIGA_MAC_VER_43:
rtl_hw_start_8168g_2(tp);
break;
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ rtl_hw_start_8168h_1(tp);
+ break;
}
RTL_W8(Cfg9346, Cfg9346_Lock);
@@ -5896,7 +6786,7 @@ static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
{
struct skb_shared_info *info = skb_shinfo(skb);
unsigned int cur_frag, entry;
- struct TxDesc * uninitialized_var(txd);
+ struct TxDesc *uninitialized_var(txd);
struct device *d = &tp->pci_dev->dev;
entry = tp->cur_tx;
@@ -6183,6 +7073,8 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
txd->opts2 = cpu_to_le32(opts[1]);
+ netdev_sent_queue(dev, skb->len);
+
skb_tx_timestamp(skb);
wmb();
@@ -6282,6 +7174,7 @@ static void rtl8169_pcierr_interrupt(struct net_device *dev)
static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp)
{
unsigned int dirty_tx, tx_left;
+ unsigned int bytes_compl = 0, pkts_compl = 0;
dirty_tx = tp->dirty_tx;
smp_rmb();
@@ -6300,10 +7193,8 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp)
rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb,
tp->TxDescArray + entry);
if (status & LastFrag) {
- u64_stats_update_begin(&tp->tx_stats.syncp);
- tp->tx_stats.packets++;
- tp->tx_stats.bytes += tx_skb->skb->len;
- u64_stats_update_end(&tp->tx_stats.syncp);
+ pkts_compl++;
+ bytes_compl += tx_skb->skb->len;
dev_kfree_skb_any(tx_skb->skb);
tx_skb->skb = NULL;
}
@@ -6312,6 +7203,13 @@ static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp)
}
if (tp->dirty_tx != dirty_tx) {
+ netdev_completed_queue(tp->dev, pkts_compl, bytes_compl);
+
+ u64_stats_update_begin(&tp->tx_stats.syncp);
+ tp->tx_stats.packets += pkts_compl;
+ tp->tx_stats.bytes += bytes_compl;
+ u64_stats_update_end(&tp->tx_stats.syncp);
+
tp->dirty_tx = dirty_tx;
/* Sync with rtl8169_start_xmit:
* - publish dirty_tx ring index (write barrier)
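The netdev_reset_queue()/netdev_sent_queue()/netdev_completed_queue() calls added to the r8169 paths above are the standard byte queue limit (BQL) hooks; a minimal sketch of how the three calls pair up in a driver (everything except the netdev_* helpers is hypothetical):

#include <linux/netdevice.h>

/* Hypothetical skeleton; only the placement of the BQL calls matters. */
static void example_xmit_one(struct net_device *dev, struct sk_buff *skb)
{
	/* ...fill a TX descriptor and ring the doorbell... */
	netdev_sent_queue(dev, skb->len);	/* account bytes handed to HW */
}

static void example_tx_completion(struct net_device *dev)
{
	unsigned int pkts = 0, bytes = 0;

	/* ...walk completed descriptors, summing pkts and bytes... */
	netdev_completed_queue(dev, pkts, bytes); /* may re-wake the queue */
}

static void example_hw_reset(struct net_device *dev)
{
	/* ...stop DMA and drop any pending descriptors... */
	netdev_reset_queue(dev);		/* forget in-flight accounting */
}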
@@ -6957,9 +7855,13 @@ static void rtl_remove_one(struct pci_dev *pdev)
struct net_device *dev = pci_get_drvdata(pdev);
struct rtl8169_private *tp = netdev_priv(dev);
- if (tp->mac_version == RTL_GIGA_MAC_VER_27 ||
- tp->mac_version == RTL_GIGA_MAC_VER_28 ||
- tp->mac_version == RTL_GIGA_MAC_VER_31) {
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_27 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_28 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_31 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_49 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_50 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_51) &&
+ r8168_check_dash(tp)) {
rtl8168_driver_stop(tp);
}
@@ -7111,6 +8013,13 @@ static void rtl_hw_initialize(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_42:
case RTL_GIGA_MAC_VER_43:
case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
rtl_hw_init_8168g(tp);
break;
@@ -7248,8 +8157,34 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
RTL_W8(Cfg9346, Cfg9346_Unlock);
RTL_W8(Config1, RTL_R8(Config1) | PMEnable);
RTL_W8(Config5, RTL_R8(Config5) & (BWF | MWF | UWF | LanWake | PMEStatus));
- if ((RTL_R8(Config3) & (LinkUp | MagicPacket)) != 0)
- tp->features |= RTL_FEATURE_WOL;
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+ case RTL_GIGA_MAC_VER_35:
+ case RTL_GIGA_MAC_VER_36:
+ case RTL_GIGA_MAC_VER_37:
+ case RTL_GIGA_MAC_VER_38:
+ case RTL_GIGA_MAC_VER_40:
+ case RTL_GIGA_MAC_VER_41:
+ case RTL_GIGA_MAC_VER_42:
+ case RTL_GIGA_MAC_VER_43:
+ case RTL_GIGA_MAC_VER_44:
+ case RTL_GIGA_MAC_VER_45:
+ case RTL_GIGA_MAC_VER_46:
+ case RTL_GIGA_MAC_VER_47:
+ case RTL_GIGA_MAC_VER_48:
+ case RTL_GIGA_MAC_VER_49:
+ case RTL_GIGA_MAC_VER_50:
+ case RTL_GIGA_MAC_VER_51:
+ if (rtl_eri_read(tp, 0xdc, ERIAR_EXGMAC) & MagicPacket_v2)
+ tp->features |= RTL_FEATURE_WOL;
+ if ((RTL_R8(Config3) & LinkUp) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ break;
+ default:
+ if ((RTL_R8(Config3) & (LinkUp | MagicPacket)) != 0)
+ tp->features |= RTL_FEATURE_WOL;
+ break;
+ }
if ((RTL_R8(Config5) & (UWF | BWF | MWF)) != 0)
tp->features |= RTL_FEATURE_WOL;
tp->features |= rtl_try_msi(tp, cfg);
@@ -7276,6 +8211,30 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
u64_stats_init(&tp->tx_stats.syncp);
/* Get MAC address */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_35 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_36 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_37 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_38 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_40 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_41 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_42 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_43 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_44 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_45 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_46 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_47 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_48 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_49 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_50 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_51) {
+ u16 mac_addr[3];
+
+ *(u32 *)&mac_addr[0] = rtl_eri_read(tp, 0xe0, ERIAR_EXGMAC);
+ *(u16 *)&mac_addr[2] = rtl_eri_read(tp, 0xe4, ERIAR_EXGMAC);
+
+ if (is_valid_ether_addr((u8 *)mac_addr))
+ rtl_rar_set(tp, (u8 *)mac_addr);
+ }
for (i = 0; i < ETH_ALEN; i++)
dev->dev_addr[i] = RTL_R8(MAC0 + i);
@@ -7344,9 +8303,13 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
rtl_chip_infos[chipset].jumbo_tx_csum ? "ok" : "ko");
}
- if (tp->mac_version == RTL_GIGA_MAC_VER_27 ||
- tp->mac_version == RTL_GIGA_MAC_VER_28 ||
- tp->mac_version == RTL_GIGA_MAC_VER_31) {
+ if ((tp->mac_version == RTL_GIGA_MAC_VER_27 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_28 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_31 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_49 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_50 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_51) &&
+ r8168_check_dash(tp)) {
rtl8168_driver_start(tp);
}
diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
index 65c220f..3206098 100644
--- a/drivers/net/ethernet/sfc/tx.c
+++ b/drivers/net/ethernet/sfc/tx.c
@@ -78,7 +78,7 @@ static void efx_dequeue_buffer(struct efx_tx_queue *tx_queue,
if (buffer->flags & EFX_TX_BUF_SKB) {
(*pkts_compl)++;
(*bytes_compl) += buffer->skb->len;
- dev_kfree_skb_any((struct sk_buff *) buffer->skb);
+ dev_consume_skb_any((struct sk_buff *)buffer->skb);
netif_vdbg(tx_queue->efx, tx_done, tx_queue->efx->net_dev,
"TX queue %d transmission id %x complete\n",
tx_queue->queue, tx_queue->read_count);
diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c
index 9778cba..e88df9c 100644
--- a/drivers/net/ethernet/smsc/smc911x.c
+++ b/drivers/net/ethernet/smsc/smc911x.c
@@ -1927,9 +1927,6 @@ static int smc911x_probe(struct net_device *dev)
}
dev->irq = irq_canonicalize(dev->irq);
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(dev);
-
dev->netdev_ops = &smc911x_netdev_ops;
dev->watchdog_timeo = msecs_to_jiffies(watchdog);
dev->ethtool_ops = &smc911x_ethtool_ops;
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index bcaa41a..5e94d00 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -1967,9 +1967,6 @@ static int smc_probe(struct net_device *dev, void __iomem *ioaddr,
}
dev->irq = irq_canonicalize(dev->irq);
- /* Fill in the fields of the device structure with ethernet values. */
- ether_setup(dev);
-
dev->watchdog_timeo = msecs_to_jiffies(watchdog);
dev->netdev_ops = &smc_netdev_ops;
dev->ethtool_ops = &smc_ethtool_ops;
diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
index 5e13fa5..affb29d 100644
--- a/drivers/net/ethernet/smsc/smsc911x.c
+++ b/drivers/net/ethernet/smsc/smsc911x.c
@@ -2255,7 +2255,6 @@ static int smsc911x_init(struct net_device *dev)
if (smsc911x_soft_reset(pdata))
return -ENODEV;
- ether_setup(dev);
dev->flags |= IFF_MULTICAST;
netif_napi_add(dev, &pdata->napi, smsc911x_poll, SMSC_NAPI_WEIGHT);
dev->netdev_ops = &smsc911x_netdev_ops;
diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 2d09c11..b02d4a3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -26,6 +26,16 @@ config STMMAC_PLATFORM
If unsure, say N.
+config DWMAC_MESON
+ bool "Amlogic Meson dwmac support"
+ depends on STMMAC_PLATFORM && ARCH_MESON
+ help
+ Support for Ethernet controller on Amlogic Meson SoCs.
+
+ This selects the Amlogic Meson SoC glue layer support for
+ the stmmac device driver. This driver is used for Meson6 and
+ Meson8 SoCs.
+
config DWMAC_SOCFPGA
bool "SOCFPGA dwmac support"
depends on STMMAC_PLATFORM && MFD_SYSCON && (ARCH_SOCFPGA || COMPILE_TEST)
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index 18695eb..0533d0b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -1,6 +1,7 @@
obj-$(CONFIG_STMMAC_ETH) += stmmac.o
stmmac-$(CONFIG_STMMAC_PLATFORM) += stmmac_platform.o
stmmac-$(CONFIG_STMMAC_PCI) += stmmac_pci.o
+stmmac-$(CONFIG_DWMAC_MESON) += dwmac-meson.o
stmmac-$(CONFIG_DWMAC_SUNXI) += dwmac-sunxi.o
stmmac-$(CONFIG_DWMAC_STI) += dwmac-sti.o
stmmac-$(CONFIG_DWMAC_SOCFPGA) += dwmac-socfpga.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
new file mode 100644
index 0000000..d225a60
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
@@ -0,0 +1,67 @@
+/*
+ * Amlogic Meson DWMAC glue layer
+ *
+ * Copyright (C) 2014 Beniamino Galvani <b.galvani@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/io.h>
+#include <linux/ioport.h>
+#include <linux/platform_device.h>
+#include <linux/stmmac.h>
+
+#define ETHMAC_SPEED_100 BIT(1)
+
+struct meson_dwmac {
+ struct device *dev;
+ void __iomem *reg;
+};
+
+static void meson6_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+{
+ struct meson_dwmac *dwmac = priv;
+ unsigned int val;
+
+ val = readl(dwmac->reg);
+
+ switch (speed) {
+ case SPEED_10:
+ val &= ~ETHMAC_SPEED_100;
+ break;
+ case SPEED_100:
+ val |= ETHMAC_SPEED_100;
+ break;
+ }
+
+ writel(val, dwmac->reg);
+}
+
+static void *meson6_dwmac_setup(struct platform_device *pdev)
+{
+ struct meson_dwmac *dwmac;
+ struct resource *res;
+
+ dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
+ if (!dwmac)
+ return ERR_PTR(-ENOMEM);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ dwmac->reg = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(dwmac->reg))
+ return dwmac->reg;
+
+ return dwmac;
+}
+
+const struct stmmac_of_data meson6_dwmac_data = {
+ .setup = meson6_dwmac_setup,
+ .fix_mac_speed = meson6_dwmac_fix_mac_speed,
+};
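The new glue file only exports a table of callbacks; the platform driver finds it through the compatible string added to stmmac_dt_ids further down in this patch. A self-contained sketch of that of_device_id dispatch pattern (the struct and function names below are illustrative, not the stmmac core's):

#include <linux/err.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

/* Illustrative callback bundle, mirroring setup()/fix_mac_speed() above. */
struct glue_ops {
	void *(*setup)(struct platform_device *pdev);
	void (*fix_mac_speed)(void *priv, unsigned int speed);
};

static int probe_with_glue(struct platform_device *pdev,
			   const struct of_device_id *ids)
{
	const struct of_device_id *match = of_match_device(ids, &pdev->dev);
	const struct glue_ops *ops = match ? match->data : NULL;
	void *priv = NULL;

	if (ops && ops->setup) {
		priv = ops->setup(pdev);
		if (IS_ERR(priv))
			return PTR_ERR(priv);
	}

	/* ...later, whenever the link speed changes... */
	if (ops && ops->fix_mac_speed)
		ops->fix_mac_speed(priv, 100);

	return 0;
}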
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index ec632e6..3aad413 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -17,6 +17,7 @@
#include <linux/mfd/syscon.h>
#include <linux/of.h>
+#include <linux/of_address.h>
#include <linux/of_net.h>
#include <linux/phy.h>
#include <linux/regmap.h>
@@ -30,6 +31,12 @@
#define SYSMGR_EMACGRP_CTRL_PHYSEL_WIDTH 2
#define SYSMGR_EMACGRP_CTRL_PHYSEL_MASK 0x00000003
+#define EMAC_SPLITTER_CTRL_REG 0x0
+#define EMAC_SPLITTER_CTRL_SPEED_MASK 0x3
+#define EMAC_SPLITTER_CTRL_SPEED_10 0x2
+#define EMAC_SPLITTER_CTRL_SPEED_100 0x3
+#define EMAC_SPLITTER_CTRL_SPEED_1000 0x0
+
struct socfpga_dwmac {
int interface;
u32 reg_offset;
@@ -37,14 +44,46 @@ struct socfpga_dwmac {
struct device *dev;
struct regmap *sys_mgr_base_addr;
struct reset_control *stmmac_rst;
+ void __iomem *splitter_base;
};
+static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+{
+ struct socfpga_dwmac *dwmac = (struct socfpga_dwmac *)priv;
+ void __iomem *splitter_base = dwmac->splitter_base;
+ u32 val;
+
+ if (!splitter_base)
+ return;
+
+ val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+ val &= ~EMAC_SPLITTER_CTRL_SPEED_MASK;
+
+ switch (speed) {
+ case 1000:
+ val |= EMAC_SPLITTER_CTRL_SPEED_1000;
+ break;
+ case 100:
+ val |= EMAC_SPLITTER_CTRL_SPEED_100;
+ break;
+ case 10:
+ val |= EMAC_SPLITTER_CTRL_SPEED_10;
+ break;
+ default:
+ return;
+ }
+
+ writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+}
+
static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *dev)
{
struct device_node *np = dev->of_node;
struct regmap *sys_mgr_base_addr;
u32 reg_offset, reg_shift;
int ret;
+ struct device_node *np_splitter;
+ struct resource res_splitter;
dwmac->stmmac_rst = devm_reset_control_get(dev,
STMMAC_RESOURCE_NAME);
@@ -73,6 +112,20 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
return -EINVAL;
}
+ np_splitter = of_parse_phandle(np, "altr,emac-splitter", 0);
+ if (np_splitter) {
+ if (of_address_to_resource(np_splitter, 0, &res_splitter)) {
+ dev_info(dev, "Missing emac splitter address\n");
+ return -EINVAL;
+ }
+
+ dwmac->splitter_base = devm_ioremap_resource(dev, &res_splitter);
+ if (IS_ERR(dwmac->splitter_base)) {
+ dev_info(dev, "Failed to map emac splitter\n");
+ return PTR_ERR(dwmac->splitter_base);
+ }
+ }
+
dwmac->reg_offset = reg_offset;
dwmac->reg_shift = reg_shift;
dwmac->sys_mgr_base_addr = sys_mgr_base_addr;
@@ -91,6 +144,7 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
switch (phymode) {
case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_RGMII_ID:
val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
break;
case PHY_INTERFACE_MODE_MII:
@@ -102,6 +156,13 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
return -EINVAL;
}
+ /* Overwrite val to GMII if the splitter core is enabled. The phymode
+ * here is the actual phy mode on the phy hardware, but the phy
+ * interface from the EMAC core is GMII.
+ */
+ if (dwmac->splitter_base)
+ val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+
regmap_read(sys_mgr_base_addr, reg_offset, &ctrl);
ctrl &= ~(SYSMGR_EMACGRP_CTRL_PHYSEL_MASK << reg_shift);
ctrl |= val << reg_shift;
@@ -196,4 +257,5 @@ const struct stmmac_of_data socfpga_gmac_data = {
.setup = socfpga_dwmac_probe,
.init = socfpga_dwmac_init,
.exit = socfpga_dwmac_exit,
+ .fix_mac_speed = socfpga_dwmac_fix_mac_speed,
};
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 58097c0..4452889 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -137,6 +137,9 @@ void stmmac_disable_eee_mode(struct stmmac_priv *priv);
bool stmmac_eee_init(struct stmmac_priv *priv);
#ifdef CONFIG_STMMAC_PLATFORM
+#ifdef CONFIG_DWMAC_MESON
+extern const struct stmmac_of_data meson6_dwmac_data;
+#endif
#ifdef CONFIG_DWMAC_SUNXI
extern const struct stmmac_of_data sun7i_gmac_data;
#endif
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index cf4f38d..3a08a1f 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -261,11 +261,11 @@ static int stmmac_ethtool_getsettings(struct net_device *dev,
ethtool_cmd_speed_set(cmd, priv->xstats.pcs_speed);
/* Get and convert ADV/LP_ADV from the HW AN registers */
- if (priv->hw->mac->get_adv)
- priv->hw->mac->get_adv(priv->hw, &adv);
- else
+ if (!priv->hw->mac->get_adv)
return -EOPNOTSUPP; /* should never happen indeed */
+ priv->hw->mac->get_adv(priv->hw, &adv);
+
/* Encoding of PSE bits is defined in 802.3z, 37.2.1.4 */
if (adv.pause & STMMAC_PCS_PAUSE)
@@ -340,19 +340,17 @@ static int stmmac_ethtool_setsettings(struct net_device *dev,
if (cmd->autoneg != AUTONEG_ENABLE)
return -EINVAL;
- if (cmd->autoneg == AUTONEG_ENABLE) {
- mask &= (ADVERTISED_1000baseT_Half |
+ mask &= (ADVERTISED_1000baseT_Half |
ADVERTISED_1000baseT_Full |
ADVERTISED_100baseT_Half |
ADVERTISED_100baseT_Full |
ADVERTISED_10baseT_Half |
ADVERTISED_10baseT_Full);
- spin_lock(&priv->lock);
- if (priv->hw->mac->ctrl_ane)
- priv->hw->mac->ctrl_ane(priv->hw, 1);
- spin_unlock(&priv->lock);
- }
+ spin_lock(&priv->lock);
+ if (priv->hw->mac->ctrl_ane)
+ priv->hw->mac->ctrl_ane(priv->hw, 1);
+ spin_unlock(&priv->lock);
return 0;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index b0c1521..6f77a46 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -834,7 +834,7 @@ static int stmmac_init_phy(struct net_device *dev)
/* Stop Advertising 1000BASE Capability if interface is not GMII */
if ((interface == PHY_INTERFACE_MODE_MII) ||
(interface == PHY_INTERFACE_MODE_RMII) ||
- (max_speed < 1000 && max_speed > 0))
+ (max_speed < 1000 && max_speed > 0))
phydev->advertising &= ~(SUPPORTED_1000baseT_Half |
SUPPORTED_1000baseT_Full);
@@ -2765,8 +2765,6 @@ struct stmmac_priv *stmmac_dvr_probe(struct device *device,
priv->device = device;
priv->dev = ndev;
- ether_setup(ndev);
-
stmmac_set_ethtool_ops(ndev);
priv->pause = pause;
priv->plat = plat_dat;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
index a5b1e1b..b735fa2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -253,7 +253,7 @@ int stmmac_mdio_register(struct net_device *ndev)
}
/*
- * If we're going to bind the MAC to this PHY bus,
+ * If we're going to bind the MAC to this PHY bus,
* and no PHY number was provided to the MAC,
* use the one probed here.
*/
@@ -282,7 +282,7 @@ int stmmac_mdio_register(struct net_device *ndev)
}
if (!found) {
- pr_warning("%s: No PHY found\n", ndev->name);
+ pr_warn("%s: No PHY found\n", ndev->name);
mdiobus_unregister(new_bus);
mdiobus_free(new_bus);
return -ENODEV;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index bb524a9..6521717 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -30,6 +30,9 @@
#include "stmmac.h"
static const struct of_device_id stmmac_dt_ids[] = {
+#ifdef CONFIG_DWMAC_MESON
+ { .compatible = "amlogic,meson6-dwmac", .data = &meson6_dwmac_data},
+#endif
#ifdef CONFIG_DWMAC_SUNXI
{ .compatible = "allwinner,sun7i-a20-gmac", .data = &sun7i_gmac_data},
#endif
diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
index 37f87ff..02d370e 100644
--- a/drivers/net/ethernet/sun/cassini.c
+++ b/drivers/net/ethernet/sun/cassini.c
@@ -4962,7 +4962,7 @@ static int cas_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_cmd |= PCI_COMMAND_PARITY;
pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
if (pci_try_set_mwi(pdev))
- pr_warning("Could not enable MWI for %s\n", pci_name(pdev));
+ pr_warn("Could not enable MWI for %s\n", pci_name(pdev));
cas_program_bridge(pdev);
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
index 8216be4..904fd1a 100644
--- a/drivers/net/ethernet/sun/niu.c
+++ b/drivers/net/ethernet/sun/niu.c
@@ -8717,8 +8717,8 @@ static void niu_divide_channels(struct niu_parent *parent,
parent->txchan_per_port[i] = 1;
}
if (tot_rx < NIU_NUM_RXCHAN || tot_tx < NIU_NUM_TXCHAN) {
- pr_warning("niu%d: Driver bug, wasted channels, RX[%d] TX[%d]\n",
- parent->index, tot_rx, tot_tx);
+ pr_warn("niu%d: Driver bug, wasted channels, RX[%d] TX[%d]\n",
+ parent->index, tot_rx, tot_tx);
}
}
diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c
index f7415b6..fef5dec 100644
--- a/drivers/net/ethernet/sun/sungem.c
+++ b/drivers/net/ethernet/sun/sungem.c
@@ -115,7 +115,7 @@ static const struct pci_device_id gem_pci_tbl[] = {
MODULE_DEVICE_TABLE(pci, gem_pci_tbl);
-static u16 __phy_read(struct gem *gp, int phy_addr, int reg)
+static u16 __sungem_phy_read(struct gem *gp, int phy_addr, int reg)
{
u32 cmd;
int limit = 10000;
@@ -141,18 +141,18 @@ static u16 __phy_read(struct gem *gp, int phy_addr, int reg)
return cmd & MIF_FRAME_DATA;
}
-static inline int _phy_read(struct net_device *dev, int mii_id, int reg)
+static inline int _sungem_phy_read(struct net_device *dev, int mii_id, int reg)
{
struct gem *gp = netdev_priv(dev);
- return __phy_read(gp, mii_id, reg);
+ return __sungem_phy_read(gp, mii_id, reg);
}
-static inline u16 phy_read(struct gem *gp, int reg)
+static inline u16 sungem_phy_read(struct gem *gp, int reg)
{
- return __phy_read(gp, gp->mii_phy_addr, reg);
+ return __sungem_phy_read(gp, gp->mii_phy_addr, reg);
}
-static void __phy_write(struct gem *gp, int phy_addr, int reg, u16 val)
+static void __sungem_phy_write(struct gem *gp, int phy_addr, int reg, u16 val)
{
u32 cmd;
int limit = 10000;
@@ -174,15 +174,15 @@ static void __phy_write(struct gem *gp, int phy_addr, int reg, u16 val)
}
}
-static inline void _phy_write(struct net_device *dev, int mii_id, int reg, int val)
+static inline void _sungem_phy_write(struct net_device *dev, int mii_id, int reg, int val)
{
struct gem *gp = netdev_priv(dev);
- __phy_write(gp, mii_id, reg, val & 0xffff);
+ __sungem_phy_write(gp, mii_id, reg, val & 0xffff);
}
-static inline void phy_write(struct gem *gp, int reg, u16 val)
+static inline void sungem_phy_write(struct gem *gp, int reg, u16 val)
{
- __phy_write(gp, gp->mii_phy_addr, reg, val);
+ __sungem_phy_write(gp, gp->mii_phy_addr, reg, val);
}
static inline void gem_enable_ints(struct gem *gp)
@@ -1687,9 +1687,9 @@ static void gem_init_phy(struct gem *gp)
/* Some PHYs used by apple have problem getting back to us,
* we do an additional reset here
*/
- phy_write(gp, MII_BMCR, BMCR_RESET);
+ sungem_phy_write(gp, MII_BMCR, BMCR_RESET);
msleep(20);
- if (phy_read(gp, MII_BMCR) != 0xffff)
+ if (sungem_phy_read(gp, MII_BMCR) != 0xffff)
break;
if (i == 2)
netdev_warn(gp->dev, "GMAC PHY not responding !\n");
@@ -2012,7 +2012,7 @@ static int gem_check_invariants(struct gem *gp)
for (i = 0; i < 32; i++) {
gp->mii_phy_addr = i;
- if (phy_read(gp, MII_BMCR) != 0xffff)
+ if (sungem_phy_read(gp, MII_BMCR) != 0xffff)
break;
}
if (i == 32) {
@@ -2696,13 +2696,13 @@ static int gem_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
/* Fallthrough... */
case SIOCGMIIREG: /* Read MII PHY register. */
- data->val_out = __phy_read(gp, data->phy_id & 0x1f,
+ data->val_out = __sungem_phy_read(gp, data->phy_id & 0x1f,
data->reg_num & 0x1f);
rc = 0;
break;
case SIOCSMIIREG: /* Write MII PHY register. */
- __phy_write(gp, data->phy_id & 0x1f, data->reg_num & 0x1f,
+ __sungem_phy_write(gp, data->phy_id & 0x1f, data->reg_num & 0x1f,
data->val_in);
rc = 0;
break;
@@ -2933,8 +2933,8 @@ static int gem_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Fill up the mii_phy structure (even if we won't use it) */
gp->phy_mii.dev = dev;
- gp->phy_mii.mdio_read = _phy_read;
- gp->phy_mii.mdio_write = _phy_write;
+ gp->phy_mii.mdio_read = _sungem_phy_read;
+ gp->phy_mii.mdio_write = _sungem_phy_write;
#ifdef CONFIG_PPC_PMAC
gp->phy_mii.platform_data = gp->of_node;
#endif
diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
index f675396..1539672 100644
--- a/drivers/net/ethernet/sun/sunvnet.c
+++ b/drivers/net/ethernet/sun/sunvnet.c
@@ -15,6 +15,14 @@
#include <linux/ethtool.h>
#include <linux/etherdevice.h>
#include <linux/mutex.h>
+#include <linux/if_vlan.h>
+
+#if IS_ENABLED(CONFIG_IPV6)
+#include <linux/icmpv6.h>
+#endif
+
+#include <net/icmp.h>
+#include <net/route.h>
#include <asm/vio.h>
#include <asm/ldc.h>
@@ -37,8 +45,11 @@ MODULE_VERSION(DRV_MODULE_VERSION);
*/
#define VNET_MAX_RETRIES 10
+static int __vnet_tx_trigger(struct vnet_port *port, u32 start);
+
/* Ordered from largest major to lowest */
static struct vio_version vnet_versions[] = {
+ { .major = 1, .minor = 6 },
{ .major = 1, .minor = 0 },
};
@@ -65,6 +76,7 @@ static int vnet_send_attr(struct vio_driver_state *vio)
struct vnet_port *port = to_vnet_port(vio);
struct net_device *dev = port->vp->dev;
struct vio_net_attr_info pkt;
+ int framelen = ETH_FRAME_LEN;
int i;
memset(&pkt, 0, sizeof(pkt));
@@ -72,19 +84,41 @@ static int vnet_send_attr(struct vio_driver_state *vio)
pkt.tag.stype = VIO_SUBTYPE_INFO;
pkt.tag.stype_env = VIO_ATTR_INFO;
pkt.tag.sid = vio_send_sid(vio);
- pkt.xfer_mode = VIO_DRING_MODE;
+ if (vio_version_before(vio, 1, 2))
+ pkt.xfer_mode = VIO_DRING_MODE;
+ else
+ pkt.xfer_mode = VIO_NEW_DRING_MODE;
pkt.addr_type = VNET_ADDR_ETHERMAC;
pkt.ack_freq = 0;
for (i = 0; i < 6; i++)
pkt.addr |= (u64)dev->dev_addr[i] << ((5 - i) * 8);
- pkt.mtu = ETH_FRAME_LEN;
+ if (vio_version_after(vio, 1, 3)) {
+ if (port->rmtu) {
+ port->rmtu = min(VNET_MAXPACKET, port->rmtu);
+ pkt.mtu = port->rmtu;
+ } else {
+ port->rmtu = VNET_MAXPACKET;
+ pkt.mtu = port->rmtu;
+ }
+ if (vio_version_after_eq(vio, 1, 6))
+ pkt.options = VIO_TX_DRING;
+ } else if (vio_version_before(vio, 1, 3)) {
+ pkt.mtu = framelen;
+ } else { /* v1.3 */
+ pkt.mtu = framelen + VLAN_HLEN;
+ }
+
+ pkt.plnk_updt = PHYSLINK_UPDATE_NONE;
+ pkt.cflags = 0;
viodbg(HS, "SEND NET ATTR xmode[0x%x] atype[0x%x] addr[%llx] "
- "ackfreq[%u] mtu[%llu]\n",
+ "ackfreq[%u] plnk_updt[0x%02x] opts[0x%02x] mtu[%llu] "
+ "cflags[0x%04x] lso_max[%u]\n",
pkt.xfer_mode, pkt.addr_type,
- (unsigned long long) pkt.addr,
- pkt.ack_freq,
- (unsigned long long) pkt.mtu);
+ (unsigned long long)pkt.addr,
+ pkt.ack_freq, pkt.plnk_updt, pkt.options,
+ (unsigned long long)pkt.mtu, pkt.cflags, pkt.ipv4_lso_maxlen);
+
return vio_ldc_send(vio, &pkt, sizeof(pkt));
}
@@ -92,18 +126,52 @@ static int vnet_send_attr(struct vio_driver_state *vio)
static int handle_attr_info(struct vio_driver_state *vio,
struct vio_net_attr_info *pkt)
{
- viodbg(HS, "GOT NET ATTR INFO xmode[0x%x] atype[0x%x] addr[%llx] "
- "ackfreq[%u] mtu[%llu]\n",
+ struct vnet_port *port = to_vnet_port(vio);
+ u64 localmtu;
+ u8 xfer_mode;
+
+ viodbg(HS, "GOT NET ATTR xmode[0x%x] atype[0x%x] addr[%llx] "
+ "ackfreq[%u] plnk_updt[0x%02x] opts[0x%02x] mtu[%llu] "
+ " (rmtu[%llu]) cflags[0x%04x] lso_max[%u]\n",
pkt->xfer_mode, pkt->addr_type,
- (unsigned long long) pkt->addr,
- pkt->ack_freq,
- (unsigned long long) pkt->mtu);
+ (unsigned long long)pkt->addr,
+ pkt->ack_freq, pkt->plnk_updt, pkt->options,
+ (unsigned long long)pkt->mtu, port->rmtu, pkt->cflags,
+ pkt->ipv4_lso_maxlen);
pkt->tag.sid = vio_send_sid(vio);
- if (pkt->xfer_mode != VIO_DRING_MODE ||
+ xfer_mode = pkt->xfer_mode;
+ /* for version < 1.2, VIO_DRING_MODE = 0x3 and no bitmask */
+ if (vio_version_before(vio, 1, 2) && xfer_mode == VIO_DRING_MODE)
+ xfer_mode = VIO_NEW_DRING_MODE;
+
+ /* MTU negotiation:
+ * < v1.3 - ETH_FRAME_LEN exactly
+ * > v1.3 - MIN(pkt.mtu, VNET_MAXPACKET, port->rmtu) and change
+ * pkt->mtu for ACK
+ * = v1.3 - ETH_FRAME_LEN + VLAN_HLEN exactly
+ */
+ if (vio_version_before(vio, 1, 3)) {
+ localmtu = ETH_FRAME_LEN;
+ } else if (vio_version_after(vio, 1, 3)) {
+ localmtu = port->rmtu ? port->rmtu : VNET_MAXPACKET;
+ localmtu = min(pkt->mtu, localmtu);
+ pkt->mtu = localmtu;
+ } else { /* v1.3 */
+ localmtu = ETH_FRAME_LEN + VLAN_HLEN;
+ }
+ port->rmtu = localmtu;
+
+ /* for version >= 1.6, ACK with the packet mode we support */
+ if (vio_version_after_eq(vio, 1, 6)) {
+ pkt->xfer_mode = VIO_NEW_DRING_MODE;
+ pkt->options = VIO_TX_DRING;
+ }
+
+ if (!(xfer_mode & VIO_NEW_DRING_MODE) ||
pkt->addr_type != VNET_ADDR_ETHERMAC ||
- pkt->mtu != ETH_FRAME_LEN) {
+ pkt->mtu != localmtu) {
viodbg(HS, "SEND NET ATTR NACK\n");
pkt->tag.stype = VIO_SUBTYPE_NACK;
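The version-dependent MTU rule in the comment above reduces to a small pure function; a sketch of that rule (VNET_MAXPACKET, ETH_FRAME_LEN, VLAN_HLEN and the vio_version_*() helpers are the ones used in this hunk, the function itself is only illustrative):

/* Illustrative restatement of the negotiation rule described above. */
static u64 negotiated_mtu(struct vio_driver_state *vio, u64 peer_mtu, u64 rmtu)
{
	u64 localmtu;

	if (vio_version_before(vio, 1, 3))	/* < v1.3: fixed frame size */
		return ETH_FRAME_LEN;
	if (vio_version_after(vio, 1, 3)) {	/* > v1.3: min of both ends */
		localmtu = rmtu ? rmtu : VNET_MAXPACKET;
		return min(peer_mtu, localmtu);
	}
	return ETH_FRAME_LEN + VLAN_HLEN;	/* = v1.3 exactly */
}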
@@ -112,7 +180,14 @@ static int handle_attr_info(struct vio_driver_state *vio,
return -ECONNRESET;
} else {
- viodbg(HS, "SEND NET ATTR ACK\n");
+ viodbg(HS, "SEND NET ATTR ACK xmode[0x%x] atype[0x%x] "
+ "addr[%llx] ackfreq[%u] plnk_updt[0x%02x] opts[0x%02x] "
+ "mtu[%llu] (rmtu[%llu]) cflags[0x%04x] lso_max[%u]\n",
+ pkt->xfer_mode, pkt->addr_type,
+ (unsigned long long)pkt->addr,
+ pkt->ack_freq, pkt->plnk_updt, pkt->options,
+ (unsigned long long)pkt->mtu, port->rmtu, pkt->cflags,
+ pkt->ipv4_lso_maxlen);
pkt->tag.stype = VIO_SUBTYPE_ACK;
@@ -208,7 +283,7 @@ static int vnet_rx_one(struct vnet_port *port, unsigned int len,
int err;
err = -EMSGSIZE;
- if (unlikely(len < ETH_ZLEN || len > ETH_FRAME_LEN)) {
+ if (unlikely(len < ETH_ZLEN || len > port->rmtu)) {
dev->stats.rx_length_errors++;
goto out_dropped;
}
@@ -283,10 +358,18 @@ static int vnet_send_ack(struct vnet_port *port, struct vio_dring_state *dr,
port->raddr[0], port->raddr[1],
port->raddr[2], port->raddr[3],
port->raddr[4], port->raddr[5]);
- err = -ECONNRESET;
+ break;
}
} while (err == -EAGAIN);
+ if (err <= 0 && vio_dring_state == VIO_DRING_STOPPED) {
+ port->stop_rx_idx = end;
+ port->stop_rx = true;
+ } else {
+ port->stop_rx_idx = 0;
+ port->stop_rx = false;
+ }
+
return err;
}
@@ -451,7 +534,7 @@ static int vnet_ack(struct vnet_port *port, void *msgbuf)
struct net_device *dev;
struct vnet *vp;
u32 end;
-
+ struct vio_net_desc *desc;
if (unlikely(pkt->tag.stype_env != VIO_DRING_DATA))
return 0;
@@ -459,7 +542,24 @@ static int vnet_ack(struct vnet_port *port, void *msgbuf)
if (unlikely(!idx_is_pending(dr, end)))
return 0;
+ /* Sync with vnet_start_xmit() to avoid racing with it, and tell
+ * xmit when it is time to send a trigger.
+ */
dr->cons = next_idx(end, dr);
+ desc = vio_dring_entry(dr, dr->cons);
+ if (desc->hdr.state == VIO_DESC_READY && port->start_cons) {
+ /* vnet_start_xmit() just populated this dring but missed
+ * sending the "start" LDC message to the consumer.
+ * Send a "start" trigger on its behalf.
+ */
+ if (__vnet_tx_trigger(port, dr->cons) > 0)
+ port->start_cons = false;
+ else
+ port->start_cons = true;
+ } else {
+ port->start_cons = true;
+ }
+
vp = port->vp;
dev = vp->dev;
@@ -531,13 +631,15 @@ static void vnet_event(void *arg, int event)
vio_link_state_change(vio, event);
spin_unlock_irqrestore(&vio->lock, flags);
- if (event == LDC_EVENT_RESET)
+ if (event == LDC_EVENT_RESET) {
+ port->rmtu = 0;
vio_port_up(vio);
+ }
return;
}
if (unlikely(event != LDC_EVENT_DATA_READY)) {
- pr_warning("Unexpected LDC event %d\n", event);
+ pr_warn("Unexpected LDC event %d\n", event);
spin_unlock_irqrestore(&vio->lock, flags);
return;
}
@@ -600,7 +702,7 @@ static void vnet_event(void *arg, int event)
local_irq_restore(flags);
}
-static int __vnet_tx_trigger(struct vnet_port *port)
+static int __vnet_tx_trigger(struct vnet_port *port, u32 start)
{
struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING];
struct vio_dring_data hdr = {
@@ -611,12 +713,21 @@ static int __vnet_tx_trigger(struct vnet_port *port)
.sid = vio_send_sid(&port->vio),
},
.dring_ident = dr->ident,
- .start_idx = dr->prod,
+ .start_idx = start,
.end_idx = (u32) -1,
};
int err, delay;
int retries = 0;
+ if (port->stop_rx) {
+ err = vnet_send_ack(port,
+ &port->vio.drings[VIO_DRIVER_RX_RING],
+ port->stop_rx_idx, -1,
+ VIO_DRING_STOPPED);
+ if (err <= 0)
+ return err;
+ }
+
hdr.seq = dr->snd_nxt;
delay = 1;
do {
@@ -676,6 +787,117 @@ struct vnet_port *tx_port_find(struct vnet *vp, struct sk_buff *skb)
return ret;
}
+static struct sk_buff *vnet_clean_tx_ring(struct vnet_port *port,
+ unsigned *pending)
+{
+ struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING];
+ struct sk_buff *skb = NULL;
+ int i, txi;
+
+ *pending = 0;
+
+ txi = dr->prod-1;
+ if (txi < 0)
+ txi = VNET_TX_RING_SIZE-1;
+
+ for (i = 0; i < VNET_TX_RING_SIZE; ++i) {
+ struct vio_net_desc *d;
+
+ d = vio_dring_entry(dr, txi);
+
+ if (d->hdr.state == VIO_DESC_DONE) {
+ if (port->tx_bufs[txi].skb) {
+ BUG_ON(port->tx_bufs[txi].skb->next);
+
+ port->tx_bufs[txi].skb->next = skb;
+ skb = port->tx_bufs[txi].skb;
+ port->tx_bufs[txi].skb = NULL;
+
+ ldc_unmap(port->vio.lp,
+ port->tx_bufs[txi].cookies,
+ port->tx_bufs[txi].ncookies);
+ }
+ d->hdr.state = VIO_DESC_FREE;
+ } else if (d->hdr.state == VIO_DESC_READY) {
+ (*pending)++;
+ } else if (d->hdr.state == VIO_DESC_FREE) {
+ break;
+ }
+ --txi;
+ if (txi < 0)
+ txi = VNET_TX_RING_SIZE-1;
+ }
+ return skb;
+}
+
+static inline void vnet_free_skbs(struct sk_buff *skb)
+{
+ struct sk_buff *next;
+
+ while (skb) {
+ next = skb->next;
+ skb->next = NULL;
+ dev_kfree_skb(skb);
+ skb = next;
+ }
+}
+
+static void vnet_clean_timer_expire(unsigned long port0)
+{
+ struct vnet_port *port = (struct vnet_port *)port0;
+ struct sk_buff *freeskbs;
+ unsigned pending;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->vio.lock, flags);
+ freeskbs = vnet_clean_tx_ring(port, &pending);
+ spin_unlock_irqrestore(&port->vio.lock, flags);
+
+ vnet_free_skbs(freeskbs);
+
+ if (pending)
+ (void)mod_timer(&port->clean_timer,
+ jiffies + VNET_CLEAN_TIMEOUT);
+ else
+ del_timer(&port->clean_timer);
+}
+
+static inline struct sk_buff *vnet_skb_shape(struct sk_buff *skb, void **pstart,
+ int *plen)
+{
+ struct sk_buff *nskb;
+ int len, pad;
+
+ len = skb->len;
+ pad = 0;
+ if (len < ETH_ZLEN) {
+ pad += ETH_ZLEN - skb->len;
+ len += pad;
+ }
+ len += VNET_PACKET_SKIP;
+ pad += 8 - (len & 7);
+ len += 8 - (len & 7);
+
+ if (((unsigned long)skb->data & 7) != VNET_PACKET_SKIP ||
+ skb_tailroom(skb) < pad ||
+ skb_headroom(skb) < VNET_PACKET_SKIP) {
+ nskb = alloc_and_align_skb(skb->dev, skb->len);
+ skb_reserve(nskb, VNET_PACKET_SKIP);
+ if (skb_copy_bits(skb, 0, nskb->data, skb->len)) {
+ dev_kfree_skb(nskb);
+ dev_kfree_skb(skb);
+ return NULL;
+ }
+ (void)skb_put(nskb, skb->len);
+ dev_kfree_skb(skb);
+ skb = nskb;
+ }
+
+ *pstart = skb->data - VNET_PACKET_SKIP;
+ *plen = len;
+ return skb;
+}
+
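A worked example of the padding arithmetic in vnet_skb_shape(), assuming for illustration that VNET_PACKET_SKIP is 6 (its definition is outside this hunk): a 64-byte skb gives len = 64 and pad = 0; adding the skip makes len 70, and rounding up to the next 8-byte boundary adds 2 more, so *plen becomes 72 and *pstart points 6 bytes before skb->data. The skb is reused in place only when skb->data already sits VNET_PACKET_SKIP bytes past an 8-byte boundary and there is enough head- and tailroom; otherwise the data is copied into a freshly allocated, properly aligned skb. Since 8 - (len & 7) evaluates to 8 when len is already a multiple of 8, the code always reserves at least one trailing pad byte.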
static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct vnet *vp = netdev_priv(dev);
@@ -684,12 +906,51 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct vio_net_desc *d;
unsigned long flags;
unsigned int len;
- void *tx_buf;
- int i, err;
+ struct sk_buff *freeskbs = NULL;
+ int i, err, txi;
+ void *start = NULL;
+ int nlen = 0;
+ unsigned pending = 0;
if (unlikely(!port))
goto out_dropped;
+ skb = vnet_skb_shape(skb, &start, &nlen);
+
+ if (unlikely(!skb))
+ goto out_dropped;
+
+ if (skb->len > port->rmtu) {
+ unsigned long localmtu = port->rmtu - ETH_HLEN;
+
+ if (vio_version_after_eq(&port->vio, 1, 3))
+ localmtu -= VLAN_HLEN;
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ struct flowi4 fl4;
+ struct rtable *rt = NULL;
+
+ memset(&fl4, 0, sizeof(fl4));
+ fl4.flowi4_oif = dev->ifindex;
+ fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos);
+ fl4.daddr = ip_hdr(skb)->daddr;
+ fl4.saddr = ip_hdr(skb)->saddr;
+
+ rt = ip_route_output_key(dev_net(dev), &fl4);
+ if (!IS_ERR(rt)) {
+ skb_dst_set(skb, &rt->dst);
+ icmp_send(skb, ICMP_DEST_UNREACH,
+ ICMP_FRAG_NEEDED,
+ htonl(localmtu));
+ }
+ }
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (skb->protocol == htons(ETH_P_IPV6))
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, localmtu);
+#endif
+ goto out_dropped;
+ }
+
spin_lock_irqsave(&port->vio.lock, flags);
dr = &port->vio.drings[VIO_DRIVER_TX_RING];
@@ -707,14 +968,27 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
d = vio_dring_cur(dr);
- tx_buf = port->tx_bufs[dr->prod].buf;
- skb_copy_from_linear_data(skb, tx_buf + VNET_PACKET_SKIP, skb->len);
+ txi = dr->prod;
+
+ freeskbs = vnet_clean_tx_ring(port, &pending);
+
+ BUG_ON(port->tx_bufs[txi].skb);
len = skb->len;
- if (len < ETH_ZLEN) {
+ if (len < ETH_ZLEN)
len = ETH_ZLEN;
- memset(tx_buf+VNET_PACKET_SKIP+skb->len, 0, len - skb->len);
+
+ port->tx_bufs[txi].skb = skb;
+ skb = NULL;
+
+ err = ldc_map_single(port->vio.lp, start, nlen,
+ port->tx_bufs[txi].cookies, VNET_MAXCOOKIES,
+ (LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_RW));
+ if (err < 0) {
+ netdev_info(dev, "tx buffer map error %d\n", err);
+ goto out_dropped_unlock;
}
+ port->tx_bufs[txi].ncookies = err;
/* We don't rely on the ACKs to free the skb in vnet_start_xmit(),
* thus it is safe to not set VIO_ACK_ENABLE for each transmission:
@@ -726,9 +1000,9 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
*/
d->hdr.ack = VIO_ACK_DISABLE;
d->size = len;
- d->ncookies = port->tx_bufs[dr->prod].ncookies;
+ d->ncookies = port->tx_bufs[txi].ncookies;
for (i = 0; i < d->ncookies; i++)
- d->cookies[i] = port->tx_bufs[dr->prod].cookies[i];
+ d->cookies[i] = port->tx_bufs[txi].cookies[i];
/* This has to be a non-SMP write barrier because we are writing
* to memory which is shared with the peer LDOM.
@@ -737,7 +1011,30 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
d->hdr.state = VIO_DESC_READY;
- err = __vnet_tx_trigger(port);
+ /* Exactly one ldc "start" trigger (for dr->cons) needs to be sent
+ * to notify the consumer that some descriptors are READY.
+ * After that "start" trigger, no additional triggers are needed until
+ * a DRING_STOPPED is received from the consumer. The dr->cons field
+ * (set up by vnet_ack()) has the value of the next dring index
+ * that has not yet been ack-ed. We send a "start" trigger here
+ * if, and only if, start_cons is true (reset it afterward). Conversely,
+ * vnet_ack() should check if the dring corresponding to cons
+ * is marked READY, but start_cons was false.
+ * If so, vnet_ack() should send out the missed "start" trigger.
+ *
+ * Note that the wmb() above makes sure the cookies et al. are
+ * not globally visible before the VIO_DESC_READY, and that the
+ * stores are ordered correctly by the compiler. The consumer will
+ * not proceed until the VIO_DESC_READY is visible, ensuring that
+ * the consumer does not observe anything related to descriptors
+ * out of order. The HV trap from the LDC start trigger is the
+ * producer-to-consumer announcement that work is available to the
+ * consumer.
+ */
+ if (!port->start_cons)
+ goto ldc_start_done; /* previous trigger suffices */
+
+ err = __vnet_tx_trigger(port, dr->cons);
if (unlikely(err < 0)) {
netdev_info(dev, "TX trigger error %d\n", err);
d->hdr.state = VIO_DESC_FREE;
@@ -745,8 +1042,11 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
goto out_dropped_unlock;
}
+ldc_start_done:
+ port->start_cons = false;
+
dev->stats.tx_packets++;
- dev->stats.tx_bytes += skb->len;
+ dev->stats.tx_bytes += port->tx_bufs[txi].skb->len;
dr->prod = (dr->prod + 1) & (VNET_TX_RING_SIZE - 1);
if (unlikely(vnet_tx_dring_avail(dr) < 2)) {
@@ -757,7 +1057,9 @@ static int vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
spin_unlock_irqrestore(&port->vio.lock, flags);
- dev_kfree_skb(skb);
+ vnet_free_skbs(freeskbs);
+
+ (void)mod_timer(&port->clean_timer, jiffies + VNET_CLEAN_TIMEOUT);
return NETDEV_TX_OK;
@@ -765,7 +1067,14 @@ out_dropped_unlock:
spin_unlock_irqrestore(&port->vio.lock, flags);
out_dropped:
- dev_kfree_skb(skb);
+ if (skb)
+ dev_kfree_skb(skb);
+ vnet_free_skbs(freeskbs);
+ if (pending)
+ (void)mod_timer(&port->clean_timer,
+ jiffies + VNET_CLEAN_TIMEOUT);
+ else if (port)
+ del_timer(&port->clean_timer);
dev->stats.tx_dropped++;
return NETDEV_TX_OK;
}
@@ -911,7 +1220,7 @@ static void vnet_set_rx_mode(struct net_device *dev)
static int vnet_change_mtu(struct net_device *dev, int new_mtu)
{
- if (new_mtu != ETH_DATA_LEN)
+ if (new_mtu < 68 || new_mtu > 65535)
return -EINVAL;
dev->mtu = new_mtu;
@@ -967,17 +1276,22 @@ static void vnet_port_free_tx_bufs(struct vnet_port *port)
}
for (i = 0; i < VNET_TX_RING_SIZE; i++) {
- void *buf = port->tx_bufs[i].buf;
+ struct vio_net_desc *d;
+ void *skb = port->tx_bufs[i].skb;
- if (!buf)
+ if (!skb)
continue;
+ d = vio_dring_entry(dr, i);
+ if (d->hdr.state == VIO_DESC_READY)
+ pr_warn("active transmit buffers freed\n");
+
ldc_unmap(port->vio.lp,
port->tx_bufs[i].cookies,
port->tx_bufs[i].ncookies);
-
- kfree(buf);
- port->tx_bufs[i].buf = NULL;
+ dev_kfree_skb(skb);
+ port->tx_bufs[i].skb = NULL;
+ d->hdr.state = VIO_DESC_FREE;
}
}
@@ -988,34 +1302,6 @@ static int vnet_port_alloc_tx_bufs(struct vnet_port *port)
int i, err, ncookies;
void *dring;
- for (i = 0; i < VNET_TX_RING_SIZE; i++) {
- void *buf = kzalloc(ETH_FRAME_LEN + 8, GFP_KERNEL);
- int map_len = (ETH_FRAME_LEN + 7) & ~7;
-
- err = -ENOMEM;
- if (!buf)
- goto err_out;
-
- err = -EFAULT;
- if ((unsigned long)buf & (8UL - 1)) {
- pr_err("TX buffer misaligned\n");
- kfree(buf);
- goto err_out;
- }
-
- err = ldc_map_single(port->vio.lp, buf, map_len,
- port->tx_bufs[i].cookies, 2,
- (LDC_MAP_SHADOW |
- LDC_MAP_DIRECT |
- LDC_MAP_RW));
- if (err < 0) {
- kfree(buf);
- goto err_out;
- }
- port->tx_bufs[i].buf = buf;
- port->tx_bufs[i].ncookies = err;
- }
-
dr = &port->vio.drings[VIO_DRIVER_TX_RING];
len = (VNET_TX_RING_SIZE *
@@ -1038,9 +1324,16 @@ static int vnet_port_alloc_tx_bufs(struct vnet_port *port)
(sizeof(struct ldc_trans_cookie) * 2));
dr->num_entries = VNET_TX_RING_SIZE;
dr->prod = dr->cons = 0;
+ port->start_cons = true; /* need an initial trigger */
dr->pending = VNET_TX_RING_SIZE;
dr->ncookies = ncookies;
+ for (i = 0; i < VNET_TX_RING_SIZE; ++i) {
+ struct vio_net_desc *d;
+
+ d = vio_dring_entry(dr, i);
+ d->hdr.state = VIO_DESC_FREE;
+ }
return 0;
err_out:
@@ -1072,6 +1365,8 @@ static struct vnet *vnet_new(const u64 *local_mac)
dev = alloc_etherdev(sizeof(*vp));
if (!dev)
return ERR_PTR(-ENOMEM);
+ dev->needed_headroom = VNET_PACKET_SKIP + 8;
+ dev->needed_tailroom = 8;
for (i = 0; i < ETH_ALEN; i++)
dev->dev_addr[i] = (*local_mac >> (5 - i) * 8) & 0xff;
@@ -1266,6 +1561,9 @@ static int vnet_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
pr_info("%s: PORT ( remote-mac %pM%s )\n",
vp->dev->name, port->raddr, switch_port ? " switch-port" : "");
+ setup_timer(&port->clean_timer, vnet_clean_timer_expire,
+ (unsigned long)port);
+
vio_port_up(&port->vio);
mdesc_release(hp);
@@ -1292,6 +1590,7 @@ static int vnet_port_remove(struct vio_dev *vdev)
unsigned long flags;
del_timer_sync(&port->vio.timer);
+ del_timer_sync(&port->clean_timer);
spin_lock_irqsave(&vp->lock, flags);
list_del(&port->list);
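The comment in the vnet_start_xmit() hunk above describes the trigger-suppression protocol: exactly one LDC "start" trigger is sent per stop/start cycle, and it is re-armed only when the consumer reports DRING_STOPPED. Below is a minimal user-space sketch of that producer-side bookkeeping; the struct, helper names, and main() scenario are invented for illustration and are not the driver's real API.

/*
 * Sketch of the "one start trigger until DRING_STOPPED" bookkeeping.
 * fake_port, send_start_trigger() etc. are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_port {
	bool start_cons;	/* true: next READY descriptor must send a trigger */
	unsigned int cons;	/* next index the consumer has not acked yet */
};

/* Stand-in for __vnet_tx_trigger(): just log the notification. */
static int send_start_trigger(struct fake_port *p)
{
	printf("LDC start trigger for cons=%u\n", p->cons);
	return 0;
}

/* Producer side: called after a descriptor has been marked READY. */
static void queue_descriptor(struct fake_port *p)
{
	if (!p->start_cons)
		return;			/* a previous trigger still covers this */
	if (send_start_trigger(p) == 0)
		p->start_cons = false;
}

/* Consumer side: a DRING_STOPPED ack re-arms the trigger. */
static void handle_dring_stopped(struct fake_port *p, unsigned int next_cons)
{
	p->cons = next_cons;
	p->start_cons = true;
}

int main(void)
{
	struct fake_port p = { .start_cons = true, .cons = 0 };

	queue_descriptor(&p);		/* sends a trigger */
	queue_descriptor(&p);		/* suppressed */
	handle_dring_stopped(&p, 2);	/* consumer stopped; re-arm */
	queue_descriptor(&p);		/* sends a trigger again */
	return 0;
}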
diff --git a/drivers/net/ethernet/sun/sunvnet.h b/drivers/net/ethernet/sun/sunvnet.h
index de5c2c6..c911045 100644
--- a/drivers/net/ethernet/sun/sunvnet.h
+++ b/drivers/net/ethernet/sun/sunvnet.h
@@ -11,6 +11,12 @@
*/
#define VNET_TX_TIMEOUT (5 * HZ)
+/* length of time (or less) within which we expect pending descriptors to be
+ * marked VIO_DESC_DONE and their skbs to be ready for freeing
+ */
+#define VNET_CLEAN_TIMEOUT ((HZ/100)+1)
+
+#define VNET_MAXPACKET (65535ULL + ETH_HLEN + VLAN_HLEN)
#define VNET_TX_RING_SIZE 512
#define VNET_TX_WAKEUP_THRESH(dr) ((dr)->pending / 4)
@@ -20,10 +26,12 @@
*/
#define VNET_PACKET_SKIP 6
+#define VNET_MAXCOOKIES (VNET_MAXPACKET/PAGE_SIZE + 1)
+
struct vnet_tx_entry {
- void *buf;
+ struct sk_buff *skb;
unsigned int ncookies;
- struct ldc_trans_cookie cookies[2];
+ struct ldc_trans_cookie cookies[VNET_MAXCOOKIES];
};
struct vnet;
@@ -40,6 +48,14 @@ struct vnet_port {
struct vnet_tx_entry tx_bufs[VNET_TX_RING_SIZE];
struct list_head list;
+
+ u32 stop_rx_idx;
+ bool stop_rx;
+ bool start_cons;
+
+ struct timer_list clean_timer;
+
+ u64 rmtu;
};
static inline struct vnet_port *to_vnet_port(struct vio_driver_state *vio)
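The skb-based tx_bufs[] and the clean_timer added above support deferred reclaim: a transmitted skb stays mapped in its ring slot until the peer marks the descriptor VIO_DESC_DONE, and vnet_clean_tx_ring() (run from the transmit path and, after VNET_CLEAN_TIMEOUT, from clean_timer) frees whatever the peer has finished with. A rough standalone sketch of that idea follows; the slot layout, state names, and helpers are simplified stand-ins rather than the driver's real structures, and the real cleaner walks the ring in consumer order instead of scanning every slot.

/*
 * User-space sketch of the deferred-reclaim idea: buffers stay attached to
 * their ring slot until the descriptor is DONE, then a sweep frees them.
 */
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 8

enum desc_state { DESC_FREE, DESC_READY, DESC_DONE };

struct slot {
	enum desc_state state;
	void *skb;			/* stand-in for struct sk_buff * */
};

static struct slot ring[RING_SIZE];

/* Release every buffer whose descriptor the peer has marked DONE. */
static unsigned int clean_tx_ring(void)
{
	unsigned int i, freed = 0;

	for (i = 0; i < RING_SIZE; i++) {
		if (ring[i].state != DESC_DONE)
			continue;
		free(ring[i].skb);		/* dev_kfree_skb() in the driver */
		ring[i].skb = NULL;
		ring[i].state = DESC_FREE;
		freed++;
	}
	return freed;
}

int main(void)
{
	ring[0].skb = malloc(64);
	ring[0].state = DESC_DONE;	/* peer finished with this one */
	ring[1].skb = malloc(64);
	ring[1].state = DESC_READY;	/* still owned by the peer */

	printf("freed %u slots\n", clean_tx_ring());	/* prints 1 */
	free(ring[1].skb);		/* tidy up the demo */
	return 0;
}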
diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index 1769700..5d8cb79 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -62,6 +62,8 @@ config TI_CPSW
select TI_DAVINCI_CPDMA
select TI_DAVINCI_MDIO
select TI_CPSW_PHY_SEL
+ select MFD_SYSCON
+ select REGMAP
---help---
This driver supports TI's CPSW Ethernet Switch.
diff --git a/drivers/net/ethernet/ti/cpmac.c b/drivers/net/ethernet/ti/cpmac.c
index f9bcf7a..dd94300 100644
--- a/drivers/net/ethernet/ti/cpmac.c
+++ b/drivers/net/ethernet/ti/cpmac.c
@@ -1207,7 +1207,6 @@ static int cpmac_remove(struct platform_device *pdev)
static struct platform_driver cpmac_driver = {
.driver = {
.name = "cpmac",
- .owner = THIS_MODULE,
},
.probe = cpmac_probe,
.remove = cpmac_remove,
diff --git a/drivers/net/ethernet/ti/cpsw-phy-sel.c b/drivers/net/ethernet/ti/cpsw-phy-sel.c
index aa8bf45..0ea78326 100644
--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c
+++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c
@@ -211,7 +211,6 @@ static struct platform_driver cpsw_phy_sel_driver = {
.probe = cpsw_phy_sel_probe,
.driver = {
.name = "cpsw-phy-sel",
- .owner = THIS_MODULE,
.of_match_table = cpsw_phy_sel_id_table,
},
};
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index e2a0028..ab167dc 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -33,6 +33,8 @@
#include <linux/of_net.h>
#include <linux/of_device.h>
#include <linux/if_vlan.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
#include <linux/pinctrl/consumer.h>
@@ -397,6 +399,8 @@ struct cpsw_priv {
struct cpdma_ctlr *dma;
struct cpdma_chan *txch, *rxch;
struct cpsw_ale *ale;
+ bool rx_pause;
+ bool tx_pause;
/* snapshot of IRQ numbers */
u32 irqs_table[4];
u32 num_irqs;
@@ -855,6 +859,12 @@ static void _cpsw_adjust_link(struct cpsw_slave *slave,
else if (phy->speed == 10)
mac_control |= BIT(18); /* In Band mode */
+ if (priv->rx_pause)
+ mac_control |= BIT(3);
+
+ if (priv->tx_pause)
+ mac_control |= BIT(4);
+
*link = true;
} else {
mac_control = 0;
@@ -1246,6 +1256,9 @@ static int cpsw_ndo_open(struct net_device *ndev)
/* enable statistics collection on all ports */
__raw_writel(0x7, &priv->regs->stat_port_en);
+ /* Enable internal fifo flow control */
+ writel(0x7, &priv->regs->flow_control);
+
if (WARN_ON(!priv->data.rx_descs))
priv->data.rx_descs = 128;
@@ -1807,6 +1820,30 @@ static int cpsw_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol)
return -EOPNOTSUPP;
}
+static void cpsw_get_pauseparam(struct net_device *ndev,
+ struct ethtool_pauseparam *pause)
+{
+ struct cpsw_priv *priv = netdev_priv(ndev);
+
+ pause->autoneg = AUTONEG_DISABLE;
+ pause->rx_pause = priv->rx_pause ? true : false;
+ pause->tx_pause = priv->tx_pause ? true : false;
+}
+
+static int cpsw_set_pauseparam(struct net_device *ndev,
+ struct ethtool_pauseparam *pause)
+{
+ struct cpsw_priv *priv = netdev_priv(ndev);
+ bool link;
+
+ priv->rx_pause = pause->rx_pause ? true : false;
+ priv->tx_pause = pause->tx_pause ? true : false;
+
+ for_each_slave(priv, _cpsw_adjust_link, priv, &link);
+
+ return 0;
+}
+
static const struct ethtool_ops cpsw_ethtool_ops = {
.get_drvinfo = cpsw_get_drvinfo,
.get_msglevel = cpsw_get_msglevel,
@@ -1820,6 +1857,8 @@ static const struct ethtool_ops cpsw_ethtool_ops = {
.get_sset_count = cpsw_get_sset_count,
.get_strings = cpsw_get_strings,
.get_ethtool_stats = cpsw_get_ethtool_stats,
+ .get_pauseparam = cpsw_get_pauseparam,
+ .set_pauseparam = cpsw_set_pauseparam,
.get_wol = cpsw_get_wol,
.set_wol = cpsw_set_wol,
.get_regs_len = cpsw_get_regs_len,
@@ -1839,6 +1878,36 @@ static void cpsw_slave_init(struct cpsw_slave *slave, struct cpsw_priv *priv,
slave->port_vlan = data->dual_emac_res_vlan;
}
+#define AM33XX_CTRL_MAC_LO_REG(id) (0x630 + 0x8 * id)
+#define AM33XX_CTRL_MAC_HI_REG(id) (0x630 + 0x8 * id + 0x4)
+
+static int cpsw_am33xx_cm_get_macid(struct device *dev, int slave,
+ u8 *mac_addr)
+{
+ u32 macid_lo;
+ u32 macid_hi;
+ struct regmap *syscon;
+
+ syscon = syscon_regmap_lookup_by_phandle(dev->of_node, "syscon");
+ if (IS_ERR(syscon)) {
+ if (PTR_ERR(syscon) == -ENODEV)
+ return 0;
+ return PTR_ERR(syscon);
+ }
+
+ regmap_read(syscon, AM33XX_CTRL_MAC_LO_REG(slave), &macid_lo);
+ regmap_read(syscon, AM33XX_CTRL_MAC_HI_REG(slave), &macid_hi);
+
+ mac_addr[5] = (macid_lo >> 8) & 0xff;
+ mac_addr[4] = macid_lo & 0xff;
+ mac_addr[3] = (macid_hi >> 24) & 0xff;
+ mac_addr[2] = (macid_hi >> 16) & 0xff;
+ mac_addr[1] = (macid_hi >> 8) & 0xff;
+ mac_addr[0] = macid_hi & 0xff;
+
+ return 0;
+}
+
static int cpsw_probe_dt(struct cpsw_platform_data *data,
struct platform_device *pdev)
{
@@ -1944,15 +2013,23 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
mdio = of_find_device_by_node(mdio_node);
of_node_put(mdio_node);
if (!mdio) {
- pr_err("Missing mdio platform device\n");
+ dev_err(&pdev->dev, "Missing mdio platform device\n");
return -EINVAL;
}
snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
PHY_ID_FMT, mdio->name, phyid);
mac_addr = of_get_mac_address(slave_node);
- if (mac_addr)
+ if (mac_addr) {
memcpy(slave_data->mac_addr, mac_addr, ETH_ALEN);
+ } else {
+ if (of_machine_is_compatible("ti,am33xx")) {
+ ret = cpsw_am33xx_cm_get_macid(&pdev->dev, i,
+ slave_data->mac_addr);
+ if (ret)
+ return ret;
+ }
+ }
slave_data->phy_if = of_get_phy_mode(slave_node);
if (slave_data->phy_if < 0) {
@@ -2086,6 +2163,7 @@ static int cpsw_probe(struct platform_device *pdev)
priv->irq_enabled = true;
if (!priv->cpts) {
dev_err(&pdev->dev, "error allocating cpts\n");
+ ret = -ENOMEM;
goto clean_ndev_ret;
}
@@ -2255,18 +2333,24 @@ static int cpsw_probe(struct platform_device *pdev)
}
while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
- for (i = res->start; i <= res->end; i++) {
- if (devm_request_irq(&pdev->dev, i, cpsw_interrupt, 0,
- dev_name(&pdev->dev), priv)) {
- dev_err(priv->dev, "error attaching irq\n");
- goto clean_ale_ret;
- }
- priv->irqs_table[k] = i;
- priv->num_irqs = k + 1;
+ if (k >= ARRAY_SIZE(priv->irqs_table)) {
+ ret = -EINVAL;
+ goto clean_ale_ret;
}
+
+ ret = devm_request_irq(&pdev->dev, res->start, cpsw_interrupt,
+ 0, dev_name(&pdev->dev), priv);
+ if (ret < 0) {
+ dev_err(priv->dev, "error attaching irq (%d)\n", ret);
+ goto clean_ale_ret;
+ }
+
+ priv->irqs_table[k] = res->start;
k++;
}
+ priv->num_irqs = k;
+
ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
ndev->netdev_ops = &cpsw_netdev_ops;
@@ -2395,7 +2479,6 @@ MODULE_DEVICE_TABLE(of, cpsw_of_mtable);
static struct platform_driver cpsw_driver = {
.driver = {
.name = "cpsw",
- .owner = THIS_MODULE,
.pm = &cpsw_pm_ops,
.of_match_table = cpsw_of_mtable,
},
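For reference, cpsw_am33xx_cm_get_macid() above pulls the MAC bytes out of the two control-module registers in a fixed order: the low register holds the last two octets and the high register the first four. A tiny standalone check of that unpacking is below; the register values are made up solely to exercise the shifts.

/*
 * Standalone check of the MAC byte ordering used when unpacking the AM33xx
 * control-module MACID registers; register contents here are invented.
 */
#include <stdint.h>
#include <stdio.h>

static void unpack_macid(uint32_t macid_lo, uint32_t macid_hi, uint8_t mac[6])
{
	mac[5] = (macid_lo >> 8) & 0xff;
	mac[4] = macid_lo & 0xff;
	mac[3] = (macid_hi >> 24) & 0xff;
	mac[2] = (macid_hi >> 16) & 0xff;
	mac[1] = (macid_hi >> 8) & 0xff;
	mac[0] = macid_hi & 0xff;
}

int main(void)
{
	uint8_t mac[6];

	/* made-up register contents: expect 11:22:33:44:55:66 */
	unpack_macid(0x00006655, 0x44332211, mac);

	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}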
diff --git a/drivers/net/ethernet/ti/cpsw.h b/drivers/net/ethernet/ti/cpsw.h
index 574f49d..1b71067 100644
--- a/drivers/net/ethernet/ti/cpsw.h
+++ b/drivers/net/ethernet/ti/cpsw.h
@@ -15,6 +15,7 @@
#define __CPSW_H__
#include <linux/if_ether.h>
+#include <linux/phy.h>
struct cpsw_slave_data {
char phy_id[MII_BUS_ID_SIZE];
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 35a139e..ea71251 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -2083,7 +2083,6 @@ MODULE_DEVICE_TABLE(of, davinci_emac_of_match);
static struct platform_driver davinci_emac_driver = {
.driver = {
.name = "davinci_emac",
- .owner = THIS_MODULE,
.pm = &davinci_emac_pm_ops,
.of_match_table = of_match_ptr(davinci_emac_of_match),
},
diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c
index 2791f6f..98655b4 100644
--- a/drivers/net/ethernet/ti/davinci_mdio.c
+++ b/drivers/net/ethernet/ti/davinci_mdio.c
@@ -481,7 +481,6 @@ MODULE_DEVICE_TABLE(of, davinci_mdio_of_mtable);
static struct platform_driver davinci_mdio_driver = {
.driver = {
.name = "davinci_mdio",
- .owner = THIS_MODULE,
.pm = &davinci_mdio_pm_ops,
.of_match_table = of_match_ptr(davinci_mdio_of_mtable),
},
diff --git a/drivers/net/ethernet/tile/tilepro.c b/drivers/net/ethernet/tile/tilepro.c
index 88c7121..88818d5 100644
--- a/drivers/net/ethernet/tile/tilepro.c
+++ b/drivers/net/ethernet/tile/tilepro.c
@@ -956,7 +956,7 @@ static int tile_net_open_aux(struct net_device *dev)
*/
if (hv_dev_pwrite(priv->hv_devhdl, 0, (HV_VirtAddr)&dummy,
sizeof(dummy), NETIO_IPP_START_SHIM_OFF) < 0) {
- pr_warning("Failed to start LIPP/LEPP.\n");
+ pr_warn("Failed to start LIPP/LEPP\n");
return -EIO;
}
@@ -2399,8 +2399,7 @@ static int __init network_cpus_setup(char *str)
{
int rc = cpulist_parse_crop(str, &network_cpus_map);
if (rc != 0) {
- pr_warning("network_cpus=%s: malformed cpu list\n",
- str);
+ pr_warn("network_cpus=%s: malformed cpu list\n", str);
} else {
/* Remove dedicated cpus. */
@@ -2409,8 +2408,7 @@ static int __init network_cpus_setup(char *str)
if (cpumask_empty(&network_cpus_map)) {
- pr_warning("Ignoring network_cpus='%s'.\n",
- str);
+ pr_warn("Ignoring network_cpus='%s'\n", str);
} else {
char buf[1024];
cpulist_scnprintf(buf, sizeof(buf), &network_cpus_map);
diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
index 3e38f67..8e9371a 100644
--- a/drivers/net/ethernet/toshiba/spider_net.c
+++ b/drivers/net/ethernet/toshiba/spider_net.c
@@ -267,34 +267,6 @@ spider_net_set_promisc(struct spider_net_card *card)
}
/**
- * spider_net_get_mac_address - read mac address from spider card
- * @card: device structure
- *
- * reads MAC address from GMACUNIMACU and GMACUNIMACL registers
- */
-static int
-spider_net_get_mac_address(struct net_device *netdev)
-{
- struct spider_net_card *card = netdev_priv(netdev);
- u32 macl, macu;
-
- macl = spider_net_read_reg(card, SPIDER_NET_GMACUNIMACL);
- macu = spider_net_read_reg(card, SPIDER_NET_GMACUNIMACU);
-
- netdev->dev_addr[0] = (macu >> 24) & 0xff;
- netdev->dev_addr[1] = (macu >> 16) & 0xff;
- netdev->dev_addr[2] = (macu >> 8) & 0xff;
- netdev->dev_addr[3] = macu & 0xff;
- netdev->dev_addr[4] = (macl >> 8) & 0xff;
- netdev->dev_addr[5] = macl & 0xff;
-
- if (!is_valid_ether_addr(&netdev->dev_addr[0]))
- return -EINVAL;
-
- return 0;
-}
-
-/**
* spider_net_get_descr_status -- returns the status of a descriptor
* @descr: descriptor to look at
*
@@ -1345,15 +1317,17 @@ spider_net_set_mac(struct net_device *netdev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
+ memcpy(netdev->dev_addr, addr->sa_data, ETH_ALEN);
+
/* switch off GMACTPE and GMACRPE */
regvalue = spider_net_read_reg(card, SPIDER_NET_GMACOPEMD);
regvalue &= ~((1 << 5) | (1 << 6));
spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, regvalue);
/* write mac */
- macu = (addr->sa_data[0]<<24) + (addr->sa_data[1]<<16) +
- (addr->sa_data[2]<<8) + (addr->sa_data[3]);
- macl = (addr->sa_data[4]<<8) + (addr->sa_data[5]);
+ macu = (netdev->dev_addr[0]<<24) + (netdev->dev_addr[1]<<16) +
+ (netdev->dev_addr[2]<<8) + (netdev->dev_addr[3]);
+ macl = (netdev->dev_addr[4]<<8) + (netdev->dev_addr[5]);
spider_net_write_reg(card, SPIDER_NET_GMACUNIMACU, macu);
spider_net_write_reg(card, SPIDER_NET_GMACUNIMACL, macl);
@@ -1364,12 +1338,6 @@ spider_net_set_mac(struct net_device *netdev, void *p)
spider_net_set_promisc(card);
- /* look up, whether we have been successful */
- if (spider_net_get_mac_address(netdev))
- return -EADDRNOTAVAIL;
- if (memcmp(netdev->dev_addr,addr->sa_data,netdev->addr_len))
- return -EADDRNOTAVAIL;
-
return 0;
}
diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
index 104d46f..0f56b1c 100644
--- a/drivers/net/ethernet/wiznet/w5100.c
+++ b/drivers/net/ethernet/wiznet/w5100.c
@@ -708,7 +708,6 @@ static int w5100_probe(struct platform_device *pdev)
priv = netdev_priv(ndev);
priv->ndev = ndev;
- ether_setup(ndev);
ndev->netdev_ops = &w5100_netdev_ops;
ndev->ethtool_ops = &w5100_ethtool_ops;
ndev->watchdog_timeo = HZ;
diff --git a/drivers/net/ethernet/wiznet/w5300.c b/drivers/net/ethernet/wiznet/w5300.c
index 1f33c4c..f961f14 100644
--- a/drivers/net/ethernet/wiznet/w5300.c
+++ b/drivers/net/ethernet/wiznet/w5300.c
@@ -620,7 +620,6 @@ static int w5300_probe(struct platform_device *pdev)
priv = netdev_priv(ndev);
priv->ndev = ndev;
- ether_setup(ndev);
ndev->netdev_ops = &w5300_netdev_ops;
ndev->ethtool_ops = &w5300_ethtool_ops;
ndev->watchdog_timeo = HZ;
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
index fda5891..6290770 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
@@ -1012,7 +1012,6 @@ static int temac_of_probe(struct platform_device *op)
if (!ndev)
return -ENOMEM;
- ether_setup(ndev);
platform_set_drvdata(op, ndev);
SET_NETDEV_DEV(ndev, &op->dev);
ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index c8fd941..4ea2d4e 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -1485,7 +1485,6 @@ static int axienet_of_probe(struct platform_device *op)
if (!ndev)
return -ENOMEM;
- ether_setup(ndev);
platform_set_drvdata(op, ndev);
SET_NETDEV_DEV(ndev, &op->dev);