S: Maintained
F: arch/arm/mach-s5p*/
+ARM/SAMSUNG S5P SERIES FIMC SUPPORT
+M: Kyungmin Park <kyungmin.park@samsung.com>
+M: Sylwester Nawrocki <s.nawrocki@samsung.com>
+L: linux-arm-kernel@lists.infradead.org
+L: linux-media@vger.kernel.org
+S: Maintained
+F: arch/arm/plat-s5p/dev-fimc*
+F: arch/arm/plat-samsung/include/plat/*fimc*
+F: drivers/media/video/s5p-fimc/
+
ARM/SHMOBILE ARM ARCHITECTURE
M: Paul Mundt <lethal@linux-sh.org>
M: Magnus Damm <magnus.damm@gmail.com>
S: Maintained
F: drivers/net/wireless/ath/ar9170/
+CARL9170 LINUX COMMUNITY WIRELESS DRIVER
+M: Christian Lamparter <chunkeey@googlemail.com>
+L: linux-wireless@vger.kernel.org
+W: http://wireless.kernel.org/en/users/Drivers/carl9170
+S: Maintained
+F: drivers/net/wireless/ath/carl9170/
+
ATK0110 HWMON DRIVER
M: Luca Tettamanti <kronos.it@gmail.com>
L: lm-sensors@lm-sensors.org
BLUETOOTH DRIVERS
M: Marcel Holtmann <marcel@holtmann.org>
+M: Gustavo F. Padovan <padovan@profusion.mobi>
L: linux-bluetooth@vger.kernel.org
W: http://www.bluez.org/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/padovan/bluetooth-2.6.git
S: Maintained
F: drivers/bluetooth/
BLUETOOTH SUBSYSTEM
M: Marcel Holtmann <marcel@holtmann.org>
+M: Gustavo F. Padovan <padovan@profusion.mobi>
L: linux-bluetooth@vger.kernel.org
W: http://www.bluez.org/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/holtmann/bluetooth-2.6.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/padovan/bluetooth-2.6.git
S: Maintained
F: net/bluetooth/
F: include/net/bluetooth/
S: Supported
F: drivers/scsi/bfa/
+BROCADE BNA 10 GIGABIT ETHERNET DRIVER
+M: Rasesh Mody <rmody@brocade.com>
+M: Debashis Dutt <ddutt@brocade.com>
+L: netdev@vger.kernel.org
+S: Supported
+F: drivers/net/bna/
+
BSG (block layer generic sg v4 driver)
M: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
L: linux-scsi@vger.kernel.org
F: scripts/checkpatch.pl
CISCO VIC ETHERNET NIC DRIVER
-M: Scott Feldman <scofeldm@cisco.com>
M: Vasanthy Kolluri <vkolluri@cisco.com>
M: Roopa Prabhu <roprabhu@cisco.com>
+M: David Wang <dwang2@cisco.com>
S: Supported
F: drivers/net/enic/
F: drivers/scsi/gdt*
GENERIC GPIO I2C DRIVER
-M: Haavard Skinnemoen <hskinnemoen@atmel.com>
+M: Haavard Skinnemoen <hskinnemoen@gmail.com>
S: Supported
F: drivers/i2c/busses/i2c-gpio.c
F: include/linux/i2c-gpio.h
S: Supported
F: drivers/scsi/ipr.*
+IBM Power Virtual Ethernet Device Driver
+M: Santiago Leon <santil@linux.vnet.ibm.com>
+L: netdev@vger.kernel.org
+S: Supported
+F: drivers/net/ibmveth.*
+
IBM ServeRAID RAID DRIVER
P: Jack Hammer
M: Dave Jeffery <ipslinux@adaptec.com>
IOC3 SERIAL DRIVER
M: Pat Gefre <pfg@sgi.com>
-L: linux-mips@linux-mips.org
+L: linux-serial@vger.kernel.org
S: Maintained
F: drivers/serial/ioc3_serial.c
F: fs/ocfs2/
ORINOCO DRIVER
-M: Pavel Roskin <proski@gnu.org>
-M: David Gibson <hermes@gibson.dropbear.id.au>
L: linux-wireless@vger.kernel.org
L: orinoco-users@lists.sourceforge.net
L: orinoco-devel@lists.sourceforge.net
+W: http://linuxwireless.org/en/users/Drivers/orinoco
W: http://www.nongnu.org/orinoco/
-S: Maintained
+S: Orphan
F: drivers/net/wireless/orinoco/
OSD LIBRARY and FILESYSTEM
S: Maintained
F: include/linux/personality.h
+PHONET PROTOCOL
+M: Remi Denis-Courmont <remi.denis-courmont@nokia.com>
+S: Supported
+F: Documentation/networking/phonet.txt
+F: include/linux/phonet.h
+F: include/net/phonet/
+F: net/phonet/
+
PHRAM MTD DRIVER
M: Joern Engel <joern@lazybastard.org>
L: linux-mtd@lists.infradead.org
F: drivers/media/video/*7146*
F: include/media/*7146*
+SAMSUNG AUDIO (ASoC) DRIVERS
+M: Jassi Brar <jassi.brar@samsung.com>
+L: alsa-devel@alsa-project.org (moderated for non-subscribers)
+S: Supported
+F: sound/soc/s3c24xx
+
TLG2300 VIDEO4LINUX-2 DRIVER
M: Huang Shijie <shijie8@gmail.com>
M: Kang Yong <kangyong@telegent.com>
F: drivers/input/misc/wistron_btns.c
WL1251 WIRELESS DRIVER
-M: Kalle Valo <kalle.valo@iki.fi>
+M: Kalle Valo <kvalo@adurom.com>
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
S: Maintained
-F: drivers/net/wireless/wl12xx/*
-X: drivers/net/wireless/wl12xx/wl1271*
+F: drivers/net/wireless/wl1251/*
WL1271 WIRELESS DRIVER
M: Luciano Coelho <luciano.coelho@nokia.com>
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/luca/wl12xx.git
S: Maintained
F: drivers/net/wireless/wl12xx/wl1271*
+F: include/linux/wl12xx.h
WL3501 WIRELESS PCMCIA CARD DRIVER
M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
WOLFSON MICROELECTRONICS DRIVERS
M: Mark Brown <broonie@opensource.wolfsonmicro.com>
M: Ian Lartey <ian@opensource.wolfsonmicro.com>
+M: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
+T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc
T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus
-W: http://opensource.wolfsonmicro.com/node/8
+W: http://opensource.wolfsonmicro.com/content/linux-drivers-wolfson-devices
S: Supported
F: Documentation/hwmon/wm83??
F: drivers/leds/leds-wm83*.c
S: Maintained
F: drivers/serial/zs.*
+GRE DEMULTIPLEXER DRIVER
+M: Dmitry Kozlov <xeb@mail.ru>
+L: netdev@vger.kernel.org
+S: Maintained
+F: net/ipv4/gre.c
+F: include/net/gre.h
+
+PPTP DRIVER
+M: Dmitry Kozlov <xeb@mail.ru>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/pptp.c
+W: http://sourceforge.net/projects/accel-pptp
+
THE REST
M: Linus Torvalds <torvalds@linux-foundation.org>
L: linux-kernel@vger.kernel.org
copy_skb->data, len);
skb = copy_skb;
}
- skb->ip_summed = CHECKSUM_NONE;
+ skb_checksum_none_assert(skb);
skb->protocol = eth_type_trans(skb, bp->dev);
netif_receive_skb(skb);
received++;
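The skb_checksum_none_assert() conversions in this series depend on freshly allocated skbs already being zeroed: CHECKSUM_NONE is 0, so the helper only has to verify the invariant rather than store it. Its contemporaneous definition in include/linux/skbuff.h is essentially:

static inline void skb_checksum_none_assert(struct sk_buff *skb)
{
#ifdef DEBUG
	BUG_ON(skb->ip_summed != CHECKSUM_NONE);
#endif
}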
dev->irq = sdev->irq;
SET_ETHTOOL_OPS(dev, &b44_ethtool_ops);
- netif_carrier_off(dev);
-
err = ssb_bus_powerup(sdev->bus, 0);
if (err) {
dev_err(sdev->dev,
goto err_out_powerdown;
}
+ netif_carrier_off(dev);
+
ssb_set_drvdata(sdev, dev);
/* Chip reset provides power to the b44 MAC & PCI cores, which
if (!netif_running(dev))
return 0;
+ spin_lock_irq(&bp->lock);
+ b44_init_rings(bp);
+ b44_init_hw(bp, B44_FULL_RESET);
+ spin_unlock_irq(&bp->lock);
+
+ /*
+ * Since the IRQ is shared, the handler can be invoked as soon as it is
+ * registered. The hardware must therefore already be powered back on
+ * (b44_init_hw) so that the interrupt status can be read.
+ */
rc = request_irq(dev->irq, b44_interrupt, IRQF_SHARED, dev->name, dev);
if (rc) {
netdev_err(dev, "request_irq failed\n");
+ spin_lock_irq(&bp->lock);
+ b44_halt(bp);
+ b44_free_rings(bp);
+ spin_unlock_irq(&bp->lock);
return rc;
}
- spin_lock_irq(&bp->lock);
-
- b44_init_rings(bp);
- b44_init_hw(bp, B44_FULL_RESET);
netif_device_attach(bp->dev);
- spin_unlock_irq(&bp->lock);
b44_enable_ints(bp);
netif_wake_queue(dev);
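With IRQF_SHARED, the handler may run for another device on the line the moment request_irq() returns, so the hunk above powers the chip up and initializes the rings first, and unwinds them again if registration fails. The general shape, with hypothetical helper names:

	err = my_hw_init(priv);		/* hypothetical: make status regs readable */
	if (err)
		return err;
	err = request_irq(irq, my_handler, IRQF_SHARED, name, priv);
	if (err)
		my_hw_halt(priv);	/* unwind the init on failure */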
num_portres * EHEA_NUM_PORTRES_FW_HANDLES;
if (num_fw_handles) {
- arr = kzalloc(num_fw_handles * sizeof(*arr), GFP_KERNEL);
+ arr = kcalloc(num_fw_handles, sizeof(*arr), GFP_KERNEL);
if (!arr)
goto out; /* Keep the existing array */
} else
}
if (num_registrations) {
- arr = kzalloc(num_registrations * sizeof(*arr), GFP_ATOMIC);
+ arr = kcalloc(num_registrations, sizeof(*arr), GFP_ATOMIC);
if (!arr)
goto out; /* Keep the existing array */
} else
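kcalloc(n, size, flags) is the overflow-checked counterpart of kzalloc(n * size, flags): it returns NULL if n * size would wrap, instead of silently under-allocating. A minimal before/after sketch:

	arr = kzalloc(num * sizeof(*arr), GFP_KERNEL);	/* multiply can wrap */
	arr = kcalloc(num, sizeof(*arr), GFP_KERNEL);	/* fails cleanly on overflow */
	if (!arr)
		return -ENOMEM;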
int length = cqe->num_bytes_transfered - 4; /*remove CRC */
skb_put(skb, length);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
skb->protocol = eth_type_trans(skb, dev);
+
+ /* The packet was not an IPv4 packet, so a complemented checksum was
+ calculated. The value is found in the Internet Checksum field. */
+ if (cqe->status & EHEA_CQE_BLIND_CKSUM) {
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ skb->csum = csum_unfold(~cqe->inet_checksum_value);
+ } else
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
}
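CHECKSUM_COMPLETE tells the stack that skb->csum covers the whole packet, so it can verify any checksum itself. Since the hardware reports the complemented Internet checksum, the driver inverts it and widens the folded 16-bit value with csum_unfold(), which is only a type conversion:

/* from include/net/checksum.h */
static inline __wsum csum_unfold(__sum16 n)
{
	return (__force __wsum)n;
}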
static inline struct sk_buff *get_skb_by_index(struct sk_buff **skb_array,
struct ehea_port_res *pr = &port->port_res[i];
pr->sq_restart_flag = 0;
}
+ wake_up(&port->restart_wq);
}
static void check_sqs(struct ehea_port *port)
for (i = 0; i < port->num_def_qps + port->num_add_tx_qps; i++) {
struct ehea_port_res *pr = &port->port_res[i];
+ int ret;
k = 0;
swqe = ehea_get_swqe(pr->qp, &swqe_index);
memset(swqe, 0, SWQE_HEADER_SIZE);
ehea_post_swqe(pr->qp, swqe);
- while (pr->sq_restart_flag == 0) {
- msleep(5);
- if (++k == 100) {
- ehea_error("HW/SW queues out of sync");
- ehea_schedule_port_reset(pr->port);
- return;
- }
+ ret = wait_event_timeout(port->restart_wq,
+ pr->sq_restart_flag == 0,
+ msecs_to_jiffies(100));
+
+ if (!ret) {
+ ehea_error("HW/SW queues out of sync");
+ ehea_schedule_port_reset(pr->port);
+ return;
}
}
-
- return;
}
pr->queue_stopped = 0;
}
spin_unlock_irqrestore(&pr->netif_queue, flags);
+ wake_up(&pr->port->swqe_avail_wq);
return cqe;
}
struct hcp_ehea_port_cb7 *cb7;
u64 hret;
- if ((enable && port->promisc) || (!enable && !port->promisc))
+ if (enable == port->promisc)
return;
cb7 = (void *)get_zeroed_page(GFP_ATOMIC);
}
pr->swqe_id_counter += 1;
- if (port->vgrp && vlan_tx_tag_present(skb)) {
+ if (vlan_tx_tag_present(skb)) {
swqe->tx_control |= EHEA_SWQE_VLAN_INSERT;
swqe->vlan_tag = vlan_tx_tag_get(skb);
}
netif_start_queue(dev);
}
+ init_waitqueue_head(&port->swqe_avail_wq);
+ init_waitqueue_head(&port->restart_wq);
+
mutex_unlock(&port->port_lock);
return ret;
for (i = 0; i < port->num_def_qps + port->num_add_tx_qps; i++) {
struct ehea_port_res *pr = &port->port_res[i];
int swqe_max = pr->sq_skba_size - 2 - pr->swqe_ll_count;
- int k = 0;
- while (atomic_read(&pr->swqe_avail) < swqe_max) {
- msleep(5);
- if (++k == 20) {
- ehea_error("WARNING: sq not flushed completely");
- break;
- }
+ int ret;
+
+ ret = wait_event_timeout(port->swqe_avail_wq,
+ atomic_read(&pr->swqe_avail) >= swqe_max,
+ msecs_to_jiffies(100));
+
+ if (!ret) {
+ ehea_error("WARNING: sq not flushed completely");
+ break;
}
}
}
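Both ehea conversions above replace msleep() polling with a wait queue: the completion paths now call wake_up() (on restart_wq and swqe_avail_wq), and the waiters block in wait_event_timeout(), which returns 0 on timeout and nonzero if the condition became true. The bare pattern, with a hypothetical condition flag:

	static DECLARE_WAIT_QUEUE_HEAD(wq);
	static int done;	/* hypothetical condition */

	/* waiter: sleeps until done != 0 or 100 ms elapse */
	if (!wait_event_timeout(wq, done, msecs_to_jiffies(100)))
		pr_err("timed out\n");

	/* completer: */
	done = 1;
	wake_up(&wq);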
if (ret)
ehea_info("failed registering memory remove notifier");
- ret = crash_shutdown_register(&ehea_crash_handler);
+ ret = crash_shutdown_register(ehea_crash_handler);
if (ret)
ehea_info("failed registering crash handler");
out2:
unregister_memory_notifier(&ehea_mem_nb);
unregister_reboot_notifier(&ehea_reboot_nb);
- crash_shutdown_unregister(&ehea_crash_handler);
+ crash_shutdown_unregister(ehea_crash_handler);
out:
return ret;
}
driver_remove_file(&ehea_driver.driver, &driver_attr_capabilities);
ibmebus_unregister_driver(&ehea_driver);
unregister_reboot_notifier(&ehea_reboot_nb);
- ret = crash_shutdown_unregister(&ehea_crash_handler);
+ ret = crash_shutdown_unregister(ehea_crash_handler);
if (ret)
ehea_info("failed unregistering crash handler");
unregister_memory_notifier(&ehea_mem_nb);
/* Make sure we return a number greater than 0
* if usecs > 0 */
- return ((usecs * 1000 + count - 1) / count);
+ return (usecs * 1000 + count - 1) / count;
}
/* Convert ethernet clock ticks to microseconds */
/* Make sure we return a number greater than 0 */
/* if ticks is > 0 */
- return ((ticks * count) / 1000);
+ return (ticks * count) / 1000;
}
/* Get the coalescing parameters, and put them in the cvals
unlock_tx_qs(priv);
unlock_rx_qs(priv);
- local_irq_save(flags);
+ local_irq_restore(flags);
for (i = 0; i < priv->num_rx_queues; i++)
gfar_clean_rx_ring(priv->rx_queue[i],
int old_duplex;
};
-static char version[] __devinitdata = KERN_INFO DRV_NAME
+static char version[] __devinitdata = DRV_NAME
": RDC R6040 NAPI net driver,"
"version "DRV_VERSION " (" DRV_RELDATE ")";
}
/* Write a word data from PHY Chip */
-static void r6040_phy_write(void __iomem *ioaddr, int phy_addr, int reg, u16 val)
+static void r6040_phy_write(void __iomem *ioaddr,
+ int phy_addr, int reg, u16 val)
{
int limit = 2048;
u16 cmd;
}
desc->skb_ptr = skb;
desc->buf = cpu_to_le32(pci_map_single(lp->pdev,
- desc->skb_ptr->data,
- MAX_BUF_SIZE, PCI_DMA_FROMDEVICE));
+ desc->skb_ptr->data,
+ MAX_BUF_SIZE, PCI_DMA_FROMDEVICE));
desc->status = DSC_OWNER_MAC;
desc = desc->vndescp;
} while (desc != lp->rx_ring);
/* Free Descriptor memory */
if (lp->rx_ring) {
- pci_free_consistent(pdev, RX_DESC_SIZE, lp->rx_ring, lp->rx_ring_dma);
+ pci_free_consistent(pdev,
+ RX_DESC_SIZE, lp->rx_ring, lp->rx_ring_dma);
lp->rx_ring = NULL;
}
if (lp->tx_ring) {
- pci_free_consistent(pdev, TX_DESC_SIZE, lp->tx_ring, lp->tx_ring_dma);
+ pci_free_consistent(pdev,
+ TX_DESC_SIZE, lp->tx_ring, lp->tx_ring_dma);
lp->tx_ring = NULL;
}
}
goto next_descr;
}
-
+
/* Packet successfully received */
new_skb = netdev_alloc_skb(dev, MAX_BUF_SIZE);
if (!new_skb) {
}
skb_ptr = descptr->skb_ptr;
skb_ptr->dev = priv->dev;
-
+
/* Do not count the CRC */
skb_put(skb_ptr, descptr->len - 4);
pci_unmap_single(priv->pdev, le32_to_cpu(descptr->buf),
MAX_BUF_SIZE, PCI_DMA_FROMDEVICE);
skb_ptr->protocol = eth_type_trans(skb_ptr, priv->dev);
-
+
/* Send to upper layer */
netif_receive_skb(skb_ptr);
dev->stats.rx_packets++;
return ret;
/* improve performance (by RDC guys) */
- r6040_phy_write(ioaddr, 30, 17, (r6040_phy_read(ioaddr, 30, 17) | 0x4000));
- r6040_phy_write(ioaddr, 30, 17, ~((~r6040_phy_read(ioaddr, 30, 17)) | 0x2000));
+ r6040_phy_write(ioaddr, 30, 17,
+ (r6040_phy_read(ioaddr, 30, 17) | 0x4000));
+ r6040_phy_write(ioaddr, 30, 17,
+ ~((~r6040_phy_read(ioaddr, 30, 17)) | 0x2000));
r6040_phy_write(ioaddr, 0, 19, 0x0000);
r6040_phy_write(ioaddr, 0, 30, 0x01F0);
iowrite16(adrp[0], ioaddr + MID_0L);
iowrite16(adrp[1], ioaddr + MID_0M);
iowrite16(adrp[2], ioaddr + MID_0H);
+
+ /* Store MAC Address in perm_addr */
+ memcpy(dev->perm_addr, dev->dev_addr, ETH_ALEN);
}
static int r6040_open(struct net_device *dev)
ret = request_irq(dev->irq, r6040_interrupt,
IRQF_SHARED, dev->name, dev);
if (ret)
- return ret;
+ goto out;
/* Set MAC address */
r6040_mac_address(dev);
/* Allocate Descriptor memory */
lp->rx_ring =
pci_alloc_consistent(lp->pdev, RX_DESC_SIZE, &lp->rx_ring_dma);
- if (!lp->rx_ring)
- return -ENOMEM;
+ if (!lp->rx_ring) {
+ ret = -ENOMEM;
+ goto err_free_irq;
+ }
lp->tx_ring =
pci_alloc_consistent(lp->pdev, TX_DESC_SIZE, &lp->tx_ring_dma);
if (!lp->tx_ring) {
- pci_free_consistent(lp->pdev, RX_DESC_SIZE, lp->rx_ring,
- lp->rx_ring_dma);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto err_free_rx_ring;
}
ret = r6040_up(dev);
- if (ret) {
- pci_free_consistent(lp->pdev, TX_DESC_SIZE, lp->tx_ring,
- lp->tx_ring_dma);
- pci_free_consistent(lp->pdev, RX_DESC_SIZE, lp->rx_ring,
- lp->rx_ring_dma);
- return ret;
- }
+ if (ret)
+ goto err_free_tx_ring;
napi_enable(&lp->napi);
netif_start_queue(dev);
return 0;
+
+err_free_tx_ring:
+ pci_free_consistent(lp->pdev, TX_DESC_SIZE, lp->tx_ring,
+ lp->tx_ring_dma);
+err_free_rx_ring:
+ pci_free_consistent(lp->pdev, RX_DESC_SIZE, lp->rx_ring,
+ lp->rx_ring_dma);
+err_free_irq:
+ free_irq(dev->irq, dev);
+out:
+ return ret;
}
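The rewritten r6040_open() follows the canonical kernel unwind idiom: acquire resources in order and, on failure, jump to a label that releases only what was already acquired, falling through the remaining labels in reverse order. Reduced to a skeleton with hypothetical names:

	ret = get_a();			/* e.g. request_irq() */
	if (ret)
		goto out;
	ret = get_b();			/* e.g. rx ring */
	if (ret)
		goto err_put_a;
	ret = get_c();			/* e.g. tx ring */
	if (ret)
		goto err_put_b;
	return 0;

err_put_b:
	put_b();
err_put_a:
	put_a();
out:
	return ret;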
static netdev_tx_t r6040_start_xmit(struct sk_buff *skb,
/* Multicast Address 1~4 case */
i = 0;
netdev_for_each_mc_addr(ha, dev) {
- if (i < MCAST_MAX) {
- adrp = (u16 *) ha->addr;
- iowrite16(adrp[0], ioaddr + MID_1L + 8 * i);
- iowrite16(adrp[1], ioaddr + MID_1M + 8 * i);
- iowrite16(adrp[2], ioaddr + MID_1H + 8 * i);
- } else {
- iowrite16(0xffff, ioaddr + MID_1L + 8 * i);
- iowrite16(0xffff, ioaddr + MID_1M + 8 * i);
- iowrite16(0xffff, ioaddr + MID_1H + 8 * i);
- }
+ if (i >= MCAST_MAX)
+ break;
+ adrp = (u16 *) ha->addr;
+ iowrite16(adrp[0], ioaddr + MID_1L + 8 * i);
+ iowrite16(adrp[1], ioaddr + MID_1M + 8 * i);
+ iowrite16(adrp[2], ioaddr + MID_1H + 8 * i);
+ i++;
+ }
+ while (i < MCAST_MAX) {
+ iowrite16(0xffff, ioaddr + MID_1L + 8 * i);
+ iowrite16(0xffff, ioaddr + MID_1M + 8 * i);
+ iowrite16(0xffff, ioaddr + MID_1H + 8 * i);
i++;
}
}
.ndo_set_multicast_list = r6040_multicast_list,
.ndo_change_mtu = eth_change_mtu,
.ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = eth_mac_addr,
+ .ndo_set_mac_address = eth_mac_addr,
.ndo_do_ioctl = r6040_ioctl,
.ndo_tx_timeout = r6040_tx_timeout,
#ifdef CONFIG_NET_POLL_CONTROLLER
u16 *adrp;
int i;
- printk("%s\n", version);
+ pr_info("%s\n", version);
err = pci_enable_device(pdev);
if (err)
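pr_info() supplies the KERN_INFO level itself, which is also why the earlier r6040 hunk dropped the embedded KERN_INFO from the version string; with bare printk() the level had to be part of the format. The macro is essentially:

/* from include/linux/kernel.h; pr_fmt() defaults to the plain format */
#define pr_info(fmt, ...) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)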
/* Some bootloader/BIOSes do not initialize
* MAC address, warn about that */
if (!(adrp[0] || adrp[1] || adrp[2])) {
- netdev_warn(dev, "MAC address not initialized, generating random\n");
+ netdev_warn(dev, "MAC address not initialized, "
+ "generating random\n");
random_ether_addr(dev->dev_addr);
}
#define DRV_MODULE_NAME "tg3"
#define TG3_MAJ_NUM 3
-#define TG3_MIN_NUM 113
+#define TG3_MIN_NUM 115
#define DRV_MODULE_VERSION \
__stringify(TG3_MAJ_NUM) "." __stringify(TG3_MIN_NUM)
-#define DRV_MODULE_RELDATE "August 2, 2010"
+#define DRV_MODULE_RELDATE "October 14, 2010"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
* You can't change the ring sizes, but you can change where you place
* them in the NIC onboard memory.
*/
-#define TG3_RX_RING_SIZE 512
+#define TG3_RX_STD_RING_SIZE(tp) \
+ ((GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 || \
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719) ? \
+ RX_STD_MAX_SIZE_5717 : 512)
#define TG3_DEF_RX_RING_PENDING 200
-#define TG3_RX_JUMBO_RING_SIZE 256
+#define TG3_RX_JMB_RING_SIZE(tp) \
+ ((GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 || \
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719) ? \
+ 1024 : 256)
#define TG3_DEF_RX_JUMBO_RING_PENDING 100
#define TG3_RSS_INDIR_TBL_SIZE 128
* hw multiply/modulo instructions. Another solution would be to
* replace things like '% foo' with '& (foo - 1)'.
*/
-#define TG3_RX_RCB_RING_SIZE(tp) \
- (((tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) && \
- !(tp->tg3_flags2 & TG3_FLG2_5780_CLASS)) ? 1024 : 512)
#define TG3_TX_RING_SIZE 512
#define TG3_DEF_TX_RING_PENDING (TG3_TX_RING_SIZE - 1)
-#define TG3_RX_RING_BYTES (sizeof(struct tg3_rx_buffer_desc) * \
- TG3_RX_RING_SIZE)
-#define TG3_RX_JUMBO_RING_BYTES (sizeof(struct tg3_ext_rx_buffer_desc) * \
- TG3_RX_JUMBO_RING_SIZE)
-#define TG3_RX_RCB_RING_BYTES(tp) (sizeof(struct tg3_rx_buffer_desc) * \
- TG3_RX_RCB_RING_SIZE(tp))
+#define TG3_RX_STD_RING_BYTES(tp) \
+ (sizeof(struct tg3_rx_buffer_desc) * TG3_RX_STD_RING_SIZE(tp))
+#define TG3_RX_JMB_RING_BYTES(tp) \
+ (sizeof(struct tg3_ext_rx_buffer_desc) * TG3_RX_JMB_RING_SIZE(tp))
+#define TG3_RX_RCB_RING_BYTES(tp) \
+ (sizeof(struct tg3_rx_buffer_desc) * (tp->rx_ret_ring_mask + 1))
#define TG3_TX_RING_BYTES (sizeof(struct tg3_tx_buffer_desc) * \
TG3_TX_RING_SIZE)
#define NEXT_TX(N) (((N) + 1) & (TG3_TX_RING_SIZE - 1))
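The comment above names the trick the rest of this patch applies: for a power-of-two ring size, 'idx % size' equals 'idx & (size - 1)', and the AND needs no hardware divide. That is what the new rx_std_ring_mask/rx_jmb_ring_mask/rx_ret_ring_mask fields encode, e.g.:

	mask = size - 1;		/* size must be a power of two */
	next = (idx + 1) & mask;	/* same result as (idx + 1) % size */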
#define TG3_RX_STD_MAP_SZ TG3_RX_DMA_TO_MAP_SZ(TG3_RX_STD_DMA_SZ)
#define TG3_RX_JMB_MAP_SZ TG3_RX_DMA_TO_MAP_SZ(TG3_RX_JMB_DMA_SZ)
-#define TG3_RX_STD_BUFF_RING_SIZE \
- (sizeof(struct ring_info) * TG3_RX_RING_SIZE)
+#define TG3_RX_STD_BUFF_RING_SIZE(tp) \
+ (sizeof(struct ring_info) * TG3_RX_STD_RING_SIZE(tp))
-#define TG3_RX_JMB_BUFF_RING_SIZE \
- (sizeof(struct ring_info) * TG3_RX_JUMBO_RING_SIZE)
+#define TG3_RX_JMB_BUFF_RING_SIZE(tp) \
+ (sizeof(struct ring_info) * TG3_RX_JMB_RING_SIZE(tp))
/* Due to a hardware bug, the 5701 can only DMA to memory addresses
* that are at least dword aligned when used in PCIX mode. The driver
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57788)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_5717)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_5718)},
- {PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_5724)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57781)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57785)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, TG3PCI_DEVICE_TIGON3_57761)},
HOSTCC_MODE_ENABLE | tnapi->coal_now);
}
-static void tg3_napi_disable(struct tg3 *tp)
-{
- int i;
-
- for (i = tp->irq_cnt - 1; i >= 0; i--)
- napi_disable(&tp->napi[i].napi);
-}
-
-static void tg3_napi_enable(struct tg3 *tp)
-{
- int i;
-
- for (i = 0; i < tp->irq_cnt; i++)
- napi_enable(&tp->napi[i].napi);
-}
-
-static inline void tg3_netif_stop(struct tg3 *tp)
-{
- tp->dev->trans_start = jiffies; /* prevent tx timeout */
- tg3_napi_disable(tp);
- netif_tx_disable(tp->dev);
-}
-
-static inline void tg3_netif_start(struct tg3 *tp)
-{
- /* NOTE: unconditional netif_tx_wake_all_queues is only
- * appropriate so long as all callers are assured to
- * have free tx slots (such as after tg3_init_hw)
- */
- netif_tx_wake_all_queues(tp->dev);
-
- tg3_napi_enable(tp);
- tp->napi[0].hw_status->status |= SD_STATUS_UPDATED;
- tg3_enable_ints(tp);
-}
-
static void tg3_switch_clocks(struct tg3 *tp)
{
u32 clock_ctrl;
}
}
+static int tg3_phy_cl45_write(struct tg3 *tp, u32 devad, u32 addr, u32 val)
+{
+ int err;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_CTRL, devad);
+ if (err)
+ goto done;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_ADDRESS, addr);
+ if (err)
+ goto done;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_CTRL,
+ MII_TG3_MMD_CTRL_DATA_NOINC | devad);
+ if (err)
+ goto done;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_ADDRESS, val);
+
+done:
+ return err;
+}
+
+static int tg3_phy_cl45_read(struct tg3 *tp, u32 devad, u32 addr, u32 *val)
+{
+ int err;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_CTRL, devad);
+ if (err)
+ goto done;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_ADDRESS, addr);
+ if (err)
+ goto done;
+
+ err = tg3_writephy(tp, MII_TG3_MMD_CTRL,
+ MII_TG3_MMD_CTRL_DATA_NOINC | devad);
+ if (err)
+ goto done;
+
+ err = tg3_readphy(tp, MII_TG3_MMD_ADDRESS, val);
+
+done:
+ return err;
+}
+
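tg3_phy_cl45_write()/tg3_phy_cl45_read() tunnel IEEE 802.3 Clause 45 MMD register accesses through the Clause 22 MDIO registers: select the MMD device, latch the register address, then flip the control register to data/no-increment mode and transfer the value. tg3_phy_eee_adjust() below uses the read side like so:

	u32 val;

	/* MMD device 7 (auto-negotiation): EEE link-partner resolution */
	tg3_phy_cl45_read(tp, 0x7, TG3_CL45_D7_EEERES_STAT, &val);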
/* tp->lock is held. */
static inline void tg3_generate_fw_event(struct tg3 *tp)
{
}
}
+static int tg3_phydsp_read(struct tg3 *tp, u32 reg, u32 *val)
+{
+ int err;
+
+ err = tg3_writephy(tp, MII_TG3_DSP_ADDRESS, reg);
+ if (!err)
+ err = tg3_readphy(tp, MII_TG3_DSP_RW_PORT, val);
+
+ return err;
+}
+
static int tg3_phydsp_write(struct tg3 *tp, u32 reg, u32 val)
{
int err;
tg3_writephy(tp, MII_TG3_AUX_CTRL, phy);
}
+static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up)
+{
+ u32 val;
+
+ if (!(tp->phy_flags & TG3_PHYFLG_EEE_CAP))
+ return;
+
+ tp->setlpicnt = 0;
+
+ if (tp->link_config.autoneg == AUTONEG_ENABLE &&
+ current_link_up == 1 &&
+ (tp->link_config.active_speed == SPEED_1000 ||
+ (tp->link_config.active_speed == SPEED_100 &&
+ tp->link_config.active_duplex == DUPLEX_FULL))) {
+ u32 eeectl;
+
+ if (tp->link_config.active_speed == SPEED_1000)
+ eeectl = TG3_CPMU_EEE_CTRL_EXIT_16_5_US;
+ else
+ eeectl = TG3_CPMU_EEE_CTRL_EXIT_36_US;
+
+ tw32(TG3_CPMU_EEE_CTRL, eeectl);
+
+ tg3_phy_cl45_read(tp, 0x7, TG3_CL45_D7_EEERES_STAT, &val);
+
+ if (val == TG3_CL45_D7_EEERES_STAT_LP_1000T ||
+ val == TG3_CL45_D7_EEERES_STAT_LP_100TX)
+ tp->setlpicnt = 2;
+ }
+
+ if (!tp->setlpicnt) {
+ val = tr32(TG3_CPMU_EEE_MODE);
+ tw32(TG3_CPMU_EEE_MODE, val & ~TG3_CPMU_EEEMD_LPI_ENABLE);
+ }
+}
+
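Note that tg3_phy_eee_adjust() does not enable LPI directly: it arms tp->setlpicnt, which the periodic timer (see the tg3_periodic_fetch_stats() hunk further down) counts down before setting TG3_CPMU_EEEMD_LPI_ENABLE, so low-power idle is entered only once the link has been up and stable for a couple of timer ticks.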
static int tg3_wait_macro_done(struct tg3 *tp)
{
int limit = 100;
*/
static int tg3_phy_reset(struct tg3 *tp)
{
- u32 cpmuctrl;
- u32 phy_status;
+ u32 val, cpmuctrl;
int err;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
- u32 val;
-
val = tr32(GRC_MISC_CFG);
tw32_f(GRC_MISC_CFG, val & ~GRC_MISC_CFG_EPHY_IDDQ);
udelay(40);
}
- err = tg3_readphy(tp, MII_BMSR, &phy_status);
- err |= tg3_readphy(tp, MII_BMSR, &phy_status);
+ err = tg3_readphy(tp, MII_BMSR, &val);
+ err |= tg3_readphy(tp, MII_BMSR, &val);
if (err != 0)
return -EBUSY;
return err;
if (cpmuctrl & CPMU_CTRL_GPHY_10MB_RXONLY) {
- u32 phy;
-
- phy = MII_TG3_DSP_EXP8_AEDW | MII_TG3_DSP_EXP8_REJ2MHz;
- tg3_phydsp_write(tp, MII_TG3_DSP_EXP8, phy);
+ val = MII_TG3_DSP_EXP8_AEDW | MII_TG3_DSP_EXP8_REJ2MHz;
+ tg3_phydsp_write(tp, MII_TG3_DSP_EXP8, val);
tw32(TG3_CPMU_CTRL, cpmuctrl);
}
if (GET_CHIP_REV(tp->pci_chip_rev_id) == CHIPREV_5784_AX ||
GET_CHIP_REV(tp->pci_chip_rev_id) == CHIPREV_5761_AX) {
- u32 val;
-
val = tr32(TG3_CPMU_LSPD_1000MB_CLK);
if ((val & CPMU_LSPD_1000MB_MACCLK_MASK) ==
CPMU_LSPD_1000MB_MACCLK_12_5) {
/* Cannot do read-modify-write on 5401 */
tg3_writephy(tp, MII_TG3_AUX_CTRL, 0x4c20);
} else if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) {
- u32 phy_reg;
-
/* Set bit 14 with read-modify-write to preserve other bits */
if (!tg3_writephy(tp, MII_TG3_AUX_CTRL, 0x0007) &&
- !tg3_readphy(tp, MII_TG3_AUX_CTRL, &phy_reg))
- tg3_writephy(tp, MII_TG3_AUX_CTRL, phy_reg | 0x4000);
+ !tg3_readphy(tp, MII_TG3_AUX_CTRL, &val))
+ tg3_writephy(tp, MII_TG3_AUX_CTRL, val | 0x4000);
}
/* Set phy register 0x10 bit 0 to high fifo elasticity to support
* jumbo frames transmission.
*/
if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) {
- u32 phy_reg;
-
- if (!tg3_readphy(tp, MII_TG3_EXT_CTRL, &phy_reg))
+ if (!tg3_readphy(tp, MII_TG3_EXT_CTRL, &val))
tg3_writephy(tp, MII_TG3_EXT_CTRL,
- phy_reg | MII_TG3_EXT_CTRL_FIFO_ELASTIC);
+ val | MII_TG3_EXT_CTRL_FIFO_ELASTIC);
}
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
tg3_writephy(tp, MII_TG3_CTRL, new_adv);
}
+ if (tp->phy_flags & TG3_PHYFLG_EEE_CAP) {
+ u32 val = 0;
+
+ tw32(TG3_CPMU_EEE_MODE,
+ tr32(TG3_CPMU_EEE_MODE) & ~TG3_CPMU_EEEMD_LPI_ENABLE);
+
+ /* Enable SM_DSP clock and tx 6dB coding. */
+ val = MII_TG3_AUXCTL_SHDWSEL_AUXCTL |
+ MII_TG3_AUXCTL_ACTL_SMDSP_ENA |
+ MII_TG3_AUXCTL_ACTL_TX_6DB;
+ tg3_writephy(tp, MII_TG3_AUX_CTRL, val);
+
+ if ((GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765) &&
+ !tg3_phydsp_read(tp, MII_TG3_DSP_CH34TP2, &val))
+ tg3_phydsp_write(tp, MII_TG3_DSP_CH34TP2,
+ val | MII_TG3_DSP_CH34TP2_HIBW01);
+
+ if (tp->link_config.autoneg == AUTONEG_ENABLE) {
+ /* Advertise 100-BaseTX EEE ability */
+ if (tp->link_config.advertising &
+ (ADVERTISED_100baseT_Half |
+ ADVERTISED_100baseT_Full))
+ val |= TG3_CL45_D7_EEEADV_CAP_100TX;
+ /* Advertise 1000-BaseT EEE ability */
+ if (tp->link_config.advertising &
+ (ADVERTISED_1000baseT_Half |
+ ADVERTISED_1000baseT_Full))
+ val |= TG3_CL45_D7_EEEADV_CAP_1000T;
+ }
+ tg3_phy_cl45_write(tp, 0x7, TG3_CL45_D7_EEEADV_CAP, val);
+
+ /* Turn off SM_DSP clock. */
+ val = MII_TG3_AUXCTL_SHDWSEL_AUXCTL |
+ MII_TG3_AUXCTL_ACTL_TX_6DB;
+ tg3_writephy(tp, MII_TG3_AUX_CTRL, val);
+ }
+
if (tp->link_config.autoneg == AUTONEG_DISABLE &&
tp->link_config.speed != SPEED_INVALID) {
u32 bmcr, orig_bmcr;
static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset)
{
int current_link_up;
- u32 bmsr, dummy;
+ u32 bmsr, val;
u32 lcl_adv, rmt_adv;
u16 current_speed;
u8 current_duplex;
}
/* Clear pending interrupts... */
- tg3_readphy(tp, MII_TG3_ISTAT, &dummy);
- tg3_readphy(tp, MII_TG3_ISTAT, &dummy);
+ tg3_readphy(tp, MII_TG3_ISTAT, &val);
+ tg3_readphy(tp, MII_TG3_ISTAT, &val);
if (tp->phy_flags & TG3_PHYFLG_USE_MI_INTERRUPT)
tg3_writephy(tp, MII_TG3_IMASK, ~MII_TG3_INT_LINKCHG);
current_duplex = DUPLEX_INVALID;
if (tp->phy_flags & TG3_PHYFLG_CAPACITIVE_COUPLING) {
- u32 val;
-
tg3_writephy(tp, MII_TG3_AUX_CTRL, 0x4007);
tg3_readphy(tp, MII_TG3_AUX_CTRL, &val);
if (!(val & (1 << 10))) {
relink:
if (current_link_up == 0 || (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) {
- u32 tmp;
-
tg3_phy_copper_begin(tp);
- tg3_readphy(tp, MII_BMSR, &tmp);
- if (!tg3_readphy(tp, MII_BMSR, &tmp) &&
- (tmp & BMSR_LSTATUS))
+ tg3_readphy(tp, MII_BMSR, &bmsr);
+ if (!tg3_readphy(tp, MII_BMSR, &bmsr) &&
+ (bmsr & BMSR_LSTATUS))
current_link_up = 1;
}
tw32_f(MAC_MODE, tp->mac_mode);
udelay(40);
+ tg3_phy_eee_adjust(tp, current_link_up);
+
if (tp->tg3_flags & TG3_FLAG_USE_LINKCHG_REG) {
/* Polled via timer. */
tw32_f(MAC_EVENT, 0);
return err;
}
+static inline int tg3_irq_sync(struct tg3 *tp)
+{
+ return tp->irq_sync;
+}
+
/* This is called whenever we suspect that the system chipset is re-
* ordering the sequence of MMIO to the tx send mailbox. The symptom
* is bogus tx completions. We try to recover by setting the
u32 opaque_key, u32 dest_idx_unmasked)
{
struct tg3_rx_buffer_desc *desc;
- struct ring_info *map, *src_map;
+ struct ring_info *map;
struct sk_buff *skb;
dma_addr_t mapping;
int skb_size, dest_idx;
- src_map = NULL;
switch (opaque_key) {
case RXD_OPAQUE_RING_STD:
- dest_idx = dest_idx_unmasked % TG3_RX_RING_SIZE;
+ dest_idx = dest_idx_unmasked & tp->rx_std_ring_mask;
desc = &tpr->rx_std[dest_idx];
map = &tpr->rx_std_buffers[dest_idx];
skb_size = tp->rx_pkt_map_sz;
break;
case RXD_OPAQUE_RING_JUMBO:
- dest_idx = dest_idx_unmasked % TG3_RX_JUMBO_RING_SIZE;
+ dest_idx = dest_idx_unmasked & tp->rx_jmb_ring_mask;
desc = &tpr->rx_jmb[dest_idx].std;
map = &tpr->rx_jmb_buffers[dest_idx];
skb_size = TG3_RX_JMB_MAP_SZ;
struct tg3 *tp = tnapi->tp;
struct tg3_rx_buffer_desc *src_desc, *dest_desc;
struct ring_info *src_map, *dest_map;
- struct tg3_rx_prodring_set *spr = &tp->prodring[0];
+ struct tg3_rx_prodring_set *spr = &tp->napi[0].prodring;
int dest_idx;
switch (opaque_key) {
case RXD_OPAQUE_RING_STD:
- dest_idx = dest_idx_unmasked % TG3_RX_RING_SIZE;
+ dest_idx = dest_idx_unmasked & tp->rx_std_ring_mask;
dest_desc = &dpr->rx_std[dest_idx];
dest_map = &dpr->rx_std_buffers[dest_idx];
src_desc = &spr->rx_std[src_idx];
break;
case RXD_OPAQUE_RING_JUMBO:
- dest_idx = dest_idx_unmasked % TG3_RX_JUMBO_RING_SIZE;
+ dest_idx = dest_idx_unmasked & tp->rx_jmb_ring_mask;
dest_desc = &dpr->rx_jmb[dest_idx].std;
dest_map = &dpr->rx_jmb_buffers[dest_idx];
src_desc = &spr->rx_jmb[src_idx].std;
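Across the tg3 hunks, the rx producer ring set moves from the device-wide tp->prodring[] array into struct tg3_napi (tnapi->prodring), so each interrupt vector owns its own ring set; tp->napi[0].prodring remains the set the hardware is refilled from, which is why the ring-transfer and standalone-refill paths reference it explicitly.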
u32 sw_idx = tnapi->rx_rcb_ptr;
u16 hw_idx;
int received;
- struct tg3_rx_prodring_set *tpr = tnapi->prodring;
+ struct tg3_rx_prodring_set *tpr = &tnapi->prodring;
hw_idx = *(tnapi->rx_rcb_prod_idx);
/*
desc_idx = desc->opaque & RXD_OPAQUE_INDEX_MASK;
opaque_key = desc->opaque & RXD_OPAQUE_RING_MASK;
if (opaque_key == RXD_OPAQUE_RING_STD) {
- ri = &tp->prodring[0].rx_std_buffers[desc_idx];
+ ri = &tp->napi[0].prodring.rx_std_buffers[desc_idx];
dma_addr = dma_unmap_addr(ri, mapping);
skb = ri->skb;
post_ptr = &std_prod_idx;
rx_std_posted++;
} else if (opaque_key == RXD_OPAQUE_RING_JUMBO) {
- ri = &tp->prodring[0].rx_jmb_buffers[desc_idx];
+ ri = &tp->napi[0].prodring.rx_jmb_buffers[desc_idx];
dma_addr = dma_unmap_addr(ri, mapping);
skb = ri->skb;
post_ptr = &jmb_prod_idx;
desc_idx, *post_ptr);
drop_it_no_recycle:
/* Other statistics kept track of by card. */
- tp->net_stats.rx_dropped++;
+ tp->rx_dropped++;
goto next_pkt;
}
>> RXD_TCPCSUM_SHIFT) == 0xffff))
skb->ip_summed = CHECKSUM_UNNECESSARY;
else
- skb->ip_summed = CHECKSUM_NONE;
+ skb_checksum_none_assert(skb);
skb->protocol = eth_type_trans(skb, tp->dev);
if (len > (tp->dev->mtu + ETH_HLEN) &&
skb->protocol != htons(ETH_P_8021Q)) {
dev_kfree_skb(skb);
- goto next_pkt;
+ goto drop_it_no_recycle;
}
if (desc->type_flags & RXD_FLAG_VLAN &&
(*post_ptr)++;
if (unlikely(rx_std_posted >= tp->rx_std_max_post)) {
- tpr->rx_std_prod_idx = std_prod_idx % TG3_RX_RING_SIZE;
+ tpr->rx_std_prod_idx = std_prod_idx &
+ tp->rx_std_ring_mask;
tw32_rx_mbox(TG3_RX_STD_PROD_IDX_REG,
tpr->rx_std_prod_idx);
work_mask &= ~RXD_OPAQUE_RING_STD;
}
next_pkt_nopost:
sw_idx++;
- sw_idx &= (TG3_RX_RCB_RING_SIZE(tp) - 1);
+ sw_idx &= tp->rx_ret_ring_mask;
/* Refresh hw_idx to see if there is new work */
if (sw_idx == hw_idx) {
/* Refill RX ring(s). */
if (!(tp->tg3_flags3 & TG3_FLG3_ENABLE_RSS)) {
if (work_mask & RXD_OPAQUE_RING_STD) {
- tpr->rx_std_prod_idx = std_prod_idx % TG3_RX_RING_SIZE;
+ tpr->rx_std_prod_idx = std_prod_idx &
+ tp->rx_std_ring_mask;
tw32_rx_mbox(TG3_RX_STD_PROD_IDX_REG,
tpr->rx_std_prod_idx);
}
if (work_mask & RXD_OPAQUE_RING_JUMBO) {
- tpr->rx_jmb_prod_idx = jmb_prod_idx %
- TG3_RX_JUMBO_RING_SIZE;
+ tpr->rx_jmb_prod_idx = jmb_prod_idx &
+ tp->rx_jmb_ring_mask;
tw32_rx_mbox(TG3_RX_JMB_PROD_IDX_REG,
tpr->rx_jmb_prod_idx);
}
*/
smp_wmb();
- tpr->rx_std_prod_idx = std_prod_idx % TG3_RX_RING_SIZE;
- tpr->rx_jmb_prod_idx = jmb_prod_idx % TG3_RX_JUMBO_RING_SIZE;
+ tpr->rx_std_prod_idx = std_prod_idx & tp->rx_std_ring_mask;
+ tpr->rx_jmb_prod_idx = jmb_prod_idx & tp->rx_jmb_ring_mask;
if (tnapi != &tp->napi[1])
napi_schedule(&tp->napi[1].napi);
if (spr->rx_std_cons_idx < src_prod_idx)
cpycnt = src_prod_idx - spr->rx_std_cons_idx;
else
- cpycnt = TG3_RX_RING_SIZE - spr->rx_std_cons_idx;
+ cpycnt = tp->rx_std_ring_mask + 1 -
+ spr->rx_std_cons_idx;
- cpycnt = min(cpycnt, TG3_RX_RING_SIZE - dpr->rx_std_prod_idx);
+ cpycnt = min(cpycnt,
+ tp->rx_std_ring_mask + 1 - dpr->rx_std_prod_idx);
si = spr->rx_std_cons_idx;
di = dpr->rx_std_prod_idx;
dbd->addr_lo = sbd->addr_lo;
}
- spr->rx_std_cons_idx = (spr->rx_std_cons_idx + cpycnt) %
- TG3_RX_RING_SIZE;
- dpr->rx_std_prod_idx = (dpr->rx_std_prod_idx + cpycnt) %
- TG3_RX_RING_SIZE;
+ spr->rx_std_cons_idx = (spr->rx_std_cons_idx + cpycnt) &
+ tp->rx_std_ring_mask;
+ dpr->rx_std_prod_idx = (dpr->rx_std_prod_idx + cpycnt) &
+ tp->rx_std_ring_mask;
}
while (1) {
if (spr->rx_jmb_cons_idx < src_prod_idx)
cpycnt = src_prod_idx - spr->rx_jmb_cons_idx;
else
- cpycnt = TG3_RX_JUMBO_RING_SIZE - spr->rx_jmb_cons_idx;
+ cpycnt = tp->rx_jmb_ring_mask + 1 -
+ spr->rx_jmb_cons_idx;
cpycnt = min(cpycnt,
- TG3_RX_JUMBO_RING_SIZE - dpr->rx_jmb_prod_idx);
+ tp->rx_jmb_ring_mask + 1 - dpr->rx_jmb_prod_idx);
si = spr->rx_jmb_cons_idx;
di = dpr->rx_jmb_prod_idx;
dbd->addr_lo = sbd->addr_lo;
}
- spr->rx_jmb_cons_idx = (spr->rx_jmb_cons_idx + cpycnt) %
- TG3_RX_JUMBO_RING_SIZE;
- dpr->rx_jmb_prod_idx = (dpr->rx_jmb_prod_idx + cpycnt) %
- TG3_RX_JUMBO_RING_SIZE;
+ spr->rx_jmb_cons_idx = (spr->rx_jmb_cons_idx + cpycnt) &
+ tp->rx_jmb_ring_mask;
+ dpr->rx_jmb_prod_idx = (dpr->rx_jmb_prod_idx + cpycnt) &
+ tp->rx_jmb_ring_mask;
}
return err;
work_done += tg3_rx(tnapi, budget - work_done);
if ((tp->tg3_flags3 & TG3_FLG3_ENABLE_RSS) && tnapi == &tp->napi[1]) {
- struct tg3_rx_prodring_set *dpr = &tp->prodring[0];
+ struct tg3_rx_prodring_set *dpr = &tp->napi[0].prodring;
int i, err = 0;
u32 std_prod_idx = dpr->rx_std_prod_idx;
u32 jmb_prod_idx = dpr->rx_jmb_prod_idx;
for (i = 1; i < tp->irq_cnt; i++)
err |= tg3_rx_prodring_xfer(tp, dpr,
- tp->napi[i].prodring);
+ &tp->napi[i].prodring);
wmb();
return work_done;
}
+static void tg3_napi_disable(struct tg3 *tp)
+{
+ int i;
+
+ for (i = tp->irq_cnt - 1; i >= 0; i--)
+ napi_disable(&tp->napi[i].napi);
+}
+
+static void tg3_napi_enable(struct tg3 *tp)
+{
+ int i;
+
+ for (i = 0; i < tp->irq_cnt; i++)
+ napi_enable(&tp->napi[i].napi);
+}
+
+static void tg3_napi_init(struct tg3 *tp)
+{
+ int i;
+
+ netif_napi_add(tp->dev, &tp->napi[0].napi, tg3_poll, 64);
+ for (i = 1; i < tp->irq_cnt; i++)
+ netif_napi_add(tp->dev, &tp->napi[i].napi, tg3_poll_msix, 64);
+}
+
+static void tg3_napi_fini(struct tg3 *tp)
+{
+ int i;
+
+ for (i = 0; i < tp->irq_cnt; i++)
+ netif_napi_del(&tp->napi[i].napi);
+}
+
+static inline void tg3_netif_stop(struct tg3 *tp)
+{
+ tp->dev->trans_start = jiffies; /* prevent tx timeout */
+ tg3_napi_disable(tp);
+ netif_tx_disable(tp->dev);
+}
+
+static inline void tg3_netif_start(struct tg3 *tp)
+{
+ /* NOTE: unconditional netif_tx_wake_all_queues is only
+ * appropriate so long as all callers are assured to
+ * have free tx slots (such as after tg3_init_hw)
+ */
+ netif_tx_wake_all_queues(tp->dev);
+
+ tg3_napi_enable(tp);
+ tp->napi[0].hw_status->status |= SD_STATUS_UPDATED;
+ tg3_enable_ints(tp);
+}
+
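The NAPI setup also changes lifetime: netif_napi_add()/netif_napi_del() now live in the new tg3_napi_init()/tg3_napi_fini() helpers, called from the open/close paths (see the hunks adding those calls below) instead of once at probe time, which is why the probe-time per-vector loop near the end of this patch loses its netif_napi_add() calls.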
static void tg3_irq_quiesce(struct tg3 *tp)
{
int i;
synchronize_irq(tp->napi[i].irq_vec);
}
-static inline int tg3_irq_sync(struct tg3 *tp)
-{
- return tp->irq_sync;
-}
-
/* Fully shutdown all tg3 driver activity elsewhere in the system.
* If irq_sync is non-zero, then the IRQ handler must be synchronized
* with as well. Most of the time, this is not necessary except when
{
u32 base = (u32) mapping & 0xffffffff;
- return ((base > 0xffffdcc0) &&
- (base + len + 8 < base));
+ return (base > 0xffffdcc0) && (base + len + 8 < base);
}
/* Test for DMA addresses > 40-bit */
{
#if defined(CONFIG_HIGHMEM) && (BITS_PER_LONG == 64)
if (tp->tg3_flags & TG3_FLAG_40BIT_DMA_BUG)
- return (((u64) mapping + len) > DMA_BIT_MASK(40));
+ return ((u64) mapping + len) > DMA_BIT_MASK(40);
return 0;
#else
return 0;
goto out_unlock;
}
- if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+ if (skb_is_gso_v6(skb)) {
hdrlen = skb_headlen(skb) - ETH_HLEN;
- else {
+ } else {
struct iphdr *iph = ip_hdr(skb);
tcp_opt_len = tcp_optlen(skb);
}
#if TG3_VLAN_TAG_USED
- if (tp->vlgrp != NULL && vlan_tx_tag_present(skb))
+ if (vlan_tx_tag_present(skb))
base_flags |= (TXD_FLAG_VLAN |
(vlan_tx_tag_get(skb) << 16));
#endif
iph = ip_hdr(skb);
tcp_opt_len = tcp_optlen(skb);
- if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
+ if (skb_is_gso_v6(skb)) {
hdr_len = skb_headlen(skb) - ETH_HLEN;
} else {
u32 ip_tcp_len;
}
}
#if TG3_VLAN_TAG_USED
- if (tp->vlgrp != NULL && vlan_tx_tag_present(skb))
+ if (vlan_tx_tag_present(skb))
base_flags |= (TXD_FLAG_VLAN |
(vlan_tx_tag_get(skb) << 16));
#endif
{
int i;
- if (tpr != &tp->prodring[0]) {
+ if (tpr != &tp->napi[0].prodring) {
for (i = tpr->rx_std_cons_idx; i != tpr->rx_std_prod_idx;
- i = (i + 1) % TG3_RX_RING_SIZE)
+ i = (i + 1) & tp->rx_std_ring_mask)
tg3_rx_skb_free(tp, &tpr->rx_std_buffers[i],
tp->rx_pkt_map_sz);
if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) {
for (i = tpr->rx_jmb_cons_idx;
i != tpr->rx_jmb_prod_idx;
- i = (i + 1) % TG3_RX_JUMBO_RING_SIZE) {
+ i = (i + 1) & tp->rx_jmb_ring_mask) {
tg3_rx_skb_free(tp, &tpr->rx_jmb_buffers[i],
TG3_RX_JMB_MAP_SZ);
}
return;
}
- for (i = 0; i < TG3_RX_RING_SIZE; i++)
+ for (i = 0; i <= tp->rx_std_ring_mask; i++)
tg3_rx_skb_free(tp, &tpr->rx_std_buffers[i],
tp->rx_pkt_map_sz);
- if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) {
- for (i = 0; i < TG3_RX_JUMBO_RING_SIZE; i++)
+ if ((tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) &&
+ !(tp->tg3_flags2 & TG3_FLG2_5780_CLASS)) {
+ for (i = 0; i <= tp->rx_jmb_ring_mask; i++)
tg3_rx_skb_free(tp, &tpr->rx_jmb_buffers[i],
TG3_RX_JMB_MAP_SZ);
}
tpr->rx_jmb_cons_idx = 0;
tpr->rx_jmb_prod_idx = 0;
- if (tpr != &tp->prodring[0]) {
- memset(&tpr->rx_std_buffers[0], 0, TG3_RX_STD_BUFF_RING_SIZE);
- if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE)
+ if (tpr != &tp->napi[0].prodring) {
+ memset(&tpr->rx_std_buffers[0], 0,
+ TG3_RX_STD_BUFF_RING_SIZE(tp));
+ if (tpr->rx_jmb_buffers)
memset(&tpr->rx_jmb_buffers[0], 0,
- TG3_RX_JMB_BUFF_RING_SIZE);
+ TG3_RX_JMB_BUFF_RING_SIZE(tp));
goto done;
}
/* Zero out all descriptors. */
- memset(tpr->rx_std, 0, TG3_RX_RING_BYTES);
+ memset(tpr->rx_std, 0, TG3_RX_STD_RING_BYTES(tp));
rx_pkt_dma_sz = TG3_RX_STD_DMA_SZ;
if ((tp->tg3_flags2 & TG3_FLG2_5780_CLASS) &&
* stuff once. This works because the card does not
* write into the rx buffer posting rings.
*/
- for (i = 0; i < TG3_RX_RING_SIZE; i++) {
+ for (i = 0; i <= tp->rx_std_ring_mask; i++) {
struct tg3_rx_buffer_desc *rxd;
rxd = &tpr->rx_std[i];
}
}
- if (!(tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE))
+ if (!(tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) ||
+ (tp->tg3_flags2 & TG3_FLG2_5780_CLASS))
goto done;
- memset(tpr->rx_jmb, 0, TG3_RX_JUMBO_RING_BYTES);
+ memset(tpr->rx_jmb, 0, TG3_RX_JMB_RING_BYTES(tp));
if (!(tp->tg3_flags & TG3_FLAG_JUMBO_RING_ENABLE))
goto done;
- for (i = 0; i < TG3_RX_JUMBO_RING_SIZE; i++) {
+ for (i = 0; i <= tp->rx_jmb_ring_mask; i++) {
struct tg3_rx_buffer_desc *rxd;
rxd = &tpr->rx_jmb[i].std;
kfree(tpr->rx_jmb_buffers);
tpr->rx_jmb_buffers = NULL;
if (tpr->rx_std) {
- pci_free_consistent(tp->pdev, TG3_RX_RING_BYTES,
+ pci_free_consistent(tp->pdev, TG3_RX_STD_RING_BYTES(tp),
tpr->rx_std, tpr->rx_std_mapping);
tpr->rx_std = NULL;
}
if (tpr->rx_jmb) {
- pci_free_consistent(tp->pdev, TG3_RX_JUMBO_RING_BYTES,
+ pci_free_consistent(tp->pdev, TG3_RX_JMB_RING_BYTES(tp),
tpr->rx_jmb, tpr->rx_jmb_mapping);
tpr->rx_jmb = NULL;
}
static int tg3_rx_prodring_init(struct tg3 *tp,
struct tg3_rx_prodring_set *tpr)
{
- tpr->rx_std_buffers = kzalloc(TG3_RX_STD_BUFF_RING_SIZE, GFP_KERNEL);
+ tpr->rx_std_buffers = kzalloc(TG3_RX_STD_BUFF_RING_SIZE(tp),
+ GFP_KERNEL);
if (!tpr->rx_std_buffers)
return -ENOMEM;
- tpr->rx_std = pci_alloc_consistent(tp->pdev, TG3_RX_RING_BYTES,
+ tpr->rx_std = pci_alloc_consistent(tp->pdev, TG3_RX_STD_RING_BYTES(tp),
&tpr->rx_std_mapping);
if (!tpr->rx_std)
goto err_out;
- if (tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) {
- tpr->rx_jmb_buffers = kzalloc(TG3_RX_JMB_BUFF_RING_SIZE,
+ if ((tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) &&
+ !(tp->tg3_flags2 & TG3_FLG2_5780_CLASS)) {
+ tpr->rx_jmb_buffers = kzalloc(TG3_RX_JMB_BUFF_RING_SIZE(tp),
GFP_KERNEL);
if (!tpr->rx_jmb_buffers)
goto err_out;
tpr->rx_jmb = pci_alloc_consistent(tp->pdev,
- TG3_RX_JUMBO_RING_BYTES,
+ TG3_RX_JMB_RING_BYTES(tp),
&tpr->rx_jmb_mapping);
if (!tpr->rx_jmb)
goto err_out;
for (j = 0; j < tp->irq_cnt; j++) {
struct tg3_napi *tnapi = &tp->napi[j];
- tg3_rx_prodring_free(tp, &tp->prodring[j]);
+ tg3_rx_prodring_free(tp, &tnapi->prodring);
if (!tnapi->tx_buffers)
continue;
if (tnapi->rx_rcb)
memset(tnapi->rx_rcb, 0, TG3_RX_RCB_RING_BYTES(tp));
- if (tg3_rx_prodring_alloc(tp, &tp->prodring[i])) {
+ if (tg3_rx_prodring_alloc(tp, &tnapi->prodring)) {
tg3_free_rings(tp);
return -ENOMEM;
}
tnapi->rx_rcb = NULL;
}
+ tg3_rx_prodring_fini(tp, &tnapi->prodring);
+
if (tnapi->hw_status) {
pci_free_consistent(tp->pdev, TG3_HW_STATUS_SIZE,
tnapi->hw_status,
tp->hw_stats, tp->stats_mapping);
tp->hw_stats = NULL;
}
-
- for (i = 0; i < tp->irq_cnt; i++)
- tg3_rx_prodring_fini(tp, &tp->prodring[i]);
}
/*
{
int i;
- for (i = 0; i < tp->irq_cnt; i++) {
- if (tg3_rx_prodring_init(tp, &tp->prodring[i]))
- goto err_out;
- }
-
tp->hw_stats = pci_alloc_consistent(tp->pdev,
sizeof(struct tg3_hw_stats),
&tp->stats_mapping);
memset(tnapi->hw_status, 0, TG3_HW_STATUS_SIZE);
sblk = tnapi->hw_status;
+ if (tg3_rx_prodring_init(tp, &tnapi->prodring))
+ goto err_out;
+
/* If multivector TSS is enabled, vector 0 does not handle
* tx interrupts. Don't allocate any resources for it.
*/
break;
}
- tnapi->prodring = &tp->prodring[i];
-
/*
* If multivector RSS is enabled, vector 0 does not handle
* rx or tx interrupts. Don't allocate any resources for it.
int i;
u32 apedata;
+ /* NCSI does not support APE events */
+ if (tp->tg3_flags3 & TG3_FLG3_APE_HAS_NCSI)
+ return;
+
apedata = tg3_ape_read32(tp, TG3_APE_SEG_SIG);
if (apedata != APE_SEG_SIG_MAGIC)
return;
APE_HOST_DRIVER_ID_MAGIC(TG3_MAJ_NUM, TG3_MIN_NUM));
tg3_ape_write32(tp, TG3_APE_HOST_BEHAVIOR,
APE_HOST_BEHAV_NO_PHYLOCK);
+ tg3_ape_write32(tp, TG3_APE_HOST_DRVR_STATE,
+ TG3_APE_HOST_DRVR_STATE_START);
event = APE_EVENT_STATUS_STATE_START;
break;
*/
tg3_ape_write32(tp, TG3_APE_HOST_SEG_SIG, 0x0);
+ if (device_may_wakeup(&tp->pdev->dev) &&
+ (tp->tg3_flags & TG3_FLAG_WOL_ENABLE)) {
+ tg3_ape_write32(tp, TG3_APE_HOST_WOL_SPEED,
+ TG3_APE_HOST_WOL_SPEED_AUTO);
+ apedata = TG3_APE_HOST_DRVR_STATE_WOL;
+ } else
+ apedata = TG3_APE_HOST_DRVR_STATE_UNLOAD;
+
+ tg3_ape_write32(tp, TG3_APE_HOST_DRVR_STATE, apedata);
+
event = APE_EVENT_STATUS_STATE_UNLOAD;
break;
case RESET_KIND_SUSPEND:
/* Disable all transmit rings but the first. */
if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS))
limit = NIC_SRAM_SEND_RCB + TG3_BDINFO_SIZE * 16;
+ else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+ limit = NIC_SRAM_SEND_RCB + TG3_BDINFO_SIZE * 4;
else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765)
limit = NIC_SRAM_SEND_RCB + TG3_BDINFO_SIZE * 2;
else
/* Zero mailbox registers. */
if (tp->tg3_flags & TG3_FLAG_SUPPORT_MSIX) {
- for (i = 1; i < TG3_IRQ_MAX_VECS; i++) {
+ for (i = 1; i < tp->irq_max; i++) {
tp->napi[i].tx_prod = 0;
tp->napi[i].tx_cons = 0;
if (tp->tg3_flags3 & TG3_FLG3_ENABLE_TSS)
if (tnapi->rx_rcb) {
tg3_set_bdinfo(tp, rxrcb, tnapi->rx_rcb_mapping,
- (TG3_RX_RCB_RING_SIZE(tp) <<
- BDINFO_FLAGS_MAXLEN_SHIFT), 0);
+ (tp->rx_ret_ring_mask + 1) <<
+ BDINFO_FLAGS_MAXLEN_SHIFT, 0);
rxrcb += TG3_BDINFO_SIZE;
}
}
tg3_set_bdinfo(tp, rxrcb, tnapi->rx_rcb_mapping,
- (TG3_RX_RCB_RING_SIZE(tp) <<
+ ((tp->rx_ret_ring_mask + 1) <<
BDINFO_FLAGS_MAXLEN_SHIFT), 0);
stblk += 8;
{
u32 val, rdmac_mode;
int i, err, limit;
- struct tg3_rx_prodring_set *tpr = &tp->prodring[0];
+ struct tg3_rx_prodring_set *tpr = &tp->napi[0].prodring;
tg3_disable_ints(tp);
tw32(TG3_CPMU_LSPD_10MB_CLK, val);
}
+ /* Enable MAC control of LPI */
+ if (tp->phy_flags & TG3_PHYFLG_EEE_CAP) {
+ tw32_f(TG3_CPMU_EEE_LNKIDL_CTRL,
+ TG3_CPMU_EEE_LNKIDL_PCIE_NL0 |
+ TG3_CPMU_EEE_LNKIDL_UART_IDL);
+
+ tw32_f(TG3_CPMU_EEE_CTRL,
+ TG3_CPMU_EEE_CTRL_EXIT_20_1_US);
+
+ tw32_f(TG3_CPMU_EEE_MODE,
+ TG3_CPMU_EEEMD_ERLY_L1_XIT_DET |
+ TG3_CPMU_EEEMD_LPI_IN_TX |
+ TG3_CPMU_EEEMD_LPI_IN_RX |
+ TG3_CPMU_EEEMD_EEE_ENABLE);
+ }
+
/* This works around an issue with Athlon chipsets on
* B3 tigon3 silicon. This bit has no effect on any
* other revision. But do not set this on PCI Express
tw32(BUFMGR_DMA_HIGH_WATER,
tp->bufmgr_config.dma_high_water);
- tw32(BUFMGR_MODE, BUFMGR_MODE_ENABLE | BUFMGR_MODE_ATTN_ENABLE);
+ val = BUFMGR_MODE_ENABLE | BUFMGR_MODE_ATTN_ENABLE;
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+ val |= BUFMGR_MODE_NO_TX_UNDERRUN;
+ tw32(BUFMGR_MODE, val);
for (i = 0; i < 2000; i++) {
if (tr32(BUFMGR_MODE) & BUFMGR_MODE_ENABLE)
break;
BDINFO_FLAGS_DISABLED);
}
- if (tp->tg3_flags3 & TG3_FLG3_5717_PLUS)
- val = (RX_STD_MAX_SIZE_5705 << BDINFO_FLAGS_MAXLEN_SHIFT) |
- (TG3_RX_STD_DMA_SZ << 2);
- else
+ if (tp->tg3_flags3 & TG3_FLG3_5717_PLUS) {
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765)
+ val = RX_STD_MAX_SIZE_5705;
+ else
+ val = RX_STD_MAX_SIZE_5717;
+ val <<= BDINFO_FLAGS_MAXLEN_SHIFT;
+ val |= (TG3_RX_STD_DMA_SZ << 2);
+ } else
val = TG3_RX_STD_DMA_SZ << BDINFO_FLAGS_MAXLEN_SHIFT;
} else
val = RX_STD_MAX_SIZE_5705 << BDINFO_FLAGS_MAXLEN_SHIFT;
GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780)
rdmac_mode |= RDMAC_MODE_IPV6_LSO_EN;
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5761 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5784 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5785 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780 ||
+ (tp->tg3_flags3 & TG3_FLG3_5717_PLUS)) {
+ val = tr32(TG3_RDMA_RSRVCTRL_REG);
+ tw32(TG3_RDMA_RSRVCTRL_REG,
+ val | TG3_RDMA_RSRVCTRL_FIFO_OFLW_FIX);
+ }
+
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719) {
+ val = tr32(TG3_LSO_RD_DMA_CRPTEN_CTRL);
+ tw32(TG3_LSO_RD_DMA_CRPTEN_CTRL, val |
+ TG3_LSO_RD_DMA_CRPTEN_CTRL_BLEN_BD_4K |
+ TG3_LSO_RD_DMA_CRPTEN_CTRL_BLEN_LSO_4K);
+ }
+
/* Receive/send statistics. */
if (tp->tg3_flags2 & TG3_FLG2_5750_PLUS) {
val = tr32(RCVLPC_STATS_ENABLE);
tw32(SNDBDC_MODE, SNDBDC_MODE_ENABLE | SNDBDC_MODE_ATTN_ENABLE);
tw32(RCVBDI_MODE, RCVBDI_MODE_ENABLE | RCVBDI_MODE_RCB_ATTN_ENAB);
- tw32(RCVDBDI_MODE, RCVDBDI_MODE_ENABLE | RCVDBDI_MODE_INV_RING_SZ);
+ val = RCVDBDI_MODE_ENABLE | RCVDBDI_MODE_INV_RING_SZ;
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+ val |= RCVDBDI_MODE_LRG_RING_SZ;
+ tw32(RCVDBDI_MODE, val);
tw32(SNDDATAI_MODE, SNDDATAI_MODE_ENABLE);
if (tp->tg3_flags2 & TG3_FLG2_HW_TSO)
tw32(SNDDATAI_MODE, SNDDATAI_MODE_ENABLE | 0x8);
if (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)
tg3_periodic_fetch_stats(tp);
+ if (tp->setlpicnt && !--tp->setlpicnt) {
+ u32 val = tr32(TG3_CPMU_EEE_MODE);
+ tw32(TG3_CPMU_EEE_MODE,
+ val | TG3_CPMU_EEEMD_LPI_ENABLE);
+ }
+
if (tp->tg3_flags & TG3_FLAG_USE_LINKCHG_REG) {
u32 mac_stat;
int phy_event;
for (i = 0; i < tp->irq_max; i++)
tp->napi[i].irq_vec = msix_ent[i].vector;
- tp->dev->real_num_tx_queues = 1;
- if (tp->irq_cnt > 1) {
- tp->tg3_flags3 |= TG3_FLG3_ENABLE_RSS;
-
- if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
- GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719) {
- tp->tg3_flags3 |= TG3_FLG3_ENABLE_TSS;
- tp->dev->real_num_tx_queues = tp->irq_cnt - 1;
- }
+ netif_set_real_num_tx_queues(tp->dev, 1);
+ rc = tp->irq_cnt > 1 ? tp->irq_cnt - 1 : 1;
+ if (netif_set_real_num_rx_queues(tp->dev, rc)) {
+ pci_disable_msix(tp->pdev);
+ return false;
}
+ if (tp->irq_cnt > 1)
+ tp->tg3_flags3 |= TG3_FLG3_ENABLE_RSS;
return true;
}
if (!(tp->tg3_flags2 & TG3_FLG2_USING_MSIX)) {
tp->irq_cnt = 1;
tp->napi[0].irq_vec = tp->pdev->irq;
- tp->dev->real_num_tx_queues = 1;
+ netif_set_real_num_tx_queues(tp->dev, 1);
+ netif_set_real_num_rx_queues(tp->dev, 1);
}
}
if (err)
goto err_out1;
+ tg3_napi_init(tp);
+
tg3_napi_enable(tp);
for (i = 0; i < tp->irq_cnt; i++) {
err_out2:
tg3_napi_disable(tp);
+ tg3_napi_fini(tp);
tg3_free_consistent(tp);
err_out1:
memcpy(&tp->estats_prev, tg3_get_estats(tp),
sizeof(tp->estats_prev));
+ tg3_napi_fini(tp);
+
tg3_free_consistent(tp);
tg3_set_power_state(tp, PCI_D3hot);
stats->rx_missed_errors = old_stats->rx_missed_errors +
get_stat64(&hw_stats->rx_discards);
+ stats->rx_dropped = tp->rx_dropped;
+
return stats;
}
if (netif_running(dev)) {
cmd->speed = tp->link_config.active_speed;
cmd->duplex = tp->link_config.active_duplex;
+ } else {
+ cmd->speed = SPEED_INVALID;
+ cmd->duplex = DUPLEX_INVALID;
}
cmd->phy_address = tp->phy_addr;
cmd->transceiver = XCVR_INTERNAL;
{
struct tg3 *tp = netdev_priv(dev);
- ering->rx_max_pending = TG3_RX_RING_SIZE - 1;
+ ering->rx_max_pending = tp->rx_std_ring_mask;
ering->rx_mini_max_pending = 0;
if (tp->tg3_flags & TG3_FLAG_JUMBO_RING_ENABLE)
- ering->rx_jumbo_max_pending = TG3_RX_JUMBO_RING_SIZE - 1;
+ ering->rx_jumbo_max_pending = tp->rx_jmb_ring_mask;
else
ering->rx_jumbo_max_pending = 0;
struct tg3 *tp = netdev_priv(dev);
int i, irq_sync = 0, err = 0;
- if ((ering->rx_pending > TG3_RX_RING_SIZE - 1) ||
- (ering->rx_jumbo_pending > TG3_RX_JUMBO_RING_SIZE - 1) ||
+ if ((ering->rx_pending > tp->rx_std_ring_mask) ||
+ (ering->rx_jumbo_pending > tp->rx_jmb_ring_mask) ||
(ering->tx_pending > TG3_TX_RING_SIZE - 1) ||
(ering->tx_pending <= MAX_SKB_FRAGS) ||
((tp->tg3_flags2 & TG3_FLG2_TSO_BUG) &&
tp->rx_pending = 63;
tp->rx_jumbo_pending = ering->rx_jumbo_pending;
- for (i = 0; i < TG3_IRQ_MAX_VECS; i++)
+ for (i = 0; i < tp->irq_max; i++)
tp->napi[i].tx_pending = ering->tx_pending;
if (netif_running(dev)) {
if (!(phydev->supported & SUPPORTED_Pause) ||
(!(phydev->supported & SUPPORTED_Asym_Pause) &&
- ((epause->rx_pause && !epause->tx_pause) ||
- (!epause->rx_pause && epause->tx_pause))))
+ (epause->rx_pause != epause->tx_pause)))
return -EINVAL;
tp->link_config.flowctrl = 0;
int num_pkts, tx_len, rx_len, i, err;
struct tg3_rx_buffer_desc *desc;
struct tg3_napi *tnapi, *rnapi;
- struct tg3_rx_prodring_set *tpr = &tp->prodring[0];
+ struct tg3_rx_prodring_set *tpr = &tp->napi[0].prodring;
tnapi = &tp->napi[0];
rnapi = &tp->napi[0];
if (tp->irq_cnt > 1) {
- rnapi = &tp->napi[1];
+ if (tp->tg3_flags3 & TG3_FLG3_ENABLE_RSS)
+ rnapi = &tp->napi[1];
if (tp->tg3_flags3 & TG3_FLG3_ENABLE_TSS)
tnapi = &tp->napi[1];
}
}
}
+ if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_5718 ||
+ (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
+ tp->pci_chip_rev_id != CHIPREV_ID_57765_A0))
+ tp->phy_flags |= TG3_PHYFLG_EEE_CAP;
+
if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
!(tp->tg3_flags3 & TG3_FLG3_ENABLE_APE) &&
!(tp->tg3_flags & TG3_FLAG_ENABLE_ASF)) {
static void __devinit tg3_read_vpd(struct tg3 *tp)
{
- u8 vpd_data[TG3_NVM_VPD_LEN];
+ u8 *vpd_data;
unsigned int block_end, rosize, len;
int j, i = 0;
u32 magic;
if ((tp->tg3_flags3 & TG3_FLG3_NO_NVRAM) ||
tg3_nvram_read(tp, 0x0, &magic))
- goto out_not_found;
+ goto out_no_vpd;
+
+ vpd_data = kmalloc(TG3_NVM_VPD_LEN, GFP_KERNEL);
+ if (!vpd_data)
+ goto out_no_vpd;
if (magic == TG3_EEPROM_MAGIC) {
for (i = 0; i < TG3_NVM_VPD_LEN; i += 4) {
memcpy(tp->board_part_number, &vpd_data[i], len);
- return;
-
out_not_found:
- if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
+ kfree(vpd_data);
+ if (tp->board_part_number[0])
+ return;
+
+out_no_vpd:
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717) {
+ if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_5717)
+ strcpy(tp->board_part_number, "BCM5717");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_5718)
+ strcpy(tp->board_part_number, "BCM5718");
+ else
+ goto nomatch;
+ } else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780) {
+ if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57780)
+ strcpy(tp->board_part_number, "BCM57780");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57760)
+ strcpy(tp->board_part_number, "BCM57760");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57790)
+ strcpy(tp->board_part_number, "BCM57790");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57788)
+ strcpy(tp->board_part_number, "BCM57788");
+ else
+ goto nomatch;
+ } else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765) {
+ if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57761)
+ strcpy(tp->board_part_number, "BCM57761");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57765)
+ strcpy(tp->board_part_number, "BCM57765");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57781)
+ strcpy(tp->board_part_number, "BCM57781");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57785)
+ strcpy(tp->board_part_number, "BCM57785");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57791)
+ strcpy(tp->board_part_number, "BCM57791");
+ else if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_57795)
+ strcpy(tp->board_part_number, "BCM57795");
+ else
+ goto nomatch;
+ } else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) {
strcpy(tp->board_part_number, "BCM95906");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57780)
- strcpy(tp->board_part_number, "BCM57780");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57760)
- strcpy(tp->board_part_number, "BCM57760");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57790)
- strcpy(tp->board_part_number, "BCM57790");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57780 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57788)
- strcpy(tp->board_part_number, "BCM57788");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57761)
- strcpy(tp->board_part_number, "BCM57761");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57765)
- strcpy(tp->board_part_number, "BCM57765");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57781)
- strcpy(tp->board_part_number, "BCM57781");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57785)
- strcpy(tp->board_part_number, "BCM57785");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57791)
- strcpy(tp->board_part_number, "BCM57791");
- else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_57795)
- strcpy(tp->board_part_number, "BCM57795");
- else
+ } else {
+nomatch:
strcpy(tp->board_part_number, "none");
+ }
}
static int __devinit tg3_fw_img_is_valid(struct tg3 *tp, u32 offset)
case TG3_EEPROM_SB_REVISION_5:
offset = TG3_EEPROM_SB_F1R5_EDH_OFF;
break;
+ case TG3_EEPROM_SB_REVISION_6:
+ offset = TG3_EEPROM_SB_F1R6_EDH_OFF;
+ break;
default:
return;
}
apedata = tg3_ape_read32(tp, TG3_APE_FW_VERSION);
- if (tg3_ape_read32(tp, TG3_APE_FW_FEATURES) & TG3_APE_FW_FEATURE_NCSI)
+ if (tg3_ape_read32(tp, TG3_APE_FW_FEATURES) & TG3_APE_FW_FEATURE_NCSI) {
+ tp->tg3_flags3 |= TG3_FLG3_APE_HAS_NCSI;
fwtype = "NCSI";
- else
+ } else {
fwtype = "DASH";
+ }
vlen = strlen(tp->fw_ver);
#endif
}
+static inline u32 tg3_rx_ret_ring_size(struct tg3 *tp)
+{
+ if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+ return 4096;
+ else if ((tp->tg3_flags & TG3_FLAG_JUMBO_CAPABLE) &&
+ !(tp->tg3_flags2 & TG3_FLG2_5780_CLASS))
+ return 1024;
+ else
+ return 512;
+}
+
static int __devinit tg3_get_invariants(struct tg3 *tp)
{
static struct pci_device_id write_reorder_chipsets[] = {
if (tp->pdev->device == TG3PCI_DEVICE_TIGON3_5717 ||
tp->pdev->device == TG3PCI_DEVICE_TIGON3_5718 ||
- tp->pdev->device == TG3PCI_DEVICE_TIGON3_5724 ||
tp->pdev->device == TG3PCI_DEVICE_TIGON3_5719)
pci_read_config_dword(tp->pdev,
TG3PCI_GEN2_PRODID_ASICREV,
if (err)
return err;
- if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 &&
- tp->pci_chip_rev_id != CHIPREV_ID_5717_A0)
- return -ENOTSUPP;
-
/* Initialize data/descriptor byte/word swapping. */
val = tr32(GRC_MODE);
val &= GRC_MODE_HOST_STACKUP;
#endif
}
- tp->rx_std_max_post = TG3_RX_RING_SIZE;
+ tp->rx_std_ring_mask = TG3_RX_STD_RING_SIZE(tp) - 1;
+ tp->rx_jmb_ring_mask = TG3_RX_JMB_RING_SIZE(tp) - 1;
+ tp->rx_ret_ring_mask = tg3_rx_ret_ring_size(tp) - 1;
+
+ tp->rx_std_max_post = tp->rx_std_ring_mask + 1;
/* Increment the rx prod index on the rx std ring by at most
* 8 for these chips to workaround hw errata.
}
if ((tp->tg3_flags3 & TG3_FLG3_5755_PLUS) &&
- tp->pci_chip_rev_id != CHIPREV_ID_5717_A0 &&
+ GET_ASIC_REV(tp->pci_chip_rev_id) != ASIC_REV_5717 &&
GET_ASIC_REV(tp->pci_chip_rev_id) != ASIC_REV_5719)
dev->netdev_ops = &tg3_netdev_ops;
else
intmbx = MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW;
rcvmbx = MAILBOX_RCVRET_CON_IDX_0 + TG3_64BIT_REG_LOW;
sndmbx = MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW;
- for (i = 0; i < TG3_IRQ_MAX_VECS; i++) {
+ for (i = 0; i < tp->irq_max; i++) {
struct tg3_napi *tnapi = &tp->napi[i];
tnapi->tp = tp;
tnapi->consmbox = rcvmbx;
tnapi->prodmbox = sndmbx;
- if (i) {
+ if (i)
tnapi->coal_now = HOSTCC_MODE_COAL_VEC1_NOW << (i - 1);
- netif_napi_add(dev, &tnapi->napi, tg3_poll_msix, 64);
- } else {
+ else
tnapi->coal_now = HOSTCC_MODE_NOW;
- netif_napi_add(dev, &tnapi->napi, tg3_poll, 64);
- }
if (!(tp->tg3_flags & TG3_FLAG_SUPPORT_MSIX))
break;
#define TG3_RX_INTERNAL_RING_SZ_5906 32
#define RX_STD_MAX_SIZE_5705 512
+#define RX_STD_MAX_SIZE_5717 2048
#define RX_JUMBO_MAX_SIZE 0xdeadbeef /* XXX */
/* First 256 bytes are a mirror of PCI config space. */
#define TG3PCI_DEVICE_TIGON3_5785_F 0x16a0 /* 10/100 only */
#define TG3PCI_DEVICE_TIGON3_5717 0x1655
#define TG3PCI_DEVICE_TIGON3_5718 0x1656
-#define TG3PCI_DEVICE_TIGON3_5724 0x165c
#define TG3PCI_DEVICE_TIGON3_57781 0x16b1
#define TG3PCI_DEVICE_TIGON3_57785 0x16b5
#define TG3PCI_DEVICE_TIGON3_57761 0x16b0
#define RCVDBDI_MODE_JUMBOBD_NEEDED 0x00000004
#define RCVDBDI_MODE_FRM_TOO_BIG 0x00000008
#define RCVDBDI_MODE_INV_RING_SZ 0x00000010
+#define RCVDBDI_MODE_LRG_RING_SZ 0x00010000
#define RCVDBDI_STATUS 0x00002404
#define RCVDBDI_STATUS_JUMBOBD_NEEDED 0x00000004
#define RCVDBDI_STATUS_FRM_TOO_BIG 0x00000008
#define CPMU_MUTEX_GNT_DRIVER 0x00001000
#define TG3_CPMU_PHY_STRAP 0x00003664
#define TG3_CPMU_PHY_STRAP_IS_SERDES 0x00000020
-/* 0x3664 --> 0x3800 unused */
+/* 0x3664 --> 0x36b0 unused */
+
+#define TG3_CPMU_EEE_MODE 0x000036b0
+#define TG3_CPMU_EEEMD_ERLY_L1_XIT_DET 0x00000008
+#define TG3_CPMU_EEEMD_LPI_ENABLE 0x00000080
+#define TG3_CPMU_EEEMD_LPI_IN_TX 0x00000100
+#define TG3_CPMU_EEEMD_LPI_IN_RX 0x00000200
+#define TG3_CPMU_EEEMD_EEE_ENABLE 0x00100000
+/* 0x36b4 --> 0x36b8 unused */
+
+#define TG3_CPMU_EEE_LNKIDL_CTRL 0x000036bc
+#define TG3_CPMU_EEE_LNKIDL_PCIE_NL0 0x01000000
+#define TG3_CPMU_EEE_LNKIDL_UART_IDL 0x00000004
+/* 0x36c0 --> 0x36d0 unused */
+
+#define TG3_CPMU_EEE_CTRL 0x000036d0
+#define TG3_CPMU_EEE_CTRL_EXIT_16_5_US 0x0000019d
+#define TG3_CPMU_EEE_CTRL_EXIT_36_US 0x00000384
+#define TG3_CPMU_EEE_CTRL_EXIT_20_1_US 0x000001f8
+/* 0x36d4 --> 0x3800 unused */
/* Mbuf cluster free registers */
#define MBFREE_MODE 0x00003800
#define BUFMGR_MODE_ATTN_ENABLE 0x00000004
#define BUFMGR_MODE_BM_TEST 0x00000008
#define BUFMGR_MODE_MBLOW_ATTN_ENAB 0x00000010
+#define BUFMGR_MODE_NO_TX_UNDERRUN 0x80000000
#define BUFMGR_STATUS 0x00004404
#define BUFMGR_STATUS_ERROR 0x00000004
#define BUFMGR_STATUS_MBLOW 0x00000010
#define RDMAC_STATUS_FIFOURUN 0x00000080
#define RDMAC_STATUS_FIFOOREAD 0x00000100
#define RDMAC_STATUS_LNGREAD 0x00000200
-/* 0x4808 --> 0x4c00 unused */
+/* 0x4808 --> 0x4900 unused */
+
+#define TG3_RDMA_RSRVCTRL_REG 0x00004900
+#define TG3_RDMA_RSRVCTRL_FIFO_OFLW_FIX 0x00000004
+/* 0x4904 --> 0x4910 unused */
+
+#define TG3_LSO_RD_DMA_CRPTEN_CTRL 0x00004910
+#define TG3_LSO_RD_DMA_CRPTEN_CTRL_BLEN_BD_4K 0x00030000
+#define TG3_LSO_RD_DMA_CRPTEN_CTRL_BLEN_LSO_4K 0x000c0000
+/* 0x4914 --> 0x4c00 unused */
/* Write DMA control registers */
#define WDMAC_MODE 0x00004c00
#define TG3_EEPROM_SB_REVISION_3 0x00030000
#define TG3_EEPROM_SB_REVISION_4 0x00040000
#define TG3_EEPROM_SB_REVISION_5 0x00050000
+#define TG3_EEPROM_SB_REVISION_6 0x00060000
#define TG3_EEPROM_MAGIC_HW 0xabcd
#define TG3_EEPROM_MAGIC_HW_MSK 0xffff
#define TG3_EEPROM_SB_F1R3_EDH_OFF 0x18
#define TG3_EEPROM_SB_F1R4_EDH_OFF 0x1c
#define TG3_EEPROM_SB_F1R5_EDH_OFF 0x20
+#define TG3_EEPROM_SB_F1R6_EDH_OFF 0x4c
#define TG3_EEPROM_SB_EDH_MAJ_MASK 0x00000700
#define TG3_EEPROM_SB_EDH_MAJ_SHFT 8
#define TG3_EEPROM_SB_EDH_MIN_MASK 0x000000ff
#define MII_TG3_CTRL_AS_MASTER 0x0800
#define MII_TG3_CTRL_ENABLE_AS_MASTER 0x1000
+#define MII_TG3_MMD_CTRL 0x0d /* MMD Access Control register */
+#define MII_TG3_MMD_CTRL_DATA_NOINC 0x4000
+#define MII_TG3_MMD_ADDRESS 0x0e /* MMD Address Data register */
+
#define MII_TG3_EXT_CTRL 0x10 /* Extended control register */
#define MII_TG3_EXT_CTRL_FIFO_ELASTIC 0x0001
#define MII_TG3_EXT_CTRL_LNK3_LED_MODE 0x0002
#define MII_TG3_DSP_TAP1 0x0001
#define MII_TG3_DSP_TAP1_AGCTGT_DFLT 0x0007
#define MII_TG3_DSP_AADJ1CH0 0x001f
+#define MII_TG3_DSP_CH34TP2 0x4022
+#define MII_TG3_DSP_CH34TP2_HIBW01 0x0010
#define MII_TG3_DSP_AADJ1CH3 0x601f
#define MII_TG3_DSP_AADJ1CH3_ADCCKADJ 0x0002
#define MII_TG3_DSP_EXP1_INT_STAT 0x0f01
#define MII_TG3_TEST1_TRIM_EN 0x0010
#define MII_TG3_TEST1_CRC_EN 0x8000
+/* Clause 45 expansion registers */
+#define TG3_CL45_D7_EEEADV_CAP 0x003c
+#define TG3_CL45_D7_EEEADV_CAP_100TX 0x0002
+#define TG3_CL45_D7_EEEADV_CAP_1000T 0x0004
+#define TG3_CL45_D7_EEERES_STAT 0x803e
+#define TG3_CL45_D7_EEERES_STAT_LP_100TX 0x0002
+#define TG3_CL45_D7_EEERES_STAT_LP_1000T 0x0004
+
/* Fast Ethernet Transceiver definitions */
#define MII_TG3_FET_PTEST 0x17
#define TG3_APE_HOST_SEG_SIG 0x4200
#define APE_HOST_SEG_SIG_MAGIC 0x484f5354
#define TG3_APE_HOST_SEG_LEN 0x4204
-#define APE_HOST_SEG_LEN_MAGIC 0x0000001c
+#define APE_HOST_SEG_LEN_MAGIC 0x00000020
#define TG3_APE_HOST_INIT_COUNT 0x4208
#define TG3_APE_HOST_DRIVER_ID 0x420c
#define APE_HOST_DRIVER_ID_LINUX 0xf0000000
#define APE_HOST_HEARTBEAT_INT_DISABLE 0
#define APE_HOST_HEARTBEAT_INT_5SEC 5000
#define TG3_APE_HOST_HEARTBEAT_COUNT 0x4218
+#define TG3_APE_HOST_DRVR_STATE 0x421c
+#define TG3_APE_HOST_DRVR_STATE_START 0x00000001
+#define TG3_APE_HOST_DRVR_STATE_UNLOAD 0x00000002
+#define TG3_APE_HOST_DRVR_STATE_WOL 0x00000003
+#define TG3_APE_HOST_WOL_SPEED 0x4224
+#define TG3_APE_HOST_WOL_SPEED_AUTO 0x00008000
#define TG3_APE_EVENT_STATUS 0x4300
dma_addr_t rx_jmb_mapping;
};
-#define TG3_IRQ_MAX_VECS 5
+#define TG3_IRQ_MAX_VECS_RSS 5
+#define TG3_IRQ_MAX_VECS TG3_IRQ_MAX_VECS_RSS
struct tg3_napi {
struct napi_struct napi ____cacheline_aligned;
u32 consmbox;
u32 rx_rcb_ptr;
u16 *rx_rcb_prod_idx;
- struct tg3_rx_prodring_set *prodring;
+ struct tg3_rx_prodring_set prodring;
struct tg3_rx_buffer_desc *rx_rcb;
struct tg3_tx_buffer_desc *tx_ring;
void (*write32_rx_mbox) (struct tg3 *, u32,
u32);
u32 rx_copy_thresh;
+ u32 rx_std_ring_mask;
+ u32 rx_jmb_ring_mask;
+ u32 rx_ret_ring_mask;
u32 rx_pending;
u32 rx_jumbo_pending;
u32 rx_std_max_post;
struct vlan_group *vlgrp;
#endif
- struct tg3_rx_prodring_set prodring[TG3_IRQ_MAX_VECS];
-
/* begin "everything else" cacheline(s) section */
- struct rtnl_link_stats64 net_stats;
+ unsigned long rx_dropped;
struct rtnl_link_stats64 net_stats_prev;
struct tg3_ethtool_stats estats;
struct tg3_ethtool_stats estats_prev;
#define TG3_FLG3_USE_JUMBO_BDFLAG 0x00400000
#define TG3_FLG3_L1PLLPD_EN 0x00800000
#define TG3_FLG3_5717_PLUS 0x01000000
+#define TG3_FLG3_APE_HAS_NCSI 0x02000000
struct timer_list timer;
u16 timer_counter;
#define TG3_PHYFLG_BER_BUG 0x00008000
#define TG3_PHYFLG_SERDES_PREEMPHASIS 0x00010000
#define TG3_PHYFLG_PARALLEL_DETECT 0x00020000
+#define TG3_PHYFLG_EEE_CAP 0x00040000
u32 led_ctrl;
u32 phy_otp;
+ u32 setlpicnt;
#define TG3_BPN_SIZE 24
char board_part_number[TG3_BPN_SIZE];
}
}
+/* Helper to allocate iovec buffers for all vqs. */
+static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
+{
+ int i;
+ for (i = 0; i < dev->nvqs; ++i) {
+ dev->vqs[i].indirect = kmalloc(sizeof *dev->vqs[i].indirect *
+ UIO_MAXIOV, GFP_KERNEL);
+ dev->vqs[i].log = kmalloc(sizeof *dev->vqs[i].log * UIO_MAXIOV,
+ GFP_KERNEL);
+ dev->vqs[i].heads = kmalloc(sizeof *dev->vqs[i].heads *
+ UIO_MAXIOV, GFP_KERNEL);
+
+ if (!dev->vqs[i].indirect || !dev->vqs[i].log ||
+ !dev->vqs[i].heads)
+ goto err_nomem;
+ }
+ return 0;
+err_nomem:
+ for (; i >= 0; --i) {
+ kfree(dev->vqs[i].indirect);
+ kfree(dev->vqs[i].log);
+ kfree(dev->vqs[i].heads);
+ }
+ return -ENOMEM;
+}
+
+static void vhost_dev_free_iovecs(struct vhost_dev *dev)
+{
+ int i;
+ for (i = 0; i < dev->nvqs; ++i) {
+ kfree(dev->vqs[i].indirect);
+ dev->vqs[i].indirect = NULL;
+ kfree(dev->vqs[i].log);
+ dev->vqs[i].log = NULL;
+ kfree(dev->vqs[i].heads);
+ dev->vqs[i].heads = NULL;
+ }
+}
+
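vhost_dev_alloc_iovecs() is the usual all-or-nothing allocator: all three kmalloc() calls for slot i run before the check, so on failure the unwind loop can start at i itself and walk down, relying on kfree(NULL) being a no-op. A self-contained sketch of the same pattern, with malloc()/free() standing in for kmalloc()/kfree() and arbitrary buffer sizes:

    #include <stdlib.h>

    struct vq { void *indirect, *log, *heads; };

    /* Allocate three buffers per queue, or roll everything back. */
    static int alloc_all(struct vq *vqs, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            /* All three run before the check, so slot i is fully
             * assigned (possibly NULL) when we reach the unwind. */
            vqs[i].indirect = malloc(1024);
            vqs[i].log = malloc(1024);
            vqs[i].heads = malloc(1024);
            if (!vqs[i].indirect || !vqs[i].log || !vqs[i].heads)
                goto err_nomem;
        }
        return 0;

    err_nomem:
        for (; i >= 0; --i) {   /* free(NULL) is a no-op */
            free(vqs[i].indirect);
            free(vqs[i].log);
            free(vqs[i].heads);
        }
        return -1;              /* stand-in for -ENOMEM */
    }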
long vhost_dev_init(struct vhost_dev *dev,
struct vhost_virtqueue *vqs, int nvqs)
{
dev->worker = NULL;
for (i = 0; i < dev->nvqs; ++i) {
+ dev->vqs[i].log = NULL;
+ dev->vqs[i].indirect = NULL;
+ dev->vqs[i].heads = NULL;
dev->vqs[i].dev = dev;
mutex_init(&dev->vqs[i].mutex);
vhost_vq_reset(dev, dev->vqs + i);
if (err)
goto err_cgroup;
+ err = vhost_dev_alloc_iovecs(dev);
+ if (err)
+ goto err_cgroup;
+
return 0;
err_cgroup:
kthread_stop(worker);
fput(dev->vqs[i].call);
vhost_vq_reset(dev, dev->vqs + i);
}
+ vhost_dev_free_iovecs(dev);
if (dev->log_ctx)
eventfd_ctx_put(dev->log_ctx);
dev->log_ctx = NULL;
/* Make sure 64 bit math will not overflow. */
if (a > ULONG_MAX - (unsigned long)log_base ||
a + (unsigned long)log_base > ULONG_MAX)
- return -EFAULT;
+ return 0;
return access_ok(VERIFY_WRITE, log_base + a,
(sz + VHOST_PAGE_SIZE * 8 - 1) / VHOST_PAGE_SIZE / 8);
}
ret = translate_desc(dev, indirect->addr, indirect->len, vq->indirect,
- ARRAY_SIZE(vq->indirect));
+ UIO_MAXIOV);
if (unlikely(ret < 0)) {
vq_err(vq, "Translation failure %d in indirect.\n", ret);
return ret;
header-y += ext2_fs.h
header-y += fadvise.h
header-y += falloc.h
- header-y += fanotify.h
header-y += fb.h
header-y += fcntl.h
header-y += fd.h
header-y += radeonfb.h
header-y += random.h
header-y += raw.h
+header-y += rds.h
header-y += reboot.h
header-y += reiserfs_fs.h
header-y += reiserfs_xattr.h
goto done;
}
- if (la.l2_psm && __le16_to_cpu(la.l2_psm) < 0x1001 &&
- !capable(CAP_NET_BIND_SERVICE)) {
- err = -EACCES;
- goto done;
+ if (la.l2_psm) {
+ __u16 psm = __le16_to_cpu(la.l2_psm);
+
+ /* PSM must be odd and lsb of upper byte must be 0 */
+ if ((psm & 0x0101) != 0x0001) {
+ err = -EINVAL;
+ goto done;
+ }
+
+ /* Restrict usage of well-known PSMs */
+ if (psm < 0x1001 && !capable(CAP_NET_BIND_SERVICE)) {
+ err = -EACCES;
+ goto done;
+ }
}
write_lock_bh(&l2cap_sk_list.lock);
goto done;
}
+ /* PSM must be odd and lsb of upper byte must be 0 */
+ if ((__le16_to_cpu(la.l2_psm) & 0x0101) != 0x0001 &&
+ sk->sk_type != SOCK_RAW) {
+ err = -EINVAL;
+ goto done;
+ }
+
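Both new checks encode the same Bluetooth rule with a single mask: a valid PSM has bit 0 of its low octet set (odd) and bit 0 of its high octet clear, i.e. (psm & 0x0101) == 0x0001. So 0x0001 and 0x1001 pass while 0x0002 (even low octet) and 0x0101 (odd upper octet) are rejected. A tiny standalone check:

    #include <stdint.h>
    #include <stdio.h>

    /* Valid L2CAP PSM: bit 0 of low byte set, bit 0 of high byte clear. */
    static int psm_valid(uint16_t psm)
    {
        return (psm & 0x0101) == 0x0001;
    }

    int main(void)
    {
        uint16_t tests[] = { 0x0001, 0x1001, 0x0002, 0x0101 };

        for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
            printf("0x%04x -> %s\n", tests[i],
                   psm_valid(tests[i]) ? "valid" : "invalid");
        return 0;
    }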
/* Set destination address and psm */
bacpy(&bt_sk(sk)->dst, &la.l2_bdaddr);
l2cap_pi(sk)->psm = la.l2_psm;
*frag = bt_skb_send_alloc(sk, count, msg->msg_flags & MSG_DONTWAIT, &err);
if (!*frag)
- return -EFAULT;
+ return err;
if (memcpy_fromiovec(skb_put(*frag, count), msg->msg_iov, count))
return -EFAULT;
skb = bt_skb_send_alloc(sk, count + hlen,
msg->msg_flags & MSG_DONTWAIT, &err);
if (!skb)
- return ERR_PTR(-ENOMEM);
+ return ERR_PTR(err);
/* Create L2CAP header */
lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE);
skb = bt_skb_send_alloc(sk, count + hlen,
msg->msg_flags & MSG_DONTWAIT, &err);
if (!skb)
- return ERR_PTR(-ENOMEM);
+ return ERR_PTR(err);
/* Create L2CAP header */
lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE);
skb = bt_skb_send_alloc(sk, count + hlen,
msg->msg_flags & MSG_DONTWAIT, &err);
if (!skb)
- return ERR_PTR(-ENOMEM);
+ return ERR_PTR(err);
/* Create L2CAP header */
lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE);
release_sock(sk);
+ if (sock->type == SOCK_STREAM)
+ return bt_sock_stream_recvmsg(iocb, sock, msg, len, flags);
+
return bt_sock_recvmsg(iocb, sock, msg, len, flags);
}
struct l2cap_chan_list *list = &conn->chan_list;
struct l2cap_conn_req *req = (struct l2cap_conn_req *) data;
struct l2cap_conn_rsp rsp;
- struct sock *parent, *uninitialized_var(sk);
+ struct sock *parent, *sk = NULL;
int result, status = L2CAP_CS_NO_INFO;
u16 dcid = 0, scid = __le16_to_cpu(req->scid);
L2CAP_INFO_REQ, sizeof(info), &info);
}
- if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_REQ_SENT) &&
+ if (sk && !(l2cap_pi(sk)->conf_state & L2CAP_CONF_REQ_SENT) &&
result == L2CAP_CR_SUCCESS) {
u8 buf[128];
l2cap_pi(sk)->conf_state |= L2CAP_CONF_REQ_SENT;
if (!(l2cap_pi(sk)->conf_state & L2CAP_CONF_REQ_SENT)) {
u8 buf[64];
+ l2cap_pi(sk)->conf_state |= L2CAP_CONF_REQ_SENT;
l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ,
l2cap_build_conf_req(sk, buf), buf);
l2cap_pi(sk)->num_conf_req++;
if (flags & ACL_START) {
struct l2cap_hdr *hdr;
+ struct sock *sk;
+ u16 cid;
int len;
if (conn->rx_len) {
l2cap_conn_unreliable(conn, ECOMM);
}
- if (skb->len < 2) {
+ /* Start fragment always begins with Basic L2CAP header */
+ if (skb->len < L2CAP_HDR_SIZE) {
BT_ERR("Frame is too short (len %d)", skb->len);
l2cap_conn_unreliable(conn, ECOMM);
goto drop;
hdr = (struct l2cap_hdr *) skb->data;
len = __le16_to_cpu(hdr->len) + L2CAP_HDR_SIZE;
+ cid = __le16_to_cpu(hdr->cid);
if (len == skb->len) {
/* Complete frame received */
goto drop;
}
+ sk = l2cap_get_chan_by_scid(&conn->chan_list, cid);
+
+ if (sk && l2cap_pi(sk)->imtu < len - L2CAP_HDR_SIZE) {
+ BT_ERR("Frame exceeding recv MTU (len %d, MTU %d)",
+ len, l2cap_pi(sk)->imtu);
+ bh_unlock_sock(sk);
+ l2cap_conn_unreliable(conn, ECOMM);
+ goto drop;
+ }
+
+ if (sk)
+ bh_unlock_sock(sk);
+
/* Allocate skb for the complete frame (with header) */
conn->rx_skb = bt_skb_alloc(len, GFP_ATOMIC);
if (!conn->rx_skb)
#include <linux/random.h>
#include <trace/events/napi.h>
#include <linux/pci.h>
+#include <linux/inetdevice.h>
#include "net-sysfs.h"
* --ANK (980803)
*/
+static inline struct list_head *ptype_head(const struct packet_type *pt)
+{
+ if (pt->type == htons(ETH_P_ALL))
+ return &ptype_all;
+ else
+ return &ptype_base[ntohs(pt->type) & PTYPE_HASH_MASK];
+}
+
/**
* dev_add_pack - add packet handler
* @pt: packet type declaration
void dev_add_pack(struct packet_type *pt)
{
- int hash;
+ struct list_head *head = ptype_head(pt);
- spin_lock_bh(&ptype_lock);
- if (pt->type == htons(ETH_P_ALL))
- list_add_rcu(&pt->list, &ptype_all);
- else {
- hash = ntohs(pt->type) & PTYPE_HASH_MASK;
- list_add_rcu(&pt->list, &ptype_base[hash]);
- }
- spin_unlock_bh(&ptype_lock);
+ spin_lock(&ptype_lock);
+ list_add_rcu(&pt->list, head);
+ spin_unlock(&ptype_lock);
}
EXPORT_SYMBOL(dev_add_pack);
*/
void __dev_remove_pack(struct packet_type *pt)
{
- struct list_head *head;
+ struct list_head *head = ptype_head(pt);
struct packet_type *pt1;
- spin_lock_bh(&ptype_lock);
-
- if (pt->type == htons(ETH_P_ALL))
- head = &ptype_all;
- else
- head = &ptype_base[ntohs(pt->type) & PTYPE_HASH_MASK];
+ spin_lock(&ptype_lock);
list_for_each_entry(pt1, head, list) {
if (pt == pt1) {
printk(KERN_WARNING "dev_remove_pack: %p not found.\n", pt);
out:
- spin_unlock_bh(&ptype_lock);
+ spin_unlock(&ptype_lock);
}
EXPORT_SYMBOL(__dev_remove_pack);
skb_orphan(skb);
nf_reset(skb);
- if (!(dev->flags & IFF_UP) ||
- (skb->len > (dev->mtu + dev->hard_header_len + VLAN_HLEN))) {
+ if (unlikely(!(dev->flags & IFF_UP) ||
+ (skb->len > (dev->mtu + dev->hard_header_len + VLAN_HLEN)))) {
+ atomic_long_inc(&dev->rx_dropped);
kfree_skb(skb);
return NET_RX_DROP;
}
* Routine to help set real_num_tx_queues. To avoid skbs mapped to queues
* greater than real_num_tx_queues, stale skbs on the qdisc must be flushed.
*/
-void netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
{
- unsigned int real_num = dev->real_num_tx_queues;
+ if (txq < 1 || txq > dev->num_tx_queues)
+ return -EINVAL;
- if (unlikely(txq > dev->num_tx_queues))
- ;
- else if (txq > real_num)
- dev->real_num_tx_queues = txq;
- else if (txq < real_num) {
- dev->real_num_tx_queues = txq;
- qdisc_reset_all_tx_gt(dev, txq);
+ if (dev->reg_state == NETREG_REGISTERED) {
+ ASSERT_RTNL();
+
+ if (txq < dev->real_num_tx_queues)
+ qdisc_reset_all_tx_gt(dev, txq);
}
+
+ dev->real_num_tx_queues = txq;
+ return 0;
}
EXPORT_SYMBOL(netif_set_real_num_tx_queues);
+#ifdef CONFIG_RPS
+/**
+ * netif_set_real_num_rx_queues - set actual number of RX queues used
+ * @dev: Network device
+ * @rxq: Actual number of RX queues
+ *
+ * This must be called either with the rtnl_lock held or before
+ * registration of the net device. Returns 0 on success, or a
+ * negative error code. If called before registration, it always
+ * succeeds.
+ */
+int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq)
+{
+ int rc;
+
+ if (rxq < 1 || rxq > dev->num_rx_queues)
+ return -EINVAL;
+
+ if (dev->reg_state == NETREG_REGISTERED) {
+ ASSERT_RTNL();
+
+ rc = net_rx_queue_update_kobjects(dev, dev->real_num_rx_queues,
+ rxq);
+ if (rc)
+ return rc;
+ }
+
+ dev->real_num_rx_queues = rxq;
+ return 0;
+}
+EXPORT_SYMBOL(netif_set_real_num_rx_queues);
+#endif
+
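Per the kerneldoc above, drivers call netif_set_real_num_rx_queues() either before registration or under rtnl_lock, and must check the return value. A hedged sketch of a probe-time caller (the function and its hw_rxqs parameter are hypothetical; on !CONFIG_RPS kernels a stub is expected to make the call a cheap no-op):

    #include <linux/netdevice.h>

    /* Hypothetical probe fragment: shrink the RX queue count to what
     * the hardware actually enabled, before the netdev is registered
     * (no rtnl_lock needed at this point, and the call can only fail
     * on the range check). */
    static int example_probe(struct net_device *dev, unsigned int hw_rxqs)
    {
        int err;

        err = netif_set_real_num_rx_queues(dev, hw_rxqs);
        if (err)
            return err;     /* hw_rxqs was 0 or > num_rx_queues */
        return register_netdev(dev);
    }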
static inline void __netif_reschedule(struct Qdisc *q)
{
struct softnet_data *sd;
static bool dev_can_checksum(struct net_device *dev, struct sk_buff *skb)
{
- if (can_checksum_protocol(dev->features, skb->protocol))
+ int features = dev->features;
+
+ if (vlan_tx_tag_present(skb))
+ features &= dev->vlan_features;
+
+ if (can_checksum_protocol(features, skb->protocol))
return true;
if (skb->protocol == htons(ETH_P_8021Q)) {
__be16 type = skb->protocol;
int err;
+ if (type == htons(ETH_P_8021Q)) {
+ struct vlan_ethhdr *veh;
+
+ if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
+ return ERR_PTR(-EINVAL);
+
+ veh = (struct vlan_ethhdr *)skb->data;
+ type = veh->h_vlan_encapsulated_proto;
+ }
+
skb_reset_mac_header(skb);
skb->mac_len = skb->network_header - skb->mac_header;
__skb_pull(skb, skb->mac_len);
/*
* Try to orphan skb early, right before transmission by the device.
- * We cannot orphan skb if tx timestamp is requested, since
- * drivers need to call skb_tstamp_tx() to send the timestamp.
+ * We cannot orphan skb if tx timestamp is requested or the sk reference
+ * is needed at the driver level for other reasons, e.g. see net/can/raw.c
*/
static inline void skb_orphan_try(struct sk_buff *skb)
{
struct sock *sk = skb->sk;
- if (sk && !skb_tx(skb)->flags) {
+ if (sk && !skb_shinfo(skb)->tx_flags) {
/* skb_tx_hash() won't be able to get sk.
* We copy sk_hash into skb->rxhash
*/
static inline int skb_needs_linearize(struct sk_buff *skb,
struct net_device *dev)
{
+ int features = dev->features;
+
+ if (skb->protocol == htons(ETH_P_8021Q) || vlan_tx_tag_present(skb))
+ features &= dev->vlan_features;
+
return skb_is_nonlinear(skb) &&
- ((skb_has_frags(skb) && !(dev->features & NETIF_F_FRAGLIST)) ||
- (skb_shinfo(skb)->nr_frags && (!(dev->features & NETIF_F_SG) ||
+ ((skb_has_frag_list(skb) && !(features & NETIF_F_FRAGLIST)) ||
+ (skb_shinfo(skb)->nr_frags && (!(features & NETIF_F_SG) ||
illegal_highdma(dev, skb))));
}
skb_orphan_try(skb);
+ if (vlan_tx_tag_present(skb) &&
+ !(dev->features & NETIF_F_HW_VLAN_TX)) {
+ skb = __vlan_put_tag(skb, vlan_tx_tag_get(skb));
+ if (unlikely(!skb))
+ goto out;
+
+ skb->vlan_tci = 0;
+ }
+
if (netif_needs_gso(dev, skb)) {
if (unlikely(dev_gso_segment(skb)))
goto out_kfree_skb;
skb->destructor = DEV_GSO_CB(skb)->destructor;
out_kfree_skb:
kfree_skb(skb);
+out:
return rc;
}
return rc;
}
+static DEFINE_PER_CPU(int, xmit_recursion);
+#define RECURSION_LIMIT 3
+
/**
* dev_queue_xmit - transmit a buffer
* @skb: buffer to transmit
if (txq->xmit_lock_owner != cpu) {
+ if (__this_cpu_read(xmit_recursion) > RECURSION_LIMIT)
+ goto recursion_alert;
+
HARD_TX_LOCK(dev, txq, cpu);
if (!netif_tx_queue_stopped(txq)) {
+ __this_cpu_inc(xmit_recursion);
rc = dev_hard_start_xmit(skb, dev, txq);
+ __this_cpu_dec(xmit_recursion);
if (dev_xmit_complete(rc)) {
HARD_TX_UNLOCK(dev, txq);
goto out;
"queue packet!\n", dev->name);
} else {
/* Recursion is detected! It is possible,
- * unfortunately */
+ * unfortunately
+ */
+recursion_alert:
if (net_ratelimit())
printk(KERN_CRIT "Dead loop on virtual device "
"%s, fix it urgently!\n", dev->name);
__raise_softirq_irqoff(NET_RX_SOFTIRQ);
}
-#ifdef CONFIG_RPS
-
-/* One global table that all flow-based protocols share. */
-struct rps_sock_flow_table *rps_sock_flow_table __read_mostly;
-EXPORT_SYMBOL(rps_sock_flow_table);
-
/*
- * get_rps_cpu is called from netif_receive_skb and returns the target
- * CPU from the RPS map of the receiving queue for a given skb.
- * rcu_read_lock must be held on entry.
+ * __skb_get_rxhash: calculate a flow hash based on src/dst addresses
+ * and src/dst port numbers. Returns a non-zero hash number on success
+ * and 0 on failure.
*/
-static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
- struct rps_dev_flow **rflowp)
+__u32 __skb_get_rxhash(struct sk_buff *skb)
{
+ int nhoff, hash = 0, poff;
struct ipv6hdr *ip6;
struct iphdr *ip;
- struct netdev_rx_queue *rxqueue;
- struct rps_map *map;
- struct rps_dev_flow_table *flow_table;
- struct rps_sock_flow_table *sock_flow_table;
- int cpu = -1;
u8 ip_proto;
- u16 tcpu;
u32 addr1, addr2, ihl;
union {
u32 v32;
u16 v16[2];
} ports;
- if (skb_rx_queue_recorded(skb)) {
- u16 index = skb_get_rx_queue(skb);
- if (unlikely(index >= dev->num_rx_queues)) {
- WARN_ONCE(dev->num_rx_queues > 1, "%s received packet "
- "on queue %u, but number of RX queues is %u\n",
- dev->name, index, dev->num_rx_queues);
- goto done;
- }
- rxqueue = dev->_rx + index;
- } else
- rxqueue = dev->_rx;
-
- if (!rxqueue->rps_map && !rxqueue->rps_flow_table)
- goto done;
-
- if (skb->rxhash)
- goto got_hash; /* Skip hash computation on packet header */
+ nhoff = skb_network_offset(skb);
switch (skb->protocol) {
case __constant_htons(ETH_P_IP):
- if (!pskb_may_pull(skb, sizeof(*ip)))
+ if (!pskb_may_pull(skb, sizeof(*ip) + nhoff))
goto done;
- ip = (struct iphdr *) skb->data;
- ip_proto = ip->protocol;
+ ip = (struct iphdr *) (skb->data + nhoff);
+ if (ip->frag_off & htons(IP_MF | IP_OFFSET))
+ ip_proto = 0;
+ else
+ ip_proto = ip->protocol;
addr1 = (__force u32) ip->saddr;
addr2 = (__force u32) ip->daddr;
ihl = ip->ihl;
break;
case __constant_htons(ETH_P_IPV6):
- if (!pskb_may_pull(skb, sizeof(*ip6)))
+ if (!pskb_may_pull(skb, sizeof(*ip6) + nhoff))
goto done;
- ip6 = (struct ipv6hdr *) skb->data;
+ ip6 = (struct ipv6hdr *) (skb->data + nhoff);
ip_proto = ip6->nexthdr;
addr1 = (__force u32) ip6->saddr.s6_addr32[3];
addr2 = (__force u32) ip6->daddr.s6_addr32[3];
default:
goto done;
}
- switch (ip_proto) {
- case IPPROTO_TCP:
- case IPPROTO_UDP:
- case IPPROTO_DCCP:
- case IPPROTO_ESP:
- case IPPROTO_AH:
- case IPPROTO_SCTP:
- case IPPROTO_UDPLITE:
- if (pskb_may_pull(skb, (ihl * 4) + 4)) {
- ports.v32 = * (__force u32 *) (skb->data + (ihl * 4));
+
+ ports.v32 = 0;
+ poff = proto_ports_offset(ip_proto);
+ if (poff >= 0) {
+ nhoff += ihl * 4 + poff;
+ if (pskb_may_pull(skb, nhoff + 4)) {
+ ports.v32 = * (__force u32 *) (skb->data + nhoff);
if (ports.v16[1] < ports.v16[0])
swap(ports.v16[0], ports.v16[1]);
- break;
}
- default:
- ports.v32 = 0;
- break;
}
/* get a consistent hash (same value on both flow directions) */
if (addr2 < addr1)
swap(addr1, addr2);
- skb->rxhash = jhash_3words(addr1, addr2, ports.v32, hashrnd);
- if (!skb->rxhash)
- skb->rxhash = 1;
-got_hash:
+ hash = jhash_3words(addr1, addr2, ports.v32, hashrnd);
+ if (!hash)
+ hash = 1;
+
+done:
+ return hash;
+}
+EXPORT_SYMBOL(__skb_get_rxhash);
+
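__skb_get_rxhash() deliberately canonicalizes the tuple before hashing: the address pair and the packed port pair are each swapped into a fixed order, so both directions of a connection produce the same rxhash and RPS/RFS steer them to the same CPU. A userspace sketch of the symmetric-hash idea, with a stand-in mixer instead of jhash_3words() (all names hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for jhash_3words(); any decent 3-word mixer works here. */
    static uint32_t mix3(uint32_t a, uint32_t b, uint32_t c)
    {
        a ^= b * 0x9e3779b9u;
        a ^= c * 0x85ebca6bu;
        a ^= a >> 16;
        return a ? a : 1;    /* 0 is reserved for "no hash" */
    }

    static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                              uint16_t sport, uint16_t dport)
    {
        uint32_t ports;

        if (daddr < saddr) {        /* canonical address order */
            uint32_t t = saddr; saddr = daddr; daddr = t;
        }
        if (dport < sport) {        /* canonical port order */
            uint16_t t = sport; sport = dport; dport = t;
        }
        ports = ((uint32_t)sport << 16) | dport;
        return mix3(saddr, daddr, ports);
    }

    int main(void)
    {
        /* Both directions of the flow produce the same hash. */
        printf("%08x\n", flow_hash(0x0a000001, 0x0a000002, 1234, 80));
        printf("%08x\n", flow_hash(0x0a000002, 0x0a000001, 80, 1234));
        return 0;
    }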
+#ifdef CONFIG_RPS
+
+/* One global table that all flow-based protocols share. */
+struct rps_sock_flow_table *rps_sock_flow_table __read_mostly;
+EXPORT_SYMBOL(rps_sock_flow_table);
+
+/*
+ * get_rps_cpu is called from netif_receive_skb and returns the target
+ * CPU from the RPS map of the receiving queue for a given skb.
+ * rcu_read_lock must be held on entry.
+ */
+static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
+ struct rps_dev_flow **rflowp)
+{
+ struct netdev_rx_queue *rxqueue;
+ struct rps_map *map = NULL;
+ struct rps_dev_flow_table *flow_table;
+ struct rps_sock_flow_table *sock_flow_table;
+ int cpu = -1;
+ u16 tcpu;
+
+ if (skb_rx_queue_recorded(skb)) {
+ u16 index = skb_get_rx_queue(skb);
+ if (unlikely(index >= dev->real_num_rx_queues)) {
+ WARN_ONCE(dev->real_num_rx_queues > 1,
+ "%s received packet on queue %u, but number "
+ "of RX queues is %u\n",
+ dev->name, index, dev->real_num_rx_queues);
+ goto done;
+ }
+ rxqueue = dev->_rx + index;
+ } else
+ rxqueue = dev->_rx;
+
+ if (rxqueue->rps_map) {
+ map = rcu_dereference(rxqueue->rps_map);
+ if (map && map->len == 1) {
+ tcpu = map->cpus[0];
+ if (cpu_online(tcpu))
+ cpu = tcpu;
+ goto done;
+ }
+ } else if (!rxqueue->rps_flow_table) {
+ goto done;
+ }
+
+ skb_reset_network_header(skb);
+ if (!skb_get_rxhash(skb))
+ goto done;
+
flow_table = rcu_dereference(rxqueue->rps_flow_table);
sock_flow_table = rcu_dereference(rps_sock_flow_table);
if (flow_table && sock_flow_table) {
}
}
- map = rcu_dereference(rxqueue->rps_map);
if (map) {
tcpu = map->cpus[((u64) skb->rxhash * map->len) >> 32];
local_irq_restore(flags);
+ atomic_long_inc(&skb->dev->rx_dropped);
kfree_skb(skb);
return NET_RX_DROP;
}
* the ingress scheduler, you just can't add policies on ingress.
*
*/
-static int ing_filter(struct sk_buff *skb)
+static int ing_filter(struct sk_buff *skb, struct netdev_queue *rxq)
{
struct net_device *dev = skb->dev;
u32 ttl = G_TC_RTTL(skb->tc_verd);
- struct netdev_queue *rxq;
int result = TC_ACT_OK;
struct Qdisc *q;
skb->tc_verd = SET_TC_RTTL(skb->tc_verd, ttl);
skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);
- rxq = &dev->rx_queue;
-
q = rxq->qdisc;
if (q != &noop_qdisc) {
spin_lock(qdisc_lock(q));
struct packet_type **pt_prev,
int *ret, struct net_device *orig_dev)
{
- if (skb->dev->rx_queue.qdisc == &noop_qdisc)
+ struct netdev_queue *rxq = rcu_dereference(skb->dev->ingress_queue);
+
+ if (!rxq || rxq->qdisc == &noop_qdisc)
goto out;
if (*pt_prev) {
*pt_prev = NULL;
}
- switch (ing_filter(skb)) {
+ switch (ing_filter(skb, rxq)) {
case TC_ACT_SHOT:
case TC_ACT_STOLEN:
kfree_skb(skb);
}
#endif
-/*
- * netif_nit_deliver - deliver received packets to network taps
- * @skb: buffer
- *
- * This function is used to deliver incoming packets to network
- * taps. It should be used when the normal netif_receive_skb path
- * is bypassed, for example because of VLAN acceleration.
- */
-void netif_nit_deliver(struct sk_buff *skb)
-{
- struct packet_type *ptype;
-
- if (list_empty(&ptype_all))
- return;
-
- skb_reset_network_header(skb);
- skb_reset_transport_header(skb);
- skb->mac_len = skb->network_header - skb->mac_header;
-
- rcu_read_lock();
- list_for_each_entry_rcu(ptype, &ptype_all, list) {
- if (!ptype->dev || ptype->dev == skb->dev)
- deliver_skb(skb, ptype, skb->dev);
- }
- rcu_read_unlock();
-}
-
/**
* netdev_rx_handler_register - register receive handler
* @dev: device to register a handler for
if (!netdev_tstamp_prequeue)
net_timestamp_check(skb);
- if (vlan_tx_tag_present(skb) && vlan_hwaccel_do_receive(skb))
- return NET_RX_SUCCESS;
-
/* if we've gotten here through NAPI, check netpoll */
if (netpoll_receive_skb(skb))
return NET_RX_DROP;
* be delivered to pkt handlers that are exact matches. Also
* the deliver_no_wcard flag will be set. If packet handlers
* are sensitive to duplicate packets these skbs will need to
- * be dropped at the handler. The vlan accel path may have
- * already set the deliver_no_wcard flag.
+ * be dropped at the handler.
*/
null_or_orig = NULL;
orig_dev = skb->dev;
goto out;
}
+ if (vlan_tx_tag_present(skb)) {
+ if (pt_prev) {
+ ret = deliver_skb(skb, pt_prev, orig_dev);
+ pt_prev = NULL;
+ }
+ if (vlan_hwaccel_do_receive(&skb)) {
+ ret = __netif_receive_skb(skb);
+ goto out;
+ } else if (unlikely(!skb))
+ goto out;
+ }
+
/*
* Make sure frames received on VLAN interfaces stacked on
* bonding interfaces still make their way to any base bonding
if (pt_prev) {
ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
} else {
+ atomic_long_inc(&skb->dev->rx_dropped);
kfree_skb(skb);
/* Jamal, now you will not able to escape explaining
* me how you were going to use this. :-)
return netif_receive_skb(skb);
}
-static void napi_gro_flush(struct napi_struct *napi)
+inline void napi_gro_flush(struct napi_struct *napi)
{
struct sk_buff *skb, *next;
napi->gro_count = 0;
napi->gro_list = NULL;
}
+EXPORT_SYMBOL(napi_gro_flush);
enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
{
if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
goto normal;
- if (skb_is_gso(skb) || skb_has_frags(skb))
+ if (skb_is_gso(skb) || skb_has_frag_list(skb))
goto normal;
rcu_read_lock();
}
EXPORT_SYMBOL(dev_gro_receive);
-static gro_result_t
+static inline gro_result_t
__napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
{
struct sk_buff *p;
for (p = napi->gro_list; p; p = p->next) {
- NAPI_GRO_CB(p)->same_flow =
- (p->dev == skb->dev) &&
- !compare_ether_header(skb_mac_header(p),
+ unsigned long diffs;
+
+ diffs = (unsigned long)p->dev ^ (unsigned long)skb->dev;
+ diffs |= p->vlan_tci ^ skb->vlan_tci;
+ diffs |= compare_ether_header(skb_mac_header(p),
skb_gro_mac_header(skb));
+ NAPI_GRO_CB(p)->same_flow = !diffs;
NAPI_GRO_CB(p)->flush = 0;
}
}
EXPORT_SYMBOL(napi_gro_receive);
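The rewritten GRO flow match folds every comparison into one accumulator: pointer and VLAN-tag differences are XORed in, the header compare result is ORed in, and the flow matches only when the word is still zero, trading several branches for straight-line arithmetic. A small standalone sketch of the idiom:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct pkt { const void *dev; uint16_t vlan_tci; uint8_t mac[12]; };

    /* 1 if a and b belong to the same flow; differences accumulate
     * into 'diffs' without a branch per field. */
    static int same_flow(const struct pkt *a, const struct pkt *b)
    {
        unsigned long diffs;

        diffs  = (unsigned long)a->dev ^ (unsigned long)b->dev;
        diffs |= a->vlan_tci ^ b->vlan_tci;
        diffs |= !!memcmp(a->mac, b->mac, sizeof(a->mac));
        return !diffs;
    }

    int main(void)
    {
        struct pkt a = { (void *)0x1000, 5, "AABBCCDDEEF" };
        struct pkt b = a;

        printf("%d\n", same_flow(&a, &b));  /* 1: identical */
        b.vlan_tci = 7;
        printf("%d\n", same_flow(&a, &b));  /* 0: tag differs */
        return 0;
    }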
-void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
{
__skb_pull(skb, skb_headlen(skb));
skb_reserve(skb, NET_IP_ALIGN - skb_headroom(skb));
+ skb->vlan_tci = 0;
napi->skb = skb;
}
-EXPORT_SYMBOL(napi_reuse_skb);
struct sk_buff *napi_get_frags(struct napi_struct *napi)
{
rollback_registered_many(&single);
}
-static void __netdev_init_queue_locks_one(struct net_device *dev,
- struct netdev_queue *dev_queue,
- void *_unused)
-{
- spin_lock_init(&dev_queue->_xmit_lock);
- netdev_set_xmit_lockdep_class(&dev_queue->_xmit_lock, dev->type);
- dev_queue->xmit_lock_owner = -1;
-}
-
-static void netdev_init_queue_locks(struct net_device *dev)
-{
- netdev_for_each_tx_queue(dev, __netdev_init_queue_locks_one, NULL);
- __netdev_init_queue_locks_one(dev, &dev->rx_queue, NULL);
-}
-
unsigned long netdev_fix_features(unsigned long features, const char *name)
{
/* Fix illegal SG+CSUM combinations. */
}
EXPORT_SYMBOL(netif_stacked_transfer_operstate);
+static int netif_alloc_rx_queues(struct net_device *dev)
+{
+#ifdef CONFIG_RPS
+ unsigned int i, count = dev->num_rx_queues;
+ struct netdev_rx_queue *rx;
+
+ BUG_ON(count < 1);
+
+ rx = kcalloc(count, sizeof(struct netdev_rx_queue), GFP_KERNEL);
+ if (!rx) {
+ pr_err("netdev: Unable to allocate %u rx queues.\n", count);
+ return -ENOMEM;
+ }
+ dev->_rx = rx;
+
+ /*
+ * Set a pointer to first element in the array which holds the
+ * reference count.
+ */
+ for (i = 0; i < count; i++)
+ rx[i].first = rx;
+#endif
+ return 0;
+}
+
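netif_alloc_rx_queues() allocates the whole queue array as one kcalloc() block and points every element's ->first at element 0, so any queue can reach the shared anchor (used for the array's reference counting) without a back-pointer to the net_device. The same pattern in a standalone sketch:

    #include <stdio.h>
    #include <stdlib.h>

    struct rxq {
        struct rxq *first;    /* points at element 0 of the array */
        int id;
    };

    int main(void)
    {
        unsigned int i, count = 4;
        struct rxq *rx = calloc(count, sizeof(*rx));

        if (!rx)
            return 1;
        for (i = 0; i < count; i++) {
            rx[i].first = rx;    /* shared anchor for the block */
            rx[i].id = i;
        }
        /* Any element can find the block that is freed as a whole. */
        printf("element 2 anchors at element %d\n", rx[2].first->id);
        free(rx[0].first);
        return 0;
    }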
+static int netif_alloc_netdev_queues(struct net_device *dev)
+{
+ unsigned int count = dev->num_tx_queues;
+ struct netdev_queue *tx;
+
+ BUG_ON(count < 1);
+
+ tx = kcalloc(count, sizeof(struct netdev_queue), GFP_KERNEL);
+ if (!tx) {
+ pr_err("netdev: Unable to allocate %u tx queues.\n",
+ count);
+ return -ENOMEM;
+ }
+ dev->_tx = tx;
+ return 0;
+}
+
+static void netdev_init_one_queue(struct net_device *dev,
+ struct netdev_queue *queue,
+ void *_unused)
+{
+ queue->dev = dev;
+
+ /* Initialize queue lock */
+ spin_lock_init(&queue->_xmit_lock);
+ netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type);
+ queue->xmit_lock_owner = -1;
+}
+
+static void netdev_init_queues(struct net_device *dev)
+{
+ netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
+ spin_lock_init(&dev->tx_global_lock);
+}
+
/**
* register_netdevice - register a network device
* @dev: device to register
spin_lock_init(&dev->addr_list_lock);
netdev_set_addr_lockdep_class(dev);
- netdev_init_queue_locks(dev);
dev->iflink = -1;
-#ifdef CONFIG_RPS
- if (!dev->num_rx_queues) {
- /*
- * Allocate a single RX queue if driver never called
- * alloc_netdev_mq
- */
+ ret = netif_alloc_rx_queues(dev);
+ if (ret)
+ goto out;
- dev->_rx = kzalloc(sizeof(struct netdev_rx_queue), GFP_KERNEL);
- if (!dev->_rx) {
- ret = -ENOMEM;
- goto out;
- }
+ ret = netif_alloc_netdev_queues(dev);
+ if (ret)
+ goto out;
+
+ netdev_init_queues(dev);
- dev->_rx->first = dev->_rx;
- atomic_set(&dev->_rx->count, 1);
- dev->num_rx_queues = 1;
- }
-#endif
/* Init, if this function is available */
if (dev->netdev_ops->ndo_init) {
ret = dev->netdev_ops->ndo_init(dev);
if (dev->features & NETIF_F_SG)
dev->features |= NETIF_F_GSO;
+ /* Enable GRO and NETIF_F_HIGHDMA for vlans by default,
+ * vlan_dev_init() will do the dev->features check, so these features
+ * are enabled only if supported by underlying device.
+ */
+ dev->vlan_features |= (NETIF_F_GRO | NETIF_F_HIGHDMA);
+
ret = call_netdevice_notifiers(NETDEV_POST_INIT, dev);
ret = notifier_to_errno(ret);
if (ret)
*/
dev->reg_state = NETREG_DUMMY;
- /* initialize the ref count */
- atomic_set(&dev->refcnt, 1);
-
/* NAPI wants this */
INIT_LIST_HEAD(&dev->napi_list);
set_bit(__LINK_STATE_PRESENT, &dev->state);
set_bit(__LINK_STATE_START, &dev->state);
+ /* Note: We don't allocate pcpu_refcnt for dummy devices,
+ * because users of this 'device' don't need to change
+ * its refcount.
+ */
+
return 0;
}
EXPORT_SYMBOL_GPL(init_dummy_netdev);
}
EXPORT_SYMBOL(register_netdev);
+int netdev_refcnt_read(const struct net_device *dev)
+{
+ int i, refcnt = 0;
+
+ for_each_possible_cpu(i)
+ refcnt += *per_cpu_ptr(dev->pcpu_refcnt, i);
+ return refcnt;
+}
+EXPORT_SYMBOL(netdev_refcnt_read);
+
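netdev_refcnt_read() is the read side of converting dev->refcnt from one shared atomic_t into a per-CPU int: holders increment and decrement only their local slot (no cache-line bouncing on hot paths), and the exact total, which is all that matters even though individual slots may go negative, is summed only on slow paths like unregister. A single-threaded sketch of the bookkeeping (NCPU and the helpers are illustrative, not the kernel's API):

    #include <stdio.h>

    #define NCPU 4

    /* One counter per CPU; hold/put touch only the local slot. */
    static long refcnt[NCPU];

    static void hold_on(int cpu) { refcnt[cpu]++; }
    static void put_on(int cpu)  { refcnt[cpu]--; }

    /* Slow path: sum all slots, like netdev_refcnt_read(). */
    static long refcnt_read(void)
    {
        long sum = 0;

        for (int cpu = 0; cpu < NCPU; cpu++)
            sum += refcnt[cpu];
        return sum;
    }

    int main(void)
    {
        hold_on(0);          /* reference taken on CPU 0 ...       */
        hold_on(1);
        put_on(3);           /* ... may be dropped on another CPU, */
        printf("%ld\n", refcnt_read()); /* but the sum is exact: 1 */
        return 0;
    }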
/*
* netdev_wait_allrefs - wait until all references are gone.
*
static void netdev_wait_allrefs(struct net_device *dev)
{
unsigned long rebroadcast_time, warning_time;
+ int refcnt;
linkwatch_forget_dev(dev);
rebroadcast_time = warning_time = jiffies;
- while (atomic_read(&dev->refcnt) != 0) {
+ refcnt = netdev_refcnt_read(dev);
+
+ while (refcnt != 0) {
if (time_after(jiffies, rebroadcast_time + 1 * HZ)) {
rtnl_lock();
msleep(250);
+ refcnt = netdev_refcnt_read(dev);
+
if (time_after(jiffies, warning_time + 10 * HZ)) {
printk(KERN_EMERG "unregister_netdevice: "
"waiting for %s to become free. Usage "
"count = %d\n",
- dev->name, atomic_read(&dev->refcnt));
+ dev->name, refcnt);
warning_time = jiffies;
}
}
netdev_wait_allrefs(dev);
/* paranoia */
- BUG_ON(atomic_read(&dev->refcnt));
- WARN_ON(dev->ip_ptr);
+ BUG_ON(netdev_refcnt_read(dev));
+ WARN_ON(rcu_dereference_raw(dev->ip_ptr));
WARN_ON(dev->ip6_ptr);
WARN_ON(dev->dn_ptr);
if (ops->ndo_get_stats64) {
memset(storage, 0, sizeof(*storage));
- return ops->ndo_get_stats64(dev, storage);
- }
- if (ops->ndo_get_stats) {
+ ops->ndo_get_stats64(dev, storage);
+ } else if (ops->ndo_get_stats) {
netdev_stats_to_stats64(storage, ops->ndo_get_stats(dev));
- return storage;
+ } else {
+ netdev_stats_to_stats64(storage, &dev->stats);
+ dev_txq_stats_fold(dev, storage);
}
- netdev_stats_to_stats64(storage, &dev->stats);
- dev_txq_stats_fold(dev, storage);
+ storage->rx_dropped += atomic_long_read(&dev->rx_dropped);
return storage;
}
EXPORT_SYMBOL(dev_get_stats);
-static void netdev_init_one_queue(struct net_device *dev,
- struct netdev_queue *queue,
- void *_unused)
+struct netdev_queue *dev_ingress_queue_create(struct net_device *dev)
{
- queue->dev = dev;
-}
+ struct netdev_queue *queue = dev_ingress_queue(dev);
-static void netdev_init_queues(struct net_device *dev)
-{
- netdev_init_one_queue(dev, &dev->rx_queue, NULL);
- netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
- spin_lock_init(&dev->tx_global_lock);
+#ifdef CONFIG_NET_CLS_ACT
+ if (queue)
+ return queue;
+ queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+ if (!queue)
+ return NULL;
+ netdev_init_one_queue(dev, queue, NULL);
+ queue->qdisc = &noop_qdisc;
+ queue->qdisc_sleeping = &noop_qdisc;
+ rcu_assign_pointer(dev->ingress_queue, queue);
+#endif
+ return queue;
}
/**
struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name,
void (*setup)(struct net_device *), unsigned int queue_count)
{
- struct netdev_queue *tx;
struct net_device *dev;
size_t alloc_size;
struct net_device *p;
-#ifdef CONFIG_RPS
- struct netdev_rx_queue *rx;
- int i;
-#endif
BUG_ON(strlen(name) >= sizeof(dev->name));
+ if (queue_count < 1) {
+ pr_err("alloc_netdev: Unable to allocate device "
+ "with zero queues.\n");
+ return NULL;
+ }
+
alloc_size = sizeof(struct net_device);
if (sizeof_priv) {
/* ensure 32-byte alignment of private area */
return NULL;
}
- tx = kcalloc(queue_count, sizeof(struct netdev_queue), GFP_KERNEL);
- if (!tx) {
- printk(KERN_ERR "alloc_netdev: Unable to allocate "
- "tx qdiscs.\n");
- goto free_p;
- }
-
-#ifdef CONFIG_RPS
- rx = kcalloc(queue_count, sizeof(struct netdev_rx_queue), GFP_KERNEL);
- if (!rx) {
- printk(KERN_ERR "alloc_netdev: Unable to allocate "
- "rx queues.\n");
- goto free_tx;
- }
-
- atomic_set(&rx->count, queue_count);
-
- /*
- * Set a pointer to first element in the array which holds the
- * reference count.
- */
- for (i = 0; i < queue_count; i++)
- rx[i].first = rx;
-#endif
-
dev = PTR_ALIGN(p, NETDEV_ALIGN);
dev->padded = (char *)dev - (char *)p;
+ dev->pcpu_refcnt = alloc_percpu(int);
+ if (!dev->pcpu_refcnt)
+ goto free_p;
+
if (dev_addr_init(dev))
- goto free_rx;
+ goto free_pcpu;
dev_mc_init(dev);
dev_uc_init(dev);
dev_net_set(dev, &init_net);
- dev->_tx = tx;
dev->num_tx_queues = queue_count;
dev->real_num_tx_queues = queue_count;
#ifdef CONFIG_RPS
- dev->_rx = rx;
dev->num_rx_queues = queue_count;
+ dev->real_num_rx_queues = queue_count;
#endif
dev->gso_max_size = GSO_MAX_SIZE;
- netdev_init_queues(dev);
-
INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
dev->ethtool_ntuple_list.count = 0;
INIT_LIST_HEAD(&dev->napi_list);
strcpy(dev->name, name);
return dev;
-free_rx:
-#ifdef CONFIG_RPS
- kfree(rx);
-free_tx:
-#endif
- kfree(tx);
+free_pcpu:
+ free_percpu(dev->pcpu_refcnt);
free_p:
kfree(p);
return NULL;
kfree(dev->_tx);
+ kfree(rcu_dereference_raw(dev->ingress_queue));
+
/* Flush device addresses */
dev_addr_flush(dev);
list_for_each_entry_safe(p, n, &dev->napi_list, dev_list)
netif_napi_del(p);
+ free_percpu(dev->pcpu_refcnt);
+ dev->pcpu_refcnt = NULL;
+
/* Compatibility with error handling in drivers */
if (dev->reg_state == NETREG_UNINITIALIZED) {
kfree((char *)dev - dev->padded);
/* Notify protocols, that we are about to destroy
this device. They should clean all the things.
+
+ Note that dev->reg_state stays at NETREG_REGISTERED.
+ This is wanted because this way 8021q and macvlan know
+ the device is just moving and can keep their slaves up.
*/
call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
call_netdevice_notifiers(NETDEV_UNREGISTER_BATCH, dev);
unsigned long r_offset;
};
-DEFINE_PER_CPU_SHARED_ALIGNED(struct rds_page_remainder, rds_page_remainders);
+static DEFINE_PER_CPU_SHARED_ALIGNED(struct rds_page_remainder,
+ rds_page_remainders);
/*
* returns 0 on success or -errno on failure.
unsigned long ret;
void *addr;
- if (to_user)
+ addr = kmap(page);
+ if (to_user) {
rds_stats_add(s_copy_to_user, bytes);
- else
+ ret = copy_to_user(ptr, addr + offset, bytes);
+ } else {
rds_stats_add(s_copy_from_user, bytes);
-
- addr = kmap_atomic(page, KM_USER0);
- if (to_user)
- ret = __copy_to_user_inatomic(ptr, addr + offset, bytes);
- else
- ret = __copy_from_user_inatomic(addr + offset, ptr, bytes);
- kunmap_atomic(addr, KM_USER0);
-
- if (ret) {
- addr = kmap(page);
- if (to_user)
- ret = copy_to_user(ptr, addr + offset, bytes);
- else
- ret = copy_from_user(addr + offset, ptr, bytes);
- kunmap(page);
- if (ret)
- return -EFAULT;
+ ret = copy_from_user(addr + offset, ptr, bytes);
}
+ kunmap(page);
- return 0;
+ return ret ? -EFAULT : 0;
}
EXPORT_SYMBOL_GPL(rds_page_copy_user);
/* jump straight to allocation if we're trying for a huge page */
if (bytes >= PAGE_SIZE) {
page = alloc_page(gfp);
- if (page == NULL) {
+ if (!page) {
ret = -ENOMEM;
} else {
sg_set_page(scat, page, PAGE_SIZE, 0);
rem = &per_cpu(rds_page_remainders, get_cpu());
local_irq_save(flags);
- if (page == NULL) {
+ if (!page) {
ret = -ENOMEM;
break;
}
ret ? 0 : scat->length);
return ret;
}
+EXPORT_SYMBOL_GPL(rds_page_remainder_alloc);
static int rds_page_remainder_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
static struct top_srv topsrv = { 0 };
+ /**
+ * htohl - convert value to endianness used by destination
+ * @in: value to convert
+ * @swap: non-zero if endianness must be reversed
+ *
+ * Returns converted value
+ */
+
+ static u32 htohl(u32 in, int swap)
+ {
+ return swap ? swab32(in) : in;
+ }
+
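htohl() exists because TIPC accepts subscription requests in the subscriber's native byte order: a well-formed filter must contain TIPC_SUB_PORTS or TIPC_SUB_SERVICE, so if neither bit tests true natively (the 'swap' computation further down) the peer must be byte-swapped, and every field is then converted on the way in and out. A standalone sketch of the detection, assuming the conventional flag values 0x01/0x02:

    #include <stdint.h>
    #include <stdio.h>

    #define TIPC_SUB_PORTS   0x01
    #define TIPC_SUB_SERVICE 0x02

    static uint32_t swab32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0xff00) |
               ((x << 8) & 0xff0000) | (x << 24);
    }

    static uint32_t htohl(uint32_t in, int swap)
    {
        return swap ? swab32(in) : in;
    }

    int main(void)
    {
        /* A filter from a byte-swapped peer: the PORTS bit lands in
         * the top byte, so neither flag tests true in native order. */
        uint32_t filter = swab32(TIPC_SUB_PORTS);
        int swap = !(filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE));

        printf("swap=%d filter=0x%08x\n", swap, htohl(filter, swap));
        return 0;
    }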
/**
* subscr_send_event - send a message containing a tipc_event to the subscriber
*
msg_sect.iov_base = (void *)&sub->evt;
msg_sect.iov_len = sizeof(struct tipc_event);
- sub->evt.event = htonl(event);
- sub->evt.found_lower = htonl(found_lower);
- sub->evt.found_upper = htonl(found_upper);
- sub->evt.port.ref = htonl(port_ref);
- sub->evt.port.node = htonl(node);
+ sub->evt.event = htohl(event, sub->swap);
+ sub->evt.found_lower = htohl(found_lower, sub->swap);
+ sub->evt.found_upper = htohl(found_upper, sub->swap);
+ sub->evt.port.ref = htohl(port_ref, sub->swap);
+ sub->evt.port.node = htohl(node, sub->swap);
tipc_send(sub->server_ref, 1, &msg_sect);
}
{
struct subscription *sub;
struct subscription *sub_temp;
- __u32 type, lower, upper, timeout, filter;
int found = 0;
/* Find first matching subscription, exit if not found */
- type = ntohl(s->seq.type);
- lower = ntohl(s->seq.lower);
- upper = ntohl(s->seq.upper);
- timeout = ntohl(s->timeout);
- filter = ntohl(s->filter) & ~TIPC_SUB_CANCEL;
-
list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list,
subscription_list) {
- if ((type == sub->seq.type) &&
- (lower == sub->seq.lower) &&
- (upper == sub->seq.upper) &&
- (timeout == sub->timeout) &&
- (filter == sub->filter) &&
- !memcmp(s->usr_handle,sub->evt.s.usr_handle,
- sizeof(s->usr_handle)) ){
- found = 1;
- break;
- }
+ if (!memcmp(s, &sub->evt.s, sizeof(struct tipc_subscr))) {
+ found = 1;
+ break;
+ }
}
if (!found)
return;
k_term_timer(&sub->timer);
spin_lock_bh(subscriber->lock);
}
- dbg("Cancel: removing sub %u,%u,%u from subscriber %p list\n",
+ dbg("Cancel: removing sub %u,%u,%u from subscriber %x list\n",
sub->seq.type, sub->seq.lower, sub->seq.upper, subscriber);
subscr_del(sub);
}
struct subscriber *subscriber)
{
struct subscription *sub;
+ int swap;
+
+ /* Determine subscriber's endianness */
+
+ swap = !(s->filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE));
/* Detect & process a subscription cancellation request */
- if (ntohl(s->filter) & TIPC_SUB_CANCEL) {
+ if (s->filter & htohl(TIPC_SUB_CANCEL, swap)) {
+ s->filter &= ~htohl(TIPC_SUB_CANCEL, swap);
subscr_cancel(s, subscriber);
return NULL;
}
/* Initialize subscription object */
- sub->seq.type = ntohl(s->seq.type);
- sub->seq.lower = ntohl(s->seq.lower);
- sub->seq.upper = ntohl(s->seq.upper);
- sub->timeout = ntohl(s->timeout);
- sub->filter = ntohl(s->filter);
- if ((sub->filter && (sub->filter != TIPC_SUB_PORTS)) ||
+ sub->seq.type = htohl(s->seq.type, swap);
+ sub->seq.lower = htohl(s->seq.lower, swap);
+ sub->seq.upper = htohl(s->seq.upper, swap);
+ sub->timeout = htohl(s->timeout, swap);
+ sub->filter = htohl(s->filter, swap);
+ if ((!(sub->filter & TIPC_SUB_PORTS) ==
+ !(sub->filter & TIPC_SUB_SERVICE)) ||
(sub->seq.lower > sub->seq.upper)) {
warn("Subscription rejected, illegal request\n");
kfree(sub);
INIT_LIST_HEAD(&sub->nameseq_list);
list_add(&sub->subscription_list, &subscriber->subscription_list);
sub->server_ref = subscriber->port_ref;
+ sub->swap = swap;
memcpy(&sub->evt.s, s, sizeof(struct tipc_subscr));
atomic_inc(&topsrv.subscription_count);
if (sub->timeout != TIPC_WAIT_FOREVER) {
topsrv.user_ref = 0;
}
}
-
-
-int tipc_ispublished(struct tipc_name const *name)
-{
- u32 domain = 0;
-
- return(tipc_nametbl_translate(name->type, name->instance,&domain) != 0);
-}
-