DPDK: All TX and RX packets dropped with RPi-CM4 + Intel-i210-NIC


I am unable to send packets using testpmd or pktgen on my Raspberry Pi CM4 with an Intel i210 NIC, although the same setup works fine on an x86 Dell workstation with the same NIC.

Could this issue be due to DPDK compatibility problems on ARM / the Raspberry Pi, or have I done something wrong?

Environment:

host:
    Raspberry Pi CM4 + standard IO board

NIC:
    I tried both of the following NICs:
    Intel i210-GE-1T-X1 (1Gb)
    Intel i210-X1-V2 (10Gb)

Kernel:
    I tried both linux-raspi 5.4 & linux-raspi 5.15

DPDK:
    dpdk-23.07
    I tried both the vfio-pci (no-IOMMU) and uio_pci_generic drivers; the binding commands are sketched below
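
For reference, the vfio-pci binding in no-IOMMU mode was done roughly as follows (a sketch of the standard procedure; the exact commands I ran may have differed slightly):

    sudo modprobe vfio-pci
    # no-IOMMU mode is a parameter of the vfio module, not vfio_pci
    echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    sudo dpdk-devbind.py --bind=vfio-pci 0000:01:00.0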

Test results:

Run testpmd:

./dpdk-testpmd -- -i

EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0000:01:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 98:B7:85:00:89:4D
Checking link statuses...
Done
testpmd> 
Port 0: link state change event

Start TX-only forward mode

set fwd txonly
start <--- At this step I can see the RJ45 LEDs light up, but they do not blink.
stop
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  txonly packet forwarding packets/burst=32
  packet len=64 - nb packet segments=1
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=8 hthresh=1  wthresh=16
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 55054369      TX-total: 55054369
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 55054369      TX-total: 55054369
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
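
In case it helps narrow down where the TX drops are being counted, the per-port extended statistics can also be dumped (standard testpmd command; I have not captured that output here):

    testpmd> show port xstats 0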

Check port information:

show port info 0

********************* Infos for port 0  *********************
MAC address: 98:B7:85:00:89:4D
Device name: 0000:01:00.0
Driver name: net_e1000_igb
Firmware-version: 3.16, 0x800004ff, 1.304.0
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 1 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 16
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp  ipv6-udp  ipv6-ex
  ipv6-tcp-ex  ipv6-udp-ex
Minimum size of RX buffer: 256
Maximum configurable length of RX packet: 16383
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 4
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 32
RXDs number alignment: 8
Current number of TX queues: 1
Max possible TX queues: 4
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 32
TXDs number alignment: 8
Max segment number per packet: 255
Max segment number per MTU/TSO: 255
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none

Check Linux VFIO modules:

lsmod


    vfio_pci               16384  0
    vfio_pci_core          77824  1 vfio_pci
    vfio_virqfd            16384  1 vfio_pci_core
    vfio_iommu_type1       49152  0
    vfio                   45056  2 vfio_pci_core,vfio_iommu_type1

Check device bind:

dpdk-devbind.py -s

    Network devices using DPDK-compatible driver
    ============================================
    0000:01:00.0 'I210 Gigabit Network Connection 1533' drv=vfio-pci unused=igb

Kernel logs:

[    0.565384] pci 0000:01:00.0: [8086:1533] type 00 class 0x020000
[    0.565449] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x000fffff]
[    0.565521] pci 0000:01:00.0: reg 0x1c: [mem 0x00000000-0x00003fff]
[    0.565590] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x000fffff pref]
[    0.565881] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    0.581511] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.581600] pci 0000:00:00.0: BAR 14: assigned [mem 0x600000000-0x6002fffff]
[    0.581628] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x6000fffff]
[    0.581655] pci 0000:01:00.0: BAR 6: assigned [mem 0x600100000-0x6001fffff pref]
[    0.581673] pci 0000:01:00.0: BAR 3: assigned [mem 0x600200000-0x600203fff]

Kernel logs when I start testpmd:

[   54.287336] vfio_pci: unknown parameter 'enable_unsafe_noiommu_mode' ignored
[   82.338189] vfio-pci 0000:01:00.0: Adding to iommu group 0
[   82.338210] vfio-pci 0000:01:00.0: Adding kernel taint for vfio-noiommu group on device
[  121.902266] audit: type=1326 audit(1696353003.891:65): auid=1000 uid=1000 gid=1000 ses=2 subj=snap.snap-store.ubuntu-software pid=1979 comm="pool-org.gnome." exe="/snap/snap-store/639/usr/bin/snap-store" sig=0 arch=c00000b7 syscall=55 compat=0 ip=0xffff9e093ee8 code=0x50000
[  126.390013] vfio-pci 0000:01:00.0: enabling device (0000 -> 0002)
[  126.498474] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:3057)
[  215.791550] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (pktgen:3938)
[ 4084.938242] vfio-pci 0000:01:00.0: timed out waiting for pending transaction; performing function level reset anyway
[ 5253.671006] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5719)
[ 5532.162849] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5754)
[ 5541.958851] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5765)
[ 5647.800353] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5796)
[ 5709.481186] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5819)
[ 5744.089540] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5830)
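
One thing I noticed in the log above: the "unknown parameter 'enable_unsafe_noiommu_mode' ignored" message suggests the parameter was passed to vfio_pci, while (as far as I know) it belongs to the vfio module itself. The EAL output still reports No-IOMMU mode, but it can be double-checked like this:

    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode    # should print Y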

I get the same results when using the uio_pci_generic driver.

I also tried rxonly mode; all received packets are dropped as well.
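
For the RX test I used the standard testpmd commands below (the traffic source at the other end of the link is not shown here):

    testpmd> set fwd rxonly
    testpmd> start
    testpmd> stop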

I have verified that the RPi CM4 + i210 works fine when driven by the standard Linux kernel driver.
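
The kernel-driver sanity check was along these lines (illustrative only; the interface name and peer address are placeholders, not the exact values I used):

    sudo dpdk-devbind.py --bind=igb 0000:01:00.0
    sudo ip link set eth1 up          # interface name is an assumption
    ping -c 3 -I eth1 192.168.1.1     # peer address is an assumption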

Let me know if there is any other information I can provide.

1 Answer

andrep answered:

Update on this: the issue shown above was due to missing root privileges. The error I actually meant to share is:

$ sudo dpdk-testpmd -- -i
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Cannot open directory /sys/kernel/mm/hugepages to read system hugepage info
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
EAL: Error - exiting with code: 1
  Cause: Cannot init EAL: Permission denied
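
This error usually means that no hugepage information is exposed under /sys/kernel/mm/hugepages, for example because the kernel was built without hugetlbfs support or no hugepages have been reserved. A typical setup on an arm64 kernel with 2 MB hugepages looks roughly like this (a sketch, assuming CONFIG_HUGETLBFS is enabled; sizes are examples):

    # reserve 256 x 2 MB hugepages (512 MB total)
    echo 256 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # mount hugetlbfs so DPDK can back its memory with the reserved pages
    sudo mkdir -p /dev/hugepages
    sudo mount -t hugetlbfs nodev /dev/hugepages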