<feed xmlns='http://www.w3.org/2005/Atom'>
<title>lwn.git/drivers/net/ethernet/amd, branch master</title>
<subtitle>Linux kernel documentation tree maintained by Jonathan Corbet</subtitle>
<id>http://mirrors.hust.edu.cn/git/lwn.git/atom?h=master</id>
<link rel='self' href='http://mirrors.hust.edu.cn/git/lwn.git/atom?h=master'/>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/'/>
<updated>2026-04-01T02:32:41+00:00</updated>
<entry>
<title>declance: Include the offending address with DMA errors</title>
<updated>2026-04-01T02:32:41+00:00</updated>
<author>
<name>Maciej W. Rozycki</name>
<email>macro@orcam.me.uk</email>
</author>
<published>2026-03-29T18:07:41+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=aae5efaeb8aa4ada710d5b0cdbab77b9539c69eb'/>
<id>urn:sha1:aae5efaeb8aa4ada710d5b0cdbab77b9539c69eb</id>
<content type='text'>
The address latched in the I/O ASIC LANCE DMA Pointer Register uses the
TURBOchannel bus address encoding, and therefore bits 33:29 of the
location referred to occupy bits 4:0, bits 28:2 are left-shifted by 3,
and bits 1:0 are hardwired to zero.  In reality no TURBOchannel system
exceeds 1GiB of RAM though, so the address reported will always fit in
8 hex digits.
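A minimal sketch of that decoding (the function name is hypothetical, and
multiplication/division stand in for the bit shifts):

```c
/* Recover the physical address from the latched register value, per
 * the encoding described above.  Illustrative only. */
static unsigned long long lance_decode_dma_pointer(unsigned int reg)
{
	unsigned long long addr;

	/* Register bits 4:0 carry address bits 33:29; the
	 * multiplication stands in for a left shift by 29. */
	addr = (unsigned long long)(reg % 32u) * 0x20000000ULL;
	/* Register bits 31:5 carry address bits 28:2 (left-shifted
	 * by 3 in the encoding); division undoes the shift by 5 and
	 * the multiplication restores address bit position 2. */
	addr += (unsigned long long)(reg / 32u) * 4ULL;
	/* Address bits 1:0 are hardwired to zero. */
	return addr;
}
```

For example, a register value of 32 decodes to physical address 4.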

Signed-off-by: Maciej W. Rozycki &lt;macro@orcam.me.uk&gt;
Link: https://patch.msgid.link/alpine.DEB.2.21.2603291839220.60268@angie.orcam.me.uk
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>declance: Rate-limit DMA errors</title>
<updated>2026-04-01T02:32:41+00:00</updated>
<author>
<name>Maciej W. Rozycki</name>
<email>macro@orcam.me.uk</email>
</author>
<published>2026-03-29T18:07:24+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=ee769323b1bf60c0ec0338cc5ee6b1c725624ec6'/>
<id>urn:sha1:ee769323b1bf60c0ec0338cc5ee6b1c725624ec6</id>
<content type='text'>
Prevent the system from becoming unusable due to a flood of DMA error
messages.

Signed-off-by: Maciej W. Rozycki &lt;macro@orcam.me.uk&gt;
Link: https://patch.msgid.link/alpine.DEB.2.21.2603291838370.60268@angie.orcam.me.uk
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: add TX descriptor cleanup for link-down</title>
<updated>2026-03-24T09:48:07+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-19T16:32:51+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=b7fb3677840d26b3fa4c5d0d63b578a2f44077d1'/>
<id>urn:sha1:b7fb3677840d26b3fa4c5d0d63b578a2f44077d1</id>
<content type='text'>
Add an intelligent TX descriptor cleanup mechanism to reclaim abandoned
descriptors when the physical link goes down.

When the link goes down while TX packets are in-flight, the hardware
stops processing descriptors with the OWN bit still set. The current
driver waits indefinitely for these descriptors to complete, which
never happens. This causes:

  - TX ring exhaustion (no descriptors available for new packets)
  - Memory leaks (skbs never freed)
  - DMA mapping leaks (mappings never unmapped)
  - Network stack backpressure buildup

Add a force-cleanup mechanism in xgbe_tx_poll() that detects link-down
state and reclaims abandoned descriptors. Supporting changes enable an
efficient TX shutdown:
  - xgbe_wait_for_dma_tx_complete(): wait for DMA completion with a
    link-down optimization
  - Restructure xgbe_disable_tx() for a proper shutdown sequence

Implementation:
  1. Check link state at the start of tx_poll
  2. If link is down, set force_cleanup flag
  3. For descriptors that hardware hasn't completed (!tx_complete):
     - If force_cleanup: treat as completed and reclaim resources
     - If link up: break and wait for hardware (normal behavior)

The cleanup process:
  - Frees skbs that will never be transmitted
  - Unmaps DMA mappings
  - Resets descriptors for reuse
  - Does NOT count as successful transmission (correct statistics)
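The walk above can be modeled as a small self-contained sketch (the field
and function names, such as tx_complete and reclaim_tx_ring, are
illustrative, not the driver's actual symbols):

```c
/* Minimal model of the force-cleanup walk. */
struct tx_desc_model {
	int tx_complete;  /* hardware finished this descriptor */
	int in_use;       /* skb and DMA mapping still held */
};

/* Walk the ring in order; reclaim completed descriptors, and when
 * the link is down also reclaim descriptors the hardware will never
 * finish.  Returns the number of descriptors reclaimed. */
static int reclaim_tx_ring(struct tx_desc_model *ring, int n, int link_up)
{
	int force_cleanup = !link_up;  /* steps 1-2: link state decides */
	int reclaimed = 0;
	int i;

	for (i = 0; i != n; i++) {
		if (!ring[i].in_use)
			continue;
		if (!ring[i].tx_complete)
			if (!force_cleanup)
				break;  /* link up: wait for hardware */
		/* Free the skb, unmap DMA, reset the descriptor
		 * (modeled here as clearing in_use); not counted as a
		 * successful transmission. */
		ring[i].in_use = 0;
		reclaimed++;
	}
	return reclaimed;
}
```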

Benefits:
  - Prevents TX ring starvation
  - Eliminates memory and DMA mapping leaks
  - Enables fast link recovery when link comes back up
  - Critical for link aggregation failover scenarios

Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260319163251.1808611-4-Raju.Rangoju@amd.com
Signed-off-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: optimize TX shutdown on link-down</title>
<updated>2026-03-24T09:48:07+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-19T16:32:50+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=0898849ad9715d163555b8f8bfd13b7691a2b3b8'/>
<id>urn:sha1:0898849ad9715d163555b8f8bfd13b7691a2b3b8</id>
<content type='text'>
Optimize the TX shutdown sequence when link goes down by skipping
futile hardware wait operations and immediately stopping TX queues.

Current behavior creates delays and resource issues during link-down:

1. xgbe_txq_prepare_tx_stop() waits up to XGBE_DMA_STOP_TIMEOUT for
   TX queues to drain, but when link is down, hardware will never
   complete the pending descriptors. This causes unnecessary delays
   during interface shutdown.

2. TX queues remain active after link-down, allowing the network stack
   to continue queuing packets that cannot be transmitted. This leads
   to resource buildup and complicates recovery.

This patch adds two optimizations:

Optimization 1: Skip TX queue drain when link is down
  In xgbe_txq_prepare_tx_stop(), detect link-down state and return
  immediately instead of waiting for hardware. Abandoned descriptors
  will be cleaned up by the force-cleanup mechanism (next patch).

Optimization 2: Immediate TX queue stop on link-down
  In xgbe_phy_adjust_link(), call netif_tx_stop_all_queues() as soon
  as link-down is detected. Also wake TX queues on link-up to resume
  transmission.
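The two optimizations can be modeled as a minimal sketch (the struct and
function names are illustrative, not the driver's actual symbols):

```c
/* Toy state tracking for the link-down TX shutdown behavior. */
struct txq_model {
	int queues_stopped;  /* netif_tx_stop_all_queues() has run */
	int drain_skipped;   /* the prepare-stop drain wait was skipped */
};

/* Optimization 2: react to the link transition immediately. */
static void model_adjust_link(struct txq_model *m, int link_up)
{
	if (link_up)
		m->queues_stopped = 0;  /* wake queues, resume transmission */
	else
		m->queues_stopped = 1;  /* stop queueing packets right away */
}

/* Optimization 1: only wait out the drain when hardware can finish it. */
static void model_prepare_tx_stop(struct txq_model *m, int link_up)
{
	m->drain_skipped = !link_up;    /* link down: return immediately */
}
```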

Benefits:
  - Faster interface shutdown (no pointless timeout waits)
  - Prevents packet queue buildup in network stack
  - Cleaner state management during link transitions
  - Enables orderly descriptor cleanup by NAPI poll

Note: We do not call netdev_tx_reset_queue() on link-down because
NAPI poll may still be running, which would trigger BQL assertions.
BQL state is cleaned up naturally during descriptor reclamation.

Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260319163251.1808611-3-Raju.Rangoju@amd.com
Signed-off-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: add adaptive link status polling</title>
<updated>2026-03-24T09:48:07+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-19T16:32:49+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=31b2d4e00260ae3fca50779ac416dd9acaacbfb9'/>
<id>urn:sha1:31b2d4e00260ae3fca50779ac416dd9acaacbfb9</id>
<content type='text'>
Implement adaptive link status polling to enable fast link-down detection
while conserving CPU resources during link-down periods.

Currently, the driver polls link status at a fixed 1-second interval
regardless of link state. This creates a trade-off:
  - Slow polling (1s): Misses rapid link state changes, causing delays
  - Fast polling: Wastes CPU when link is stable or down

This enhancement introduces state-aware polling:

When carrier is UP:
  Poll every 100ms to enable rapid link-down detection. This provides
  ~100-200ms response time to link failures, minimizing packet loss and
  enabling fast failover in link aggregation configurations.

When carrier is DOWN:
  Poll every 1s to conserve CPU resources. Link-up detection is less
  time-critical since no traffic is flowing.
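The interval selection reduces to a single carrier-state check (the
constant and function names below are hypothetical):

```c
/* Pick the next link-status poll delay from the carrier state. */
#define POLL_FAST_MS  100   /* carrier up: detect link-down quickly */
#define POLL_SLOW_MS  1000  /* carrier down: conserve CPU */

static unsigned int next_poll_interval_ms(int carrier_up)
{
	return carrier_up ? POLL_FAST_MS : POLL_SLOW_MS;
}
```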

Performance impact:
  - Link-down detection: 1000ms → 100-200ms (up to 10x improvement)
  - CPU overhead when link up: 0.1% → 1% (acceptable for active links)
  - CPU overhead when link down: unchanged at 0.1%

This is particularly valuable for:
  - Link aggregation deployments requiring sub-second failover
  - Environments with flaky links or cable issues
  - Applications sensitive to connection recovery time

Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260319163251.1808611-2-Raju.Rangoju@amd.com
Signed-off-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
<entry>
<title>net: xgbe: use device_get_mac_addr</title>
<updated>2026-03-12T20:38:38+00:00</updated>
<author>
<name>Rosen Penev</name>
<email>rosenp@gmail.com</email>
</author>
<published>2026-03-10T19:46:46+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=29ca18505d58fedf2388c303156107c4ed97197b'/>
<id>urn:sha1:29ca18505d58fedf2388c303156107c4ed97197b</id>
<content type='text'>
device_get_mac_addr is essentially device_property_read_u8_array
combined with an is_valid_ether_addr check, which lets callers test
only the return value.

Remove XGBE_MAC_ADDR_PROPERTY, since device_get_mac_addr supports more
properties than just "mac-address".

Signed-off-by: Rosen Penev &lt;rosenp@gmail.com&gt;
Reviewed-by: Sai Krishna &lt;saikrishnag@marvell.com&gt;
Link: https://patch.msgid.link/20260310194647.3794-1-rosenp@gmail.com
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net</title>
<updated>2026-03-12T19:53:34+00:00</updated>
<author>
<name>Jakub Kicinski</name>
<email>kuba@kernel.org</email>
</author>
<published>2026-03-12T19:53:34+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=72374257ede14897ee3d5a709c2498f7b6a1764b'/>
<id>urn:sha1:72374257ede14897ee3d5a709c2498f7b6a1764b</id>
<content type='text'>
Cross-merge networking fixes after downstream PR (net-7.0-rc4).

drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
  db25c42c2e1f9 ("net/mlx5e: RX, Fix XDP multi-buf frag counting for striding RQ")
  dff1c3164a692 ("net/mlx5e: SHAMPO, Always calculate page size")
https://lore.kernel.org/aa7ORohmf67EKihj@sirena.org.uk

drivers/net/ethernet/ti/am65-cpsw-nuss.c
  840c9d13cb1ca ("net: ethernet: ti: am65-cpsw-nuss: Fix rx_filter value for PTP support")
  a23c657e332f2 ("net: ethernet: ti: am65-cpsw: Use also port number to identify timestamps")
https://lore.kernel.org/abK3EkIXuVgMyGI7@sirena.org.uk

No adjacent changes.

Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: add PCI power management for S0i3 support</title>
<updated>2026-03-11T02:51:23+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-08T09:28:51+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=7644e76956baa9a6bc3d208dfd92928f9ecd6a93'/>
<id>urn:sha1:7644e76956baa9a6bc3d208dfd92928f9ecd6a93</id>
<content type='text'>
The current suspend/resume implementation does not correctly handle PCI
device power state transitions, which prevents AMD platforms from
reaching the deepest suspend state (S0i3) when the amd-xgbe driver is
enabled.

In particular, the amd_pmc driver reports:

  "Last suspend didn't reach deepest state"

when this device is present.

Implement proper PCI power management operations following the standard
PCI PM model so that the device can be cleanly powered down and resumed.

Suspend path:
- Power down the network interface
- Put the PHY into low-power mode
- Disable bus mastering to prevent DMA activity
- Save PCI configuration space
- Disable the PCI device
- Disable wake from D3 (S0i3 does not require Wake-on-LAN)
- Set the device to D3hot

Resume path:
- Restore the PCI power state to D0
- Restore PCI configuration space
- Enable the PCI device
- Re-enable bus mastering
- Re-enable device interrupts
- Clear the PHY low-power mode
- Power up the network interface

This allows systems using amd-xgbe to reach the deepest suspend state
when entering modern standby (S0i3).

Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260308092851.1510214-3-Raju.Rangoju@amd.com
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: Simplify powerdown/powerup paths</title>
<updated>2026-03-11T02:51:22+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-08T09:28:50+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=fe81629217e09ed8772e63a4c9cb0d864d849174'/>
<id>urn:sha1:fe81629217e09ed8772e63a4c9cb0d864d849174</id>
<content type='text'>
The caller parameter in xgbe_powerdown() and xgbe_powerup() was intended
to differentiate between driver and ioctl contexts, but the only
remaining usage is from the driver suspend/resume path.

Simplify this by:
- Removing the unused XGMAC_DRIVER_CONTEXT and XGMAC_IOCTL_CONTEXT
  macros
- Dropping the now-unused caller parameter
- Reordering operations in xgbe_powerdown() to disable NAPI before
  stopping TX/RX, matching the order used in xgbe_stop()

This makes the powerdown/powerup paths easier to follow and keeps the
ordering consistent with the rest of the driver.

Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260308092851.1510214-2-Raju.Rangoju@amd.com
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>amd-xgbe: reset PHY settings before starting PHY</title>
<updated>2026-03-10T11:07:07+00:00</updated>
<author>
<name>Raju Rangoju</name>
<email>Raju.Rangoju@amd.com</email>
</author>
<published>2026-03-06T11:16:29+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=a8ba129af46856112981c124850ec6a85a1c1ab6'/>
<id>urn:sha1:a8ba129af46856112981c124850ec6a85a1c1ab6</id>
<content type='text'>
commit f93505f35745 ("amd-xgbe: let the MAC manage PHY PM") moved
xgbe_phy_reset() from xgbe_open() to xgbe_start(), placing it after
phy_start(). As a result, the PHY settings were being reset after the
PHY had already started.

Reorder the calls so that the PHY settings are reset before
phy_start() is invoked.

Fixes: f93505f35745 ("amd-xgbe: let the MAC manage PHY PM")
Reviewed-by: Maxime Chevallier &lt;maxime.chevallier@bootlin.com&gt;
Signed-off-by: Raju Rangoju &lt;Raju.Rangoju@amd.com&gt;
Link: https://patch.msgid.link/20260306111629.1515676-4-Raju.Rangoju@amd.com
Signed-off-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
</feed>
