Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2022-12-08 (ice)

Jacob Keller says:

This series of patches primarily consists of changes to fix some corner
cases that can cause Tx timestamp failures. The issues were discovered and
reported by Siddaraju DH and primarily affect E822 hardware, though this
series also includes some improvements that affect E810 hardware as well.

The primary issue is regarding the way that E822 determines when to generate
timestamp interrupts. If the driver reads timestamp indexes which do not
have a valid timestamp, the E822 interrupt tracking logic can get stuck.
This is due to the way that E822 hardware tracks timestamp index reads
internally. I was previously unaware of this behavior as it is significantly
different in E810 hardware.

Most of the fixes target refactors to ensure that the ice driver does not
read timestamp indexes which are not valid on E822 hardware. This is done by
using the Tx timestamp ready bitmap register from the PHY. This register
indicates what timestamp indexes have outstanding timestamps waiting to be
captured.
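In standalone form, the gating this describes can be sketched roughly as follows. This is a hypothetical simplification: the driver itself reads the bitmap with ice_get_phy_tx_tstamp_ready() and walks it using for_each_set_bit(); here the bitmaps are plain 64-bit integers and count_readable() is an invented helper.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified sketch: only treat an index as safe to read when both an
 * outstanding request exists (in_use) and the PHY reports a captured
 * timestamp (tstamp_ready). Reading any other index risks wedging the
 * E822 interrupt tracking logic. */
static int count_readable(uint64_t tstamp_ready, uint64_t in_use, int len)
{
	int readable = 0;
	int idx;

	for (idx = 0; idx < len; idx++) {
		if (!(in_use & (1ULL << idx)))
			continue;	/* no request outstanding here */
		if (!(tstamp_ready & (1ULL << idx)))
			continue;	/* not captured yet; must not read */
		readable++;		/* safe to read this PHY register */
	}
	return readable;
}
```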

Care must be taken in all cases where we read the timestamp registers, and
thus all flows which might have read these registers are refactored. The
ice_ptp_tx_tstamp function is modified to consolidate as much of the logic
relating to these registers as possible. It now handles discarding stale
timestamps which are old or which occurred after a PHC time update. This
replaces previously standalone thread functions like the periodic work
function and the ice_ptp_flush_tx_tracker function.
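As a rough sketch of the stale handling (a hypothetical standalone form: the driver uses bitmap_or() and test_and_clear_bit() on real bitmaps, while here the two bitmaps are plain 64-bit integers and both helpers are invented names):

```c
#include <assert.h>
#include <stdint.h>

struct tx_tracker {
	uint64_t in_use;	/* slots with an outstanding request */
	uint64_t stale;		/* slots whose timestamps must be discarded */
};

/* On a PHC time adjustment, every outstanding request becomes stale:
 * its timestamp can no longer be safely extended with the cached PHC
 * time (mirrors bitmap_or(stale, stale, in_use, len)). */
static void mark_tracker_stale(struct tx_tracker *tx)
{
	tx->stale |= tx->in_use;
}

/* When completing slot idx, test-and-clear its stale bit; a set bit
 * means the timestamp is dropped instead of being sent to the stack. */
static int completing_drops_ts(struct tx_tracker *tx, int idx)
{
	int drop = !!(tx->stale & (1ULL << idx));

	tx->stale &= ~(1ULL << idx);
	tx->in_use &= ~(1ULL << idx);
	return drop;
}
```

Requests started after the adjustment keep a clear stale bit, so only the timestamps captured against the old clock are thrown away.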

In addition, some minor cleanups noticed while writing these refactors are
included.

The remaining patches refactor the E822 implementation to remove the
"bypass" mode for timestamps. The E822 hardware has the ability to provide a
more precise timestamp by making use of measurements of the precise way that
packets flow through the hardware pipeline. These measurements are known as
"Vernier" calibration. The "bypass" mode disables many of these measurements
in favor of a faster start-up time for Tx and Rx timestamping. Once these
measurements have been captured, the driver then tries to reconfigure the
PHY to enable the vernier calibrations.

Unfortunately this recalibration does not work. Testing indicates that the
PHY simply remains in bypass mode without the increased timestamp precision.
Remove the attempt at recalibration and always use vernier mode. This has
the disadvantage that Tx and Rx timestamps cannot begin until at least one
packet of that type has gone through the hardware pipeline. Because of this,
further refactor the driver to separate Tx and Rx vernier calibration.
Complete the Tx and Rx independently, enabling the appropriate type of
timestamp as soon as the relevant packet has traversed the hardware
pipeline. This was reported by Milena Olech.
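Throughout the series, captured Tx timestamps are 40-bit values that must be extended to 64 bits against the cached PHC time (ice_ptp_extend_40b_ts in the diff below). A simplified sketch of that extension math, assuming the 32-bit nanosecond portion has already been extracted from the 40-bit value (the real helper first strips the low valid/sub-ns bits):

```c
#include <assert.h>
#include <stdint.h>

/* Pick the full 64-bit time nearest the cached PHC value that has the
 * given low 32 bits. Unsigned wraparound keeps the delta correct
 * modulo 2^32; this is only safe while the request is younger than
 * ~2 seconds, which is why older requests are discarded. */
static uint64_t extend_32b_ts(uint64_t cached_phc_time, uint32_t ts_low)
{
	uint32_t phc_low = (uint32_t)cached_phc_time;
	uint32_t delta = ts_low - phc_low;

	/* A capture slightly before the cached time shows up as a delta
	 * in the upper half of the u32 range; step backwards instead. */
	if (delta > UINT32_MAX / 2)
		return cached_phc_time - (uint64_t)(phc_low - ts_low);

	return cached_phc_time + delta;
}
```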

Note that although these might be considered "bug fixes", the changes
required to appropriately resolve these issues are large. Thus it does not
feel suitable to send this series to net.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: reschedule ice_ptp_wait_for_offset_valid during reset
  ice: make Tx and Rx vernier offset calibration independent
  ice: only check set bits in ice_ptp_flush_tx_tracker
  ice: handle flushing stale Tx timestamps in ice_ptp_tx_tstamp
  ice: cleanup allocations in ice_ptp_alloc_tx_tracker
  ice: protect init and calibrating check in ice_ptp_request_ts
  ice: synchronize the misc IRQ when tearing down Tx tracker
  ice: check Tx timestamp memory register for ready timestamps
  ice: handle discarding old Tx requests in ice_ptp_tx_tstamp
  ice: always call ice_ptp_link_change and make it void
  ice: fix misuse of "link err" with "link status"
  ice: Reset TS memory for all quads
  ice: Remove the E822 vernier "bypass" logic
  ice: Use more generic names for ice_ptp_tx fields
====================
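One corner case handled above ("ice: handle discarding old Tx requests in ice_ptp_tx_tstamp") is the 2-second discard of requests that hardware will never complete. A hypothetical standalone sketch of that check, using jiffies-style tick arithmetic (SKETCH_HZ and request_timed_out() are invented for illustration; the driver uses time_is_before_jiffies(start + 2 * HZ)):

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_HZ 1000u	/* assumed tick rate for this sketch */

/* A request older than 2 seconds is assumed lost (e.g. the packet was
 * dropped before it reached the PHY timestamping block) and is freed
 * without reading the timestamp register. Unsigned subtraction keeps
 * the comparison correct across counter wraparound. */
static int request_timed_out(uint32_t now, uint32_t start)
{
	return (uint32_t)(now - start) > 2u * SKETCH_HZ;
}
```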

Link: https://lore.kernel.org/r/20221208213932.1274143-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 043cd1e204
Author: Jakub Kicinski <kuba@kernel.org>
Date:   2022-12-09 19:42:16 -08:00

5 changed files with 463 additions and 475 deletions


@@ -1111,8 +1111,7 @@ ice_link_event(struct ice_pf *pf, struct ice_port_info *pi, bool link_up,
 	if (link_up == old_link && link_speed == old_link_speed)
 		return 0;
 
-	if (!ice_is_e810(&pf->hw))
-		ice_ptp_link_change(pf, pf->hw.pf_id, link_up);
+	ice_ptp_link_change(pf, pf->hw.pf_id, link_up);
 
 	if (ice_is_dcb_active(pf)) {
 		if (test_bit(ICE_FLAG_DCB_ENA, pf->flags))
@@ -6340,8 +6339,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
 		ice_print_link_msg(vsi, true);
 		netif_tx_start_all_queues(vsi->netdev);
 		netif_carrier_on(vsi->netdev);
-		if (!ice_is_e810(&pf->hw))
-			ice_ptp_link_change(pf, pf->hw.pf_id, true);
+		ice_ptp_link_change(pf, pf->hw.pf_id, true);
 	}
 
 	/* Perform an initial read of the statistics registers now to
/* Perform an initial read of the statistics registers now to
@@ -6773,8 +6771,7 @@ int ice_down(struct ice_vsi *vsi)
 	if (vsi->netdev && vsi->type == ICE_VSI_PF) {
 		vlan_err = ice_vsi_del_vlan_zero(vsi);
-		if (!ice_is_e810(&vsi->back->hw))
-			ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false);
+		ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false);
 		netif_carrier_off(vsi->netdev);
 		netif_tx_disable(vsi->netdev);
 	} else if (vsi->type == ICE_VSI_SWITCHDEV_CTRL) {


@ -599,6 +599,23 @@ static u64 ice_ptp_extend_40b_ts(struct ice_pf *pf, u64 in_tstamp)
(in_tstamp >> 8) & mask);
}
/**
* ice_ptp_is_tx_tracker_up - Check if Tx tracker is ready for new timestamps
* @tx: the PTP Tx timestamp tracker to check
*
* Check that a given PTP Tx timestamp tracker is up, i.e. that it is ready
* to accept new timestamp requests.
*
* Assumes the tx->lock spinlock is already held.
*/
static bool
ice_ptp_is_tx_tracker_up(struct ice_ptp_tx *tx)
{
lockdep_assert_held(&tx->lock);
return tx->init && !tx->calibrating;
}
/**
* ice_ptp_tx_tstamp - Process Tx timestamps for a port
* @tx: the PTP Tx timestamp tracker
@ -608,11 +625,13 @@ static u64 ice_ptp_extend_40b_ts(struct ice_pf *pf, u64 in_tstamp)
*
* If a given index has a valid timestamp, perform the following steps:
*
- * 1) copy the timestamp out of the PHY register
- * 4) clear the timestamp valid bit in the PHY register
- * 5) unlock the index by clearing the associated in_use bit.
- * 2) extend the 40b timestamp value to get a 64bit timestamp
- * 3) send that timestamp to the stack
+ * 1) check that the timestamp request is not stale
+ * 2) check that a timestamp is ready and available in the PHY memory bank
+ * 3) read and copy the timestamp out of the PHY register
+ * 4) unlock the index by clearing the associated in_use bit
+ * 5) check if the timestamp is stale, and discard if so
+ * 6) extend the 40 bit timestamp value to get a 64 bit timestamp value
+ * 7) send this 64 bit timestamp to the stack
*
* Returns true if all timestamps were handled, and false if any slots remain
* without a timestamp.
@ -623,24 +642,45 @@ static u64 ice_ptp_extend_40b_ts(struct ice_pf *pf, u64 in_tstamp)
* interrupt. In some cases hardware might not interrupt us again when the
* timestamp is captured.
*
* Note that we only take the tracking lock when clearing the bit and when
* checking if we need to re-queue this task. The only place where bits can be
* set is the hard xmit routine where an SKB has a request flag set. The only
* places where we clear bits are this work function, or the periodic cleanup
* thread. If the cleanup thread clears a bit we're processing we catch it
* when we lock to clear the bit and then grab the SKB pointer. If a Tx thread
* starts a new timestamp, we might not begin processing it right away but we
* will notice it at the end when we re-queue the task. If a Tx thread starts
* a new timestamp just after this function exits without re-queuing,
* the interrupt when the timestamp finishes should trigger. Avoiding holding
* the lock for the entire function is important in order to ensure that Tx
* threads do not get blocked while waiting for the lock.
* Note that we do not hold the tracking lock while reading the Tx timestamp.
* This is because reading the timestamp requires taking a mutex that might
* sleep.
*
* The only place where we set in_use is when a new timestamp is initiated
* with a slot index. This is only called in the hard xmit routine where an
* SKB has a request flag set. The only places where we clear this bit is this
* function, or during teardown when the Tx timestamp tracker is being
* removed. A timestamp index will never be re-used until the in_use bit for
* that index is cleared.
*
* If a Tx thread starts a new timestamp, we might not begin processing it
* right away but we will notice it at the end when we re-queue the task.
*
* If a Tx thread starts a new timestamp just after this function exits, the
* interrupt for that timestamp should re-trigger this function once
* a timestamp is ready.
*
* In cases where the PTP hardware clock was directly adjusted, some
* timestamps may not be able to safely use the timestamp extension math. In
* this case, software will set the stale bit for any outstanding Tx
* timestamps when the clock is adjusted. Then this function will discard
* those captured timestamps instead of sending them to the stack.
*
* If a Tx packet has been waiting for more than 2 seconds, it is not possible
* to correctly extend the timestamp using the cached PHC time. It is
* extremely unlikely that a packet will ever take this long to timestamp. If
* we detect a Tx timestamp request that has waited for this long we assume
* the packet will never be sent by hardware and discard it without reading
* the timestamp register.
*/
static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
{
struct ice_ptp_port *ptp_port;
-	bool ts_handled = true;
+	bool more_timestamps;
struct ice_pf *pf;
struct ice_hw *hw;
u64 tstamp_ready;
int err;
u8 idx;
if (!tx->init)
@ -648,44 +688,86 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
ptp_port = container_of(tx, struct ice_ptp_port, tx);
pf = ptp_port_to_pf(ptp_port);
hw = &pf->hw;
/* Read the Tx ready status first */
err = ice_get_phy_tx_tstamp_ready(hw, tx->block, &tstamp_ready);
if (err)
return false;
for_each_set_bit(idx, tx->in_use, tx->len) {
struct skb_shared_hwtstamps shhwtstamps = {};
-		u8 phy_idx = idx + tx->quad_offset;
-		u64 raw_tstamp, tstamp;
+		u8 phy_idx = idx + tx->offset;
+		u64 raw_tstamp = 0, tstamp;
+		bool drop_ts = false;
struct sk_buff *skb;
int err;
/* Drop packets which have waited for more than 2 seconds */
if (time_is_before_jiffies(tx->tstamps[idx].start + 2 * HZ)) {
drop_ts = true;
/* Count the number of Tx timestamps that timed out */
pf->ptp.tx_hwtstamp_timeouts++;
}
/* Only read a timestamp from the PHY if its marked as ready
* by the tstamp_ready register. This avoids unnecessary
* reading of timestamps which are not yet valid. This is
* important as we must read all timestamps which are valid
* and only timestamps which are valid during each interrupt.
* If we do not, the hardware logic for generating a new
* interrupt can get stuck on some devices.
*/
if (!(tstamp_ready & BIT_ULL(phy_idx))) {
if (drop_ts)
goto skip_ts_read;
continue;
}
ice_trace(tx_tstamp_fw_req, tx->tstamps[idx].skb, idx);
-		err = ice_read_phy_tstamp(&pf->hw, tx->quad, phy_idx,
-					  &raw_tstamp);
+		err = ice_read_phy_tstamp(hw, tx->block, phy_idx, &raw_tstamp);
if (err)
continue;
ice_trace(tx_tstamp_fw_done, tx->tstamps[idx].skb, idx);
/* Check if the timestamp is invalid or stale */
if (!(raw_tstamp & ICE_PTP_TS_VALID) ||
/* For PHYs which don't implement a proper timestamp ready
* bitmap, verify that the timestamp value is different
* from the last cached timestamp. If it is not, skip this for
* now assuming it hasn't yet been captured by hardware.
*/
if (!drop_ts && tx->verify_cached &&
raw_tstamp == tx->tstamps[idx].cached_tstamp)
continue;
/* The timestamp is valid, so we'll go ahead and clear this
* index and then send the timestamp up to the stack.
*/
/* Discard any timestamp value without the valid bit set */
if (!(raw_tstamp & ICE_PTP_TS_VALID))
drop_ts = true;
skip_ts_read:
spin_lock(&tx->lock);
tx->tstamps[idx].cached_tstamp = raw_tstamp;
if (tx->verify_cached && raw_tstamp)
tx->tstamps[idx].cached_tstamp = raw_tstamp;
clear_bit(idx, tx->in_use);
skb = tx->tstamps[idx].skb;
tx->tstamps[idx].skb = NULL;
if (test_and_clear_bit(idx, tx->stale))
drop_ts = true;
spin_unlock(&tx->lock);
/* it's (unlikely but) possible we raced with the cleanup
* thread for discarding old timestamp requests.
/* It is unlikely but possible that the SKB will have been
* flushed at this point due to link change or teardown.
*/
if (!skb)
continue;
if (drop_ts) {
dev_kfree_skb_any(skb);
continue;
}
/* Extend the timestamp using cached PHC time */
tstamp = ice_ptp_extend_40b_ts(pf, raw_tstamp);
if (tstamp) {
@ -701,11 +783,10 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
* poll for remaining timestamps.
*/
 	spin_lock(&tx->lock);
-	if (!bitmap_empty(tx->in_use, tx->len))
-		ts_handled = false;
+	more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len);
 	spin_unlock(&tx->lock);
 
-	return ts_handled;
+	return !more_timestamps;
}
/**
@ -713,26 +794,33 @@ static bool ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
* @tx: Tx tracking structure to initialize
*
* Assumes that the length has already been initialized. Do not call directly,
- * use the ice_ptp_init_tx_e822 or ice_ptp_init_tx_e810 instead.
+ * use the ice_ptp_init_tx_* instead.
*/
static int
ice_ptp_alloc_tx_tracker(struct ice_ptp_tx *tx)
{
tx->tstamps = kcalloc(tx->len, sizeof(*tx->tstamps), GFP_KERNEL);
if (!tx->tstamps)
return -ENOMEM;
unsigned long *in_use, *stale;
struct ice_tx_tstamp *tstamps;
tstamps = kcalloc(tx->len, sizeof(*tstamps), GFP_KERNEL);
in_use = bitmap_zalloc(tx->len, GFP_KERNEL);
stale = bitmap_zalloc(tx->len, GFP_KERNEL);
if (!tstamps || !in_use || !stale) {
kfree(tstamps);
bitmap_free(in_use);
bitmap_free(stale);
tx->in_use = bitmap_zalloc(tx->len, GFP_KERNEL);
if (!tx->in_use) {
kfree(tx->tstamps);
tx->tstamps = NULL;
return -ENOMEM;
}
spin_lock_init(&tx->lock);
tx->tstamps = tstamps;
tx->in_use = in_use;
tx->stale = stale;
tx->init = 1;
spin_lock_init(&tx->lock);
return 0;
}
@ -740,30 +828,70 @@ ice_ptp_alloc_tx_tracker(struct ice_ptp_tx *tx)
* ice_ptp_flush_tx_tracker - Flush any remaining timestamps from the tracker
* @pf: Board private structure
* @tx: the tracker to flush
*
* Called during teardown when a Tx tracker is being removed.
*/
static void
ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
{
struct ice_hw *hw = &pf->hw;
u64 tstamp_ready;
int err;
u8 idx;
for (idx = 0; idx < tx->len; idx++) {
u8 phy_idx = idx + tx->quad_offset;
err = ice_get_phy_tx_tstamp_ready(hw, tx->block, &tstamp_ready);
if (err) {
dev_dbg(ice_pf_to_dev(pf), "Failed to get the Tx tstamp ready bitmap for block %u, err %d\n",
tx->block, err);
/* If we fail to read the Tx timestamp ready bitmap just
* skip clearing the PHY timestamps.
*/
tstamp_ready = 0;
}
for_each_set_bit(idx, tx->in_use, tx->len) {
u8 phy_idx = idx + tx->offset;
struct sk_buff *skb;
/* In case this timestamp is ready, we need to clear it. */
if (!hw->reset_ongoing && (tstamp_ready & BIT_ULL(phy_idx)))
ice_clear_phy_tstamp(hw, tx->block, phy_idx);
spin_lock(&tx->lock);
if (tx->tstamps[idx].skb) {
dev_kfree_skb_any(tx->tstamps[idx].skb);
tx->tstamps[idx].skb = NULL;
pf->ptp.tx_hwtstamp_flushed++;
}
skb = tx->tstamps[idx].skb;
tx->tstamps[idx].skb = NULL;
clear_bit(idx, tx->in_use);
clear_bit(idx, tx->stale);
spin_unlock(&tx->lock);
/* Clear any potential residual timestamp in the PHY block */
if (!pf->hw.reset_ongoing)
ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
/* Count the number of Tx timestamps flushed */
pf->ptp.tx_hwtstamp_flushed++;
/* Free the SKB after we've cleared the bit */
dev_kfree_skb_any(skb);
}
}
/**
* ice_ptp_mark_tx_tracker_stale - Mark unfinished timestamps as stale
* @tx: the tracker to mark
*
* Mark currently outstanding Tx timestamps as stale. This prevents sending
* their timestamp value to the stack. This is required to prevent extending
* the 40bit hardware timestamp incorrectly.
*
* This should be called when the PTP clock is modified such as after a set
* time request.
*/
static void
ice_ptp_mark_tx_tracker_stale(struct ice_ptp_tx *tx)
{
spin_lock(&tx->lock);
bitmap_or(tx->stale, tx->stale, tx->in_use, tx->len);
spin_unlock(&tx->lock);
}
/**
* ice_ptp_release_tx_tracker - Release allocated memory for Tx tracker
* @pf: Board private structure
@ -774,7 +902,12 @@ ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
static void
ice_ptp_release_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
{
spin_lock(&tx->lock);
tx->init = 0;
spin_unlock(&tx->lock);
/* wait for potentially outstanding interrupt to complete */
synchronize_irq(pf->msix_entries[pf->oicr_idx].vector);
ice_ptp_flush_tx_tracker(pf, tx);
@ -784,6 +917,9 @@ ice_ptp_release_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
bitmap_free(tx->in_use);
tx->in_use = NULL;
bitmap_free(tx->stale);
tx->stale = NULL;
tx->len = 0;
}
@ -801,9 +937,10 @@ ice_ptp_release_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
static int
ice_ptp_init_tx_e822(struct ice_pf *pf, struct ice_ptp_tx *tx, u8 port)
{
-	tx->quad = port / ICE_PORTS_PER_QUAD;
-	tx->quad_offset = (port % ICE_PORTS_PER_QUAD) * INDEX_PER_PORT;
-	tx->len = INDEX_PER_PORT;
+	tx->block = port / ICE_PORTS_PER_QUAD;
+	tx->offset = (port % ICE_PORTS_PER_QUAD) * INDEX_PER_PORT_E822;
+	tx->len = INDEX_PER_PORT_E822;
+	tx->verify_cached = 0;
return ice_ptp_alloc_tx_tracker(tx);
}
@ -819,58 +956,18 @@ ice_ptp_init_tx_e822(struct ice_pf *pf, struct ice_ptp_tx *tx, u8 port)
static int
ice_ptp_init_tx_e810(struct ice_pf *pf, struct ice_ptp_tx *tx)
{
-	tx->quad = pf->hw.port_info->lport;
-	tx->quad_offset = 0;
-	tx->len = INDEX_PER_QUAD;
+	tx->block = pf->hw.port_info->lport;
+	tx->offset = 0;
+	tx->len = INDEX_PER_PORT_E810;
+
+	/* The E810 PHY does not provide a timestamp ready bitmap. Instead,
+	 * verify new timestamps against cached copy of the last read
+	 * timestamp.
+	 */
+	tx->verify_cached = 1;
return ice_ptp_alloc_tx_tracker(tx);
}
/**
* ice_ptp_tx_tstamp_cleanup - Cleanup old timestamp requests that got dropped
* @pf: pointer to the PF struct
* @tx: PTP Tx tracker to clean up
*
* Loop through the Tx timestamp requests and see if any of them have been
* waiting for a long time. Discard any SKBs that have been waiting for more
* than 2 seconds. This is long enough to be reasonably sure that the
* timestamp will never be captured. This might happen if the packet gets
* discarded before it reaches the PHY timestamping block.
*/
static void ice_ptp_tx_tstamp_cleanup(struct ice_pf *pf, struct ice_ptp_tx *tx)
{
struct ice_hw *hw = &pf->hw;
u8 idx;
if (!tx->init)
return;
for_each_set_bit(idx, tx->in_use, tx->len) {
struct sk_buff *skb;
u64 raw_tstamp;
/* Check if this SKB has been waiting for too long */
if (time_is_after_jiffies(tx->tstamps[idx].start + 2 * HZ))
continue;
/* Read tstamp to be able to use this register again */
ice_read_phy_tstamp(hw, tx->quad, idx + tx->quad_offset,
&raw_tstamp);
spin_lock(&tx->lock);
skb = tx->tstamps[idx].skb;
tx->tstamps[idx].skb = NULL;
clear_bit(idx, tx->in_use);
spin_unlock(&tx->lock);
/* Count the number of Tx timestamps which have timed out */
pf->ptp.tx_hwtstamp_timeouts++;
/* Free the SKB after we've cleared the bit */
dev_kfree_skb_any(skb);
}
}
/**
* ice_ptp_update_cached_phctime - Update the cached PHC time values
* @pf: Board specific private structure
@ -941,20 +1038,13 @@ static int ice_ptp_update_cached_phctime(struct ice_pf *pf)
* @pf: Board specific private structure
*
* This function must be called when the cached PHC time is no longer valid,
* such as after a time adjustment. It discards any outstanding Tx timestamps,
* and updates the cached PHC time for both the PF and Rx rings. If updating
* the PHC time cannot be done immediately, a warning message is logged and
* the work item is scheduled.
* such as after a time adjustment. It marks any currently outstanding Tx
* timestamps as stale and updates the cached PHC time for both the PF and Rx
* rings.
*
* These steps are required in order to ensure that we do not accidentally
* report a timestamp extended by the wrong PHC cached copy. Note that we
* do not directly update the cached timestamp here because it is possible
* this might produce an error when ICE_CFG_BUSY is set. If this occurred, we
* would have to try again. During that time window, timestamps might be
* requested and returned with an invalid extension. Thus, on failure to
* immediately update the cached PHC time we would need to zero the value
* anyways. For this reason, we just zero the value immediately and queue the
* update work item.
* If updating the PHC time cannot be done immediately, a warning message is
* logged and the work item is scheduled immediately to minimize the window
* with a wrong cached timestamp.
*/
static void ice_ptp_reset_cached_phctime(struct ice_pf *pf)
{
@ -978,8 +1068,12 @@ static void ice_ptp_reset_cached_phctime(struct ice_pf *pf)
msecs_to_jiffies(10));
}
/* Flush any outstanding Tx timestamps */
ice_ptp_flush_tx_tracker(pf, &pf->ptp.port.tx);
/* Mark any outstanding timestamps as stale, since they might have
* been captured in hardware before the time update. This could lead
* to us extending them with the wrong cached value resulting in
* incorrect timestamp values.
*/
ice_ptp_mark_tx_tracker_stale(&pf->ptp.port.tx);
}
/**
@ -1059,19 +1153,6 @@ static u64 ice_base_incval(struct ice_pf *pf)
return incval;
}
/**
* ice_ptp_reset_ts_memory_quad - Reset timestamp memory for one quad
* @pf: The PF private data structure
* @quad: The quad (0-4)
*/
static void ice_ptp_reset_ts_memory_quad(struct ice_pf *pf, int quad)
{
struct ice_hw *hw = &pf->hw;
ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, Q_REG_TS_CTRL_M);
ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, ~(u32)Q_REG_TS_CTRL_M);
}
/**
* ice_ptp_check_tx_fifo - Check whether Tx FIFO is in an OK state
* @port: PTP port for which Tx FIFO is checked
@ -1124,7 +1205,7 @@ static int ice_ptp_check_tx_fifo(struct ice_ptp_port *port)
dev_dbg(ice_pf_to_dev(pf),
"Port %d Tx FIFO still not empty; resetting quad %d\n",
port->port_num, quad);
-		ice_ptp_reset_ts_memory_quad(pf, quad);
+		ice_ptp_reset_ts_memory_quad_e822(hw, quad);
port->tx_fifo_busy_cnt = FIFO_OK;
return 0;
}
@ -1133,130 +1214,49 @@ static int ice_ptp_check_tx_fifo(struct ice_ptp_port *port)
}
/**
* ice_ptp_check_tx_offset_valid - Check if the Tx PHY offset is valid
* @port: the PTP port to check
*
* Checks whether the Tx offset for the PHY associated with this port is
* valid. Returns 0 if the offset is valid, and a non-zero error code if it is
* not.
*/
static int ice_ptp_check_tx_offset_valid(struct ice_ptp_port *port)
{
struct ice_pf *pf = ptp_port_to_pf(port);
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
u32 val;
int err;
err = ice_ptp_check_tx_fifo(port);
if (err)
return err;
err = ice_read_phy_reg_e822(hw, port->port_num, P_REG_TX_OV_STATUS,
&val);
if (err) {
dev_err(dev, "Failed to read TX_OV_STATUS for port %d, err %d\n",
port->port_num, err);
return -EAGAIN;
}
if (!(val & P_REG_TX_OV_STATUS_OV_M))
return -EAGAIN;
return 0;
}
/**
* ice_ptp_check_rx_offset_valid - Check if the Rx PHY offset is valid
* @port: the PTP port to check
*
* Checks whether the Rx offset for the PHY associated with this port is
* valid. Returns 0 if the offset is valid, and a non-zero error code if it is
* not.
*/
static int ice_ptp_check_rx_offset_valid(struct ice_ptp_port *port)
{
struct ice_pf *pf = ptp_port_to_pf(port);
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
int err;
u32 val;
err = ice_read_phy_reg_e822(hw, port->port_num, P_REG_RX_OV_STATUS,
&val);
if (err) {
dev_err(dev, "Failed to read RX_OV_STATUS for port %d, err %d\n",
port->port_num, err);
return err;
}
if (!(val & P_REG_RX_OV_STATUS_OV_M))
return -EAGAIN;
return 0;
}
/**
* ice_ptp_check_offset_valid - Check port offset valid bit
* @port: Port for which offset valid bit is checked
*
* Returns 0 if both Tx and Rx offset are valid, and -EAGAIN if one of the
* offset is not ready.
*/
static int ice_ptp_check_offset_valid(struct ice_ptp_port *port)
{
int tx_err, rx_err;
/* always check both Tx and Rx offset validity */
tx_err = ice_ptp_check_tx_offset_valid(port);
rx_err = ice_ptp_check_rx_offset_valid(port);
if (tx_err || rx_err)
return -EAGAIN;
return 0;
}
/**
- * ice_ptp_wait_for_offset_valid - Check for valid Tx and Rx offsets
+ * ice_ptp_wait_for_offsets - Check for valid Tx and Rx offsets
* @work: Pointer to the kthread_work structure for this task
*
* Check whether both the Tx and Rx offsets are valid for enabling the vernier
* calibration.
* Check whether hardware has completed measuring the Tx and Rx offset values
* used to configure and enable vernier timestamp calibration.
*
* Once we have valid offsets from hardware, update the total Tx and Rx
* offsets, and exit bypass mode. This enables more precise timestamps using
* the extra data measured during the vernier calibration process.
* Once the offset in either direction is measured, configure the associated
* registers with the calibrated offset values and enable timestamping. The Tx
* and Rx directions are configured independently as soon as their associated
* offsets are known.
*
* This function reschedules itself until both Tx and Rx calibration have
* completed.
*/
-static void ice_ptp_wait_for_offset_valid(struct kthread_work *work)
+static void ice_ptp_wait_for_offsets(struct kthread_work *work)
{
struct ice_ptp_port *port;
int err;
struct device *dev;
struct ice_pf *pf;
struct ice_hw *hw;
int tx_err;
int rx_err;
port = container_of(work, struct ice_ptp_port, ov_work.work);
pf = ptp_port_to_pf(port);
hw = &pf->hw;
dev = ice_pf_to_dev(pf);
if (ice_is_reset_in_progress(pf->state))
return;
if (ice_ptp_check_offset_valid(port)) {
/* Offsets not ready yet, try again later */
if (ice_is_reset_in_progress(pf->state)) {
/* wait for device driver to complete reset */
kthread_queue_delayed_work(pf->ptp.kworker,
&port->ov_work,
msecs_to_jiffies(100));
return;
}
/* Offsets are valid, so it is safe to exit bypass mode */
err = ice_phy_exit_bypass_e822(hw, port->port_num);
if (err) {
dev_warn(dev, "Failed to exit bypass mode for PHY port %u, err %d\n",
port->port_num, err);
tx_err = ice_ptp_check_tx_fifo(port);
if (!tx_err)
tx_err = ice_phy_cfg_tx_offset_e822(hw, port->port_num);
rx_err = ice_phy_cfg_rx_offset_e822(hw, port->port_num);
if (tx_err || rx_err) {
/* Tx and/or Rx offset not yet configured, try again later */
kthread_queue_delayed_work(pf->ptp.kworker,
&port->ov_work,
msecs_to_jiffies(100));
return;
}
}
@ -1317,16 +1317,20 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
kthread_cancel_delayed_work_sync(&ptp_port->ov_work);
/* temporarily disable Tx timestamps while calibrating PHY offset */
spin_lock(&ptp_port->tx.lock);
ptp_port->tx.calibrating = true;
spin_unlock(&ptp_port->tx.lock);
ptp_port->tx_fifo_busy_cnt = 0;
-	/* Start the PHY timer in bypass mode */
-	err = ice_start_phy_timer_e822(hw, port, true);
+	/* Start the PHY timer in Vernier mode */
+	err = ice_start_phy_timer_e822(hw, port);
if (err)
goto out_unlock;
/* Enable Tx timestamps right away */
spin_lock(&ptp_port->tx.lock);
ptp_port->tx.calibrating = false;
spin_unlock(&ptp_port->tx.lock);
kthread_queue_delayed_work(pf->ptp.kworker, &ptp_port->ov_work, 0);
@ -1341,45 +1345,33 @@ out_unlock:
}
/**
* ice_ptp_link_change - Set or clear port registers for timestamping
* ice_ptp_link_change - Reconfigure PTP after link status change
* @pf: Board private structure
* @port: Port for which the PHY start is set
* @linkup: Link is up or down
*/
-int ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
+void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
 {
 	struct ice_ptp_port *ptp_port;
 
-	if (!test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
-		return 0;
+	if (!test_bit(ICE_FLAG_PTP, pf->flags))
+		return;
 
-	if (port >= ICE_NUM_EXTERNAL_PORTS)
-		return -EINVAL;
+	if (WARN_ON_ONCE(port >= ICE_NUM_EXTERNAL_PORTS))
+		return;
 
 	ptp_port = &pf->ptp.port;
-	if (ptp_port->port_num != port)
-		return -EINVAL;
+	if (WARN_ON_ONCE(ptp_port->port_num != port))
+		return;
 
-	/* Update cached link err for this port immediately */
+	/* Update cached link status for this port immediately */
 	ptp_port->link_up = linkup;
 
-	if (!test_bit(ICE_FLAG_PTP, pf->flags))
-		/* PTP is not setup */
-		return -EAGAIN;
+	/* E810 devices do not need to reconfigure the PHY */
+	if (ice_is_e810(&pf->hw))
+		return;
 
-	return ice_ptp_port_phy_restart(ptp_port);
-}
-
-/**
- * ice_ptp_reset_ts_memory - Reset timestamp memory for all quads
- * @pf: The PF private data structure
- */
-static void ice_ptp_reset_ts_memory(struct ice_pf *pf)
-{
-	int quad;
-
-	quad = pf->hw.port_info->lport / ICE_PORTS_PER_QUAD;
-	ice_ptp_reset_ts_memory_quad(pf, quad);
+	ice_ptp_port_phy_restart(ptp_port);
 }
/**
@ -1397,7 +1389,7 @@ static int ice_ptp_tx_ena_intr(struct ice_pf *pf, bool ena, u32 threshold)
int quad;
u32 val;
-	ice_ptp_reset_ts_memory(pf);
+	ice_ptp_reset_ts_memory(hw);
for (quad = 0; quad < ICE_MAX_QUAD; quad++) {
err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEM_GBL_CFG,
@ -2332,11 +2324,14 @@ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
{
u8 idx;
/* Check if this tracker is initialized */
if (!tx->init || tx->calibrating)
return -1;
spin_lock(&tx->lock);
/* Check that this tracker is accepting new timestamp requests */
if (!ice_ptp_is_tx_tracker_up(tx)) {
spin_unlock(&tx->lock);
return -1;
}
/* Find and set the first available index */
idx = find_first_zero_bit(tx->in_use, tx->len);
if (idx < tx->len) {
@ -2345,6 +2340,7 @@ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
* requests.
*/
set_bit(idx, tx->in_use);
clear_bit(idx, tx->stale);
tx->tstamps[idx].start = jiffies;
tx->tstamps[idx].skb = skb_get(skb);
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
@ -2359,7 +2355,7 @@ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
if (idx >= tx->len)
return -1;
else
-		return idx + tx->quad_offset;
+		return idx + tx->offset;
}
/**
@ -2384,8 +2380,6 @@ static void ice_ptp_periodic_work(struct kthread_work *work)
err = ice_ptp_update_cached_phctime(pf);
-	ice_ptp_tx_tstamp_cleanup(pf, &pf->ptp.port.tx);
/* Run twice a second or reschedule if phc update failed */
kthread_queue_delayed_work(ptp->kworker, &ptp->work,
msecs_to_jiffies(err ? 10 : 500));
@ -2462,7 +2456,7 @@ pfr:
err = ice_ptp_init_tx_e810(pf, &ptp->port.tx);
} else {
kthread_init_delayed_work(&ptp->port.ov_work,
-					  ice_ptp_wait_for_offset_valid);
+					  ice_ptp_wait_for_offsets);
err = ice_ptp_init_tx_e822(pf, &ptp->port.tx,
ptp->port.port_num);
}
@ -2625,7 +2619,7 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
return ice_ptp_init_tx_e810(pf, &ptp_port->tx);
kthread_init_delayed_work(&ptp_port->ov_work,
-				  ice_ptp_wait_for_offset_valid);
+				  ice_ptp_wait_for_offsets);
return ice_ptp_init_tx_e822(pf, &ptp_port->tx, ptp_port->port_num);
}


@ -93,9 +93,14 @@ struct ice_perout_channel {
* we discard old requests that were not fulfilled within a 2 second time
* window.
* Timestamp values in the PHY are read only and do not get cleared except at
* hardware reset or when a new timestamp value is captured. The cached_tstamp
* field is used to detect the case where a new timestamp has not yet been
* captured, ensuring that we avoid sending stale timestamp data to the stack.
* hardware reset or when a new timestamp value is captured.
*
* Some PHY types do not provide a "ready" bitmap indicating which timestamp
* indexes are valid. In these cases, we use a cached_tstamp to keep track of
* the last timestamp we read for a given index. If the current timestamp
* value is the same as the cached value, we assume a new timestamp hasn't
* been captured. This avoids reporting stale timestamps to the stack. This is
* only done if the verify_cached flag is set in ice_ptp_tx structure.
*/
struct ice_tx_tstamp {
struct sk_buff *skb;
@@ -105,30 +110,35 @@ struct ice_tx_tstamp {
/**
* struct ice_ptp_tx - Tracking structure for all Tx timestamp requests on a port
* @lock: lock to prevent concurrent write to in_use bitmap
* @lock: lock to prevent concurrent access to fields of this struct
* @tstamps: array of len to store outstanding requests
* @in_use: bitmap of len to indicate which slots are in use
* @quad: which quad the timestamps are captured in
* @quad_offset: offset into timestamp block of the quad to get the real index
* @stale: bitmap of len to indicate slots which have stale timestamps
* @block: which memory block (quad or port) the timestamps are captured in
* @offset: offset into timestamp block to get the real index
* @len: length of the tstamps and in_use fields.
* @init: if true, the tracker is initialized;
* @calibrating: if true, the PHY is calibrating the Tx offset. During this
* window, timestamps are temporarily disabled.
* @verify_cached: if true, verify new timestamp differs from last read value
*/
struct ice_ptp_tx {
spinlock_t lock; /* lock protecting in_use bitmap */
struct ice_tx_tstamp *tstamps;
unsigned long *in_use;
u8 quad;
u8 quad_offset;
unsigned long *stale;
u8 block;
u8 offset;
u8 len;
u8 init;
u8 calibrating;
u8 init : 1;
u8 calibrating : 1;
u8 verify_cached : 1;
};
/* Quad and port information for initializing timestamp blocks */
#define INDEX_PER_QUAD 64
#define INDEX_PER_PORT (INDEX_PER_QUAD / ICE_PORTS_PER_QUAD)
#define INDEX_PER_PORT_E822 16
#define INDEX_PER_PORT_E810 64
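The cached_tstamp logic described in the comment above can be sketched in plain C. This is an illustrative standalone sketch, not the driver's actual API; the struct and helper names here are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the cached_tstamp check: on PHYs with no
 * "ready" bitmap, a raw readout equal to the cached value means no new
 * timestamp has been captured for that index, so the stale value must
 * not be reported to the stack. */
struct tstamp_slot {
	uint64_t cached_tstamp;	/* last raw value read for this index */
};

/* Return true only when the PHY has latched a new timestamp. */
static bool tstamp_is_new(struct tstamp_slot *slot, uint64_t raw)
{
	if (raw == slot->cached_tstamp)
		return false;	/* unchanged: assume no new capture */
	slot->cached_tstamp = raw;
	return true;
}
```

The first differing readout updates the cache, so a repeated read of the same captured value is reported exactly once.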
/**
* struct ice_ptp_port - data used to initialize an external port for PTP
@@ -256,7 +266,7 @@ void ice_ptp_reset(struct ice_pf *pf);
void ice_ptp_prepare_for_reset(struct ice_pf *pf);
void ice_ptp_init(struct ice_pf *pf);
void ice_ptp_release(struct ice_pf *pf);
int ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup);
void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup);
#else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
static inline int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr)
{
@@ -291,7 +301,8 @@ static inline void ice_ptp_reset(struct ice_pf *pf) { }
static inline void ice_ptp_prepare_for_reset(struct ice_pf *pf) { }
static inline void ice_ptp_init(struct ice_pf *pf) { }
static inline void ice_ptp_release(struct ice_pf *pf) { }
static inline int ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
{ return 0; }
static inline void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
{
}
#endif /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
#endif /* _ICE_PTP_H_ */


@@ -655,6 +655,32 @@ ice_clear_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx)
return 0;
}
/**
* ice_ptp_reset_ts_memory_quad_e822 - Clear all timestamps from the quad block
* @hw: pointer to the HW struct
* @quad: the quad to read from
*
* Clear all timestamps from the PHY quad block that is shared between the
* internal PHYs on the E822 devices.
*/
void ice_ptp_reset_ts_memory_quad_e822(struct ice_hw *hw, u8 quad)
{
ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, Q_REG_TS_CTRL_M);
ice_write_quad_reg_e822(hw, quad, Q_REG_TS_CTRL, ~(u32)Q_REG_TS_CTRL_M);
}
/**
* ice_ptp_reset_ts_memory_e822 - Clear all timestamps from all quad blocks
* @hw: pointer to the HW struct
*/
static void ice_ptp_reset_ts_memory_e822(struct ice_hw *hw)
{
unsigned int quad;
for (quad = 0; quad < ICE_MAX_QUAD; quad++)
ice_ptp_reset_ts_memory_quad_e822(hw, quad);
}
/**
* ice_read_cgu_reg_e822 - Read a CGU register
* @hw: pointer to the HW struct
@@ -1715,21 +1741,48 @@ ice_calc_fixed_tx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
* adjust Tx timestamps by. This is calculated by combining some known static
* latency along with the Vernier offset computations done by hardware.
*
* This function must be called only after the offset registers are valid,
* i.e. after the Vernier calibration wait has passed, to ensure that the PHY
* has measured the offset.
* This function will not return successfully until the Tx offset calculations
* have been completed, which requires waiting until at least one packet has
* been transmitted by the device. It is safe to call this function
* periodically until calibration succeeds, as it will only program the offset
* once.
*
* To avoid overflow, when calculating the offset based on the known static
* latency values, we use measurements in 1/100th of a nanosecond, and divide
* the TUs per second up front. This avoids overflow while allowing
* calculation of the adjustment using integer arithmetic.
*
* Returns zero on success, -EBUSY if the hardware vernier offset
* calibration has not completed, or another error code on failure.
*/
static int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
{
enum ice_ptp_link_spd link_spd;
enum ice_ptp_fec_mode fec_mode;
u64 total_offset, val;
int err;
u32 reg;
/* Nothing to do if we've already programmed the offset */
err = ice_read_phy_reg_e822(hw, port, P_REG_TX_OR, &reg);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OR for port %u, err %d\n",
port, err);
return err;
}
if (reg)
return 0;
err = ice_read_phy_reg_e822(hw, port, P_REG_TX_OV_STATUS, &reg);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OV_STATUS for port %u, err %d\n",
port, err);
return err;
}
if (!(reg & P_REG_TX_OV_STATUS_OV_M))
return -EBUSY;
err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
if (err)
@@ -1783,46 +1836,8 @@ static int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port)
if (err)
return err;
return 0;
}
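The overflow-avoidance trick noted in the function comment (measure static latency in 1/100ths of a nanosecond and divide the TUs-per-second figure up front) can be illustrated with a small helper. The constant and names below are hypothetical and are not taken from the driver:

```c
#include <stdint.h>

/* One second is 1e9 ns, i.e. 1e11 hundredths of a nanosecond. */
#define HUNDREDTHS_NS_PER_SEC 100000000000ULL

/* Convert a fixed latency (in 1/100 ns) to timer units (TUs).
 * Dividing tu_per_sec first keeps the subsequent multiplication well
 * within 64 bits, which is the point of picking the 1/100 ns
 * granularity; the truncated remainder is a sub-TU rounding error. */
static uint64_t fixed_latency_to_tus(uint64_t tu_per_sec,
				     uint64_t latency_hundredths_ns)
{
	return tu_per_sec / HUNDREDTHS_NS_PER_SEC * latency_hundredths_ns;
}
```

Multiplying first (`tu_per_sec * latency`) would overflow u64 for realistic TU rates, which is why the division is hoisted.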
/**
* ice_phy_cfg_fixed_tx_offset_e822 - Configure Tx offset for bypass mode
* @hw: pointer to the HW struct
* @port: the PHY port to configure
*
* Calculate and program the fixed Tx offset, and indicate that the offset is
* ready. This can be used when operating in bypass mode.
*/
static int
ice_phy_cfg_fixed_tx_offset_e822(struct ice_hw *hw, u8 port)
{
enum ice_ptp_link_spd link_spd;
enum ice_ptp_fec_mode fec_mode;
u64 total_offset;
int err;
err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
if (err)
return err;
total_offset = ice_calc_fixed_tx_offset_e822(hw, link_spd);
/* Program the fixed Tx offset into the P_REG_TOTAL_TX_OFFSET_L
* register, then indicate that the Tx offset is ready. After this,
* timestamps will be enabled.
*
* Note that this skips including the more precise offsets generated
* by the Vernier calibration.
*/
err = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_TX_OFFSET_L,
total_offset);
if (err)
return err;
err = ice_write_phy_reg_e822(hw, port, P_REG_TX_OR, 1);
if (err)
return err;
dev_info(ice_hw_to_dev(hw), "Port=%d Tx vernier offset calibration complete\n",
port);
return 0;
}
@@ -2026,6 +2041,11 @@ ice_calc_fixed_rx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
* measurements taken in hardware with some data about known fixed delay as
* well as adjusting for multi-lane alignment delay.
*
* This function will not return successfully until the Rx offset calculations
* have been completed, which requires waiting until at least one packet has
* been received by the device. It is safe to call this function periodically
* until calibration succeeds, as it will only program the offset once.
*
* This function must be called only after the offset registers are valid,
* i.e. after the Vernier calibration wait has passed, to ensure that the PHY
* has measured the offset.
@@ -2034,13 +2054,38 @@ ice_calc_fixed_rx_offset_e822(struct ice_hw *hw, enum ice_ptp_link_spd link_spd)
* latency values, we use measurements in 1/100th of a nanosecond, and divide
* the TUs per second up front. This avoids overflow while allowing
* calculation of the adjustment using integer arithmetic.
*
* Returns zero on success, -EBUSY if the hardware vernier offset
* calibration has not completed, or another error code on failure.
*/
static int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
{
enum ice_ptp_link_spd link_spd;
enum ice_ptp_fec_mode fec_mode;
u64 total_offset, pmd, val;
int err;
u32 reg;
/* Nothing to do if we've already programmed the offset */
err = ice_read_phy_reg_e822(hw, port, P_REG_RX_OR, &reg);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OR for port %u, err %d\n",
port, err);
return err;
}
if (reg)
return 0;
err = ice_read_phy_reg_e822(hw, port, P_REG_RX_OV_STATUS, &reg);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OV_STATUS for port %u, err %d\n",
port, err);
return err;
}
if (!(reg & P_REG_RX_OV_STATUS_OV_M))
return -EBUSY;
err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
if (err)
@@ -2101,46 +2146,8 @@ static int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
if (err)
return err;
return 0;
}
/**
* ice_phy_cfg_fixed_rx_offset_e822 - Configure fixed Rx offset for bypass mode
* @hw: pointer to the HW struct
* @port: the PHY port to configure
*
* Calculate and program the fixed Rx offset, and indicate that the offset is
* ready. This can be used when operating in bypass mode.
*/
static int
ice_phy_cfg_fixed_rx_offset_e822(struct ice_hw *hw, u8 port)
{
enum ice_ptp_link_spd link_spd;
enum ice_ptp_fec_mode fec_mode;
u64 total_offset;
int err;
err = ice_phy_get_speed_and_fec_e822(hw, port, &link_spd, &fec_mode);
if (err)
return err;
total_offset = ice_calc_fixed_rx_offset_e822(hw, link_spd);
/* Program the fixed Rx offset into the P_REG_TOTAL_RX_OFFSET_L
* register, then indicate that the Rx offset is ready. After this,
* timestamps will be enabled.
*
* Note that this skips including the more precise offsets generated
* by Vernier calibration.
*/
err = ice_write_64b_phy_reg_e822(hw, port, P_REG_TOTAL_RX_OFFSET_L,
total_offset);
if (err)
return err;
err = ice_write_phy_reg_e822(hw, port, P_REG_RX_OR, 1);
if (err)
return err;
dev_info(ice_hw_to_dev(hw), "Port=%d Rx vernier offset calibration complete\n",
port);
return 0;
}
@@ -2323,20 +2330,14 @@ ice_stop_phy_timer_e822(struct ice_hw *hw, u8 port, bool soft_reset)
* ice_start_phy_timer_e822 - Start the PHY clock timer
* @hw: pointer to the HW struct
* @port: the PHY port to start
* @bypass: if true, start the PHY in bypass mode
*
* Start the clock of a PHY port. This must be done as part of the flow to
* re-calibrate Tx and Rx timestamping offsets whenever the clock time is
* initialized or when link speed changes.
*
* Bypass mode enables timestamps immediately without waiting for Vernier
* calibration to complete. Hardware will still continue taking Vernier
* measurements on Tx or Rx of packets, but they will not be applied to
* timestamps. Use ice_phy_exit_bypass_e822 to exit bypass mode once hardware
* has completed offset calculation.
* Hardware will take Vernier measurements on Tx or Rx of packets.
*/
int
ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass)
int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port)
{
u32 lo, hi, val;
u64 incval;
@@ -2414,110 +2415,42 @@ ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass)
if (err)
return err;
if (bypass) {
val |= P_REG_PS_BYPASS_MODE_M;
/* Enter BYPASS mode, enabling timestamps immediately. */
err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
if (err)
return err;
/* Program the fixed Tx offset */
err = ice_phy_cfg_fixed_tx_offset_e822(hw, port);
if (err)
return err;
/* Program the fixed Rx offset */
err = ice_phy_cfg_fixed_rx_offset_e822(hw, port);
if (err)
return err;
}
ice_debug(hw, ICE_DBG_PTP, "Enabled clock on PHY port %u\n", port);
return 0;
}
/**
* ice_phy_exit_bypass_e822 - Exit bypass mode, after vernier calculations
* ice_get_phy_tx_tstamp_ready_e822 - Read Tx memory status register
* @hw: pointer to the HW struct
* @port: the PHY port to configure
* @quad: the timestamp quad to read from
* @tstamp_ready: contents of the Tx memory status register
*
* After hardware finishes vernier calculations for the Tx and Rx offset, this
* function can be used to exit bypass mode by updating the total Tx and Rx
* offsets, and then disabling bypass. This will enable hardware to include
* the more precise offset calibrations, increasing precision of the generated
* timestamps.
*
* This cannot be done until hardware has measured the offsets, which requires
* waiting until at least one packet has been sent and received by the device.
* Read the Q_REG_TX_MEMORY_STATUS register indicating which timestamps in
* the PHY are ready. A set bit means the corresponding timestamp is valid and
* ready to be captured from the PHY timestamp block.
*/
int ice_phy_exit_bypass_e822(struct ice_hw *hw, u8 port)
static int
ice_get_phy_tx_tstamp_ready_e822(struct ice_hw *hw, u8 quad, u64 *tstamp_ready)
{
u32 hi, lo;
int err;
u32 val;
err = ice_read_phy_reg_e822(hw, port, P_REG_TX_OV_STATUS, &val);
err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEMORY_STATUS_U, &hi);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_OV_STATUS for port %u, err %d\n",
port, err);
ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEMORY_STATUS_U for quad %u, err %d\n",
quad, err);
return err;
}
if (!(val & P_REG_TX_OV_STATUS_OV_M)) {
ice_debug(hw, ICE_DBG_PTP, "Tx offset is not yet valid for port %u\n",
port);
return -EBUSY;
}
err = ice_read_phy_reg_e822(hw, port, P_REG_RX_OV_STATUS, &val);
err = ice_read_quad_reg_e822(hw, quad, Q_REG_TX_MEMORY_STATUS_L, &lo);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read RX_OV_STATUS for port %u, err %d\n",
port, err);
ice_debug(hw, ICE_DBG_PTP, "Failed to read TX_MEMORY_STATUS_L for quad %u, err %d\n",
quad, err);
return err;
}
if (!(val & P_REG_TX_OV_STATUS_OV_M)) {
ice_debug(hw, ICE_DBG_PTP, "Rx offset is not yet valid for port %u\n",
port);
return -EBUSY;
}
err = ice_phy_cfg_tx_offset_e822(hw, port);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to program total Tx offset for port %u, err %d\n",
port, err);
return err;
}
err = ice_phy_cfg_rx_offset_e822(hw, port);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to program total Rx offset for port %u, err %d\n",
port, err);
return err;
}
/* Exit bypass mode now that the offset has been updated */
err = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to read P_REG_PS for port %u, err %d\n",
port, err);
return err;
}
if (!(val & P_REG_PS_BYPASS_MODE_M))
ice_debug(hw, ICE_DBG_PTP, "Port %u not in bypass mode\n",
port);
val &= ~P_REG_PS_BYPASS_MODE_M;
err = ice_write_phy_reg_e822(hw, port, P_REG_PS, val);
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to disable bypass for port %u, err %d\n",
port, err);
return err;
}
dev_info(ice_hw_to_dev(hw), "Exiting bypass mode on PHY port %u\n",
port);
*tstamp_ready = (u64)hi << 32 | (u64)lo;
return 0;
}
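The hi/lo combination above is a plain 32-to-64-bit merge. A minimal standalone sketch of the same operation, plus a bit test against the resulting ready bitmap (helper names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Merge the upper and lower halves of the Tx memory status register
 * into the 64-bit ready bitmap, mirroring the statement above. */
static uint64_t combine_tstamp_ready(uint32_t hi, uint32_t lo)
{
	return (uint64_t)hi << 32 | (uint64_t)lo;
}

/* A set bit means the timestamp at that index is valid and may be
 * read from the PHY timestamp block. */
static bool tstamp_idx_ready(uint64_t tstamp_ready, uint8_t idx)
{
	return (tstamp_ready >> idx) & 1;
}
```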
@@ -3196,6 +3129,22 @@ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
return ice_clear_phy_tstamp_e822(hw, block, idx);
}
/**
* ice_get_phy_tx_tstamp_ready_e810 - Read Tx memory status register
* @hw: pointer to the HW struct
* @port: the PHY port to read
* @tstamp_ready: contents of the Tx memory status register
*
* E810 devices do not use a Tx memory status register. Instead simply
* indicate that all timestamps are currently ready.
*/
static int
ice_get_phy_tx_tstamp_ready_e810(struct ice_hw *hw, u8 port, u64 *tstamp_ready)
{
*tstamp_ready = 0xFFFFFFFFFFFFFFFF;
return 0;
}
/* E810T SMA functions
*
* The following functions operate specifically on E810T hardware and are used
@@ -3378,6 +3327,18 @@ bool ice_is_pca9575_present(struct ice_hw *hw)
return !status && handle;
}
/**
* ice_ptp_reset_ts_memory - Reset timestamp memory for all blocks
* @hw: pointer to the HW struct
*/
void ice_ptp_reset_ts_memory(struct ice_hw *hw)
{
if (ice_is_e810(hw))
return;
ice_ptp_reset_ts_memory_e822(hw);
}
/**
* ice_ptp_init_phc - Initialize PTP hardware clock
* @hw: pointer to the HW struct
@@ -3399,3 +3360,24 @@ int ice_ptp_init_phc(struct ice_hw *hw)
else
return ice_ptp_init_phc_e822(hw);
}
/**
* ice_get_phy_tx_tstamp_ready - Read PHY Tx memory status indication
* @hw: pointer to the HW struct
* @block: the timestamp block to check
* @tstamp_ready: storage for the PHY Tx memory status information
*
* Check the PHY for Tx timestamp memory status. This reports a 64 bit value
* which indicates which timestamps in the block may be captured. A set bit
* means the timestamp can be read. An unset bit means the timestamp is not
* ready and software should avoid reading the register.
*/
int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready)
{
if (ice_is_e810(hw))
return ice_get_phy_tx_tstamp_ready_e810(hw, block,
tstamp_ready);
else
return ice_get_phy_tx_tstamp_ready_e822(hw, block,
tstamp_ready);
}
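The point of the ready bitmap is that callers only touch indexes whose bit is set, avoiding the E822 pitfall of reading an index that holds no valid capture. A hedged sketch of that gating loop, with a counter standing in for the real per-timestamp processing:

```c
#include <stdint.h>

/* Walk indexes [0, len) and count those whose ready bit is set; a real
 * caller would read and process each such timestamp rather than just
 * counting. Indexes with a clear bit are skipped entirely, so the
 * hardware's internal read tracking never observes a read of an index
 * with no valid timestamp. */
static int count_ready_tstamps(uint64_t tstamp_ready, uint8_t len)
{
	int ready = 0;
	uint8_t idx;

	for (idx = 0; idx < len; idx++)
		if (tstamp_ready & ((uint64_t)1 << idx))
			ready++;
	return ready;
}
```

Passing `len` smaller than 64 models a per-port slice of the quad's 64-entry block, as the INDEX_PER_PORT_E822 definition suggests.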


@@ -133,7 +133,9 @@ int ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval);
int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj);
int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp);
int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx);
void ice_ptp_reset_ts_memory(struct ice_hw *hw);
int ice_ptp_init_phc(struct ice_hw *hw);
int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready);
/* E822 family functions */
int ice_read_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 *val);
@@ -141,6 +143,7 @@ int ice_write_phy_reg_e822(struct ice_hw *hw, u8 port, u16 offset, u32 val);
int ice_read_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 *val);
int ice_write_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 val);
int ice_ptp_prep_port_adj_e822(struct ice_hw *hw, u8 port, s64 time);
void ice_ptp_reset_ts_memory_quad_e822(struct ice_hw *hw, u8 quad);
/**
* ice_e822_time_ref - Get the current TIME_REF from capabilities
@@ -184,8 +187,9 @@ static inline u64 ice_e822_pps_delay(enum ice_time_ref_freq time_ref)
/* E822 Vernier calibration functions */
int ice_stop_phy_timer_e822(struct ice_hw *hw, u8 port, bool soft_reset);
int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass);
int ice_phy_exit_bypass_e822(struct ice_hw *hw, u8 port);
int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port);
int ice_phy_cfg_tx_offset_e822(struct ice_hw *hw, u8 port);
int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port);
/* E810 family functions */
int ice_ptp_init_phy_e810(struct ice_hw *hw);