Commit Graph

675 Commits

Author SHA1 Message Date
Alex Elder
8defab8bdf net: ipa: don't assume 8 modem routing table entries
Currently all platforms are assumed to allot 8 routing table entries
for use by the modem.  Instead, add a new configuration data entry
that defines the number of modem routing table entries, and record
that in the IPA structure.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-27 13:38:13 +02:00
Alex Elder
0439e6743c net: ipa: determine route table size from memory region
Currently we assume that any routing table contains a fixed number
of entries.  The number of entries in a routing table can actually
vary, depending only on the size of the IPA-local memory region used
to hold the table.

Stop assuming that a routing table has exactly 15 entries.  Instead,
determine the number of entries in a routing table by dividing its
memory region size by the size of an entry.

The number of entries is computed early, when ipa_table_mem_valid()
is called by ipa_table_init().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-27 13:38:12 +02:00
Alex Elder
fc094058ce net: ipa: record the route table size in the IPA structure
The non-hashed routing tables for IPv4 and IPv6 will be the same
size.  And if supported, the hashed routing tables will be the same
size as the non-hashed tables.

Record the size (number of entries) of all routing tables in the IPA
structure.  For now, initialize this field using IPA_ROUTE_TABLE_MAX,
and just do so when the first route table is validated.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-27 13:38:12 +02:00
Caleb Connolly
95a0396a06 net: ipa: don't configure IDLE_INDICATION on v3.1
IPA v3.1 doesn't support the IDLE_INDICATION_CFG register; this was
causing a harmless splat in ipa_idle_indication_cfg(). Add a version
check to prevent trying to fetch this register on v3.1.

Fixes: 6a244b75cf ("net: ipa: introduce ipa_reg()")
Signed-off-by: Caleb Connolly <caleb.connolly@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Tested-by: Jami Kettunen <jami.kettunen@somainline.org>
Link: https://lore.kernel.org/r/20221024234850.4049778-1-caleb.connolly@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-25 19:49:13 -07:00
Caleb Connolly
05a31b94af net: ipa: fix v3.1 resource limit masks
The resource group limits for IPA v3.1 mistakenly used 6 bit wide mask
values, when the hardware actually uses 8. Out of range values were
silently ignored before, so the IPA worked as expected. However, the
new generalised register definitions introduce stricter checking here;
they now cause some splats and result in the value 0 being written
instead. Fix the limit bitmask widths so that the correct values can be
written.

Fixes: 1c418c4a92 ("net: ipa: define resource group/type IPA register fields")
Signed-off-by: Caleb Connolly <caleb.connolly@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Tested-by: Jami Kettunen <jami.kettunen@somainline.org>
Link: https://lore.kernel.org/r/20221024210336.4014983-2-caleb.connolly@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-25 19:48:58 -07:00
Caleb Connolly
f23a566bbf net: ipa: fix v3.5.1 resource limit max values
Some resource limits on IPA v3.5.1 have their max values set to
255; this causes a few splats in ipa_reg_encode() and prevents the
IPA from booting properly. The limits are all 6 bits wide so
adjust the max values to 63.

Fixes: 1c418c4a92 ("net: ipa: define resource group/type IPA register fields")
Signed-off-by: Caleb Connolly <caleb.connolly@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20221024210336.4014983-1-caleb.connolly@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-25 19:48:57 -07:00
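
Both of the fixes above (05a31b94af and f23a566bbf) come down to a
value that doesn't fit the width of its register field.  A minimal
standalone sketch of that width check, using illustrative field
positions rather than the driver's actual register layout:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Maximum value representable by a contiguous field mask. */
    static uint32_t field_max(uint32_t fmask)
    {
            return fmask >> __builtin_ctz(fmask);  /* shift the field down to bit 0 */
    }

    int main(void)
    {
            uint32_t six_bits = 0x3f << 8;    /* 6-bit field at bits 13:8 */
            uint32_t eight_bits = 0xff << 8;  /* 8-bit field at bits 15:8 */

            printf("6-bit max %u, 8-bit max %u\n",
                   field_max(six_bits), field_max(eight_bits));

            assert(255 > field_max(six_bits));     /* out of range: splat, 0 written */
            assert(255 <= field_max(eight_bits));  /* fits once the mask is widened */
            return 0;
    }
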
Alex Elder
73da9cac51 net: ipa: check table memory regions earlier
Verify that the sizes of the routing and filter table memory regions
are valid as part of memory initialization, rather than waiting for
table initialization.  The main reason to do this is that upcoming
patches use these memory region sizes to determine the number of
entries in these tables, and we'll want to know these sizes are good
sooner.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:19 +02:00
Alex Elder
39ad815244 net: ipa: kill ipa_table_valid()
What ipa_table_valid() (and ipa_table_valid_one(), which it calls)
does is ensure that the memory regions that hold routing and filter
tables have reasonable size.  Specifically, it checks that the size
of a region is sufficient (or rather, exactly the right size) to
hold the maximum number of entries supported by the driver.  (There
is an additional check that's erroneous, but in practice it is never
reached.)

Recently ipa_table_mem_valid() was added, which is called by
ipa_table_init().  That function verifies that all table memory
regions are of sufficient size, and requires hashed tables to have
zero size if hashing is not supported.  It only ensures the filter
table is large enough to hold the number of endpoints that support
filtering, but that is adequate.

Therefore everything that ipa_table_valid() does is redundant, so
get rid of it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:19 +02:00
Alex Elder
7fd10a2aca net: ipa: introduce ipa_cmd_init()
Currently, ipa_cmd_data_valid() is called by ipa_mem_config().
Nothing it does requires access to hardware though, so it can be
done during the init phase of IPA driver startup.

Create a new function ipa_cmd_init(), whose purpose is to do early
initialization related to IPA immediate commands.  It will call the
build-time validation function, then will make the two calls made
previously by ipa_cmd_data_valid().  This makes ipa_cmd_data_valid()
unnecessary, so get rid of it.

Rename ipa_cmd_header_valid() to be ipa_cmd_header_init_local_valid(),
so its name is clearer about which IPA immediate command it is
associated with.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:19 +02:00
Alex Elder
5444b0ea99 net: ipa: verify table sizes fit in commands early
We currently verify the table size and offset fit in the immediate
command fields that must encode them in ipa_table_valid_one().  We
can now make this check earlier, in ipa_table_mem_valid().

The non-hashed IPv4 filter and route tables will always exist, and
their sizes will match the IPv6 tables, as well as the hashed tables
(if supported).  So it's sufficient to verify the offset and size of
the IPv4 non-hashed tables fit into these fields.

Rename the function ipa_cmd_table_init_valid(), to reinforce that
it is the TABLE_INIT immediate command fields we're checking.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:18 +02:00
Alex Elder
cf13919654 net: ipa: validate IPA table memory earlier
Add checks in ipa_table_init() to ensure the memory regions defined
for IPA filter and routing tables are valid.

For routing tables, the checks ensure:
  - The non-hashed IPv4 and IPv6 routing tables are defined
  - The non-hashed IPv4 and IPv6 routing tables are the same size
  - The number of entries in the non-hashed IPv4 routing table is enough
    to hold the number entries available to the modem, plus at least
    one usable by the AP.

For filter tables, the checks ensure:
  - The non-hashed IPv4 and IPv6 filter tables are defined
  - The non-hashed IPv4 and IPv6 filter tables are the same size
  - The number of entries in the non-hashed IPv4 filter table is enough
    to hold the endpoint bitmap, plus an entry for each defined
    endpoint that supports filtering.

In addition, for both routing and filter tables:
  - If hashing isn't supported (IPA v4.2), hashed tables are zero size
  - If hashing *is* supported, all hashed tables are the same size as
    their non-hashed counterparts.

When validating the size of routing tables, require the AP to have
at least one entry (in addition to those used by the modem).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:18 +02:00
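
A simplified sketch of the arithmetic behind the routing table checks
in cf13919654, assuming 8-byte table entries for illustration and
using hypothetical names (the driver's actual code differs):

    #include <stdbool.h>
    #include <stdio.h>

    #define ENTRY_SIZE      8  /* illustrative size of one table entry */

    /* Region size -> entry count; require the modem's share plus one AP entry. */
    static bool route_table_valid(unsigned int region_size,
                                  unsigned int modem_entries)
    {
            if (region_size % ENTRY_SIZE)
                    return false;  /* region must hold a whole number of entries */

            return region_size / ENTRY_SIZE > modem_entries;
    }

    int main(void)
    {
            printf("%d\n", route_table_valid(15 * ENTRY_SIZE, 8)); /* 1: room for AP */
            printf("%d\n", route_table_valid(8 * ENTRY_SIZE, 8));  /* 0: modem-only */
            return 0;
    }
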
Alex Elder
2554322b31 net: ipa: remove two memory region checks
There's no need to ensure table memory regions fit within the
IPA-local memory range.  And there's no need to ensure the modem
header memory region is in range either.  These are verified for all
memory regions in ipa_mem_size_valid(), once we have settled on the
size of IPA memory.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:18 +02:00
Alex Elder
fb4014ac76 net: ipa: kill two constant symbols
The entries in each IPA routing table are divided between the modem
and the AP.  The modem always gets some number of entries located at
the base of the table; the AP gets all those that follow.

There's no reason to think the modem will use anything different
from the first entries in a routing table, so:
  - Get rid of IPA_ROUTE_MODEM_MIN (just assume it's 0)
  - Get rid of IPA_ROUTE_AP_MIN (just assume it's IPA_ROUTE_MODEM_COUNT)
And finally:
  - Open-code IPA_ROUTE_AP_COUNT and remove its definition

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-25 11:15:18 +02:00
Jeff Johnson
c0facc045a net: ipa: Make QMI message rules const
Commit ff6d365898 ("soc: qcom: qmi: use const for struct
qmi_elem_info") allows QMI message encoding/decoding rules to be
const, so do that for IPA.

Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Reviewed-by: Sibi Sankar <quic_sibis@quicinc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-21 12:39:16 +01:00
Kees Cook
36875a063b net: ipa: Proactively round up to kmalloc bucket size
Instead of discovering the kmalloc bucket size _after_ allocation, round
up proactively so the allocation is explicitly made for the full size,
allowing the compiler to correctly reason about the resulting size of
the buffer through the existing __alloc_size() hint.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: netdev@vger.kernel.org
Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/lkml/4d75a9fd-1b94-7208-9de8-5a0102223e68@ieee.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221018092724.give.735-kees@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-10-20 10:13:54 +02:00
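
The pattern applied by 36875a063b, sketched in kernel style rather
than copied from the driver: round the requested size up front with
kmalloc_size_roundup() so the size the caller tracks matches what the
allocator actually returns.

    #include <linux/slab.h>

    /* Illustrative only: size up the allocation before, not after, kmalloc(). */
    void *pool_alloc(size_t count, size_t elem_size, size_t *actual_size)
    {
            size_t size = kmalloc_size_roundup(count * elem_size);
            void *p = kmalloc(size, GFP_KERNEL);

            if (p)
                    *actual_size = size;  /* full bucket size, visible to the compiler */
            return p;
    }
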
Alex Elder
a4388da51a net: ipa: update copyrights
Some source files state copyright dates that are earlier than the
last modification of the file.  Change the copyright year to 2022 in
all such cases.

Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220930224549.3503434-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-03 16:49:20 -07:00
Alex Elder
ace5dc6162 net: ipa: update comments
This patch just updates comments throughout the IPA code.

Transaction state is now tracked using indexes into an array rather
than linked lists, and a few comments refer to the "old way" of
doing things.  The description of how transactions are used was
changed to refer to "operations" rather than "commands", to
(hopefully) remove a possible ambiguity.

IPA register offsets and fields are now handled differently as well,
and the register documentation is updated to better describe the
code.

A few minor updates to comments were made (e.g., adding a missing
word, fixing a typo or punctuation, etc.).

Finally, the local macro atomic_dec_not_zero() is no longer used, so
it is deleted.

Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220930224527.3503404-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-03 16:49:05 -07:00
Jakub Kicinski
b48b89f9c1 net: drop the weight argument from netif_napi_add
We tell driver developers to always pass NAPI_POLL_WEIGHT
as the weight to netif_napi_add(). This may be confusing
to newcomers, drop the weight argument, those who really
need to tweak the weight can use netif_napi_add_weight().

Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for CAN
Link: https://lore.kernel.org/r/20220927132753.750069-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28 18:57:14 -07:00
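
A kernel-style sketch of the API change in b48b89f9c1, using a
hypothetical driver setup function rather than IPA code:

    #include <linux/netdevice.h>

    static int my_poll(struct napi_struct *napi, int budget)
    {
            return 0;  /* illustrative no-op poll handler */
    }

    static void my_napi_setup(struct net_device *ndev, struct napi_struct *napi)
    {
            /* Before: netif_napi_add(ndev, napi, my_poll, NAPI_POLL_WEIGHT); */

            /* After: the default weight is implied. */
            netif_napi_add(ndev, napi, my_poll);

            /* Drivers that really need a non-default weight now spell it out: */
            /* netif_napi_add_weight(ndev, napi, my_poll, 32); */
    }
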
Alex Elder
181ca02026 net: ipa: define remaining IPA register fields
Define the fields for the ENDP_INIT_DEAGGR, ENDP_INIT_RSRC_GRP,
ENDP_INIT_SEQ, ENDP_STATUS, and ENDP_FILTER_ROUTER_HSH_CFG, and
IPA_IRQ_UC IPA registers for all supported IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_FIELDS() and IPA_REG_STRIDE_FIELDS() to specify the
field mask values defined for these registers, for each supported
version of IPA.

Use ipa_reg_encode() and ipa_reg_bit() to build up the values to be
written to these registers, remove an inline function and all the
*_FMASK symbols that are now no longer used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:52 -07:00
Alex Elder
216b409d09 net: ipa: define more IPA endpoint register fields
Define the fields for the ENDP_INIT_MODE, ENDP_INIT_AGGR,
ENDP_INIT_HOL_BLOCK_EN, and ENDP_INIT_HOL_BLOCK_TIMER IPA
registers for all supported IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_STRIDE_FIELDS() to specify the field mask values defined
for these registers, for each supported version of IPA.

Change aggr_time_limit_encode() and hol_block_timer_encode() so they
take an ipa_reg pointer, and use those register's fields to compute
their encoded results.  Have aggr_time_limit_encode() take an IPA
pointer rather than version, to match hol_block_timer_encode().

Use ipa_reg_encode(), ipa_reg_bit(), and ipa_reg_field_max() to
manipulate values to be written to these registers, remove the
definitions of the various inline functions and *_FMASK symbols that
are now no longer used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:51 -07:00
Alex Elder
4468a3448b net: ipa: define some IPA endpoint register fields
Define the fields for the ENDP_INIT_CTRL, ENDP_INIT_CFG, ENDP_INIT_NAT,
ENDP_INIT_HDR, and ENDP_INIT_HDR_EXT IPA registers for all supported
IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_STRIDE_FIELDS() to specify the field mask values defined
for these registers, for each supported version of IPA.

Move ipa_header_size_encoded() and ipa_metadata_offset_encoded() out
of "ipa_reg.h" and into "ipa_endpoint.c".  Change them so they take
an additional ipa_reg structure argument, and use ipa_reg_encode()
to encode the parts of the header size and offset prior to writing
to the register.  Change their names to be verbs rather than nouns.

Use ipa_reg_encode(), ipa_reg_bit(), and ipa_reg_field_max() to
manipulate values to be written to these registers, remove the
definition of the no-longer-used *_FMASK symbols.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:51 -07:00
Alex Elder
1c418c4a92 net: ipa: define resource group/type IPA register fields
Define the fields for the {SRC,DST}_RSRC_GRP_{01,23,45,67}_RSRC_TYPE
IPA registers for all supported IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_STRIDE_FIELDS() to specify the field mask values defined
for these registers, for each supported version of IPA.

Use ipa_reg_encode() to build up the values to be written to these
registers.

Remove the definition of the no-longer-used *_FMASK symbols.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:51 -07:00
Alex Elder
9265a4f0f0 net: ipa: define even more IPA register fields
Define the fields for the FLAVOR_0, IDLE_INDICATION_CFG,
QTIME_TIMESTAMP_CFG, TIMERS_XO_CLK_DIV_CFG and TIMERS_PULSE_GRAN_CFG
IPA registers for all supported IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_FIELDS() to specify the field mask values defined for
these registers, for each supported version of IPA.

Use ipa_reg_bit() and ipa_reg_encode() to build up the values to be
written to these registers.  Use ipa_reg_decode() to extract field
values from the FLAVOR_0 register.

Remove the definition of the no-longer-used *_FMASK symbols.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:51 -07:00
Alex Elder
b5c35fa470 net: ipa: define more IPA register fields
Define the fields for the LOCAL_PKT_PROC_CNTXT, COUNTER_CFG, and
IPA_TX_CFG IPA registers for all supported IPA versions.

Create enumerated types to identify fields for these IPA registers.
Use IPA_REG_FIELDS() to specify the field mask values defined for
these registers, for each supported version of IPA.

Use ipa_reg_bit() and ipa_reg_encode() to build up the values to be
written to these registers.  Remove the definition of the *_FMASK
symbols as well as proc_cntxt_base_addr_encoded(), because they are
no longer needed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:51 -07:00
Alex Elder
62b9c009a8 net: ipa: define some more IPA register fields
Define the fields for the SHARED_MEM_SIZE, QSB_MAX_WRITES,
QSB_MAX_READS, FILT_ROUT_HASH_EN, and FILT_ROUT_HASH_FLUSH IPA
registers for all supported IPA versions.

Create enumerated types to identify fields for these registers.  Use
IPA_REG_FIELDS() to specify the field mask values defined for these
registers, for each supported version of IPA.

Use ipa_reg_bit() and ipa_reg_encode() to build up the values to be
written to these registers rather than using the *_FMASK
preprocessor symbols.

Remove the definition of the now unused *_FMASK symbols.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
Alex Elder
479deb3298 net: ipa: define CLKON_CFG and ROUTE IPA register fields
Create the ipa_reg_clkon_cfg_field_id enumerated type, which
identifies the fields for the CLKON_CFG IPA register.  Add "CLKON_"
to a few short names to try to avoid name conflicts.  Create the
ipa_reg_route_field_id enumerated type, which identifies the fields
for the ROUTE IPA register.

Use IPA_REG_FIELDS() to specify the field mask values defined for
these registers, for each supported version of IPA.

Use ipa_reg_bit() and ipa_reg_encode() to build up the values to be
written to these registers rather than using the *_FMASK
preprocessor symbols.

Remove the definition of the now unused *_FMASK symbols.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
Alex Elder
12c7ea7dfd net: ipa: define COMP_CFG IPA register fields
Create the ipa_reg_comp_cfg_field_id enumerated type, which
identifies the fields for the COMP_CFG IPA register.

Use IPA_REG_FIELDS() to specify the field mask values defined for
this register, for each supported version of IPA.

Use ipa_reg_bit() to build up the value to be written to this
register rather than using the *_FMASK preprocessor symbols.

Remove the definition of the *_FMASK symbols, along with the inline
functions that were used to encode certain fields whose position
and/or width within the register was dependent on IPA version.

Take this opportunity to represent all one-bit fields using BIT(x)
rather than GENMASK(x, x).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
Alex Elder
a5ad8956f9 net: ipa: introduce ipa_reg field masks
Add register field descriptors to the ipa_reg structure.  A field in
a register is defined by a field mask, which is a 32-bit mask having
a single contiguous range of bits set.

For each register that has at least one field defined, an enumerated
type will identify the register's fields.  The ipa_reg structure for
that register will include an array fmask[] of field masks, indexed
by that enumerated type.  Each field mask defines the position and
bit width of a field.  An additional "fcount" records how many
fields (masks) are defined for a given register.

Introduce two macros to be used to define registers that have at
least one field.

Introduce a few new functions related to field masks.  The first
simply returns a field mask, given an IPA register pointer and field
mask ID.  A variant of that is meant to be used for the special case
of single-bit field masks.

Next, ipa_reg_encode(), identifies a field with an IPA register
pointer and a field ID, and takes a value to represent in that
field.  The result encodes the value in the appropriate place to be
stored in the register.  This is roughly modeled after the bitmask
operations (like u32_encode_bits()).

Another function (ipa_reg_decode()) similarly identifies a register
field, but the value supplied to it represents a full register
value.  The value encoded in the field is extracted from the value
and returned.  This is also roughly modeled after bitmask operations
(such as u32_get_bits()).

Finally, ipa_reg_field_max() returns the maximum value representable
by a field.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
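
The helpers introduced by a5ad8956f9 are modeled on u32_encode_bits()
and u32_get_bits().  A standalone sketch of the idea, with an
illustrative fmask[] layout rather than any real IPA register:

    #include <stdint.h>
    #include <stdio.h>

    /* One contiguous-bit mask per field ID, as in a register's fmask[] array. */
    static const uint32_t fmask[] = {
            [0] = 0x0000001f,  /* field 0: bits 4:0  */
            [1] = 0x00003f00,  /* field 1: bits 13:8 */
    };

    static uint32_t reg_encode(unsigned int id, uint32_t val)
    {
            return (val << __builtin_ctz(fmask[id])) & fmask[id];  /* like u32_encode_bits() */
    }

    static uint32_t reg_decode(unsigned int id, uint32_t reg_val)
    {
            return (reg_val & fmask[id]) >> __builtin_ctz(fmask[id]);  /* like u32_get_bits() */
    }

    int main(void)
    {
            uint32_t val = reg_encode(0, 7) | reg_encode(1, 42);

            printf("0x%08x -> %u, %u\n", val, reg_decode(0, val), reg_decode(1, val));
            return 0;
    }
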
Alex Elder
6a244b75cf net: ipa: introduce ipa_reg()
Create a new function that returns a register descriptor given its
ID.  Change ipa_reg_offset() and ipa_reg_n_offset() so they take a
register descriptor argument rather than an IPA pointer and register
ID.  Have them accept null pointers (and return an invalid 0 offset),
to avoid the need for excessive error checking.  (A warning is issued
whenever ipa_reg() returns 0).

Call ipa_reg() or ipa_reg_n() to look up information about the
register before calls to ipa_reg_offset() and ipa_reg_n_offset().
Delay looking up offsets until they're needed to read or write
registers.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
Alex Elder
82a0680745 net: ipa: use ipa_reg[] array for register offsets
Use the array of register descriptors assigned at initialization
time to determine the offset (and where used, stride) for IPA
registers.  Issue a warning if an offset is requested for a register
that's not valid for the current system.

Remove all IPA_REG_*_OFFSET macros, as well as inline static
functions that returned register offsets.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:50 -07:00
Alex Elder
07f120bcf7 net: ipa: add per-version IPA register definition files
Create a new subdirectory "reg", which contains a register
definition file for each supported version of IPA.  Each register
definition contains the register's offset, and for parameterized
registers, the stride (distance between consecutive instances of the
register).  Finally, it includes an all-caps printable register name.

In these files, each IPA version defines an array of IPA register
definition pointers, with unsupported registers defined with a null
pointer.  The array is indexed by the ipa_reg_id enumerated type.

At initialization time, the appropriate register definition array to
use is selected based on the IPA version, and assigned to a new
"regs" field in the IPA structure.

Extend ipa_reg_valid() so it fails if a valid register is not
defined.

This patch simply puts this infrastructure in place; the next will
use it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:49 -07:00
Alex Elder
6bfb753850 net: ipa: use IPA register IDs to determine offsets
Expose two inline functions that return the offset for a register
whose ID is provided; one of them takes an additional argument
that's used for registers that are parameterized.  These both use
a common helper function __ipa_reg_offset(), which just uses the
offset symbols already defined.

Replace all references to the offset macros defined for IPA
registers with calls to ipa_reg_offset() or ipa_reg_n_offset().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:49 -07:00
Alex Elder
98e2dd71a8 net: ipa: introduce IPA register IDs
Create a new ipa_reg_id enumerated type, which identifies each IPA
register with a symbolic identifier.  Use short names, but in some
cases (such as "BCR") add "IPA_" to the name to help avoid name
conflicts.

Create two functions that indicate register validity.  The first
concisely indicates whether a register is valid for a given version
of IPA, and if so, whether it is defined.  The second indicates
whether a register is valid for TX or RX endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27 18:42:49 -07:00
Alex Elder
92073b1648 net: ipa: encapsulate updating three more registers
Create a new function that encapsulates setting the BCR, TX_CFG, and
CLKON_CFG register values during hardware configuration.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
Alex Elder
1e5db0965e net: ipa: encapsulate updating the COUNTER_CFG register
Create a new function that encapsulates setting the counter
configuration register value for versions prior to IPA v4.5.
Create another small function to represent configuring hardware
timing regardless of version.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
Alex Elder
b24627b1d9 net: ipa: encapsulate setting the FILT_ROUT_HASH_EN register
Create a new function that encapsulates setting the register flag
that disables filter and routing table hashing for IPA v4.2.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
Alex Elder
73e0c9efb5 net: ipa: tidy up register enum definitions
Update a few enumerated type definitions in "ipa_reg.h" so that the
values assigned to each member align on the same column.  Where a
"TX" or "RX" (or both) comment is present, move that annotation into
a separate comment between the member name and its value.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
Alex Elder
21ab2078ff net: ipa: define BCR values using an enum
The backward compatibility register (BCR) has a set of bit flags
that indicate ways in which the IPA hardware should operate in a
backward compatible way.  The register is not supported starting
with IPA v4.5, and where it is supported, defined bits all have the
same numeric value.

Redefine these flags using an enumerated type, with each member's
value representing the bit position that encodes it in the BCR.
This replaces all of the single-bit field masks previously defined.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
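
A minimal sketch of the pattern described in 21ab2078ff: enum members
whose values are bit positions, with the register value built by
OR-ing BIT() of each flag a given IPA version needs (the flag names
below are placeholders, not the driver's):

    #include <stdint.h>
    #include <stdio.h>

    #define BIT(nr) (1U << (nr))

    /* Each member's value is the bit position that encodes it in the BCR. */
    enum bcr_compat_flag {
            BCR_FLAG_A = 0,
            BCR_FLAG_B = 1,
            BCR_FLAG_C = 2,
    };

    int main(void)
    {
            /* A hypothetical per-version value: OR together the flags it needs. */
            uint32_t bcr_val = BIT(BCR_FLAG_A) | BIT(BCR_FLAG_C);

            printf("BCR value 0x%08x\n", bcr_val);  /* 0x00000005 */
            return 0;
    }
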
Alex Elder
48395fa8e8 net: ipa: rearrange functions for similarity
Both aggr_time_limit_encode() and hol_block_timer_encode() figure
out how to encode a millisecond time value so it can be programmed
into a register.  Rearranging them a bit can make their similarity
more obvious, with both taking essentially the same form.

To do this:
  - Return 0 immediately in aggr_time_limit_encode() if the
    microseconds value supplied is zero.
  - Reverse the test at top of aggr_time_limit_encode(), so we
    compute and return the Qtime value in the "true" block,
    and compute the result the old way otherwise.
  - Open-code (and eliminate) hol_block_timer_qtime_encode() at the
    top of hol_block_timer_encode() in the case we use Qtimer.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:28 -07:00
Alex Elder
8be440e17b net: ipa: introduce ipa_qtime_val()
Create a new function ipa_qtime_val() which returns a value that
indicates what should be encoded for a register with a time field
expressed using Qtime.  Use it to factor out common code in
aggr_time_limit_encoded() and hol_block_timer_qtime_val().

Rename aggr_time_limit_encoded() and hol_block_timer_qtime_val() so
their names are both verbs ending in "encode".  Rename the "limit"
argument to the former to be "milliseconds" for consistency.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:27 -07:00
Alex Elder
a50d37b756 net: ipa: don't use u32p_replace_bits()
In two spots we use u32_replace_bits() to replace a set of bits in a
register while preserving the rest.  Both of those cases just zero
the bits being replaced, and this can be done more simply without
using that function.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-23 20:56:27 -07:00
Jakub Kicinski
0140a7168f Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
drivers/net/ethernet/freescale/fec.h
  7b15515fc1 ("Revert "fec: Restart PPS after link state change"")
  40c79ce13b ("net: fec: add stop mode support for imx8 platform")
https://lore.kernel.org/all/20220921105337.62b41047@canb.auug.org.au/

drivers/pinctrl/pinctrl-ocelot.c
  c297561bc9 ("pinctrl: ocelot: Fix interrupt controller")
  181f604b33 ("pinctrl: ocelot: add ability to be used in a non-mmio configuration")
https://lore.kernel.org/all/20220921110032.7cd28114@canb.auug.org.au/

tools/testing/selftests/drivers/net/bonding/Makefile
  bbb774d921 ("net: Add tests for bonding and team address list management")
  152e8ec776 ("selftests/bonding: add a test for bonding lladdr target")
https://lore.kernel.org/all/20220921110437.5b7dbd82@canb.auug.org.au/

drivers/net/can/usb/gs_usb.c
  5440428b3d ("can: gs_usb: gs_can_open(): fix race dev->can.state condition")
  45dfa45f52 ("can: gs_usb: add RX and TX hardware timestamp support")
https://lore.kernel.org/all/84f45a7d-92b6-4dc5-d7a1-072152fab6ff@tessares.net/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-22 13:02:10 -07:00
Alex Elder
cf412ec333 net: ipa: properly limit modem routing table use
IPA can route packets between IPA-connected entities.  The AP and
modem are currently the only such entities supported, and no routing
is required to transfer packets between them.

The number of entries in each routing table is fixed, and defined at
initialization time.  Some of these entries are designated for use
by the modem, and the rest are available for the AP to use.  The AP
sends a QMI message to the modem which describes (among other
things) information about routing table memory available for the
modem to use.

Currently the QMI initialization packet gives wrong information in
its description of routing tables.  What *should* be supplied is the
maximum index that the modem can use for the routing table memory
located at a given location.  The current code instead supplies the
total *number* of routing table entries.  Furthermore, the modem is
granted the entire table, not just the subset it's supposed to use.

This patch fixes this.  First, the ipa_mem_bounds structure is
generalized so its "end" field can be interpreted either as a final
byte offset, or a final array index.  Second, the IPv4 and IPv6
(non-hashed and hashed) table information fields in the QMI
ipa_init_modem_driver_req structure are changed to be ipa_mem_bounds
rather than ipa_mem_array structures.  Third, we set the "end" value
for each routing table to be the last index, rather than setting the
"count" to be the number of indices.  Finally, instead of allowing
the modem to use all of a routing table's memory, it is limited to
just the portion meant to be used by the modem.  In all versions of
IPA currently supported, that is IPA_ROUTE_MODEM_COUNT (8) entries.

Update a few comments for clarity.

Fixes: 530f9216a9 ("soc: qcom: ipa: AP/modem communications")
Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220913204602.1803004-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 08:11:13 -07:00
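
A tiny sketch of the bookkeeping change in cf412ec333, with made-up
numbers: the modem should be told the last index it may use, not the
table's total entry count:

    #include <stdio.h>

    #define IPA_ROUTE_MODEM_COUNT   8  /* modem's share of each routing table */

    int main(void)
    {
            unsigned int route_count = 15;  /* total entries in the table */

            /* Wrong: report the total entry count, covering the whole table. */
            printf("old: count = %u\n", route_count);

            /* Right: report only the last index the modem may actually use. */
            printf("new: end = %u\n", IPA_ROUTE_MODEM_COUNT - 1);
            return 0;
    }
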
Alex Elder
dae4af6bf2 net: ipa: fix two symbol names
All field mask symbols are defined with a "_FMASK" suffix, but
EOT_COAL_GRANULARITY and DRBIP_ACL_ENABLE are defined without one.
Fix that.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:47 -07:00
Alex Elder
a14d593724 net: ipa: update sequencer definition constraints
Starting with IPA v4.5, replication is done differently from before,
and as a result the "replication" portion of the how the sequencer
is specified must be zero.

Add a check for the configuration data failing that requirement, and
only update the sequencer type value when it's supported.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:47 -07:00
Alex Elder
9eefd2fb96 net: ipa: don't reuse variable names
In ipa_endpoint_init_hdr(), as well as ipa_endpoint_init_hdr_ext(),
a top-level automatic variable named "offset" is used to represent
the offset of a register.

However, deeper within each of those functions is *another*
definition of a local variable with the same name, representing
something else.  Scoping rules ensure the result is what was
intended, but this variable name reuse is bad practice and makes
the code confusing.

Fix this by naming the inner variable "off".  Use "off" instead of
"checksum_offset" in ipa_endpoint_init_cfg() for consistency.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:47 -07:00
Alex Elder
8b3cb084b2 net: ipa: move and redefine ipa_version_valid()
Move the definition of ipa_version_valid(), making it a static
inline function defined together with the enumerated type in
"ipa_version.h".  Define a new count value in the type.

Rename the function to be ipa_version_supported(), and have it
return true only if the IPA version supplied is explicitly supported
by the driver.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:47 -07:00
Alex Elder
bb788de30a net: ipa: move the definition of gsi_ee_id
Move the definition of the gsi_ee_id enumerated type out of "gsi.h"
and into "ipa_version.h".  That latter header file isolates the
definition of the ipa_version enumerated type, allowing it to be
included in both IPA and GSI code.  We have the same requirement for
gsi_ee_id, and moving it here makes it easier to get only that
definition without everything else defined in "gsi.h".

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:46 -07:00
Alex Elder
5ea4285829 net: ipa: don't define unneeded GSI register offsets
Each GSI execution environment (EE) is able to access many of the
GSI registers associated with the other EEs.  A block of GSI
registers is contained within a region of memory, and an EE's
register offset can be determined by adding the register's base
offset to the product of the EE ID and a fixed constant.

Despite this possibility, the AP IPA code *never* accesses any GSI
registers other than its own.  So there's no need to define the
macros that compute register offsets for other EEs.

Redefine the AP access macros to compute the offset the way the more
general "any EE" macro would, and get rid of the unneeded macros.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-20 07:45:46 -07:00
Alex Elder
019e37eaef net: ipa: don't have gsi_channel_update() return a value
If it finds no completed transactions, gsi_channel_trans_complete()
calls gsi_channel_update() to check hardware.  If new transactions
have completed, gsi_channel_update() records that, then calls
gsi_channel_trans_complete() to return the first of those found.
This recursion won't go any further, but can be avoided if we
have gsi_channel_update() only be responsible for updating state
after accessing hardware.

Change gsi_channel_update() so it simply checks for and handles
new completions, without returning a value.  If it needs to call
that function, have gsi_channel_trans_complete() determine whether
there are new transactions available after the update.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-09 11:45:25 +01:00
Alex Elder
e0e3406c60 net: ipa: update channel in gsi_channel_trans_complete()
Have gsi_channel_trans_complete() update the known state from
hardware rather than doing so in gsi_channel_poll_one().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-09 11:45:25 +01:00
Alex Elder
d338ae28d8 net: ipa: kill all other transaction lists
None of the transaction lists are actually needed any more, because
transaction IDs (which have been shown to be equivalent) are used
instead.  So we can remove all of them, as well as the spinlock
that protects updates to them.

Not requiring a lock simplifies gsi_trans_free() as well; we only
need to check the reference count once to decide whether we've hit
the last reference.

This makes the links field in the gsi_trans structure unused, so get
rid of that as well.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-09 11:45:25 +01:00
Alex Elder
11902b41f2 net: ipa: kill the allocated transaction list
The only place the trans_info->alloc list is used is when
initializing it, when adding a transaction to it when allocation
finishes, and when moving a transaction from that list to the
committed list.

We can just skip putting a transaction on the allocated list, and
add it (rather than move it) to the committed list when it is
committed.

One additional caveat is that an allocated transaction that's
committed without any TREs added will be immediately freed.  Because
we aren't adding allocated transactions to a list any more, the
list links need to be initialized to ensure they're valid at the
time list_del() is called for the transaction.

Then we can safely eliminate the allocated transaction list.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-09 11:45:25 +01:00
Alex Elder
0c126ec3dd net: ipa: always use transaction IDs instead of lists
In gsi_channel_trans_complete(), use the completed and pending IDs
to determine whether there are any transactions in completed state.

Similarly, in gsi_channel_trans_cancel_pending(), use the pending
and committed IDs to mark pending transactions cancelled.  Rearrange
the logic a bit there for a simpler result.

This removes the only user of list_last_entry_or_null(), so get rid
of that macro.

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-09 11:45:25 +01:00
Alex Elder
8672bab7eb net: ipa: verify a few more IDs
The completed transaction list is used in gsi_channel_trans_complete()
to return the next transaction in completed state.

Add some temporary checks to verify the transaction indicated by the
completed ID matches the one first in this list.

Similarly, we use the pending and completed transaction lists when
cancelling pending transactions in gsi_channel_trans_cancel_pending().

Add temporary checks there to verify the transactions indicated by
IDs match those tracked by these lists.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:02 +01:00
Alex Elder
4601e75596 net: ipa: further simplify gsi_channel_trans_last()
Do a little more refactoring in gsi_channel_trans_last() to simplify
it further.  The resulting code should behave exactly as before.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:02 +01:00
Alex Elder
e68d1d1591 net: ipa: simplify gsi_channel_trans_last()
Using a little logic we can simplify gsi_channel_trans_last().

The first condition in that function looks like this:
    if (trans_info->allocated_id != trans_info->free_id)
And if that's false, we proceed to the next one:
    if (trans_info->committed_id != trans_info->allocated_id)

Failure of the first test implies:
    trans_info->allocated_id == trans_info->free_id
And therefore, the second one can be rewritten this way:
    if (trans_info->committed_id != trans_info->free_id)

Substituting free_id for allocated_id and committed_id can also be
done in the code blocks executed when these conditions yield true.
The net result is that all three blocks for TX endpoints can be
consolidated into just one.

The two blocks of code at the end of that function (used for both TX
and RX channels) can be similarly consolidated into a single block.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:02 +01:00
Alex Elder
897c0ce665 net: ipa: use IDs exclusively for last transaction
Always use transaction IDs when finding the "last" transaction to
await when quiescing a channel.  This basically extends what was
done in the previous patch to all other transaction state IDs.

As a result we are no longer updating any transaction lists inside
gsi_channel_trans_last(), so there's no need to take the spinlock.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:01 +01:00
Alex Elder
c30623ea0b net: ipa: use IDs for last allocated transaction
Use the allocated and free transaction IDs to determine whether the
"last" transaction used for quiescing a channel is in allocated
state.  The last allocated transaction that has not been committed
(if any) immediately precedes the first free transaction in the
transaction array.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:01 +01:00
Alex Elder
b2abe33d23 net: ipa: rework last transaction determination
When quiescing a channel, we find the "last" transaction, which is
the latest one to have been allocated.  (New transaction allocation
will have been prevented by the time this is called.)

Currently we do this by looking for the first non-empty transaction
list in each state, then returning the last entry from that list.
Instead, determine the last entry in each list (if any) and return
that entry if found.

Temporarily (locally) introduce list_last_entry_or_null() as a
helper for this, mirroring list_first_entry_or_null().  This macro
definition will be removed by an upcoming patch.

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-05 12:47:01 +01:00
Alex Elder
fd3bd0398a net: ipa: track polled transactions with an ID
Add a transaction ID to track the first element in the transaction
array that has been polled.  Advance the ID when we are releasing a
transaction.

Temporarily add warnings that verify that the first polled
transaction tracked by the ID matches the first element on the
polled list, both when polling and freeing.

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
Alex Elder
949cd0b5c2 net: ipa: track completed transactions with an ID
Add a transaction ID field to track the first element in the
transaction array that has completed but has not yet been polled.

Advance the ID when we are processing a transaction in the NAPI
polling loop (where completed transactions become polled).

Temporarily add warnings that verify that the first completed
transaction tracked by the ID matches the first element on the
completed list, both when pending and completing.

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
Alex Elder
eeff7c14e0 net: ipa: track pending transactions with an ID
Add a transaction ID field to track the first element in the
transaction array that is pending (sent to hardware) but not yet
complete.  Advance the ID when a completion event for a channel
indicates that transactions have completed.

Temporarily add warnings that verify that the first pending
transaction tracked by the ID matches the first element on the
pending list, both when pending and completing, as well as when
resetting the channel.

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
Alex Elder
fc95d958e2 net: ipa: track committed transactions with an ID
Add a transaction ID field to track the first element in a channel's
transaction array that has been committed, but not yet passed to the
hardware.  Advance the ID when the hardware is notified via doorbell
that TREs from a transaction are ready for consumption.

Temporarily add warnings that verify that the first committed
transaction tracked by the ID matches the first element on the
committed list, both when committing and pending (at doorbell).

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
Alex Elder
41e2a2c054 net: ipa: track allocated transactions with an ID
Transactions for a channel are now managed in an array, with a free
transaction ID indicating which is the next one free.

Add another transaction ID field to track the first element in the
array that has been allocated.  Advance it when a transaction is
committed (because that is when that transaction leaves allocated
state).

Temporarily add warnings that verify that the first allocated
transaction tracked by the ID matches the first element on the
allocated list, both when allocating and committing a transaction.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
Alex Elder
12382d1167 net: ipa: use an array for transactions
Transactions are always allocated one at a time.  The maximum number
of them we could ever need occurs if each TRE is assigned to a
transaction.  So a channel requires no more transactions than the
number of TREs in its transfer ring.  That number is known to be a
power-of-2 less than 65536.

The transaction pool abstraction is used for other things, but for
transactions we can use a simple array of transaction structures,
and use a free index to indicate which entry in the array is the
next one free for allocation.

By having the number of elements in the array be a power-of-2, we
can use an ever-incrementing 16-bit free index, and use it modulo
the array size.  Distinguish a "trans_id" (whose value can exceed
the number of entries in the transaction array) from a "trans_index"
(which is less than the number of entries).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-02 12:08:44 +01:00
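
A standalone sketch of the trans_id/trans_index distinction from
12382d1167: an ever-incrementing 16-bit ID reduced modulo a power-of-2
array size, which stays consistent even across 16-bit wraparound (the
array size and names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define TRANS_COUNT     64  /* power of 2, no larger than the TRE ring */

    int main(void)
    {
            uint16_t free_id = 65530;  /* ever-incrementing ID, near wraparound */

            for (int i = 0; i < 8; i++) {
                    uint16_t trans_id = free_id++;                  /* may exceed TRANS_COUNT */
                    uint16_t trans_index = trans_id % TRANS_COUNT;  /* never does */

                    printf("id %5u -> index %2u\n", trans_id, trans_index);
            }
            return 0;
    }
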
Alex Elder
b8d4380365 net: ipa: don't assume SMEM is page-aligned
In ipa_smem_init(), a Qualcomm SMEM region is allocated (if needed)
and then its virtual address is fetched using qcom_smem_get().  The
physical address associated with that region is also fetched.

The physical address is adjusted so that it is page-aligned, and an
attempt is made to update the size of the region to compensate for
any non-zero adjustment.

But that adjustment isn't done properly.  The physical address is
aligned twice, and as a result the size is never actually adjusted.

Fix this by *not* aligning the "addr" local variable, and instead
making the "phys" local variable be the adjusted "addr" value.

Fixes: a0036bb413 ("net: ipa: define SMEM memory region for IPA")
Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220818134206.567618-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-22 18:10:48 -07:00
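
The adjustment described by b8d4380365, as a standalone sketch with
made-up addresses (PAGE_SIZE and the variable names are illustrative,
not the driver's exact code): align the physical address down to a
page boundary and grow the size by the same amount.

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    int main(void)
    {
            unsigned long addr = 0x46001234;  /* SMEM item's physical address */
            size_t size = 0x2000;             /* SMEM item's size */

            /* Align the physical address down; grow the size to compensate. */
            unsigned long phys = addr & ~(PAGE_SIZE - 1);
            size += addr - phys;

            printf("phys 0x%lx size 0x%zx\n", phys, size);  /* 0x46001000 0x2234 */
            return 0;
    }
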
Jason Wang
9221b2898a net: ipa: Fix comment typo
The word `is' is duplicated in the comment; remove one.

Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-12 11:28:14 +01:00
Jakub Kicinski
272ac32f56 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-28 18:21:16 -07:00
Slark Xiao
2540d3c999 net: ipa: Fix typo 'the the' in comment
Replace 'the the' with 'the' in the comment.

Signed-off-by: Slark Xiao <slark_xiao@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-07-25 10:52:28 +01:00
Paolo Abeni
32d00f62db net: ipa: fix build
After commit 2c7b9b936b ("net: ipa: move configuration data files
into a subdirectory"), build of the ipa driver fails with the
following error:

drivers/net/ipa/data/ipa_data-v3.1.c:9:10: fatal error: gsi.h: No such file or directory

After the mentioned commit, all the files included by the configuration
data files are in the parent directory. Fix the issue by updating the
include path.

Fixes: 2c7b9b936b ("net: ipa: move configuration data files into a subdirectory")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Link: https://lore.kernel.org/r/7105112c38cfe0642a2d9e1779bf784a7aa63d16.1658411666.git.pabeni@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-21 16:53:26 -07:00
Alex Elder
2c7b9b936b net: ipa: move configuration data files into a subdirectory
Reduce the clutter in the main IPA source directory by creating a
new "data" subdirectory, and locating all of the configuration data
files in there.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:05:41 -07:00
Alex Elder
ec2ea5e06c net: ipa: list supported IPA versions in the Makefile
Create a variable in the Makefile listing the IPA versions supported
by the driver.  Use that to create the list of configuration data
object files used (rather than listing them all individually).

Add a SPDX license comment.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:05:41 -07:00
Alex Elder
616c4a83b6 net: ipa: fix an outdated comment
Since commit 8797972aff ("net: ipa: remove command info pool"),
we don't allocate "command info" entries for command channel
transactions.  Fix a comment that seems to suggest we still do.
(Even before that commit, the comment was out of place.)

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:04:36 -07:00
Alex Elder
3c91c86d1b net: ipa: report when the driver has been removed
When the IPA driver has completed its initialization and setup
stages, it emits a brief message to the log.  Add a small message
that reports when it has been removed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:04:36 -07:00
Alex Elder
4d8996cbee net: ipa: skip some cleanup for unused transactions
In gsi_trans_free(), there's no point in ipa_gsi_trans_release() if
a transaction is unused.  No used TREs means no IPA layer resources
to clean up.  So only call ipa_gsi_trans_release() if at least one
TRE was used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:04:36 -07:00
Alex Elder
4920065888 net: ipa: rearrange transaction initialization
The transaction map is really associated with the transaction pool;
move its definition earlier in the gsi_trans_info structure.

Rearrange initialization in gsi_channel_trans_init() so it
sets the tre_avail value first, then initializes the transaction
pool, and finally allocates the transaction map.

Update comments.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:04:36 -07:00
Alex Elder
b63f507c06 net: ipa: add a transaction committed list
We currently put a transaction on the pending list when it has
been committed.  But until the channel's doorbell rings, these
transactions aren't actually "owned" by the hardware yet.

Add a new "committed" state (and list), to represent transactions
that have been committed but not yet sent to hardware.  Define
"pending" to mean committed transactions that have been sent
to hardware but have not yet completed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:04:35 -07:00
Alex Elder
d79e4164d0 net: ipa: add an endpoint device attribute group
Create a new attribute group meant to provide a single place that
defines endpoint IDs that might be needed by user space.  Not all
defined endpoints are presented, and only those that are defined
will be made visible.

The new attributes use "extended" device attributes to hold endpoint
IDs, which is a little more compact and efficient.  Reimplement the
existing modem endpoint ID attribute files using common code.

Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220719191639.373249-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-20 21:03:26 -07:00
Alex Elder
5fb859f79f net: ipa: initialize ring indexes to 0
When a GSI channel is initially allocated, and after it has been
reset, the hardware assumes its ring index is 0.  And although we
do initialize channels this way, the comments in the IPA code don't
really explain this.  For event rings, it doesn't matter what value
we use initially, so using 0 is just fine.

Add some information about the assumptions made by hardware above
the definition of the gsi_ring structure in "gsi.h".

Zero the index field for all rings (channel and event) when the ring
is allocated.  As a result, that function initializes all fields in
the structure.

Stop zeroing the index at the top of gsi_channel_program().  Initially
we'll use the index value set when the channel ring was allocated.
And we'll explicitly zero the index value in gsi_channel_reset()
before programming the hardware, adding a comment explaining why
it's required.

For event rings, use the index initialized by gsi_ring_alloc()
rather than 0 when ringing the doorbell in gsi_evt_ring_program().
(It'll still be zero, but we won't assume that to be the case.)

Use a local variable in gsi_evt_ring_program() that represents the
address of the event ring's ring structure.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-07-20 11:12:20 +01:00
Jiang Jian
7c0d97e4b6 net: ipa: remove unexpected word "the"
There is an unexpected word "the" in the comments that needs to be removed.

Signed-off-by: Jiang Jian <jiangjian@cdjrlc.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220621085001.61320-1-jiangjian@cdjrlc.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-22 18:20:26 -07:00
Alex Elder
81765eeac1 net: ipa: move more code out of gsi_channel_update()
Move the processing done for TX channels in gsi_channel_update()
into gsi_evt_ring_rx_update().  The called function is called for
both RX and TX channels, so rename it to be gsi_evt_ring_update().
As a result, this code no longer assumes events in an event ring are
associated with just one channel.

Because all events in a ring are handled in that function, we can
move the call to gsi_trans_move_complete() there, and can ring the
event ring doorbell there as well after all new events in the ring
have been processed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-16 20:44:04 -07:00
Alex Elder
9f1c3ad654 net: ipa: call gsi_evt_ring_rx_update() unconditionally
When an RX transaction completes, we update the trans->len field to
contain the actual number of bytes received.  This is done in a loop
in gsi_evt_ring_rx_update().

Change that function so it checks the data transfer direction
recorded in the transaction, and only updates trans->len for RX
transfers.

Then call it unconditionally.  This means events for TX endpoints
will run through the loop without otherwise doing anything, but
this will change shortly.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-16 20:44:04 -07:00
Alex Elder
2f48fb0edc net: ipa: pass GSI pointer to gsi_evt_ring_rx_update()
The only reason the event ring's channel pointer is needed in
gsi_evt_ring_rx_update() is so we can get at its GSI pointer.

We can pass the GSI pointer as an argument, along with the event
ring ID, and thereby avoid using the event ring channel pointer.
This is another step toward no longer assuming an event ring
services a single channel.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-16 20:44:03 -07:00
Alex Elder
8eec783195 net: ipa: don't pass channel when mapping transaction
Change gsi_channel_trans_map() so it derives the channel used from
the transaction.  Pass the index of the *first* TRE used by the
transaction, and have the called function account for the fact that
the last one used is what's important.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-16 20:44:03 -07:00
Alex Elder
dd5a046cbb net: ipa: don't assume one channel per event ring
In gsi_evt_ring_rx_update(), use gsi_event_trans() repeatedly
to find the transaction associated with an event, rather than
assuming consecutive events are associated with the same channel.
This removes the only caller of gsi_trans_pool_next(), so get rid
of it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-06-16 20:44:03 -07:00
Alex Elder
c5bddecbb9 net: ipa: rework gsi_channel_tx_update()
Rename gsi_channel_tx_update() to be gsi_trans_tx_completed(), and
pass it just the transaction pointer, deriving the channel from the
transaction.  Update the comments above the function to provide a
more concise description of how statistics for TX endpoints are
maintained and used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
dbad2fa719 net: ipa: stop counting total RX bytes and transactions
In gsi_evt_ring_rx_update(), we update each transaction so its len
field reflects the actual number of bytes received.  In the process,
the total number of transactions and bytes processed on the channel
are summed, and added to a running total for the channel.

But we don't actually use those running totals for RX endpoints.
They're maintained for TX channels to support CoDel when they are
associated with a "real" network device.

So stop maintaining these totals for RX endpoints, and update the
comment where the fields are defined to make it clear they're only
valid for TX channels.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
65d39497fa net: ipa: simplify TX completion statistics
When a TX request is issued, its channel's accumulated byte and
transaction counts are recorded.  This currently does *not* take
into account the transaction being committed.

Later, when the transaction completes, the number of bytes and
transactions that have completed since the transaction was committed
are reported to the network stack.  The transaction and its byte
count are accounted for at that time.

Instead, record the transaction and its bytes in the counts recorded
at commit time.  This avoids the need to do so when the transaction
completes, and provides a (small) simplification of that code.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
4e0f28e9ee net: ipa: introduce gsi_trans_tx_committed()
Create a new function that encapsulates recording information needed
for TX channel statistics when a transaction is committed.

Record the accumulated length in the transaction before the call
(for both RX and TX), so it can be used when updating TX statistics.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
3eeabea6c8 net: ipa: rename two transaction fields
There are two fields in a GSI transaction that keep track of TRE
counts.  The first represents the number of TREs reserved for the
transaction in the TRE ring; that's currently named "tre_count".
The second is the number of TREs that are actually *used* by the
transaction at the time it is committed.

Rename the "tre_count" field to be "rsvd_count", to make its meaning
a little more specific.  The "_count" is present in the name mainly
to avoid interpreting it as a reserved (not-to-be-used) field.  This
name also distinguishes it from the "tre_count" field associated
with a channel.

Rename the "used" field to be "used_count", to match the convention
used for reserved TREs.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
2295947bda net: ipa: use "tre_ring" for all TRE ring local variables
All local variables that represent event rings are named "ring".

All but two functions that represent a channel's TRE ring with a
local variable use the name "tre_ring".  For consistency, use that
name in the two functions that don't fit the pattern.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-15 09:07:58 +01:00
Alex Elder
bcec9ecbaf net: ipa: derive channel from transaction
In gsi_channel_tx_queued(), we report when a transaction gets passed
to hardware.  Change that function so it takes a transaction rather
than a channel as its argument, and derive the channel from the
transaction.  Rename the function accordingly.

Delete the header comments above the function definition; the ones
above the declaration in "gsi_private.h" should suffice.  In
addition, the comments above gsi_channel_tx_update() do a fine job
of explaining what's going on.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
7dd9558fed net: ipa: determine channel from event
Each event in an event ring describes the TRE whose completion
caused the event.  Currently, every event ring is dedicated to a
single channel, so the channel is easily derived from the event
ring.

An event ring can actually be shared by more than one channel
though, and to distinguish events for one channel from another, the
event structure contains a field indicating which channel the event
is associated with.

In gsi_event_trans(), use the channel ID in an event to determine
which channel the event is for.  This makes the channel pointer now
passed to that function irrelevant; pass the GSI pointer to that
function instead.

And although it shouldn't happen, warn if an event arrives that
records a channel ID that's not in use, or if the event does not
have a transaction associated with it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
983a1a3081 net: ipa: simplify endpoint transaction completion
When a GSI transaction completes, ipa_endpoint_trans_complete() is
eventually called.  That handles TX and RX completions separately,
but ipa_endpoint_tx_complete() is a no-op.

Instead, have ipa_endpoint_trans_complete() return immediately for a
TX transaction, and incorporate code from ipa_endpoint_rx_complete()
to handle RX transactions.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
317595d2ce net: ipa: rename endpoint->trans_tre_max
The trans_tre_max field of the IPA endpoint structure is only used
to limit the number of fragments allowed for an SKB being prepared
for transmission.  Recognizing that, rename the field to skb_frag_max,
and reduce its value by 1.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
88e03057e4 net: ipa: rename channel->tlv_count
Each GSI channel has a TLV FIFO of a certain size, specified in the
configuration data for an AP channel.  That size dictates the
maximum number of TREs that are allowed in a single transaction.

The only way that value is used after initialization is as a limit
on the number of TREs in a transaction; calling it "tlv_count"
isn't helpful, and in fact gsi_channel_trans_tre_max() exists to
sort of abstract it.

Instead, rename the channel->tlv_count field to trans_tre_max, and get
rid of the helper function.  Update a couple of comments as well.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
92f78f81ac net: ipa: verify command channel TLV count
In commit 8797972aff ("net: ipa: remove command info pool"), the
maximum number of IPA commands that would be sent in a single
transaction was defined.  That number can't exceed the size of the
TLV FIFO on the command channel, and we can check that at runtime.

To add this check, pass a new flag to gsi_channel_data_valid() to
indicate the channel being checked is being used for IPA commands.
Knowing that we can also verify the channel direction is correct.

Use a new local variable that refers to the command-specific portion
of the data being checked.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13 12:01:58 +01:00
Alex Elder
70132763d5 net: ipa: fix page free in ipa_endpoint_replenish_one()
Currently the (possibly compound) pages used for receive buffers are
freed using __free_pages().  But according to this comment above the
definition of that function, that's wrong:
    If you want to use the page's reference count to decide
    when to free the allocation, you should allocate a compound
    page, and use put_page() instead of __free_pages().

Convert the call to __free_pages() in ipa_endpoint_replenish_one()
to use put_page() instead.
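
A rough sketch of the pattern (not the driver's exact code; function
names are placeholders): the buffer is allocated as a compound page and
later released by dropping a reference, so the page is only freed once
the last holder of a reference lets go.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Illustrative: receive buffers are allocated as compound pages */
    static struct page *rx_buffer_alloc(unsigned int order)
    {
            return alloc_pages(GFP_KERNEL | __GFP_COMP, order);
    }

    /* Drop our reference; the page is freed only when the last reference goes */
    static void rx_buffer_release(struct page *page)
    {
            if (page)
                    put_page(page);
    }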

Fixes: 6a606b9015 ("net: ipa: allocate transaction in replenish loop")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-27 18:29:50 -07:00
Alex Elder
155c0c90bc net: ipa: fix page free in ipa_endpoint_trans_release()
Currently the (possibly compound) pages used for receive buffers are
freed using __free_pages().  But according to this comment above the
definition of that function, that's wrong:
    If you want to use the page's reference count to decide when
    to free the allocation, you should allocate a compound page,
    and use put_page() instead of __free_pages().

Convert the call to __free_pages() in ipa_endpoint_trans_release()
to use put_page() instead.

Fixes: ed23f02680 ("net: ipa: define per-endpoint receive buffer size")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-27 18:29:50 -07:00
Alex Elder
a224bd4b88 net: ipa: use data space for command opcodes
The 64-bit data field in a transaction is not used for commands.
And the opcode array is *only* used for commands.  They're
(currently) the same size; save a little space in the transaction
structure by enclosing the two fields in a union.
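
Sketch of the idea (field and constant names here are placeholders, not
the exact driver definitions): since a transaction never uses the data
value and the command opcode array at the same time, the two can
overlay each other without growing the structure.

    #include <linux/types.h>

    #define CMD_TRE_MAX	8	/* placeholder per-transaction command limit */

    struct example_trans {
            /* ... other transaction fields ... */
            union {				/* never used together */
                    u64 data;			/* non-command transactions */
                    u8 cmd_opcode[CMD_TRE_MAX];	/* command transactions only */
            };
    };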

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
8797972aff net: ipa: remove command info pool
The ipa_cmd_info structure now contains only one field, and it's an
enumerated type whose values all fit in 8 bits.  Currently we'll
never use more than 8 TREs in a command transaction, and we can
represent that number of command opcodes in the same space as a 64
bit pointer to an ipa_cmd_info structure.

Define IPA_COMMAND_TRANS_TRE_MAX as the maximum number of TREs that
can be in a command transaction.  Replace the info pointer in a
transaction with a fixed-size array named cmd_opcode[] of that many
bytes.  Store the opcode in this array when adding a command TRE to
a transaction, as was done previously for the info array.  This
makes the ipa_cmd_info unused, so get rid of it.

When committing an immediate command transaction, use the channel's
Boolean command flag to determine whether to fill in the opcode,
which will be taken (as before) from the array in the transaction.

This makes the command info pool unnecessary, so get rid of it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
4de284b72e net: ipa: remove command direction argument
We no longer use the direction argument for gsi_trans_cmd_add(), so
get rid of it in its definition, and in its seven callers.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
7ffba3bdf7 net: ipa: get rid of ipa_cmd_info->direction
The direction field of the ipa_cmd_info structure is set, but never
used.  It seems it might have been used for the DMA_SHARED_MEM
immediate command, but the DIRECTION flag is set based on the value
of the passed-in direction flag there.

Anyway, remove this unused field from the ipa_cmd_info structure.
This is done as a separate patch to make it very obvious that it's
not required.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
2091c79ac4 net: ipa: count the number of modem TX endpoints
In ipa_endpoint_modem_exception_reset_all(), a high estimate was
made of the number of endpoints that need their status register
updated.  We only used what was needed, so the high estimate didn't
matter much.

However the next few patches are going to limit the number of
commands in a single transaction, and the overestimate would exceed
that.  So count the number of modem TX endpoints at initialization
time, and use it in ipa_endpoint_modem_exception_reset_all().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
d15180b4ea net: ipa: kill gsi_trans_commit_wait_timeout()
Since the beginning gsi_trans_commit_wait_timeout() has existed to
provide a way to allow waiting a limited time for a transaction
to complete.  But that function has never been used.

In fact, there is no use for this function, because a transaction
committed to hardware should *always* complete.  The only reason it
might not complete is if there were a hardware failure, or perhaps a
system configuration error.

Furthermore, if a timeout ever did occur, the IPA hardware would be
in an indeterminate state, from which there is no recovery.  It
would require some sort of complete IPA reset, and would require the
participation of the modem, and at this time there is no such
sequence defined.

So get rid of the definition of gsi_trans_commit_wait_timeout(), and
update a few comments accordingly.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
beb90cba60 net: ipa: specify RX aggregation time limit in config data
Don't assume that a 500 microsecond time limit should be used for
all receive endpoints that support aggregation.  Instead, specify
the time limit to use in the configuration data.

Set a 500 microsecond limit for all existing RX endpoints, as before.

Checking for overflow for the time limit field is a bit complicated.
Rather than duplicate a lot of code in ipa_endpoint_data_valid_one(),
call WARN() if any value is found to be too large when encoding it.
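
A minimal sketch of that style of check (the helper name and the field
width are made up for illustration): rather than validating every value
up front, warn loudly at encode time if a value can't be represented in
the register field.

    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/bug.h>

    #define AGGR_TIME_LIMIT_FMASK	GENMASK(4, 0)	/* placeholder width */

    /* Illustrative: encode a time limit, warning if the field can't hold it */
    static u32 aggr_time_limit_encode(u32 limit)
    {
            WARN(limit > FIELD_MAX(AGGR_TIME_LIMIT_FMASK),
                 "aggregation time limit %u too large\n", limit);

            return FIELD_PREP(AGGR_TIME_LIMIT_FMASK, limit);
    }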

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
3cebb7c2ed net: ipa: support hard aggregation limits
Add a new flag for AP receive endpoints that indicates whether
a "hard limit" is used as a criterion for closing aggregation.
Add comments explaining the difference between "hard" and "soft"
aggregation limits.

Pass a flag to ipa_aggr_size_kb() so it computes the proper
aggregation size value whether using hard or soft limits.  Move
that function earlier in "ipa_endpoint.c" so it can be used
without a forward-reference.

Update ipa_endpoint_data_valid_one() so it validates endpoints whose
data indicate a hard aggregation limit is used, and so it reports
when aggregation flags are set for endpoints without aggregation enabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
153213f055 net: ipa: make endpoint HOLB drop configurable
Add a new Boolean flag for RX endpoints defining whether HOLB drop
is initially enabled or disabled for the endpoint.  All existing AP
endpoints should have HOLB drop disabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
660e52d651 net: ipa: save a copy of endpoint default config
All elements of the default endpoint configuration are used in the
code when programming an endpoint for use.  But none of the other
configuration data is ever needed once things are initialized.

So rather than saving a pointer to *all* of the configuration data,
save a copy of only the endpoint configuration portion.

This will eventually allow endpoint configuration to be modifiable
at runtime.  But even before that it means we won't keep a pointer
to configuration data after it is no longer needed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
cf4e73a166 net: ipa: rename a few endpoint config data types
Rename the just-moved data structure types to drop the "_data"
suffix, to make it more obvious they are no longer meant to be used
just as read-only initialization data.  Rename the fields and
variables of these types to use "config" instead of "data" in the
name.  This is another small step meant to facilitate review.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
f0488c540e net: ipa: move endpoint configuration data definitions
Move the definitions of the structures defining endpoint-specific
configuration data out of "ipa_data.h" and into "ipa_endpoint.h".
This is a trivial movement of code without any other change, to
prepare for the next few patches.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
75944b040b net: ipa: open-code ether_setup()
About half of the fields set by the call in ipa_modem_netdev_setup()
are overwritten after the call.  Instead, just skip the call, and
open-code the (other) assignments it makes to the net_device
structure fields.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
332ef7c814 net: ipa: ignore endianness if there is no header
If we program an RX endpoint to have no header (header length is 0),
header-related endpoint configuration values are meaningless and are
ignored.

The only case we support that defines a header is QMAP endpoints.
In ipa_endpoint_init_hdr_ext() we set the endianness mask value
unconditionally, but it should not be done if there is no header
(meaning it is not configured for QMAP).

Set the endianness conditionally, and rearrange the logic in that
function slightly to avoid testing the qmap flag twice.

Delete an incorrect comment in ipa_endpoint_init_aggr().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:23 +01:00
Alex Elder
c9d92cf28c net: ipa: rename a GSI error code
The CHANNEL_NOT_RUNNING error condition has been generalized, so
rename it to be INCORRECT_CHANNEL_STATE.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:23 +01:00
Alex Elder
c15f950d14 net: ipa: drop an unneeded transaction reference
In gsi_channel_update(), a reference count is taken on the last
completed transaction "to keep it from completing" before we give
the event back to the hardware.  Completion processing for that
transaction (and any other "new" ones) will not occur until after
this function returns, so there's no risk of it completing early.  So
there's no need to take and drop the additional transaction
reference.

Use local variables in the call to gsi_evt_ring_doorbell().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:23 +01:00
Jakub Kicinski
1172aa6e4a net: ipa: don't proceed to out-of-bound write
GCC 12 seems upset that we check ipa_irq against the array bound
but then proceed anyway:

drivers/net/ipa/ipa_interrupt.c: In function ‘ipa_interrupt_add’:
drivers/net/ipa/ipa_interrupt.c:196:27: warning: array subscript 30 is above array bounds of ‘void (*[30])(struct ipa *, enum ipa_irq_id)’ [-Warray-bounds]
  196 |         interrupt->handler[ipa_irq] = handler;
      |         ~~~~~~~~~~~~~~~~~~^~~~~~~~~
drivers/net/ipa/ipa_interrupt.c:42:27: note: while referencing ‘handler’
   42 |         ipa_irq_handler_t handler[IPA_IRQ_COUNT];
      |                           ^~~~~~~

Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220519004417.2109886-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-19 18:44:51 -07:00
Jakub Kicinski
d7e6f58360 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
drivers/net/ethernet/mellanox/mlx5/core/main.c
  b33886971d ("net/mlx5: Initialize flow steering during driver probe")
  40379a0084 ("net/mlx5_fpga: Drop INNOVA TLS support")
  f2b41b32cd ("net/mlx5: Remove ipsec_ops function table")
https://lore.kernel.org/all/20220519040345.6yrjromcdistu7vh@sx1/
  16d42d3133 ("net/mlx5: Drain fw_reset when removing device")
  8324a02c34 ("net/mlx5: Add exit route when waiting for FW")
https://lore.kernel.org/all/20220519114119.060ce014@canb.auug.org.au/

tools/testing/selftests/net/mptcp/mptcp_join.sh
  e274f71540 ("selftests: mptcp: add subflow limits test-cases")
  b6e074e171 ("selftests: mptcp: add infinite map testcase")
  5ac1d2d634 ("selftests: mptcp: Add tests for userspace PM type")
https://lore.kernel.org/all/20220516111918.366d747f@canb.auug.org.au/

net/mptcp/options.c
  ba2c89e0ea ("mptcp: fix checksum byte order")
  1e39e5a32a ("mptcp: infinite mapping sending")
  ea66758c17 ("tcp: allow MPTCP to update the announced window")
https://lore.kernel.org/all/20220519115146.751c3a37@canb.auug.org.au/

net/mptcp/pm.c
  95d6865178 ("mptcp: fix subflow accounting on close")
  4d25247d3a ("mptcp: bypass in-kernel PM restrictions for non-kernel PMs")
https://lore.kernel.org/all/20220516111435.72f35dca@canb.auug.org.au/

net/mptcp/subflow.c
  ae66fb2ba6 ("mptcp: Do TCP fallback on early DSS checksum failure")
  0348c690ed ("mptcp: add the fallback check")
  f8d4bcacff ("mptcp: infinite mapping receiving")
https://lore.kernel.org/all/20220519115837.380bb8d4@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-19 11:23:59 -07:00
Alex Elder
8d017efb1e net: ipa: get rid of a duplicate initialization
In ipa_qmi_ready(), the "ipa" local variable is set when
initialized, but then set again just before it's first used.
One or the other is enough, so get rid of the first one.

References: https://lore.kernel.org/lkml/200de1bd-0f01-c334-ca18-43eed783dfac@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Fixes: 530f9216a9 ("soc: qcom: ipa: AP/modem communications")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-13 12:01:42 +01:00
Alex Elder
d8290cbe11 net: ipa: record proper RX transaction count
Each time we are notified that some number of transactions on an RX
channel has completed, we record the number of bytes that have been
transferred since the previous notification.  We also track the
number of transactions completed, but that is not currently being
calculated correctly; we're currently counting the number of such
notifications, but each notification can represent many transaction
completions.  Fix this.

Fixes: 650d160382 ("soc: qcom: ipa: the generic software interface")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-13 12:01:42 +01:00
Alex Elder
30b338ff79 net: ipa: certain dropped packets aren't accounted for
If an RX endpoint receives packets containing status headers, and a
packet in the buffer is not dropped, ipa_endpoint_skb_copy() is
responsible for wrapping the packet data in an SKB and forwarding it
to ipa_modem_skb_rx() for further processing.

If ipa_endpoint_skb_copy() gets a null pointer from build_skb(), it
just returns early.  But in the process it doesn't record that as a
dropped packet in the network device statistics.

Instead, call ipa_modem_skb_rx() whether or not the SKB pointer is
NULL; that function ensures the statistics are properly updated.

Fixes: 1b65bbcc9a ("net: ipa: skip SKB copy if no netdev")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-13 12:01:42 +01:00
Jakub Kicinski
16d083e28f net: switch to netif_napi_add_tx()
Switch net callers to the new API not requiring
the NAPI_POLL_WEIGHT argument.
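
For reference, a hedged before/after of the conversion (the poll
handler and function names are placeholders): the TX variant supplies
the weight itself, so the NAPI_POLL_WEIGHT argument goes away.

    #include <linux/netdevice.h>

    /* Illustrative poll handler; a real one processes completions and then
     * calls napi_complete_done() before returning the amount of work done.
     */
    static int example_poll(struct napi_struct *napi, int budget)
    {
            return 0;
    }

    static void example_napi_setup(struct net_device *netdev,
                                   struct napi_struct *napi)
    {
            /* Was: netif_napi_add(netdev, napi, example_poll, NAPI_POLL_WEIGHT); */
            netif_napi_add_tx(netdev, napi, example_poll);
    }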

Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Acked-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Acked-by: Alexandra Winter <wintera@linux.ibm.com>
Link: https://lore.kernel.org/r/20220504163725.550782-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-05 15:54:12 -07:00
Alex Elder
c5794097b2 net: ipa: compute proper aggregation limit
The aggregation byte limit for an endpoint is currently computed
based on the endpoint's receive buffer size.

However, some bytes at the front of each receive buffer are reserved
on the assumption that--as with SKBs--it might be useful to insert
data (such as headers) before what lands in the buffer.

The aggregation byte limit currently doesn't take into account that
reserved space, and as a result, aggregation could require space
past that which is available in the buffer.

Fix this by reducing the size used to compute the aggregation byte
limit by the NET_SKB_PAD offset reserved for each receive buffer.
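
A rough sketch of the adjustment (the function name is invented, and
the real driver calculation also accounts for hardware granularity):
the reserved headroom is subtracted before the buffer size is turned
into the aggregation limit programmed into the hardware.

    #include <linux/skbuff.h>
    #include <linux/sizes.h>

    /* Illustrative: usable aggregation space, in KB, for a given RX buffer */
    static u32 rx_aggr_size_kb(u32 rx_buffer_size)
    {
            /* NET_SKB_PAD bytes at the front are reserved as headroom */
            u32 usable = rx_buffer_size - NET_SKB_PAD;

            return usable / SZ_1K;
    }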

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-04-25 11:27:56 +01:00
Alex Elder
cb631a6398 net: ipa: use struct_size() for the interconnect array
In review for commit 8ee7ec4890 ("net: ipa: embed interconnect
array in the power structure"), Jakub Kicinski suggested that a
follow-up patch use struct_size() when computing the size of the
IPA power structure, which ends with a flexible array member.

Do that.
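
For illustration (structure and field names are simplified stand-ins):
struct_size() computes the allocation size of a structure ending in a
flexible array member, with overflow checking, instead of open-coded
sizeof arithmetic.

    #include <linux/interconnect.h>
    #include <linux/overflow.h>
    #include <linux/slab.h>

    struct example_power {
            /* ... other fields ... */
            u32 interconnect_count;
            struct icc_bulk_data interconnect[];	/* flexible array member */
    };

    static struct example_power *example_power_alloc(u32 count)
    {
            struct example_power *power;

            /* sizeof(*power) + count * sizeof(interconnect[0]), overflow-checked */
            power = kzalloc(struct_size(power, interconnect, count), GFP_KERNEL);
            if (power)
                    power->interconnect_count = count;

            return power;
    }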

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220311162423.872645-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-11 22:50:07 -08:00
Alex Elder
37e0cf33f8 net: ipa: use IPA power device pointer
The ipa_power structure contains a copy of the IPA device pointer,
so there's no need to pass it to ipa_interconnect_init().  We can
also use that pointer for an error message in ipa_power_enable().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:07 -08:00
Alex Elder
8ee7ec4890 net: ipa: embed interconnect array in the power structure
Rather than allocating the interconnect array dynamically, represent
the interconnects with a variable-length array at the end of the
ipa_power structure.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:06 -08:00
Alex Elder
63ac8cce50 net: ipa: use bulk interconnect initialization
The previous patch used bulk interconnect operations to initialize
IPA interconnects one at a time.  This rearranges things to use the
bulk interfaces as intended--on all interconnects together.  As a
result ipa_interconnect_init_one() and ipa_interconnect_exit_one()
are no longer needed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:06 -08:00
Alex Elder
ba22a9778d net: ipa: use bulk operations to set up interconnects
Use of_icc_bulk_get() and icc_bulk_put(), icc_bulk_set_bw(), and
icc_bulk_enable() and icc_bulk_disable() to initialize individual
IPA interconnects.  Those functions already log messages in the
event of error so we don't need to.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:05 -08:00
Alex Elder
90078e63e6 net: ipa: use interconnect bulk enable/disable operations
The power interconnect array is now an array of icc_bulk_data
structures, which is what the interconnect bulk enable and disable
functions require.

Get rid of ipa_interconnect_enable() and ipa_interconnect_disable(),
and just call icc_bulk_enable() and icc_bulk_disable() instead.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:05 -08:00
Alex Elder
9dd5006891 net: ipa: use icc_enable() and icc_disable()
The interconnect framework now provides the ability to enable and
disable interconnects without having to change their recorded
"enabled" bandwidth value.  Use this mechanism, rather than setting
the bandwidth values to zero and non-zero respectively to disable
and enable the IPA interconnects.

Disable each interconnect before setting its "enabled" average and
peak bandwidth values.  Thereafter, enable and disable interconnects
when required rather than setting their bandwidths.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:04 -08:00
Alex Elder
c7be12fa2f net: ipa: kill struct ipa_interconnect
The ipa_interconnect structure contains an icc_path pointer, plus an
average and peak bandwidth value.  Other than the interconnect name,
this matches the icc_bulk_data structure exactly.

Use the icc_bulk_data structure in place of the ipa_interconnect
structure, and add an initialization of its name field.  Then get
rid of the now unnecessary ipa_interconnect structure definition.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10 21:19:04 -08:00
Jakub Kicinski
80901bff81 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
net/batman-adv/hard-interface.c
  commit 690bb6fb64 ("batman-adv: Request iflink once in batadv-on-batadv check")
  commit 6ee3c393ee ("batman-adv: Demote batadv-on-batadv skip error message")
https://lore.kernel.org/all/20220302163049.101957-1-sw@simonwunderlich.de/

net/smc/af_smc.c
  commit 4d08b7b57e ("net/smc: Fix cleanup when register ULP fails")
  commit 462791bbfa ("net/smc: add sysctl interface for SMC")
https://lore.kernel.org/all/20220302112209.355def40@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03 11:55:12 -08:00
Alex Elder
1dba41c9d2 net: ipa: add an interconnect dependency
In order to function, the IPA driver very clearly requires the
interconnect framework to be enabled in the kernel configuration.
State that dependency in the Kconfig file.

This became a problem when CONFIG_COMPILE_TEST support was added.
Non-Qualcomm platforms won't necessarily enable CONFIG_INTERCONNECT.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 38a4066f59 ("net: ipa: support COMPILE_TEST")
Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20220301113440.257916-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-02 22:14:05 -08:00
Alex Elder
caef14b753 net: ipa: fix a build dependency
An IPA build problem arose in the linux-next tree the other day.
The problem is that a recent commit adds a new dependency on some
code, and the Kconfig file for IPA doesn't reflect that dependency.
As a result, some configurations can fail to build (particularly
when COMPILE_TEST is enabled).

The recent patch adds calls to qmp_get(), qmp_put(), and qmp_send(),
and those are built based on the QCOM_AOSS_QMP config option.  If
that symbol is not defined, stubs are defined, so we just need to
ensure QCOM_AOSS_QMP is either compatible with QCOM_IPA or not
defined at all.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Fixes: 34a081761e ("net: ipa: request IPA register values be retained")
Signed-off-by: Alex Elder <elder@linaro.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-28 11:44:27 +00:00
Alex Elder
9654d8c462 net: ipa: determine replenish doorbell differently
Rather than tracking the number of receive buffer transactions that
have been submitted without a doorbell, just track the total number
of transactions that have been issued.  Then ring the doorbell when
that number modulo the replenish batch size is 0.

The effect is roughly the same, but the new count is slightly more
interesting, and this approach will someday allow the replenish
batch size to be tuned at runtime.
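
Sketch of the doorbell decision under those assumptions (the structure,
field, and constant names are illustrative): keep one running count of
transactions issued and ring the doorbell whenever it reaches a
multiple of the batch size.

    #include <linux/types.h>

    #define REPLENISH_BATCH	16	/* placeholder batch size */

    struct example_endpoint {
            u32 replenish_count;	/* total replenish transactions issued */
    };

    /* Ring the doorbell only when the running count reaches a batch boundary */
    static bool replenish_doorbell(struct example_endpoint *endpoint)
    {
            return ++endpoint->replenish_count % REPLENISH_BATCH == 0;
    }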

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
5d6ac24fb1 net: ipa: replenish after delivering payload
Replenishing is now solely driven by whether transactions are
available for a channel, and it doesn't really matter whether
we replenish before or after we deliver received packets to the
network stack.

Replenishing before delivering the payload adds a little latency.
Eliminate that by requesting a replenish after the payload is
delivered.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
09b337deda net: ipa: kill replenish_backlog
We no longer use the replenish_backlog atomic variable to decide
when we've got work to do providing receive buffers to hardware.
Basically, we try to keep the hardware as full as possible, all the
time.  We keep supplying buffers until the hardware has no more
space for them.

As a result, we can get rid of the replenish_backlog field and the
atomic operations performed on it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
5fc7f9ba2e net: ipa: introduce gsi_channel_trans_idle()
Create a new function that returns true if all transactions for a
channel are available for use.

Use it in ipa_endpoint_replenish_enable() to see whether to start
replenishing, and in ipa_endpoint_replenish() to determine whether
it's necessary after a failure to schedule delayed work to ensure a
future replenish attempt occurs.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
d0ac30e74e net: ipa: don't use replenish_backlog
Rather than determining when to stop replenishing using the
replenish backlog, just stop when we have exhausted all available
transactions.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
6a606b9015 net: ipa: allocate transaction in replenish loop
When replenishing, have ipa_endpoint_replenish() allocate a
transaction, and pass that to ipa_endpoint_replenish_one() to fill.
Then, if that produces no error, commit the transaction within the
replenish loop as well.  In this way we can distinguish between
transaction failures and buffer allocation/mapping failures.

Failure to allocate a transaction simply means the hardware already
has as many receive buffers as it can hold.  In that case we can
break out of the replenish loop because there's nothing more to do.

If we fail to allocate or map pages for the receive buffer, just
try again later.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
b9dbabc5ca net: ipa: decide on doorbell in replenish loop
Decide whether the doorbell should be signaled when committing a
replenish transaction in the main replenish loop, rather than in
ipa_endpoint_replenish_one().  This is a step to facilitate the
next patch.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
4b22d84195 net: ipa: increment backlog in replenish caller
Three spots call ipa_endpoint_replenish(), and just one of those
requests that the backlog be incremented after completing the
replenish operation.

Instead, have the caller increment the backlog, and get rid of the
add_one argument to ipa_endpoint_replenish().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Alex Elder
b4061c136b net: ipa: allocate transaction before pages when replenishing
A transaction failure only occurs if no more transactions are
available for an endpoint.  It's a very cheap test.

When replenishing an RX endpoint buffer, there's no point in
allocating pages if transactions are exhausted.  So don't bother
doing so unless the transaction allocation succeeds.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Alex Elder
a9bec7ae70 net: ipa: kill replenish_saved
The replenish_saved field keeps track of the number of times a new
buffer is added to the backlog when replenishing is disabled.  We
don't really use it though, so there's no need for us to track it
separately.  Whether replenishing is enabled or not, we can simply
increment the backlog.

Get rid of replenish_saved, and initialize and increment the backlog
where it would have otherwise been used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Jakub Kicinski
c59400a68c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-03 17:36:16 -08:00
Alex Elder
34a081761e net: ipa: request IPA register values be retained
In some cases, the IPA hardware needs to request the always-on
subsystem (AOSS) to coordinate with the IPA microcontroller to
retain IPA register values at power collapse.  This is done by
issuing a QMP request to the AOSS microcontroller.  A similar
request undoes that request.

We must get and hold the "QMP" handle early, because we might get
back EPROBE_DEFER for that.  But the actual request should be sent
while we know the IPA clock is active, and when we know the
microcontroller is operational.

Fixes: 1aac309d32 ("net: ipa: use autosuspend")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-03 08:03:43 -08:00
Alex Elder
33230aeb2e net: ipa: set IPA v4.11 AP<-modem RX buffer size to 32KB
Increase the receive buffer size used for data received from the
modem to 32KB, to improve download performance by allowing much
greater aggregation.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-02 21:13:45 -08:00
Alex Elder
ed23f02680 net: ipa: define per-endpoint receive buffer size
Allow RX endpoints to have differing receive buffer sizes.  Define
the receive buffer size in the configuration data, and use that
rather than IPA_RX_BUFFER_SIZE when configuring the endpoint.

Add verification in ipa_endpoint_data_valid_one() that the receive
buffer specified for AP RX endpoints is both big enough to handle at
least one full packet, and not so big in an aggregating endpoint
that its size can't be represented when programming the hardware.
Move aggr_byte_limit_max() up in "ipa_endpoint.c" so it can be used
earlier in the file without a forward-reference.

Initially we'll just keep the 8KB receive buffer size already in use
for all AP RX endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-02 21:13:45 -08:00
Alex Elder
998c0bd2b3 net: ipa: prevent concurrent replenish
We have seen cases where an endpoint RX completion interrupt arrives
while replenishing for the endpoint is underway.  This causes another
instance of replenishing to begin as part of completing the receive
transaction.  If this occurs it can lead to transaction corruption.

Use a new flag to ensure only one replenish instance for an endpoint
executes at a time.
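
One way to enforce that, sketched with placeholder names: an atomic
test_and_set_bit() on a per-endpoint flag makes a second caller back
off, and the bit is cleared once the replenish pass finishes.

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* Placeholder flag names; the driver's own names may differ */
    enum example_replenish_flag {
            REPLENISH_ENABLED,	/* replenishing is allowed on this endpoint */
            REPLENISH_ACTIVE,	/* a replenish pass is currently running */
            REPLENISH_FLAG_COUNT,	/* number of flags; must be last */
    };

    struct example_endpoint {
            DECLARE_BITMAP(replenish_flags, REPLENISH_FLAG_COUNT);
    };

    static void example_replenish(struct example_endpoint *endpoint)
    {
            if (!test_bit(REPLENISH_ENABLED, endpoint->replenish_flags))
                    return;

            /* Atomically claim the endpoint; back off if a pass is running */
            if (test_and_set_bit(REPLENISH_ACTIVE, endpoint->replenish_flags))
                    return;

            /* ... allocate transactions and supply receive buffers to hardware ... */

            clear_bit(REPLENISH_ACTIVE, endpoint->replenish_flags);
    }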

Fixes: 84f9bd12d4 ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-12 14:39:53 +00:00
Alex Elder
c1aaa01dbf net: ipa: use a bitmap for endpoint replenish_enabled
Define a new replenish_flags bitmap to contain Boolean flags
associated with an endpoint's replenishing state.  Replace the
replenish_enabled field with a flag in that bitmap.  This is to
prepare for the next patch, which adds another flag.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-12 14:39:53 +00:00