Most users of IS_ERR_VALUE() in the kernel are wrong, as they
pass an 'int' into a function that takes an 'unsigned long'
argument. This happens to work because the type is sign-extended
on 64-bit architectures before it gets converted into an
unsigned type.
However, anything that passes an 'unsigned short' or 'unsigned int'
argument into IS_ERR_VALUE() is guaranteed to be broken, as are
8-bit integers and types that are wider than 'unsigned long'.
Andrzej Hajda has already fixed a lot of the worst abusers that
were causing actual bugs, but it would be nice to prevent any
users from passing anything other than 'unsigned long' arguments.
This patch changes all users of IS_ERR_VALUE() that I could find
on 32-bit ARM randconfig builds and x86 allmodconfig. For the
moment, this doesn't change the definition of IS_ERR_VALUE()
because there are probably still architecture specific users
elsewhere.
Almost all the warnings I got are for files that are better off
using 'if (err)' or 'if (err < 0)'.
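For illustration, the typical conversion looks roughly like this (the
helper and variable names here are made up):

    int err = some_setup_function();   /* hypothetical helper returning int */

    if (IS_ERR_VALUE(err))             /* before: only well-defined for unsigned long */
            return err;

    if (err < 0)                       /* after: a plain error check is enough */
            return err;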
The only legitimate user I could find that we get a warning for
is the (32-bit only) freescale fman driver, so I did not remove
the IS_ERR_VALUE() there but changed the type to 'unsigned long'.
For 9pfs, I just worked around one user whose calling conventions
are so obscure that I did not dare change the behavior.
I was using this definition for testing:
#define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
	unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))
which ends up making all 16-bit or wider types work correctly with
the most plausible interpretation of what IS_ERR_VALUE() was supposed
to return according to its users, but also causes a compile-time
warning for any users that do not pass an 'unsigned long' argument.
I suggested this approach earlier this year, but back then we ended
up deciding to just fix the users that are obviously broken. After
the initial warning that caused me to get involved in the discussion
(fs/gfs2/dir.c) showed up again in the mainline kernel, Linus
asked me to send the whole thing again.
[ Updated the 9p parts as per Al Viro - Linus ]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.org/lkml/2016/1/7/363
Link: https://lkml.org/lkml/2016/5/27/486
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After commit d6463f170cf0 ("mmc: sdhci: Remove redundant runtime PM calls"),
some of the original sdhci_do_xx() function wrappers become meaningless,
so remove them.
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
defined(CONFIG_LEDS_CLASS) || (defined(CONFIG_LEDS_CLASS_MODULE) && \
defined(CONFIG_MMC_SDHCI_MODULE))
is equivalent to:
defined(CONFIG_LEDS_CLASS) || (defined(CONFIG_LEDS_CLASS_MODULE) && \
defined(MODULE))
and it can also be written shortly as:
IS_REACHABLE(CONFIG_LEDS_CLASS)
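As a sketch, the simplified guard then reads (function body omitted;
the callback name follows the existing sdhci LED code):

    #if IS_REACHABLE(CONFIG_LEDS_CLASS)
    static void sdhci_led_control(struct led_classdev *led,
                                  enum led_brightness brightness)
    {
            /* LED handling is only built when the LED class is reachable */
    }
    #endif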
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
SDHCI_SDR104_NEEDS_TUNING was originally named SDHCI_HS200_NEEDS_TUNING
and was added in commit 069c9f1428 ("mmc: host: Adds support for eMMC
4.5 HS200 mode").
That commit conflated SDHCI_SDR50_NEEDS_TUNING and SDHCI_HS200_NEEDS_TUNING
due to what appears to be misplaced parentheses.
Commit 156e14b126 ("mmc: sdhci: fix caps2 for HS200") made HS200
configuration equivalent to SDR104 configuration, renaming
SDHCI_HS200_NEEDS_TUNING to SDHCI_SDR104_NEEDS_TUNING despite tuning for
HS200 now being non-optional.
The mix-up with SDHCI_SDR50_NEEDS_TUNING remained and became more obvious
after commit 4b6f37d3a3 ("mmc: sdhci: clean up sdhci_execute_tuning()
decision") where the author noted the patch was "reflecting what the
original code was doing, it shows that it may not be what the author
actually intended."
The way the code is currently written, SDHCI_SDR104_NEEDS_TUNING
causes tuning to be done always for SDR50 mode if SDR104 mode is
also supported by the host controller. That makes no sense because
we already have capabilities bit SDHCI_USE_SDR50_TUNING and
corresponding flag SDHCI_SDR50_NEEDS_TUNING for that purpose.
Given the dubious origins of SDHCI_SDR104_NEEDS_TUNING, it seems
reasonable to remove it. The benefit is that SDR50 mode will no longer
perform tuning unnecessarily.
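A rough sketch of the resulting tuning decision (following the switch
structure of sdhci_execute_tuning(); the exact code may differ):

    switch (host->timing) {
    case MMC_TIMING_MMC_HS200:
    case MMC_TIMING_UHS_SDR104:
            break;                          /* tuning is mandatory here */
    case MMC_TIMING_UHS_SDR50:
            if (host->flags & SDHCI_SDR50_NEEDS_TUNING)
                    break;                  /* controller asked for it */
            /* FALLTHROUGH */
    default:
            goto out;                       /* no tuning needed */
    }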
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
ifdef's make the code more complicated and harder to read.
Move all the LED code together to reduce the ifdef's to
one place.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Some error paths in sdhci_add_host() simply returned without
cleaning up. Also the return value from mmc_add_host()
was not being checked.
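A minimal sketch of the intended pattern (the label and the unwind steps
are illustrative, not the actual sdhci_add_host() code):

    ret = mmc_add_host(mmc);
    if (ret)
            goto unled;     /* unwind instead of returning directly */

    return 0;

    unled:
            /* undo LED registration, free the IRQ, kill the tasklet, ... */
            return ret;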
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST quirk is not used anymore so
remove it.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
In order to remove SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST and to
reduce code duplication, put the code related to the SD clock
configuration in a function which hosts can use to implement their
->set_clock() callback.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
There is no need to have two versions of sdhci_runtime_pm_bus_off|on(),
depending on whether CONFIG_PM is set or unset. It is easy to move the
implementation of these functions a bit earlier to avoid the otherwise
needed forward declarations, so let's do that.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Commit 9250aea76b ("mmc: core: Enable runtime PM management of host
devices") made some calls to the runtime PM API from the driver
redundant, especially those which deal with runtime PM reference
counting, so let's remove them.
Moreover, as SDHCI has its own wrapper functions for runtime PM, these
become superfluous, so let's remove them as well.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Set the DMA mask in sdhci_add_host() after we determined the
capabilities of the device. 64-bit devices in particular are given the
proper mask that ensures bounce buffers are not used.
Also disable DMA if no proper DMA mask can be set, as the DMA-API
documentation specifies.
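A hedged sketch of the intent (the flag handling in the actual code may
differ slightly):

    int ret;

    if (host->flags & SDHCI_USE_64_BIT_DMA)
            ret = dma_set_mask_and_coherent(mmc_dev(mmc), DMA_BIT_MASK(64));
    else
            ret = dma_set_mask_and_coherent(mmc_dev(mmc), DMA_BIT_MASK(32));

    if (ret) {
            pr_warn("%s: no suitable DMA mask - disabling DMA\n",
                    mmc_hostname(mmc));
            host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA);
    }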
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Normally the timeout clock frequency is read from the capabilities
register. It is also possible to set the value prior to calling
sdhci_add_host() in which case that value will override the
capabilities register value. However that was being done after
calculating max_busy_timeout so that max_busy_timeout was being
calculated using the wrong value of timeout_clk.
Fix that by moving the override before max_busy_timeout is
calculated.
The result is that the max_busy_timeout and max_discard
increase for BSW devices so that, for example, the time for
mkfs.ext4 on a 64GB eMMC drops from about 1 minute 40 seconds
to about 20 seconds.
Note, in the future, the capabilities setting will be tidied up
and this override won't be used anymore. However this fix is
needed for stable.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Further simplify the code in sdhci_prepare_data() - we don't set
SDHCI_REQ_USE_DMA anywhere else in the driver, so there is no
need to set it, and then immediately test it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Rather than scanning the scatterlist multiple times, once for each
quirk, scan it once, checking for each possible quirk. This should be
cheaper than scanning the scatterlist multiple times, since the length
and offset members commonly share the same cache line.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Prepare to consolidate the DMA address/size quirk handling into one
single loop.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The patch "mmc: sdhci: plug DMA mapping leak on error" added
un-mapping logic to sdhci_tasklet_finish() where it is always
called, thereby preventing the mapping leaking.
Consequently the un-mapping code in sdhci_finish_data() is no
longer needed. Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Split from original "mmc: sdhci: plug DMA mapping leak on error" patch ]
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Commit d31911b937 ("mmc: sdhci: fix dma memory leak in sdhci_pre_req()")
added a complicated method to manage the DMA map state for the data
transfer, but this complexity is not required.
There are three states:
* Unmapped
* Mapped by sdhci_pre_req()
* Mapped by sdhci_prepare_data()
sdhci_prepare_data() needs to know when the data buffers have been
successfully mapped by sdhci_pre_req(), and if so, there is no need to
map them a second time.
When we come to tear down the mapping, we want to know whether
sdhci_post_req() will be called (which is determined by sdhci_pre_req()
having been previously called) so that we can postpone the unmap
operation.
Hence, it makes sense to simply record when the successful DMA map
happened (via COOKIE_PRE_MAPPED vs COOKIE_MAPPED) rather than having
the complex mechanics involving COOKIE_MAPPED vs COOKIE_GIVEN.
If a mapping is created by sdhci_prepare_data(), we must tear it down
ourselves, without waiting for sdhci_post_req() (hence, the new
COOKIE_MAPPED case). If the mapping is created by sdhci_pre_req()
then sdhci_post_req() is responsible for tearing the mapping down.
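The three states then map onto cookie values roughly like this (a
sketch using the names described above):

    enum sdhci_cookie {
            COOKIE_UNMAPPED,        /* buffers not mapped */
            COOKIE_PRE_MAPPED,      /* mapped by sdhci_pre_req(),
                                       unmapped by sdhci_post_req() */
            COOKIE_MAPPED,          /* mapped by sdhci_prepare_data(),
                                       unmapped by the driver itself */
    };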
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
If the host cookie indicates that the data buffers of a request are
mapped at sdhci_post_req() time, always unmap the data buffers.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Pass the desired cookie for a successful map. This is in preparation to
clean up the MAPPED/GIVEN states.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
In sdhci_prepare_data(), when SDHCI_REQ_USE_DMA is set, there are two
paths that prepare the data buffers for transfer. One is when
SDHCI_USE_ADMA is set, and is located inside sdhci_adma_table_pre().
The other is when SDHCI_USE_ADMA is clear, in the else clause of the
above.
Factor out the call to sdhci_pre_dma_transfer() along with its error
checking.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Move sdhci_pre_dma_transfer() to avoid needing to declare this function
before use.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
sdhci_finish_data() has two paths which result in identical DMA cleanup.
One is when SDHCI_USE_ADMA is clear, and the other, when SDHCI_USE_ADMA
is set, is performed within sdhci_adma_table_post().
Simplify the code by removing the 'else' and eliminating the duplicate
inside sdhci_adma_table_post().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
If we are writing data to the card, there is no point in walking the
scatterlist to find out if there are any unaligned entries; this is a
needless waste of CPU cycles. Avoid this by checking for a non-read
transfer first.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Allocate both the alignment and DMA descriptor buffers together. The
size of the alignment buffer will always be aligned to the host's
required alignment, which gives appropriate alignment to the DMA
descriptors.
We have a maximum of 128 segments, and a maximum alignment of 64 bits.
This gives a maximum alignment buffer size of 1024 bytes.
The DMA descriptors are a maximum of 12 bytes, and we allocate 128 * 2
+ 1 of these, which gives a maximum DMA descriptor buffer size of 3084
bytes.
This means the allocation for a 4K page sized system will be an order-1
allocation, since the resulting overall size is 4108. This is more
prone to failure than page-sized allocations, but since this allocation
commonly occurs at startup, the chances of failure are small.
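The arithmetic works out as follows (the field names are only
illustrative of where the values end up):

    /* 128 segments, each needing at most 8 bytes of alignment buffer */
    host->align_buffer_sz = SDHCI_MAX_SEGS * 8;             /* 1024 bytes */
    /* 128 * 2 + 1 descriptors of at most 12 bytes each */
    host->adma_table_sz = (SDHCI_MAX_SEGS * 2 + 1) * 12;    /* 3084 bytes */
    /* 1024 + 3084 = 4108 bytes > 4096, i.e. an order-1 allocation */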
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Changed to check ADMA table alignment ]
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The calculation for the timeout based on the number of card clocks is
incorrect. The calculation assumed:
timeout in microseconds = clock cycles / clock in Hz
which is clearly several orders of magnitude wrong. Fix this by
multiplying the clock cycles by 1000000 prior to dividing by the Hz
based clock. Also, as per part 1, ensure that the division rounds
up.
As this needs 64-bit math via do_div(), avoid it if the clock cycle
count is zero.
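A sketch of the corrected conversion (variable names follow the
existing timeout calculation; the exact code may differ):

    if (data->timeout_clks) {
            unsigned long long val = 1000000ULL * data->timeout_clks;

            if (do_div(val, host->clock))
                    target_timeout++;       /* round the division up */
            target_timeout += val;          /* now in microseconds */
    }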
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.15+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The data timeout gives the minimum amount of time that should be
waited before timing out if no data is received from the card.
Simply dividing the nanosecond part by 1000 does not give this
required guarantee, since such a division rounds down. Use
DIV_ROUND_UP() to give the desired timeout.
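In other words (illustrative, restating the change above):

    target_timeout = data->timeout_ns / 1000;               /* before: rounds down */
    target_timeout = DIV_ROUND_UP(data->timeout_ns, 1000);  /* after: rounds up */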
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.15+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
sdhci_post_req() exists to unmap a previously mapped but already
finished request, while the next request is in progress. However, the
state of the SDHCI_REQ_USE_DMA flag depends on the last submitted
request.
This means we can end up clearing the flag due to a quirk, which then
means that sdhci_post_req() fails to unmap the DMA buffer, potentially
leading to data corruption.
We can safely ignore the SDHCI_REQ_USE_DMA flag here, as testing
data->host_cookie is entirely sufficient.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Re-based to apply as a separate fix ]
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.5+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
If we terminate a command early, we fail to properly clean up the DMA
mappings for the data part of the request. Move this cleanup to the
tasklet, which is the common path for finishing a request, so we always
clean up after ourselves.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Split original patch so that it now contains only the fix ]
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.5+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Unnecessarily mapping and unmapping the align buffer for SD cards is
expensive: performance measurements on iMX6 show that this gives a hit
of 10% on hdparm buffered disk reads.
MMC/SD card IO comes from the mm/vfs which gives us page based IO, so
for this case, the align buffer is not going to be used. However, we
still map and unmap this buffer.
Eliminate this by switching the align buffer to be a DMA coherent
buffer, which needs no DMA maintenance to access the buffer.
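A hedged sketch of the allocation change (error handling trimmed; a
sketch of the idea, not the exact sdhci code):

    /* one coherent allocation at probe time; no per-request map/unmap */
    host->align_buffer = dma_alloc_coherent(mmc_dev(mmc),
                                            host->align_buffer_sz,
                                            &host->align_addr, GFP_KERNEL);
    if (!host->align_buffer)
            host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA);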
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.5+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
When we get a response CRC error on a command, it means that the
response we received back from the card was not correct. It does not
mean that the card did not receive the command correctly. If the
command is one which initiates a data transfer, the card can enter the
data transfer state, and start sending data.
Moreover, if the request contained a data phase, we do not clean this
up, and this results in the driver triggering DMA API debug warnings,
and also creates a race condition in the driver, between running the
finish_tasklet and the data transfer interrupts, which can trigger a
"Got data interrupt" state dump.
Fix this by handling a response CRC error slightly differently: record
the failure of the data initiating command, but allow the remainder of
the request to be processed normally. This is safe as core MMC checks
the status of all commands and data transfer phases of the request.
If the card does not initiate a data transfer, then we should time out
according to the data transfer parameters.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[ Fix missing parenthesis around bitwise-AND expression, and tweak subject ]
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.5+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
When a command is started, logically it has no error. Initialise the
command's error member to zero whenever we start a command.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
[ Goes with "mmc: sdhci: fix command response CRC error handling" ]
Cc: stable@vger.kernel.org # v4.5+
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
sdhci_add_host() allows the Host Controller Capability registers
to be supplied by the calling driver by using
SDHCI_QUIRK_MISSING_CAPS, but the check for the Capabilities bit
SDHCI_CAN_64BIT doesn't use the applied value and instead reads
the Host register directly. This change uses the supplied "caps"
register instead of reading the host register.
This change will allow a calling driver to simply clear the
SDHCI_CAN_64BIT bit in "caps" to handle some cases of
SDHCI_QUIRK2_BROKEN_64_BIT_DMA.
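The change is essentially (a sketch; 'caps' being the value that may
have been supplied via SDHCI_QUIRK_MISSING_CAPS):

    if (caps[0] & SDHCI_CAN_64BIT)      /* was: re-read from the host register */
            host->flags |= SDHCI_USE_64_BIT_DMA;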
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Drivers may need to provide their own get_cd() mmc host op, but
currently the internals of the current op (sdhci_get_cd()) are
provided by sdhci_do_get_cd() which is also called from
sdhci_request().
To allow override of the get_cd functionality, change sdhci_request()
to call ->get_cd() instead of sdhci_do_get_cd().
Note, in the future the call to ->get_cd() will likely be removed
from sdhci_request() since most drivers don't actually need it.
However, this change is being done now to facilitate a subsequent
bug fix.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.4+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
In the past, fixes for specific hardware devices were implemented
in sdhci using quirks. That approach is no longer accepted because
the growing number of quirks was starting to make the code difficult
to understand and maintain.
One alternative to quirks, is to allow drivers to override the default
mmc host operations. This patch makes it easy to do that, and it is
needed for a subsequent bug fix, for which separate patches are
provided.
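A hedged sketch of how a calling driver can then override a single op
before sdhci_add_host() (the ops field name is assumed from the
description; my_get_cd() is hypothetical):

    host->mmc_host_ops.get_cd = my_get_cd;  /* all other ops keep the sdhci defaults */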
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v4.4+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
After commit 52221610dd ("mmc: sdhci: Improve external VDD regulator
support"), when VDD is supplied via an external regulator, we skip the
code that converts a VDD voltage request into one of the standard SDHCI
voltage levels and programs it into SDHCI_POWER_CONTROL. This brings
two issues:
1. The SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON quirk isn't handled properly any
more.
2. What's more, once the SDHCI_POWER_ON bit is set, some controllers such
as the sdhci-pxav3 used in Marvell Berlin SoCs require the voltage
levels to be programmed in the SDHCI_POWER_CONTROL register, even when
VDD is supplied by an external regulator. So the host in Marvell Berlin
SoCs still worked fine after that commit. However, commit 3cbc6123a9
("mmc: sdhci: Set SDHCI_POWER_ON with external vmmc") sets the
SDHCI_POWER_ON bit, which makes the host in Marvell Berlin SoCs stop
working with an external vmmc.
This patch restores the behaviour when setting VDD through an external
regulator by moving the call to mmc_regulator_set_ocr() to the end of
the sdhci_set_power() function.
After this patch, the SD card on Marvell Berlin SoC boards works again.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Fixes: 52221610dd ("mmc: sdhci: Improve external VDD ...")
Reviewed-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Tested-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
sdhci has a legacy facility to prevent runtime suspend if the
bus power is on. This is needed in cases where the power to
the card is dependent on the bus power. It is controlled by
a pair of functions: sdhci_runtime_pm_bus_on() and
sdhci_runtime_pm_bus_off(). These functions use a boolean
variable 'bus_on' to ensure changes are always paired.
There is an additional check for 'runtime_suspended' which is
the problem. In fact, its use is ill-conceived as the only
requirement for the logic is that 'on' and 'off' are paired,
which is actually broken by the check, for example if the bus
power is turned on during runtime resume. So remove the check.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The version 3.00 SDHCI spec. was a bit unclear about the
required data alignment for 64-bit DMA, whereas the version
4.10 spec. uses different language and indicates that only
4-byte alignment is required rather than the 8-byte alignment
currently implemented. That makes no difference to SD and eMMC,
which invariably transfer data in sector-aligned blocks.
However with SDIO, it results in using more DMA descriptors
than necessary. Theoretically that slows DMA slightly, although
DMA is not the limiting factor for throughput, so there is no
discernible impact on performance. Nevertheless, the driver
should follow the spec unless there is good reason not to, so
this patch corrects the alignment criterion.
There is a more complicated criterion for the DMA descriptor
table itself. However the table is allocated by dma_alloc_coherent()
which allocates pages (i.e. aligned to a page boundary).
For simplicity just check it is 8-byte aligned, but add a comment
that some Intel controllers actually require 8-byte alignment
even when using 32-bit DMA.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
SDHCI has built-in DMA called ADMA2. ADMA2 uses a descriptor
table to define DMA scatter-gather. Each descriptor can specify
a data length up to 65536 bytes, however the length field is
only 16 bits, so zero means 65536. Consequently, putting zero
when the size is zero must not be allowed. This patch fixes
one case where zero data length could be set inadvertently.
The problem happens because unaligned data gets split and the
code did not consider that the remaining aligned portion might
be zero length. That case really only happens for SDIO because
SD and eMMC cards transfer blocks that are invariably sector-
aligned. For SDIO, access to function registers is done by
data transfer (CMD53) when the register is bigger than 1 byte.
Generally registers are 4 bytes but 2-byte registers are possible.
So DMA of 4 bytes or less can happen. When 32-bit DMA is used,
the data alignment must be 4, so 4-byte transfers won't cause a
problem, but a 2-byte transfer could. However with the introduction
of 64-bit DMA, the data alignment for 64-bit DMA was made 8 bytes,
so all 4-byte transfers not on 8-byte boundaries get "split" into
a 4-byte chunk and a 0-byte chunk, thereby hitting the bug.
In fact, a closer look at the SDHCI specs indicates that only the
descriptor table requires 8-byte alignment for 64-bit DMA. That
will be dealt with in a separate patch, but the potential for a
2-byte access remains, so this fix is needed anyway.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.19+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The driver may not be able to set the power correctly but that
is not a reason to BUG().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Venu Byravarasu <vbyravarasu@nvidia.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
This is a trivial patch which fixes printed strings split across two
or more lines in the source. I tried to grep for some error output*,
but I couldn't find it easily because it was broken across multiple
lines. This patch makes my life easier.
* in particular "Timeout waiting for hardware interrupt."
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
CMD19 tuning is also available for DDR50 mode.
Signed-off-by: Weijun Yang <york.yang@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
sdhci_init() clears all irqs and then enables only the needed irqs, so
logically sdhci_init() should be called before requesting the irq.
If not, some irqs may be triggered and handled wrongly. Consider the
following: after the irq has been requested, if the SDIO card interrupt
is enabled, an SD card in the slot will trigger a flood of interrupts
(SDHCI_INT_CARD_INT), because at this point the vmmc regulator has not
yet been restored and there is no voltage supply for the SD card, so the
data0~data3 pins change and stay low, causing SDHCI_INT_CARD_INT to be
raised ceaselessly. Since we have already requested the irq, the system
is kept busy handling this endless irq and cannot respond to other
events.
So call sdhci_init() before requesting the irq in the SD resume path.
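A sketch of the corrected order in the resume path (error handling
trimmed):

    sdhci_init(host, (host->mmc->pm_flags & MMC_PM_KEEP_POWER));

    ret = request_threaded_irq(host->irq, sdhci_irq, sdhci_thread_irq,
                               IRQF_SHARED, mmc_hostname(host->mmc), host);
    if (ret)
            return ret;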
Signed-off-by: Haibo Chen <haibo.chen@freescale.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The Atmel sdhci device needs a new quirk. sdhci_set_clock() sets the
Clock Control Register to 0 before computing the new value and writing
it. This disables the internal clock, which triggers a reset mechanism.
If we write the new value before this reset mechanism has completed, it
will prevent the internal clock from stabilising, so a delay is needed.
This delay is about 2-3 cycles of the base clock. To be safe, a 1 ms
delay is used.
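A hedged sketch of how the quirk is consumed in sdhci_set_clock() (the
quirk name matches the SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST
referenced earlier in this series; the exact placement may differ):

    sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);

    if (host->quirks2 & SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST)
            mdelay(1);      /* ~2-3 base-clock cycles; 1 ms is safely above */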
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
commit bb8175a8aa ("mmc: sdhci: clarify DDR timing mode between
SD-UHS and eMMC") added MMC_DDR52 as eMMC's DDR mode to be
distinguished from SD-UHS, but it missed setting driver type for
MMC_DDR52 timing mode.
So sometimes we get the following error on Marvell BG2Q DMP board:
[ 1.559598] mmcblk0: error -84 transferring data, sector 0, nr 8, cmd
response 0x900, card status 0xb00
[ 1.569314] mmcblk0: retrying using single block read
[ 1.575676] mmcblk0: error -84 transferring data, sector 2, nr 6, cmd
response 0x900, card status 0x0
[ 1.585202] blk_update_request: I/O error, dev mmcblk0, sector 2
[ 1.591818] mmcblk0: error -84 transferring data, sector 3, nr 5, cmd
response 0x900, card status 0x0
[ 1.601341] blk_update_request: I/O error, dev mmcblk0, sector 3
This patch fixes this by adding the missing driver type setting.
Fixes: bb8175a8aa ("mmc: sdhci: clarify DDR timing mode ...")
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Currently, dma_map_sg() may be executed twice for the same mrq->data
when the mmc subsystem prepares a new request, and the following log
shows up:
sdhci[sdhci_pre_dma_transfer] invalid cookie: 24, next-cookie 25
In this case, mrq->data is mapped to one DMA memory region (1) in
sdhci_pre_req() and to another DMA memory region (2) in
sdhci_prepare_data(). But the driver only unmaps region (2), and
region (1) is never unmapped, which causes the DMA memory leak.
This patch uses another method to map the DMA memory for mrq->data,
which fixes this DMA memory leak issue.
Fixes: 348487cb28 ("mmc: sdhci: use pipeline mmc requests to improve performance")
Reported-and-tested-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Haibo Chen <haibo.chen@freescale.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
In programmable mode, if the clock frequency is too high, the divider
can be too small to meet the clock frequency requirement, especially
when initializing the SD card. In this case, switch to the divided
clock mode.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>