In snd_card_disconnect(), we set the card->shutdown flag at the
beginning, then call the disconnect callbacks, and only at the end
sync with the card->power_ref_sleep waiters. A callback may delete a
kctl element, and this can lead to a deadlock when the device was in
the suspended state. Namely:
* A process waits for power-up at snd_power_ref_and_wait() in
  snd_ctl_info() or read/write(), while holding card->controls_rwsem.
* The system gets disconnected meanwhile, and the driver tries to
  delete a kctl via snd_ctl_remove*(); this needs to take
  card->controls_rwsem again, but the rwsem is already held by the
  process above. Since the sleeper is never woken up, this deadlocks.
An easy fix is to wake up the sleepers right after setting the
card->shutdown flag, before processing the driver disconnect
callbacks. Then all sleepers abort immediately, and the code flows
again.
So, basically this patch moves the wake-up of card->power_ref_sleep to
the right timing. While we're at it, just to be sure, call
wake_up_all() instead of wake_up(), although we don't use exclusive
waiters on this queue for now.
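A minimal sketch of the idea (abbreviated, not the verbatim upstream
diff):

	/* in snd_card_disconnect(), right after flagging the shutdown: */
	card->shutdown = 1;
	/* wake snd_power_ref_and_wait() sleepers now, before the disconnect
	 * callbacks try to re-take card->controls_rwsem; with shutdown set,
	 * the sleepers bail out instead of waiting for a power-up
	 */
	wake_up_all(&card->power_ref_sleep);
	/* ... then proceed with the per-device disconnect callbacks ... */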
Link: https://bugzilla.kernel.org/show_bug.cgi?id=218816
Cc: <stable@vger.kernel.org>
Reviewed-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20240510101424.6279-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The *-objs suffix is reserved rather for (user-space) host programs,
while the *-y suffix is usually used for kernel drivers (although
*-objs still works for that purpose for now).
Let's correct the old usages of *-objs in the Makefiles.
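For illustration, with a hypothetical module name:

	# old style: *-objs, rather reserved for host programs
	snd-foo-objs := foo-main.o foo-proc.o
	obj-$(CONFIG_SND_FOO) += snd-foo.o

	# preferred style for kernel modules: *-y
	snd-foo-y := foo-main.o foo-proc.o
	obj-$(CONFIG_SND_FOO) += snd-foo.o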
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20240507135513.14919-2-tiwai@suse.de
Some of the data used for testing is immutable. In that case, the
const qualifier allows the loader to place it in a read-only segment.
Fixes: 3e39acf56e ("ALSA: core: Add sound core KUnit test")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Acked-by: Ivan Orlov <ivan.orlov0322@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Message-ID: <20240425233653.218434-1-o-takashi@sakamocchi.jp>
Don't populate the read-only array buf_samples on the stack at
run time; instead, make it static const.
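The general pattern, as a sketch with made-up element type and values:

	/* before: the array is rebuilt on the stack at every call */
	int buf_samples[] = { 8, 16, 32, 64 };

	/* after: emitted once into a read-only data section */
	static const int buf_samples[] = { 8, 16, 32, 64 };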
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Acked-by: Ivan Orlov <ivan.orlov0322@gmail.com>
Reviewed-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Message-ID: <20240425160754.114716-1-colin.i.king@gmail.com>
Instead of restarting the list iteration after removing an entry, use
list_for_each_entry_safe(), which allows the walk to continue without
starting over.
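A minimal sketch of the pattern, with hypothetical list and entry
types:

	struct item {
		struct list_head list;
	};

	static void drop_all(struct list_head *head)
	{
		struct item *p, *n;

		/* the _safe variant caches the next entry, so the current
		 * one can be deleted and freed without restarting the walk
		 */
		list_for_each_entry_safe(p, n, head, list) {
			list_del(&p->list);
			kfree(p);
		}
	}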
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Jaroslav Kysela <perex@perex.cz>
Message-ID: <20240424145020.1057216-1-andriy.shevchenko@linux.intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Although the purpose of the dummy seq client is a direct pass-through,
it's sometimes helpful for debugging if it can convert to a certain
UMP MIDI version. This patch adds an option to specify the UMP event
conversion. By default, the conversion is skipped and events are
passed through as-is, while the user can pass ump=1 or ump=2 to
enforce conversion to the UMP MIDI 1.0 or MIDI 2.0 format,
respectively.
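A rough sketch of how such a module option is typically wired up (not
the exact patch):

	static int ump;	/* 0: no conversion (default), 1: MIDI 1.0, 2: MIDI 2.0 */
	module_param(ump, int, 0444);
	MODULE_PARM_DESC(ump, "UMP conversion (0 = none, 1 = MIDI1, 2 = MIDI2)");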
Message-ID: <20240419101105.15571-1-tiwai@suse.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The conversion from MIDI2 to MIDI1 UMP messages had a leftover
artifact (a superfluous bit shift), which resulted in a bogus type
check and led to empty outputs. Let's fix it.
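Purely to illustrate the class of bug (not the actual code): the UMP
message type sits in the top four bits of the first 32-bit word, so
one extra shift makes the type comparison never match:

	/* buggy: shifted once too often, type always reads as 0 */
	type = (ump[0] >> 28) >> 4;

	/* intended */
	type = ump[0] >> 28;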
Fixes: e9e02819a9 ("ALSA: seq: Automatic conversion of UMP events")
Cc: <stable@vger.kernel.org>
Link: https://github.com/alsa-project/alsa-utils/issues/262
Message-ID: <20240419100442.14806-1-tiwai@suse.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Many modern codecs support 705.6kHz and 768kHz sample rates. Current HW
params fail to set 705.6kHz and 768kHz sample rates as these are not in the
known-rates list.
Add these new rates to the known-rates list to allow them.
Also add defines in pcm.h so that drivers can use them.
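A hedged sketch of how a driver would then advertise the new rates
(assuming the defines follow the existing SNDRV_PCM_RATE_* naming):

	static const struct snd_pcm_hardware foo_pcm_hw = {
		/* ... */
		.rates = SNDRV_PCM_RATE_8000_384000 |
			 SNDRV_PCM_RATE_705600 | SNDRV_PCM_RATE_768000,
		.rate_min = 8000,
		.rate_max = 768000,
		/* ... */
	};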
Signed-off-by: Pavel Hofman <pavel.hofman@ivitera.com>
Reviewed-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Message-ID: <20240416121726.628679-3-pavel.hofman@ivitera.com>
The recent conversion to automatic kfree() cleanup forgot to mark a
variable with __free(kfree), leading to memory leaks. Fix it.
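For reference, the pattern in question looks roughly like this
(hypothetical buffer):

	char *buf __free(kfree) = kmalloc(size, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* kfree(buf) runs automatically on every return path -- but only
	 * because the variable carries the __free(kfree) annotation
	 */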
Fixes: 1052d98822 ("ALSA: control: Use automatic cleanup of kfree()")
Reported-by: Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
Closes: https://lore.kernel.org/r/c1e2ef3c-164f-4840-9b1c-f7ca07ca422a@alu.unizg.hr
Message-ID: <20240320062722.31325-1-tiwai@suse.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Clang prior to 17.0.0 has a bug in its asm goto jump scope analysis to
determine that no variables with the cleanup attribute are skipped by an
indirect jump. Instead of only checking the scope of each label that is
a possible target of each asm goto statement, it checks the scope of
every label, which can cause an error when a variable with the cleanup
attribute is used between two asm goto statements with different scopes,
even if they have completely different label targets:
sound/core/hwdep.c:273:8: error: cannot jump from this asm goto statement to one of its possible targets
if (get_user(device, (int __user *)arg))
^
arch/powerpc/include/asm/uaccess.h:295:5: note: expanded from macro 'get_user'
__get_user(x, _gu_addr) : \
^
arch/powerpc/include/asm/uaccess.h:283:2: note: expanded from macro '__get_user'
__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
^
arch/powerpc/include/asm/uaccess.h:199:3: note: expanded from macro '__get_user_size_allowed'
__get_user_size_goto(x, ptr, size, __gus_failed); \
^
arch/powerpc/include/asm/uaccess.h:187:10: note: expanded from macro '__get_user_size_goto'
case 1: __get_user_asm_goto(x, (u8 __user *)ptr, label, "lbz"); break; \
^
arch/powerpc/include/asm/uaccess.h:158:2: note: expanded from macro '__get_user_asm_goto'
asm_volatile_goto( \
^
include/linux/compiler_types.h:366:33: note: expanded from macro 'asm_volatile_goto'
#define asm_volatile_goto(x...) asm goto(x)
^
sound/core/hwdep.c:291:9: note: possible target of asm goto statement
if (put_user(device, (int __user *)arg))
^
arch/powerpc/include/asm/uaccess.h:66:5: note: expanded from macro 'put_user'
__put_user(x, _pu_addr) : -EFAULT; \
^
arch/powerpc/include/asm/uaccess.h:52:9: note: expanded from macro '__put_user'
\
^
sound/core/hwdep.c:276:4: note: jump bypasses initialization of variable with __attribute__((cleanup))
scoped_guard(mutex, &register_mutex) {
^
include/linux/cleanup.h:169:20: note: expanded from macro 'scoped_guard'
for (CLASS(_name, scope)(args), \
To avoid this issue, move the put_user() call out of the scoped_guard()
scope, which allows the asm goto scope analysis to see that the variable
with the cleanup attribute will never be skipped by the asm goto
statements.
There should be no functional change because prior to the refactoring,
put_user() was not called under register_mutex, so this call does not
even need to be in the scoped_guard() in the first place.
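In shape, the change is roughly (abbreviated, not the verbatim diff):

	/* before: put_user() expands to asm goto inside the guarded scope */
	scoped_guard(mutex, &register_mutex) {
		/* ... look up the device ... */
		if (put_user(device, (int __user *)arg))
			return -EFAULT;
	}

	/* after: only the lookup stays under the mutex */
	scoped_guard(mutex, &register_mutex) {
		/* ... look up the device ... */
	}
	if (put_user(device, (int __user *)arg))
		return -EFAULT;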
Fixes: e6684d08cc ("ALSA: hwdep: Use guard() for locking")
Closes: https://github.com/ClangBuiltLinux/linux/issues/2003
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20240301-fix-snd-hwdep-guard-v1-1-6aab033f3f83@kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
A couple of functions that use snd_card_ref() and *_unref() are also
cleaned up with a defined class.
Only the code refactoring, and no functional changes.
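For reference, the basic pattern looks like this (illustrative
function and lock names):

	static DEFINE_MUTEX(foo_mutex);

	static int foo_update(struct snd_card *card)
	{
		guard(mutex)(&foo_mutex);	/* unlocked automatically at scope exit */
		if (!card)
			return -ENODEV;		/* no explicit mutex_unlock() needed */
		/* ... */
		return 0;
	}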
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-25-tiwai@suse.de
The setup_mutex in the PCM OSS code can be simplified with guard().
(params_lock is tough and not trivial to convert, though.)
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-24-tiwai@suse.de
Define guard() usage for PCM stream locking and use it in appropriate
places.
The pair of snd_pcm_stream_lock() and snd_pcm_stream_unlock() can now
be expressed with guard(pcm_stream_lock).
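A minimal sketch of such a definition and its use, assuming the
DEFINE_GUARD() helper from <linux/cleanup.h>:

	DEFINE_GUARD(pcm_stream_lock, struct snd_pcm_substream *,
		     snd_pcm_stream_lock(_T), snd_pcm_stream_unlock(_T))

	/* caller side */
	guard(pcm_stream_lock)(substream);
	/* ... the stream stays locked until the end of the scope ... */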
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-23-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-22-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-21-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-20-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-19-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-18-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-17-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-16-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-15-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-14-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-13-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-12-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-11-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
There are a few remaining explicit mutex and spinlock calls, and those
are the places where temporary unlock/relocking happens -- which
guard() doesn't cover well yet.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-10-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
The lops calls under multiple rwsems are factored out as a simple
macro, so that it can be called easily from snd_ctl_dev_register()
and snd_ctl_dev_disconnect().
There are a few remaining explicit rwsem and spinlock calls, and those
are the places where the lock downgrade happens or where temporary
unlock/relocking happens -- which guard() doesn't cover well yet.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-9-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-8-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-7-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
There are still a few remaining explicit mutex_lock/unlock calls, and
those are for the places where we do temporary unlock/relock, which
doesn't fit well with the guard(), so far.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-6-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-5-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
To make the changes easier, some functions widen the scope of
register_mutex, but this shouldn't have any real impact on
performance.
Also, one code block was factored out as a function so that guard()
can be applied cleanly without much indentation.
There are still a few remaining explicit spin_lock/unlock calls, and
those are for the places where we do temporary unlock/relock, which
doesn't fit well with the guard(), so far.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-4-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
The explicit mutex_lock/unlock calls remain only in
snd_compress_wait_for_drain(), which does temporary unlock/relocking.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-3-tiwai@suse.de
We can simplify the code gracefully with new guard() macro and co for
automatic cleanup of locks.
Only the code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-2-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-5-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-4-tiwai@suse.de
Now we have a nice definition of CLASS(fd) that can be applied as a
cleanup for the fdget/fdput pairs in snd_pcm_link().
No functional changes, only code refactoring.
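Roughly, the pattern becomes (a sketch, assuming the struct fd layout
of this kernel version):

	CLASS(fd, f)(fd);		/* fdget(fd) now, fdput() at scope exit */
	if (!f.file)
		return -EBADF;
	/* ... use f.file without worrying about early returns ... */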
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-2-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-10-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-9-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-8-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-7-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-6-tiwai@suse.de