The function hv_signal_on_read was defined in hyperv.h but
used in only one place in the ring_buffer code. It is clearer
to just move it inline there.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The elements ring_data_start_offset and priv_write_index
are not used.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With the new iterator functions (and the double mapping), the ring
buffer read function can be greatly simplified.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Presently, the order of the block devices listed in /proc/devices is not
entirely sequential. If a block device has a major number greater than
BLKDEV_MAJOR_HASH_SIZE (255), it will be ordered as if its major
were taken modulo 255. For example, 511 appears after 1.
This patch cleans that up and prints each major number in the correct
order, regardless of where they are stored in the hash table.
In order to do this, we introduce BLKDEV_MAJOR_MAX as an artificial
limit (chosen to be 512). It will then print all devices in major
number order from 0 to the maximum.
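A sketch of the resulting walk, assuming the existing major_names
hash table and its major_to_index() bucket helper (illustrative, not
the exact hunk):

    void blkdev_show(struct seq_file *seqf, off_t offset)
    {
        struct blk_major_name *dp;

        /*
         * Walk only the bucket this major hashes to, and print an
         * entry only when the major matches exactly, so the caller
         * can iterate offset from 0 to BLKDEV_MAJOR_MAX - 1 and get
         * true major order.
         */
        for (dp = major_names[major_to_index(offset)]; dp; dp = dp->next)
            if (dp->major == offset)
                seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
    }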
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Presently, the order of the char devices listed in /proc/devices is not
entirely sequential. If a char device has a major number greater than
CHRDEV_MAJOR_HASH_SIZE (255), it will be ordered as if its major
were taken modulo 255. For example, 511 appears after 1.
This patch cleans that up and prints each major number in the correct
order, regardless of where they are stored in the hash table.
In order to do this, we introduce CHRDEV_MAJOR_MAX as an artificial
limit (chosen to be 511). It will then print all devices in major
number order from 0 to the maximum.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We've run into problems with running out of dynamically assigned
char device majors, particularly on automated test systems with
all-yes-configs. Roughly 40 dynamic assignments can be made with
such kernels at this time, while space is reserved for only 20.
Currently, the kernel only prints a warning when dynamic allocation
overflows the reserved region. And when this happens drivers that have
fixed assignments can randomly fail depending on the order of
initialization of other drivers. Thus, adding a new char device can cause
unexpected failures in completely unrelated parts of the kernel.
This patch solves the problem by extending dynamic major number
allocations down from 511 once the 234-254 region fills up. Fixed
majors already exist above 255 so the infrastructure to support
high number majors is already in place. The patch reserves an
additional 128 major numbers which should hopefully last us a while.
Kernels that don't require more than 20 dynamic majors assigned (which
is pretty typical) should not be affected by this change.
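A sketch of the extended search, assuming the existing chrdevs[]
hash table and hypothetical CHRDEV_MAJOR_DYN_EXT_START/END markers
for the new 384-511 region (names illustrative):

    static int find_dynamic_major(void)
    {
        int i;
        struct char_device_struct *cd;

        /* first try the classic 234-254 dynamic region, top down */
        for (i = ARRAY_SIZE(chrdevs) - 1; i >= CHRDEV_MAJOR_DYN_END; i--)
            if (chrdevs[i] == NULL)
                return i;

        /* then fall back to the extended region, 511 down to 384 */
        for (i = CHRDEV_MAJOR_DYN_EXT_START;
             i >= CHRDEV_MAJOR_DYN_EXT_END; i--) {
            for (cd = chrdevs[major_to_index(i)]; cd; cd = cd->next)
                if (cd->major == i)
                    break;
            if (cd == NULL)
                return i;
        }

        return -EBUSY;
    }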
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Link: https://lkml.org/lkml/2017/6/4/107
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The device 'new_id' interface is useful for testing not-yet-published
hardware on older kernels and for internally used device ids on
simulation platforms.
However, with the device configuration currently held in the device_id
driver data as a pointer to a mei_cfg structure, this is hard: one
needs to locate the address of the correct structure.
A recommended way of doing that is to use an index instead of a
pointer.
This patch adds a new list of configurations, mei_cfg_list[],
indexed via enum mei_cfg_idx.
In addition it cleans up the ICH platform naming, renaming the
legacy generation to ich and what was ich to ich10.
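A sketch of the resulting indexed lookup (entry names and the
device-table line are assumed for illustration):

    enum mei_cfg_idx {
        MEI_ME_UNDEF_CFG,
        MEI_ME_ICH_CFG,    /* the old "legacy" generation */
        MEI_ME_ICH10_CFG,  /* what used to be called ich */
        MEI_ME_NUM_CFG,
    };

    static const struct mei_cfg *const mei_cfg_list[MEI_ME_NUM_CFG] = {
        [MEI_ME_ICH_CFG]   = &mei_me_ich_cfg,
        [MEI_ME_ICH10_CFG] = &mei_me_ich10_cfg,
    };

    /* the pci_device_id table now carries the index, not a pointer: */
    { PCI_VDEVICE(INTEL, dev_id), .driver_data = MEI_ME_ICH10_CFG },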
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On some platforms, currently Broxton, Apollo Lake and Kaby Lake,
the ME FW may be busy with internal bookkeeping and answer the
start message late.
As a mitigation, the driver requests asynchronous probing to avoid
stalling the overall boot process. For example, on an Apollo Lake
platform the overall boot time was reduced from ~0.9 to ~0.6 seconds
on average.
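The request is a one-line change in the driver structure; a sketch,
assuming the standard driver-core probe_type field:

    static struct pci_driver mei_me_driver = {
        .name     = KBUILD_MODNAME,
        .id_table = mei_me_pci_tbl,
        .probe    = mei_me_probe,
        /*
         * probe in parallel with the rest of boot instead of
         * making everything wait for a slow ME FW start answer
         */
        .driver.probe_type = PROBE_PREFER_ASYNCHRONOUS,
    };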
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It has never been used since binder was added to the mainline
Linux tree.
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Arve Hjønnevåg" <arve@android.com>
Cc: Riley Andrews <riandrews@android.com>
Cc: devel@driverdev.osuosl.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use the rlimit() helper instead of manually writing out the whole
chain from the current task to rlim_cur.
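For example (RLIMIT_NICE chosen for illustration):

    /* before: open-coded chain through the task's signal struct */
    rlim = current->signal->rlim[RLIMIT_NICE].rlim_cur;

    /* after: the dedicated helper does the same walk */
    rlim = rlimit(RLIMIT_NICE);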
Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A race existed where one thread could register
a death notification for a node, while another
thread was cleaning up that node and sending
out death notifications for its references,
causing simultaneous access to ref->death
because different locks were held.
Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When printing transactions there were several race conditions
that could cause a stale pointer to be dereferenced. Fixed by
reading the pointer once and using it if valid (which is
safe). The transaction buffer also needed protection via proc
lock, so it is only printed if we are holding the correct lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use proc->outer_lock to protect the binder_ref structure.
The outer lock allows functions operating on the binder_ref
to do nested acquires of node and inner locks as necessary
to attach refs to nodes atomically.
Binder refs must never be accessed without holding the
outer lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use the inner lock to protect thread accounting fields in
proc structure: max_threads, requested_threads,
requested_threads_started and ready_threads.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This makes future changes to priority inheritance
easier, since we want to be able to look at a thread's
transaction stack when selecting a thread to inherit
priority for.
It also allows us to take just a single lock in a
few paths, where we used to take two in succession.
Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
proc->threads will need to be accessed with higher-level
locks of other processes held, so use proc->inner_lock
to protect it. proc->tmp_ref now also needs to be protected
by proc->inner_lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When locks for binder_ref handling are added, proc->nodes
will need to be modified while holding the outer lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
node->node_lock is used to protect elements of node. No
need to acquire for fields that are invariant: debug_id,
ptr, cookie.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The todo lists in the proc, thread, and node structures
are accessed by other procs/threads to place work
items on the queue.
The todo lists are protected by the new proc->inner_lock.
No locks should ever be nested under these locks. As the
name suggests, an outer lock will be introduced in
a later patch.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For correct behavior we need to hold the inner lock when
dequeuing and processing node work in binder_thread_read.
We now hold the inner lock when we enter the switch statement
and release it after processing anything that might be
affected by other threads.
We also need to hold the inner lock to protect the node
weak/strong ref tracking fields as long as node->proc
is non-NULL (if it is NULL then we are guaranteed that
we don't have any node work queued).
This means that other functions that manipulate these fields
must hold the inner lock. Refactored these functions to use
the inner lock.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are 3 main spinlocks which must be acquired in this
order:
1) proc->outer_lock : protects most fields of binder_proc,
binder_thread, and binder_ref structures. binder_proc_lock()
and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node.
binder_node_lock() and binder_node_unlock() are
used to acq/rel
3) proc->inner_lock : protects the thread and node lists
(proc->threads, proc->nodes) and all todo lists associated
with the binder_proc (proc->todo, thread->todo,
proc->delivered_death and node->async_todo).
binder_inner_proc_lock() and binder_inner_proc_unlock()
are used to acq/rel
Any lock under procA must never be nested under any lock at the same
level or below on procB.
Functions that require a lock to be held on entry indicate the
required lock in the suffix of the function name:
foo_olocked() : requires node->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_iolocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock
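An illustrative nesting that respects this order, using the helpers
named above (not a real code path):

    binder_proc_lock(proc);        /* 1) proc->outer_lock */
    binder_node_lock(node);        /* 2) node->lock */
    binder_inner_proc_lock(proc);  /* 3) proc->inner_lock */

    /* attach refs, queue todo work, etc. */

    binder_inner_proc_unlock(proc);
    binder_node_unlock(node);
    binder_proc_unlock(proc);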
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When obtaining a node via binder_get_node(),
binder_get_node_from_ref() or binder_new_node(),
increment node->tmp_refs to take a
temporary reference on the node to ensure the node
persists while being used. binder_put_node() must
be called to remove the temporary reference.
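Usage then follows this pattern (sketch):

    node = binder_get_node(proc, ptr); /* takes a tmp_refs reference */
    if (node) {
        /* node cannot be freed while we work with it here */
        binder_put_node(node);         /* drops the temporary reference */
    }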
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Once locks are added, binder_refs will only be accessed
safely with the proc lock held. Refactor the inc/dec paths
to make them atomic with the binder_get_ref* paths and
node inc/dec. For example, instead of:
ref = binder_get_ref(proc, handle, strong);
...
binder_dec_ref(ref, strong);
we now have:
ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);
Since the actual ref is no longer exposed to callers, a
new struct binder_ref_data is introduced which can be used
to return a copy of ref state.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
binder_thread and binder_proc may be accessed by other
threads when processing a transaction. Therefore they
must be prevented from being freed while a transaction
is in progress that references them.
This is done by introducing a temporary reference
counter for threads and procs that indicates that the
object is in use and must not be freed. binder_thread_dec_tmpref()
and binder_proc_dec_tmpref() are used to decrement
the temporary reference.
It is safe to free a binder_thread if there
is no reference and it has been released
(indicated by thread->is_dead).
It is safe to free a binder_proc if it has no
remaining threads and no reference.
A spinlock is added to the binder_transaction
to safely access and set references for t->from
and for debug code to safely access t->to_thread
and t->to_proc.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When initiating a transaction, the target_node must
have a strong ref on it. Then we take a second
strong ref to make sure the node survives until the
transaction is complete.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since errors are tracked in the return_error/return_error2
fields of the binder_thread object and BR_TRANSACTION_COMPLETEs
can be tracked either in those fields or via the thread todo
work list, it is possible for errors to be reported ahead
of the associated txn complete.
Use the thread todo work list for errors to guarantee
order. Also changed binder_send_failed_reply to pop
the transaction even if it failed to send a reply.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
binder_pop_transaction needs to be split into 2 pieces to
allow the proc lock to be held on entry to dequeue the
transaction stack, but no lock when kfree'ing the transaction.
Split into binder_pop_transaction_locked and binder_free_transaction
(the actual locks are still to be added).
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The log->next index for the transaction log was
not protected when incremented. This led to a
case where log->next++ resulted in an index
larger than ARRAY_SIZE(log->entry) and eventually
a bad access to memory.
Fixed by making the log index an atomic64 and converting
it to an array index by using "% ARRAY_SIZE(log->entry)".
Also added a "complete" field to the log entry, which is
written last to tell the print code whether the
entry is complete.
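A sketch of the scheme described above (field names assumed):

    struct binder_transaction_log {
        atomic64_t cur;  /* free-running; wrapped only at use */
        struct binder_transaction_log_entry entry[32];
    };

    e = &log->entry[atomic64_inc_return(&log->cur) %
                    ARRAY_SIZE(log->entry)];
    e->complete = false; /* slot is being reused; mark in progress */
    /* fill in the entry fields */
    smp_wmb();           /* make the fields visible before the flag */
    e->complete = true;  /* written last, as the print code's cue */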
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Display information about allocated/free space whenever
binder buffer allocation fails on synchronous
transactions.
Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Siqi Lin <siqilin@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adds protection against malicious user code freeing
the same buffer at the same time, which could cause
a crash. This cannot happen under normal use.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
node is always non-NULL in binder_get_ref_for_node, so the
conditional and else clause are not needed.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The looper member of struct binder_thread is a bitmask
of control bits. All of the existing bits are modified
by the affected thread except for BINDER_LOOPER_STATE_NEED_RETURN
which can be modified in binder_deferred_flush() by
another thread.
To avoid adding a spinlock around all read-mod-writes to
modify a bit, the BINDER_LOOPER_STATE_NEED_RETURN flag
is replaced by a separate field in struct binder_thread.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently, the transaction complete work item is queued
after the transaction. This means that it is possible
for the transaction to be handled and a reply to be
enqueued in the current thread before the transaction
complete is enqueued, which violates the protocol
with userspace, which may not expect the transaction
complete. Fixed by always enqueuing the transaction
complete first.
Also, once the transaction is enqueued, it is unsafe
to access since it might be freed. Currently,
t->flags is accessed to determine whether a sync
wake is needed. Changed to access tr->flags
instead.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In binder_thread_read, the BINDER_WORK_NODE command is used
to communicate the references on the node to userspace. It
can take a couple of iterations in the loop to construct
the list of commands for user space. When locking is added,
the lock would need to be released on each iteration, which
means the state could change. The work item is not dequeued
during this process, which precludes a simpler queue-management
scheme that just dequeues up front and handles the work item.
Fixed by changing the BINDER_WORK_NODE algorithm in
binder_thread_read to determine which commands to send
to userspace atomically in 1 pass so it stays consistent
with the kernel view.
The work item is now dequeued immediately since only
1 pass is needed.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add additional information to determine the cause of binder
failures. Adds the following to failed transaction log and
kernel messages:
return_error : value returned for transaction
return_error_param : errno returned by binder allocator
return_error_line : line number where error detected
Also, return BR_DEAD_REPLY if an allocation error indicates
a dead proc (-ESRCH).
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use an atomic for binder_last_id to avoid locking it.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use atomics for stats to avoid needing to lock for
increments/decrements.
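For example (counter names assumed), an increment then needs no lock:

    /*
     * was: binder_stats.obj_created[BINDER_STAT_TRANSACTION]++
     * under the global lock; now a lock-free atomic
     */
    atomic_inc(&binder_stats.obj_created[BINDER_STAT_TRANSACTION]);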
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add binder_dead_nodes_lock, binder_procs_lock, and
binder_context_mgr_node_lock to protect the associated global lists.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With the global lock there was a mechanism to access binder
driver debugging information with the lock disabled, to
debug deadlocks or other issues.
This mechanism is rarely (if ever) used anymore
and wasn't needed during the development of
fine-grained locking in the binder driver.
Remove it.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Move the binder allocator functionality to its own file.
Continuation of splitting the binder allocator from the binder
driver. Split binder_alloc functions from normal binder functions.
Add kernel-doc comments to functions declared extern in
binder_alloc.h.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Continuation of splitting the binder allocator from the binder
driver. Separate binder_alloc functions from normal binder
functions. Protect the allocator with a separate mutex.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The buffer's transaction has already been freed before
binder_deferred_release. No need to do it again.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The binder allocator is logically separate from the rest
of the binder driver. Separate the data structures
to prepare for splitting them into a separate file with
separate locking.
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use wake_up_interruptible_sync() to hint to the scheduler that binder
transactions are synchronous wakeups. Disable preemption while waking
to avoid ping-ponging on the binder lock.
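The waking side then looks roughly like this (context assumed):

    if (reply || !(t->flags & TF_ONE_WAY)) {
        /* sync wakeup: the waker is about to block on the reply */
        preempt_disable();
        wake_up_interruptible_sync(target_wait);
        preempt_enable_no_resched();
    } else {
        wake_up_interruptible(target_wait);
    }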
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Omprakash Dhyade <odhyade@codeaurora.org>
Cc: stable <stable@vger.kernel.org> # 4.4+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The binder allocator assumes that the thread that
called binder_open will never die for the lifetime of
that proc. That thread is normally the group_leader;
however, it may not be. Use the group_leader instead
of current.
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org> # 4.4+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit a906d6931f.
The patch introduced a race in the binder driver. An attempt to fix the
race was submitted in "[PATCH v2] android: binder: fix dangling pointer
comparison", however the conclusion in the discussion for that patch
was that the original patch should be reverted.
The reversion is being done as part of the fine-grained locking
patchset since the patch would need to be refactored when
proc->vmm_vm_mm is removed from struct binder_proc and added
to the binder allocator.
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org> # 4.6+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'standardize-docs' of git://git.lwn.net/linux
Pull documentation format standardization from Jonathan Corbet:
"This series converts a number of top-level documents to the RST format
without incorporating them into the Sphinx tree. The hope is to bring
some uniformity to kernel documentation and, perhaps more importantly,
have our existing docs serve as an example of the desired formatting
for those that will be added later.
Mauro has gone through and fixed up a lot of top-level documentation
files to make them conform to the RST format, but without moving or
renaming them in any way. This will help when we incorporate the ones
we want to keep into the Sphinx doctree, but the real purpose is to
bring a bit of uniformity to our documentation and let the top-level
docs serve as examples for those writing new ones"
* tag 'standardize-docs' of git://git.lwn.net/linux: (84 commits)
docs: kprobes.txt: Fix whitespacing
tee.txt: standardize document format
cgroup-v2.txt: standardize document format
dell_rbu.txt: standardize document format
zorro.txt: standardize document format
xz.txt: standardize document format
xillybus.txt: standardize document format
vfio.txt: standardize document format
vfio-mediated-device.txt: standardize document format
unaligned-memory-access.txt: standardize document format
this_cpu_ops.txt: standardize document format
svga.txt: standardize document format
static-keys.txt: standardize document format
smsc_ece1099.txt: standardize document format
SM501.txt: standardize document format
siphash.txt: standardize document format
sgi-ioc4.txt: standardize document format
SAK.txt: standardize document format
rpmsg.txt: standardize document format
robust-futexes.txt: standardize document format
...
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull random updates from Ted Ts'o:
"Add wait_for_random_bytes() and get_random_*_wait() functions so that
callers can more safely get random bytes if they can block until the
CRNG is initialized.
Also print a warning if get_random_*() is called before the CRNG is
initialized. By default, only one single-line warning will be printed
per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a
warning will be printed for each function which tries to get random
bytes before the CRNG is initialized. This can get spammy for certain
architecture types, so it is not enabled by default"
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: reorder READ_ONCE() in get_random_uXX
random: suppress spammy warnings about unseeded randomness
random: warn when kernel uses unseeded randomness
net/route: use get_random_int for random counter
net/neighbor: use get_random_u32 for 32-bit hash random
rhashtable: use get_random_u32 for hash_rnd
ceph: ensure RNG is seeded before using
iscsi: ensure RNG is seeded before use
cifs: use get_random_u32 for 32-bit lock random
random: add get_random_{bytes,u32,u64,int,long,once}_wait family
random: add wait_for_random_bytes() API