The ret variable is always set by the fn function.
There is no need to initialize it.
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Path verification is either done via dasd_eckd_read_conf() which is
triggered during online processing and resume or via
do_path_verification_work() which is triggered after path events.
The dasd_eckd_read_conf() version added paths unconditionally and did
not check if the path mask was empty. This led to devices having the
disconnected stop flag set but a valid path mask, so they were not
working although their paths had been validated successfully. After a
resume this state could not even be resolved by adding further paths.
Fix by checking for an empty path mask in dasd_eckd_read_conf() and
clearing the device stop bits for a newly added channel path.
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For a valid PAV assignment the DASD driver needs to notice possibly
changed configuration data. Thus a failure to read the configuration
data should also fail the device restore to prevent an invalid PAV
assignment. The failed device may get restored after additional paths
become available later on.
If the restore fails after the device was added to the lcu alias
handling it needs to be removed from the alias handling before exiting
the restore function.
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The configuration data is stored per path and also the first valid
configuration data per device. When dasd_eckd_read_conf is called
again after a path was lost, the device configuration data is cleared
but possibly not the per path configuration data. This might lead to a
double free when the lost path becomes operational again.
Fix by clearing all per path configuration data when the first valid
configuration data is received and stored.
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
device_ops.c should only contain functions that are called by ccw device
drivers. Move the cio internal functions that handle unconditional
reserve + release to device_pgid.c.
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
device_ops.c should only contain functions that are called by ccw device
drivers. Move the cio internal function ccw_device_call_handler to
device_fsm.c where it's used. Remove some useless comments while at it.
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove all the casts to and from the machine check interruption code.
This patch changes struct mci to a union, which contains an anonymous
structure with the already known bits and in addition an unsigned
long field, which contains the raw machine check interruption code.
This allows us to simply assign and decode the interruption code value
without the need for all the casts we had before.
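A minimal sketch of such a union (field names illustrative, only the first
two MCIC bits are shown; the real definition carries the full set of bits):

  #include <linux/types.h>
  #include <linux/kernel.h>

  union mci {
          unsigned long val;              /* raw machine check interruption code */
          struct {
                  u64 sd :  1;            /* system damage */
                  u64 pd :  1;            /* instruction-processing damage */
                  u64    : 62;            /* remaining MCIC bits, omitted here */
          };
  };

  static void decode_mcic(unsigned long mcic)
  {
          union mci mci;

          mci.val = mcic;                 /* plain assignment, no casts needed */
          if (mci.sd)
                  panic("System damage machine check");
  }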
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
A summary unit check occurs when the lcu updates the PAV configuration,
e.g. the base PAV assignment or the PAV mode itself. This requires a reset
of the driver's internal pavgroups. Therefore the alias devices are
flushed and moved via a temporary list to the active_devices list
where they are not associated with a pavgroup. In conjunction with
updates to the base device the pavgroup may be removed since both
base_list and alias_list are empty. Unfortunately during alias flush
and move to the active_device list from alias_list the pavgroup
pointer is not deleted in the device private structure. This leads to
a list_del corruption if another lcu update tries to move the device
into the non-existent pavgroup.
Fix by removing the pavgroup pointer after the alias device was moved
to the active_devices list.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There is a system work queue, system_long_wq, for long-running work.
Use this work queue for the AP bus scan loop.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove the code for the really old crypto cards, PCICC and PCICA.
These cards have been out of service for several years.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Replace the two fields 'unregistered' and 'reset' with a device
state with 5 possible values. Introduce two events for the AP devices,
device poll and device timeout. With the state machine it is easier
to deal with device initialization and suspend/resume. Device polling
is simpler as well; the arcane 'flags' passing is gone.
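A hypothetical sketch of the reworked model (state and event names are
invented for illustration; only their number follows the text above):

  struct ap_device;                       /* real definition in the AP bus header */

  enum ap_dev_state {
          AP_STATE_RESET_START,
          AP_STATE_RESET_WAIT,
          AP_STATE_SETIRQ_WAIT,
          AP_STATE_IDLE,
          AP_STATE_WORKING,               /* 5 states replace 'unregistered'/'reset' */
  };

  enum ap_dev_event {
          AP_EVENT_POLL,                  /* device poll */
          AP_EVENT_TIMEOUT,               /* device timeout */
  };

  /* One handler per (state, event) pair; a jump table indexed by state and
   * event replaces the old 'flags' passing between the polling helpers. */
  typedef enum ap_dev_state (*ap_state_fn)(struct ap_device *ap_dev);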
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If an AP device is removed while messages are still pending, the requests
are cancelled by calling the message receive function with an error pointer
for the reply. The message type receive handler recognizes this and creates
a fake hardware error TYPE82_RSP_CODE / REP82_ERROR_MACHINE_FAILURE.
The message with the hardware error then causes a printk and a return
code of -EAGAIN.
Replace the intricate scheme with an explicit return code for this situation
and avoid the error message.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Set the configuration timer at the end of the ap_scan_bus function.
Make use of setup_timer and remove some unnecessary add_timer, mod_timer
and del_timer_sync calls. Replace the complicated timer_pending, mod_timer
and add_timer code in ap_config_time_store with a simple mod_timer.
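A rough sketch of the simplified pattern, using the timer API of that kernel
generation (variable and handler names assumed): mod_timer() also activates
an inactive timer, so no timer_pending()/add_timer() special casing is needed.

  #include <linux/timer.h>
  #include <linux/jiffies.h>

  static struct timer_list ap_config_timer;
  static int ap_config_time = 30;                 /* seconds, illustrative default */

  static void ap_scan_bus(unsigned long data);    /* bus scan, defined elsewhere */

  static void ap_timer_init(void)
  {
          setup_timer(&ap_config_timer, ap_scan_bus, 0);
  }

  static void ap_config_timer_rearm(void)
  {
          /* activates the timer if inactive, otherwise just updates the expiry */
          mod_timer(&ap_config_timer, jiffies + ap_config_time * HZ);
  }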
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If there are no devices on the AP bus there will not be a single
call to the per-device ap_bus_suspend function. Even worse,
there will not be a call to the per-device ap_bus_resume either
and the AP bus will fail to resume correctly.
Introduce a bus specific dev_pm_ops to suspend / resume the AP
bus related things. While we are at it, simplify the power management
code of the AP bus.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The ap_queue_message function will call device_unregister if the
unregistered field of the device has been set while trying to queue
a message. This races with other device_unregister calls, e.g. from
the ap_scan_bus. Remove the call to device_unregister from
ap_queue_message and let ap_scan_bus deal with it.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The ap_query_configuration function allocates the ap_config_info
structure, but there is no code to free the structure.
Allocate the structure in the module_init function and free it
again in module_exit.
While we are at it simplify a few functions in regard to the
ap configuration data.
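A minimal sketch of the changed lifetime (the structure name follows the
commit text; the variable name, header location and surrounding details are
assumptions):

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/slab.h>
  #include "ap_bus.h"                     /* assumed home of struct ap_config_info */

  static struct ap_config_info *ap_configuration;

  static int __init ap_module_init(void)
  {
          ap_configuration = kzalloc(sizeof(*ap_configuration), GFP_KERNEL);
          if (!ap_configuration)
                  return -ENOMEM;
          /* ... fill it via the configuration query, register the AP bus, ... */
          return 0;
  }

  static void __exit ap_module_exit(void)
  {
          /* ... unregister the AP bus ... */
          kfree(ap_configuration);        /* freed exactly once, at module exit */
  }

  module_init(ap_module_init);
  module_exit(ap_module_exit);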
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
ap_test_queue, ap_query_facilities, __ap_query_functions all use
the same PQAP(TAPQ) command. Consolidate the three into a single
ap_test_queue function that returns the AP status and the 64-bit
result. The exception table entry for PQAP(TAPQ) can be avoided
by setting the T bit for the APFT facility only if test_facility(15)
indicates that the facility is present.
Integrate ap_query_functions into ap_query_queue to avoid calling
PQAP(TAPQ) twice.
Reviewed-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The sclp console and tty code currently uses several message text
objects in a single message event to print several lines with one
SCCB. This causes the output of these lines to be fused into a
block which is noticeable when selecting text in the operating system
message panel.
Instead use several message events with a single message text object
each to print every line on its own. This changes the SCCB layout
from
struct sccb_header
struct evbuf_header
struct mdb_header
struct go
struct mto
...
struct mto
to
struct sccb_header
struct evbuf_header
struct mdb_header
struct go
struct mto
...
struct evbuf_header
struct mdb_header
struct go
struct mto
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce /sys/kernel/debug/diag_stat with statistics on how many diagnose
calls have been issued by each CPU in the system.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We often need to correlate an 8 bit path mask with the position
in a channel path array. Introduce and use pathmask_to_pos for
that task.
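One possible implementation of such a helper (the in-tree version may
differ): channel path 0 corresponds to mask bit 0x80, path 7 to 0x01.

  #include <linux/bitops.h>
  #include <linux/types.h>

  static inline u8 pathmask_to_pos(u8 mask)
  {
          return 8 - ffs(mask);   /* 0x80 -> 0, 0x40 -> 1, ..., 0x01 -> 7 */
  }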
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
During resume from hibernate we already reenable measurement block
updates on a per device basis. In addition to that we also need to
activate channel measurement globally using the set channel monitor
instruction.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Extended measurement blocks need to be 64 byte aligned. To achieve that,
128 bytes are allocated for each measurement block and an align callback
returns a 64 byte aligned address inside this area.
Replace this code with kmem_cache allocations.
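A sketch of the allocation scheme described above (cache name and block size
are illustrative): a slab cache with an explicit 64-byte alignment replaces
the "allocate 128 bytes and align by hand" approach.

  #include <linux/init.h>
  #include <linux/errno.h>
  #include <linux/slab.h>

  static struct kmem_cache *cmbe_cache;

  static int __init cmbe_cache_init(void)
  {
          /* name, object size, alignment, flags, constructor */
          cmbe_cache = kmem_cache_create("cmbe_cache", 64, 64, 0, NULL);
          return cmbe_cache ? 0 : -ENOMEM;
  }

Every object returned by kmem_cache_zalloc(cmbe_cache, GFP_KERNEL) is then
naturally 64 byte aligned.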
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The measurement block for the extended measurement data is not freed when
switching off per device measurement. Free the measurement block after HW
stopped accessing it.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
During allocation of extended measurement blocks we check if the device is
already active for channel measurement and add the device to a list of
devices with active channel measurement. The check is done under ccwlock
protection and the list modification is guarded by a different lock.
To guarantee that both states are in sync make sure that both locks
are held during the allocation process (like it's already done for the
"normal" measurement block allocation).
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Devices with active channel measurement are included in a list. When a
device is removed without deactivating channel measurement first, the
list_head is freed but still used. Fix this by making sure that
channel measurement is deactivated during device deregistration.
For devices that we deregister because they are no longer accessible
deactivating channel measurement will fail. In this case we can report
success because the FW will no longer access the measurement block.
In addition to these steps keep an extra device reference while
channel measurement is active.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Hold the device_lock during [de]activation of the channel measurement
block to synchronize concurrent usage of these functions.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Ensure that we hold the ccwlock when accessing private data. Return errors
that occur during measurement enabling to userspace. Apply some cleanups
while at it.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We were able to reduce the CPU overhead of big paging scenarios
when announcing our paging disks as non-rotational.
Almost all dasd devices are implemented in storage servers with
cache, raid, striping and lots of magic. There is no point in
optimizing the disk schedulers and swap code for rotational disks with
a single platter and a moving arm. Given the complexity of the setup
and the fact that this change is mostly about avoiding the additional
overhead in the swap code, let's keep the other functionality unchanged
and do not disable this device as an entropy source, unlike other
non-rotational devices.
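A sketch of the queue setup this implies, using the block queue flag API of
that kernel generation (function and variable names assumed): mark the queue
non-rotational but leave QUEUE_FLAG_ADD_RANDOM alone so the device keeps
feeding the entropy pool.

  #include <linux/blkdev.h>

  static void dasd_mark_nonrot(struct request_queue *q)
  {
          queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
          /* deliberately no queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q) */
  }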
Suggested-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
In the past only even modulus sizes were allowed for RSA keys in
CRT format. This restriction was based on limited RSA key generation
on older crypto adapters that provide only even modulus sizes. This
restriction is no longer valid.
Lift the restriction so that crypto requests with an odd RSA modulus
length in CRT format can be serviced.
Signed-off-by: Ingo Tuchscherer <ingo.tuchscherer@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio fixes and cleanups from Michael Tsirkin:
"This fixes the virtio-test tool, and improves the error handling for
virtio-ccw"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio/s390: handle failures of READ_VQ_CONF ccw
tools/virtio: propagate V=X to kernel build
vhost: move features to core
tools/virtio: fix build after 4.2 changes
In virtio_ccw_read_vq_conf() the return value of ccw_io_helper()
was not checked.
If the configuration could not be read properly, we'd wrongly assume a
queue size of 0.
Let's propagate any I/O error to virtio_ccw_setup_vq() so it may
properly fail.
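A simplified sketch of the change (it relies on the driver's internal types
and constants from virtio_ccw.c; this shows one way the error could be
propagated, not the actual patch):

  static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
                                     struct ccw1 *ccw, int index)
  {
          int ret;

          vcdev->config_block->index = index;
          ccw->cmd_code = CCW_CMD_READ_VQ_CONF;
          /* ... set up the ccw count and data address ... */
          ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
          if (ret)
                  return ret;     /* propagate the I/O error instead of
                                   * silently reporting a queue size of 0 */
          return vcdev->config_block->num;
  }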
Signed-off-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'libnvdimm-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
"This update has successfully completed a 0day-kbuild run and has
appeared in a linux-next release. The changes outside of the typical
drivers/nvdimm/ and drivers/acpi/nfit.[ch] paths are related to the
removal of IORESOURCE_CACHEABLE, the introduction of memremap(), and
the introduction of ZONE_DEVICE + devm_memremap_pages().
Summary:
- Introduce ZONE_DEVICE and devm_memremap_pages() as a generic
mechanism for adding device-driver-discovered memory regions to the
kernel's direct map.
This facility is used by the pmem driver to enable pfn_to_page()
operations on the page frames returned by DAX ('direct_access' in
'struct block_device_operations').
For now, the 'memmap' allocation for these "device" pages comes
from "System RAM". Support for allocating the memmap from device
memory will arrive in a later kernel.
- Introduce memremap() to replace usages of ioremap_cache() and
ioremap_wt(). memremap() drops the __iomem annotation for these
mappings to memory that do not have i/o side effects. The
replacement of ioremap_cache() with memremap() is limited to the
pmem driver to ease merging the api change in v4.3.
Completion of the conversion is targeted for v4.4.
- Similar to the usage of memcpy_to_pmem() + wmb_pmem() in the pmem
driver, update the VFS DAX implementation and PMEM api to provide
persistence guarantees for kernel operations on a DAX mapping.
- Convert the ACPI NFIT 'BLK' driver to map the block apertures as
cacheable to improve performance.
- Miscellaneous updates and fixes to libnvdimm including support for
issuing "address range scrub" commands, clarifying the optimal
'sector size' of pmem devices, a clarification of the usage of the
ACPI '_STA' (status) property for DIMM devices, and other minor
fixes"
* tag 'libnvdimm-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (34 commits)
libnvdimm, pmem: direct map legacy pmem by default
libnvdimm, pmem: 'struct page' for pmem
libnvdimm, pfn: 'struct page' provider infrastructure
x86, pmem: clarify that ARCH_HAS_PMEM_API implies PMEM mapped WB
add devm_memremap_pages
mm: ZONE_DEVICE for "device memory"
mm: move __phys_to_pfn and __pfn_to_phys to asm/generic/memory_model.h
dax: drop size parameter to ->direct_access()
nd_blk: change aperture mapping from WC to WB
nvdimm: change to use generic kvfree()
pmem, dax: have direct_access use __pmem annotation
dax: update I/O path to do proper PMEM flushing
pmem: add copy_from_iter_pmem() and clear_pmem()
pmem, x86: clean up conditional pmem includes
pmem: remove layer when calling arch_has_wmb_pmem()
pmem, x86: move x86 PMEM API to new pmem.h header
libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option
pmem: switch to devm_ allocations
devres: add devm_memremap
libnvdimm, btt: write and validate parent_uuid
...
Pull locking and atomic updates from Ingo Molnar:
"Main changes in this cycle are:
- Extend atomic primitives with coherent logic op primitives
(atomic_{or,and,xor}()) and deprecate the old partial APIs
(atomic_{set,clear}_mask())
The old ops were incoherent with incompatible signatures across
architectures and with incomplete support. Now every architecture
supports the primitives consistently (by Peter Zijlstra)
- Generic support for 'relaxed atomics':
- _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return()
- atomic_read_acquire()
- atomic_set_release()
This came out of porting qrwlock code to arm64 (by Will Deacon)
- Clean up the fragile static_key APIs that were causing repeat bugs,
by introducing a new one:
DEFINE_STATIC_KEY_TRUE(name);
DEFINE_STATIC_KEY_FALSE(name);
which define a key of different types with an initial true/false
value.
Then allow:
static_branch_likely()
static_branch_unlikely()
to take a key of either type and emit the right instruction for the
case. To be able to know the 'type' of the static key we encode it
in the jump entry (by Peter Zijlstra)
- Static key self-tests (by Jason Baron)
- qrwlock optimizations (by Waiman Long)
- small futex enhancements (by Davidlohr Bueso)
- ... and misc other changes"
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
jump_label/x86: Work around asm build bug on older/backported GCCs
locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
locking/static_keys: Make verify_keys() static
jump label, locking/static_keys: Update docs
locking/static_keys: Provide a selftest
jump_label: Provide a self-test
s390/uaccess, locking/static_keys: employ static_branch_likely()
x86, tsc, locking/static_keys: Employ static_branch_likely()
locking/static_keys: Add selftest
locking/static_keys: Add a new static_key interface
locking/static_keys: Rework update logic
locking/static_keys: Add static_key_{en,dis}able() helpers
...
Pull core block updates from Jens Axboe:
"This first core part of the block IO changes contains:
- Cleanup of the bio IO error signaling from Christoph. We used to
rely on the uptodate bit and passing around of an error, now we
store the error in the bio itself.
- Improvement of the above from myself, by shrinking the bio size
down again to fit in two cachelines on x86-64.
- Revert of the max_hw_sectors cap removal from a revision again,
from Jeff Moyer. This caused performance regressions in various
tests. Reinstate the limit, bump it to a more reasonable size
instead.
- Make /sys/block/<dev>/queue/discard_max_bytes writeable, by me.
Most devices have huge trim limits, which can cause nasty latencies
when deleting files. Enable the admin to configure the size down.
We will look into having a more sane default instead of UINT_MAX
sectors.
- Improvement of the SG gaps logic from Keith Busch.
- Enable the block core to handle arbitrarily sized bios, which
enables a nice simplification of bio_add_page() (which is an IO hot
path). From Kent.
- Improvements to the partition io stats accounting, making it
faster. From Ming Lei.
- Also from Ming Lei, a basic fixup for overflow of the sysfs pending
file in blk-mq, as well as a fix for a blk-mq timeout race
condition.
- Ming Lin has been carrying Kent's above-mentioned patches forward
for a while, and testing them. Ming also did a few fixes around
that.
- Sasha Levin found and fixed a use-after-free problem introduced by
the bio->bi_error changes from Christoph.
- Small blk cgroup cleanup from Viresh Kumar"
* 'for-4.3/core' of git://git.kernel.dk/linux-block: (26 commits)
blk: Fix bio_io_vec index when checking bvec gaps
block: Replace SG_GAPS with new queue limits mask
block: bump BLK_DEF_MAX_SECTORS to 2560
Revert "block: remove artifical max_hw_sectors cap"
blk-mq: fix race between timeout and freeing request
blk-mq: fix buffer overflow when reading sysfs file of 'pending'
Documentation: update notes in biovecs about arbitrarily sized bios
block: remove bio_get_nr_vecs()
fs: use helper bio_add_page() instead of open coding on bi_io_vec
block: kill merge_bvec_fn() completely
md/raid5: get rid of bio_fits_rdev()
md/raid5: split bio for chunk_aligned_read
block: remove split code in blkdev_issue_{discard,write_same}
btrfs: remove bio splitting and merge_bvec_fn() calls
bcache: remove driver private bio splitting code
block: simplify bio_add_page()
block: make generic_make_request handle arbitrarily sized bios
blk-cgroup: Drop unlikely before IS_ERR(_OR_NULL)
block: don't access bio->bi_error after bio_put()
block: shrink struct bio down to 2 cache lines again
...
Pull s390 updates from Martin Schwidefsky:
"The big one is support for fake NUMA, splitting a really large machine
into more manageable pieces improves performance in some cases, e.g. for
a KVM host.
The FICON Link Incident handling has been improved, this helps the
operator to identify degraded or non-operational FICON connections.
The save and restore of floating point and vector registers has been
overhauled to allow the future use of vector registers in the kernel.
A few small enhancements: magic sys-requests for the vt220 console via
SCLP, some more assembler code has been converted to C, the PCI error
handling is improved.
And the usual cleanup and bug fixing"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (59 commits)
s390/jump_label: Use %*ph to print small buffers
s390/sclp_vt220: support magic sysrequests
s390/ctrlchar: improve handling of magic sysrequests
s390/numa: remove superfluous ARCH_WANT defines
s390/3270: redraw screen on unsolicited device end
s390/dcssblk: correct out of bounds array indexes
s390/mm: simplify page table alloc/free code
s390/pci: move debug messages to debugfs
s390/nmi: initialize control register 0 earlier
s390/zcrypt: use msleep() instead of mdelay()
s390/hmcdrv: fix interrupt registration
s390/setup: fix novx parameter
s390/uaccess: remove uaccess_primary kernel parameter
s390: remove unneeded sizeof(void *) comparisons
s390/facilities: remove transactional-execution bits
s390/numa: re-add DIE sched_domain_topology_level
s390/dasd: enhance CUIR scope detection
s390/dasd: fix failing path verification
s390/vdso: emit a GNU hash
s390/numa: make core to node mapping data dynamic
...
None of the implementations currently use it. The common
bdev_direct_access() entry point handles all the size checks before
calling ->direct_access().
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Implement magic sysrequest handling for the VT220 terminal (also known as
the Integrated ASCII console on the HMC/SE).
To invoke a "magic sysrequest" function, press "Ctrl+o" followed by a
second character that designates the debugging function.
The handling of the sysrq is scheduled away from the SCLP IRQ context,
because large amounts of sysrq output might fill up the console buffers.
The console might deadlock because it cannot empty the buffers while still
in the receiving IRQ context. This behavior is the same as for the SCLP
console.
Reported-by: Horst Weber <hweber@de.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Extract the sysrq handling from the ctrlchar_handle() into a separate
function that can be directly used by other users.
Introduce a new sysrq_work structure to embed the work_struct and to
specify the magic sysrq function to be invoked.
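A sketch of what such a structure and its worker might look like (the layout
is assumed for illustration): the work item runs handle_sysrq() outside of
the SCLP interrupt context.

  #include <linux/kernel.h>
  #include <linux/workqueue.h>
  #include <linux/sysrq.h>

  struct sysrq_work {
          int key;                        /* magic sysrq function to invoke */
          struct work_struct work;
  };

  static void sysrq_work_fn(struct work_struct *work)
  {
          struct sysrq_work *sq = container_of(work, struct sysrq_work, work);

          handle_sysrq(sq->key);
  }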
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If a 3270 terminal is disconnected and later reconnected again,
it gets an unsolicited device end. This is currently ignored and
you have to hit the clear key to get the screen redrawn.
Add an automatic full redraw of the screen for this case.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Update the annotation for the kaddr pointer returned by direct_access()
so that it is a __pmem pointer. This is consistent with the PMEM driver
and with how this direct_access() pointer is used in the DAX code.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Fix a couple of warnings like this:
[linux-4.2-rc7/drivers/s390/block/dcssblk.c:553]:
(style) Array index 'j' is used before limits check.
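A generic illustration of the fix pattern (not the actual dcssblk code): do
the bounds check before the array access, so the index is never used past
the limit.

  #include <linux/string.h>

  /* before: while (strcmp(names[j], target) != 0 && j < num_names) j++; */
  static int find_name(char names[][16], int num_names, const char *target)
  {
          int j = 0;

          while (j < num_names && strcmp(names[j], target) != 0)
                  j++;
          return j < num_names ? j : -1;
  }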
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There is no need to busy loop and monopolize a cpu for up to ~2 seconds.
The code in question that calls mdelay() is preemptible anyway, so better
let the kernel schedule different processes than just looping and causing
unnecessary delays.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The z/VM driver sets bit "63-22" in control register zero to one in order
to enable the CP Service interrupt (0x2603). However the irq subclass mask
that normally corresponds to the CP Service interrupt is
"63-54" (== "63-22-32").
So it looks like the author read the documentation with the 32 bit sized
cr0 register bit positions (== 22), but didn't realize that the bit numbers
change when applied to a 64 bit register (== 54) due to the numbering
scheme.
Also use irq_subclass_register() instead of ctl_set_bit() since multiple
services depend on the service signal subclass mask, which is the correct
bit. This also explains why nobody noticed the bug, since the bit is always
enabled anyway (e.g. pfault).
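A minimal sketch of the corrected setup, assuming the generic subclass
interface mentioned above (the helper name is hypothetical):

  #include <asm/irq.h>

  static void cp_service_irq_enable(void)
  {
          /* refcounted; sets the service-signal subclass mask in CR0
           * instead of open coding a ctl_set_bit() with the wrong position */
          irq_subclass_register(IRQ_SUBCLASS_SERVICE_SIGNAL);
  }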
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove two more statements which always evaluate to 'false'.
These are more leftovers from the 31 bit era.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The way the block layer is currently written, it goes to great lengths
to avoid having to split bios; upper layer code (such as bio_add_page())
checks what the underlying device can handle and tries to always create
bios that don't need to be split.
But this approach becomes unwieldy and eventually breaks down with
stacked devices and devices with dynamic limits, and it adds a lot of
complexity. If the block layer could split bios as needed, we could
eliminate a lot of complexity elsewhere - particularly in stacked
drivers. Code that creates bios can then create whatever size bios are
convenient, and more importantly stacked drivers don't have to deal with
both their own bio size limitations and the limitations of the
(potentially multiple) devices underneath them. In the future this will
let us delete merge_bvec_fn and a bunch of other code.
We do this by adding calls to blk_queue_split() to the various
make_request functions that need it - a few can already handle arbitrary
size bios. Note that we add the call _after_ any call to
blk_queue_bounce(); this means that blk_queue_split() and
blk_recalc_rq_segments() don't need to be concerned with bouncing
affecting segment merging.
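A rough sketch of the pattern added to the affected make_request functions
(driver specifics omitted; the 4.3-era blk_queue_split() signature and the
queue's bio_split bioset are assumed here):

  #include <linux/blkdev.h>

  static void example_make_request(struct request_queue *q, struct bio *bio)
  {
          blk_queue_bounce(q, &bio);                /* bounce first ... */
          blk_queue_split(q, &bio, q->bio_split);   /* ... then split as needed */

          /* ... driver specific handling of the (possibly split) bio ... */
  }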
Some make_request_fn() callbacks were simple enough to audit and verify
they don't need blk_queue_split() calls. The skipped ones are:
* nfhd_make_request (arch/m68k/emu/nfblock.c)
* axon_ram_make_request (arch/powerpc/sysdev/axonram.c)
* simdisk_make_request (arch/xtensa/platforms/iss/simdisk.c)
* brd_make_request (ramdisk - drivers/block/brd.c)
* mtip_submit_request (drivers/block/mtip32xx/mtip32xx.c)
* loop_make_request
* null_queue_bio
* bcache's make_request fns
Some others are almost certainly safe to remove now, but will be left
for future patches.
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: dm-devel@redhat.com
Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
Cc: drbd-user@lists.linbit.com
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Jim Paris <jim@jtan.com>
Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Andreas Dilger <andreas.dilger@intel.com>
Acked-by: NeilBrown <neilb@suse.de> (for the 'md/md.c' bits)
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
[dpark: skip more mq-based drivers, resolve merge conflicts, etc.]
Signed-off-by: Dongsu Park <dpark@posteo.net>
Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
This patch adds enhanced detection of the scope of control unit
initiated reconfiguration requests.
The first approach assumed the scope of the reconfiguration request
to be restricted to the path on which the message was received.
The enhanced approach determines the full scope of the reconfiguration
request by evaluating additional path and device selection information
contained in the reconfiguration message.
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
DASD path verification requires the usage of sleep_on_immediatly to
ensure that no other I/O request is blocking the recovery of
disconnected devices. But two concurrent path verification workers for
the same device may kill each others requests due to the usage of the
immediate sleep_on function. This may lead to unsuccessful path
verifications.
Prevent two parallel path verification workers from conflicting with
each other by implementing a device flag that signals an already running
worker.
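An illustrative sketch of such a guard (the flag bit and structure are
hypothetical stand-ins, not the driver's actual definitions):

  #include <linux/bitops.h>

  #define DASD_FLAG_PATH_VERIFY  0        /* hypothetical flag bit */

  struct dasd_device_stub {               /* stand-in for struct dasd_device */
          unsigned long flags;
  };

  static void path_verification_worker(struct dasd_device_stub *device)
  {
          if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags))
                  return;                 /* a verification worker is already running */

          /* ... issue the verification I/O with the immediate sleep_on ... */

          clear_bit(DASD_FLAG_PATH_VERIFY, &device->flags);
  }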
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This patch removes unneeded variables used to store return values.
These issues were detected with the Coccinelle script:
scripts/coccinelle/misc/returnvar.cocci
[heiko.carstens@de.ibm.com]: make qeth_l[2/3]_stop() return void
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove unneeded semicolon.
The semantic patch that detects this change is available
at scripts/coccinelle/misc/semicolon.cocci.
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>