	JFFS2 LOCKING DOCUMENTATION
	---------------------------

This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.


	alloc_sem
	---------

The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.
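
As a rough sketch of that last point (simplified; the real
jffs2_garbage_collect_pass() in gc.c releases the mutex on several
different exit paths and does far more work):

	int jffs2_garbage_collect_pass(struct jffs2_sb_info *c)
	{
		if (mutex_lock_interruptible(&c->alloc_sem))
			return -EINTR;

		/* All other writers are now blocked in
		   jffs2_reserve_space() until alloc_sem is dropped. */

		/* ... pick an eraseblock and move its live nodes ... */

		mutex_unlock(&c->alloc_sem);
		return 0;
	}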

When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until this happens we ensure that any data in the write-buffer at the
time this happens are part of the new node, not just something that
was written afterwards. Hence, we can ensure the newly-obsoleted nodes
don't actually get erased until the write-buffer has been flushed to
the medium.

With the introduction of NAND flash support and the write-buffer, 
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.

Ordering constraints: See f->sem.


	File Mutex f->sem
	---------------------

This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.

The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage-collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock unless we played games with unlocking the i_sem
before calling the space allocation functions.

Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.

Ordering constraints: 

	1. Never attempt to allocate space or lock alloc_sem with 
	   any f->sem held.
	2. Never attempt to lock two file mutexes in one thread.
	   No ordering rules have been made for doing so.
	3. Never lock a page cache page with f->sem held.
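
Putting constraint 1 together with the alloc_sem rules above, a write
path takes its locks in roughly the following order. This is only an
illustrative sketch (example_write_node() is a made-up name, and the
argument choices are just an example); see write.c for the real thing:

	static int example_write_node(struct jffs2_sb_info *c,
				      struct jffs2_inode_info *f,
				      uint32_t minsize)
	{
		uint32_t alloclen;
		int ret;

		/* 1. Reserve space first: this takes c->alloc_sem. */
		ret = jffs2_reserve_space(c, minsize, &alloclen,
					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
		if (ret)
			return ret;

		/* 2. Only now take f->sem -- never the other way round. */
		mutex_lock(&f->sem);

		/* ... write the new node and link it into f's lists ... */

		mutex_unlock(&f->sem);

		/* 3. Drop c->alloc_sem once the node is safely linked. */
		jffs2_complete_reservation(c);
		return 0;
	}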


	erase_completion_lock spinlock
	------------------------------

This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.

As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().

Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.
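
Schematically, a walk of the per-inode list obeying those rules looks
like the following. The obsolete test (ref_obsolete()) matches the text
above, but next_node() is only a placeholder for the real list linkage:

	struct jffs2_raw_node_ref *ref;

	spin_lock(&c->erase_completion_lock);
	for (ref = f->nodes; ref; ref = next_node(ref)) {
		if (ref_obsolete(ref))
			continue;
		/* 'ref' is a _valid_ node, so the erase code will not
		   free it; the lock may be dropped around anything
		   that sleeps. */
		spin_unlock(&c->erase_completion_lock);
		/* ... do something that may sleep ... */
		spin_lock(&c->erase_completion_lock);
	}
	spin_unlock(&c->erase_completion_lock);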

The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
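
The two sides of that handshake look roughly like this (condensed from
the pattern described above; the real code lives in background.c):

	/* Killer side, e.g. at unmount: */
	spin_lock(&c->erase_completion_lock);
	if (c->gc_task)
		send_sig(SIGKILL, c->gc_task, 1);
	spin_unlock(&c->erase_completion_lock);

	/* GC thread exit path: */
	spin_lock(&c->erase_completion_lock);
	c->gc_task = NULL;
	spin_unlock(&c->erase_completion_lock);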


	inocache_lock spinlock
	----------------------

This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has a
corresponding jffs2_inode_cache object). So the inocache_lock
has to be held while walking the c->inocache_list hash buckets.

This spinlock also covers allocation of new inode numbers, which is
currently just '++c->highest_ino', but might one day get more complicated
if we need to deal with wrapping after 4 billion (2^32) inode numbers
have been used.
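
In other words, allocating a new inode number amounts to (a minimal
sketch):

	uint32_t inum;

	spin_lock(&c->inocache_lock);
	inum = ++c->highest_ino;
	spin_unlock(&c->inocache_lock);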

Note, the f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed. So, it is allowed to access it without locking
the inocache_lock spinlock.

Ordering constraints: 

	If both erase_completion_lock and inocache_lock are needed, the
	c->erase_completion_lock has to be acquired first.


	erase_free_sem
	--------------

This mutex is only used by the erase code which frees obsolete node
references and the jffs2_garbage_collect_deletion_dirent() function.
The latter function on NAND flash must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.

Suggestions for alternative solutions to this problem would be welcomed.
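
The resulting pattern on the garbage-collection side is simply
(illustrative only):

	mutex_lock(&c->erase_free_sem);
	/* ... read the obsolete node from flash; the erase code cannot
	   free its jffs2_raw_node_ref in the meantime ... */
	mutex_unlock(&c->erase_free_sem);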


	wbuf_sem
	--------

This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by 
the buffer.
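
Typical usage is simply (a minimal sketch; the real users are the wbuf
fill and flush paths in wbuf.c):

	down_write(&c->wbuf_sem);
	/* ... append to c->wbuf and update c->wbuf_ofs / c->wbuf_len,
	   or flush the buffer out to the flash ... */
	up_write(&c->wbuf_sem);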

Ordering constraints:
	Lock wbuf_sem last, after the alloc_sem and/or f->sem.


	c->xattr_sem
	------------

This read/write semaphore protects against concurrent access to the
xattr-related objects, which include state in the superblock and ic->xref.
On read-only paths the read semaphore is sufficient; holding the write
semaphore there would be needless exclusion. You must, however, hold the
write semaphore when creating, updating or deleting any xattr-related
object.

Once xattr_sem is released, there is no guarantee that those objects
still exist. Thus, code which discovers under the read semaphore that it
needs to modify an object must typically release the lock and retry under
the write semaphore. For example, do_jffs2_getxattr() first holds the
read semaphore to scan the xref and xdatum; if it then finds that the
name/value pair has to be loaded from the medium, it releases the read
semaphore, takes the write semaphore, and retries the scan.
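
Schematically, that retry looks like the following. This is only a
sketch: the helpers marked (*) are placeholders, not the real xattr.c
functions, and error handling is omitted:

	static int example_getxattr(struct jffs2_sb_info *c,
				    struct jffs2_inode_cache *ic)
	{
		int write_locked = 0;

		down_read(&c->xattr_sem);
	retry:
		/* ... walk ic->xref and the xdatum under the semaphore ... */

		if (value_not_loaded()) {			/* (*) */
			if (!write_locked) {
				/* Drop the read lock, take the write lock
				   and rescan from scratch -- the xref may
				   have changed in between. */
				up_read(&c->xattr_sem);
				down_write(&c->xattr_sem);
				write_locked = 1;
				goto retry;
			}
			load_name_value_from_medium();		/* (*) */
		}

		/* ... copy the value out to the caller ... */

		if (write_locked)
			up_write(&c->xattr_sem);
		else
			up_read(&c->xattr_sem);
		return 0;
	}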

Ordering constraints:
	Lock xattr_sem last, after the alloc_sem.