
cramfs: Convert cramfs to read_folio

This is a "weak" conversion which converts straight back to using pages.
A full conversion should be performed at some point, hopefully by
someone familiar with the filesystem.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) 2022-04-29 11:12:16 -04:00
parent 65c0d259cb
commit 5aab331ad6
2 changed files with 8 additions and 7 deletions
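
The "weak" conversion described in the commit message follows a pattern that is not specific to cramfs: the new ->read_folio entry point simply recovers the struct page from the folio and keeps the existing page-based logic. Below is a minimal, generic sketch of that pattern for illustration only; example_readpage() is a placeholder for a filesystem's old ->readpage implementation, not a real cramfs symbol.

/* Minimal sketch of a "weak" ->read_folio conversion: recover the page
 * and reuse the old page-based code path unchanged.  example_readpage()
 * is a placeholder for the filesystem's existing implementation. */
static int example_read_folio(struct file *file, struct folio *folio)
{
	struct page *page = &folio->page;

	return example_readpage(file, page);
}

static const struct address_space_operations example_aops = {
	.read_folio	= example_read_folio,
};

The cramfs change below inlines the same idea: the function is renamed, takes a struct folio *, and immediately derives the struct page it used to be handed directly.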


@@ -115,7 +115,7 @@ Block Size
 (Block size in cramfs refers to the size of input data that is
 compressed at a time. It's intended to be somewhere around
-PAGE_SIZE for cramfs_readpage's convenience.)
+PAGE_SIZE for cramfs_read_folio's convenience.)
 The superblock ought to indicate the block size that the fs was
 written for, since comments in <linux/pagemap.h> indicate that
@@ -161,7 +161,7 @@ size. The options are:
 PAGE_SIZE.
 It's easy enough to change the kernel to use a smaller value than
-PAGE_SIZE: just make cramfs_readpage read multiple blocks.
+PAGE_SIZE: just make cramfs_read_folio read multiple blocks.
 The cost of option 1 is that kernels with a larger PAGE_SIZE
 value don't get as good compression as they can.
@@ -173,9 +173,9 @@ they don't mind their cramfs being inaccessible to kernels with
 smaller PAGE_SIZE values.
 Option 3 is easy to implement if we don't mind being CPU-inefficient:
-e.g. get readpage to decompress to a buffer of size MAX_BLKSIZE (which
+e.g. get read_folio to decompress to a buffer of size MAX_BLKSIZE (which
 must be no larger than 32KB) and discard what it doesn't need.
-Getting readpage to read into all the covered pages is harder.
+Getting read_folio to read into all the covered pages is harder.
 The main advantage of option 3 over 1, 2, is better compression. The
 cost is greater complexity. Probably not worth it, but I hope someone
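
The documentation hunks above suggest having read_folio read multiple blocks when the compressed block size is smaller than PAGE_SIZE. A purely hypothetical sketch of that idea follows; none of these names are real kernel or cramfs symbols, and read_one_block() and blksize are invented for illustration.

/* Hypothetical helper: decompress the block at byte offset @pos into @dst. */
static int read_one_block(struct inode *inode, loff_t pos, void *dst,
			  unsigned int len);

static int sketch_read_folio(struct file *file, struct folio *folio)
{
	struct inode *inode = folio->mapping->host;
	unsigned int blksize = 4096;	/* would come from the superblock */
	void *dst = kmap_local_folio(folio, 0);
	unsigned int offset;
	int err = 0;

	/* Fill one PAGE_SIZE folio from several smaller compressed blocks. */
	for (offset = 0; offset < PAGE_SIZE && !err; offset += blksize)
		err = read_one_block(inode, folio_pos(folio) + offset,
				     dst + offset, blksize);

	kunmap_local(dst);
	if (!err)
		folio_mark_uptodate(folio);
	folio_unlock(folio);
	return err;
}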


@@ -414,7 +414,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 		/*
 		 * Let's create a mixed map if we can't map it all.
 		 * The normal paging machinery will take care of the
-		 * unpopulated ptes via cramfs_readpage().
+		 * unpopulated ptes via cramfs_read_folio().
 		 */
 		int i;
 		vma->vm_flags |= VM_MIXEDMAP;
@@ -814,8 +814,9 @@ out:
 	return d_splice_alias(inode, dentry);
 }
-static int cramfs_readpage(struct file *file, struct page *page)
+static int cramfs_read_folio(struct file *file, struct folio *folio)
 {
+	struct page *page = &folio->page;
 	struct inode *inode = page->mapping->host;
 	u32 maxblock;
 	int bytes_filled;
@@ -925,7 +926,7 @@ err:
 }
 static const struct address_space_operations cramfs_aops = {
-	.readpage = cramfs_readpage
+	.read_folio = cramfs_read_folio
 };
 /*