e0c1b49f5b
Upgrade to the latest upstream zstd version 1.4.10.

This patch is 100% generated from upstream zstd commit 20821a46f412 [0].

This patch is very large because it is transitioning from the custom kernel zstd to using upstream directly. The new zstd follows upstream's file structure, which is different. Future update patches will be much smaller because they will only contain the changes from one upstream zstd release.

As an aid for review I've created a commit [1] that shows the diff between upstream zstd as-is (which doesn't compile) and the zstd code imported in this patch. The version of zstd in this patch is generated from upstream with changes applied by automation to replace upstream's libc dependencies, remove unnecessary portability macros, replace `/**` comments with `/*` comments, and use the kernel's xxhash instead of bundling it.

The benefits of this patch are as follows:

1. Using upstream directly, with an automated script to generate the kernel code. This allows us to update the kernel with every upstream release, so the kernel gets the latest bug fixes and performance improvements and doesn't fall 3 years out of date again. The automation and the translated code are tested on every upstream commit to ensure they continue to work.

2. Upgrades from a custom zstd based on 1.3.1 to 1.4.10, picking up 3 years of performance improvements and bug fixes. On x86_64 I've measured 15% faster BtrFS and SquashFS decompression+read speeds, 35% faster kernel decompression, and 30% faster ZRAM decompression+read speeds.

3. Zstd-1.4.10 supports negative compression levels, which allow zstd to match or subsume lzo's performance.

4. Maintains the same kernel-specific wrapper API, so no callers have to be modified with zstd version updates.

One concern that was brought up was stack usage. Upstream zstd had already removed most of its heavy stack usage functions, and I removed the last functions that allocate arrays on the stack. I've measured the high water mark for both compression and decompression before and after this patch. Decompression is approximately neutral, using about 1.2KB of stack space. Compression levels up to 3 regressed from 1.4KB -> 1.6KB, and higher compression levels regressed from 1.5KB -> 2KB. We've added unit tests upstream to prevent further regression. I believe this is a reasonable increase, and if it does end up causing problems, this commit can be cleanly reverted, because it only touches zstd.

I chose the bulk update instead of replaying upstream commits because there have been ~3500 upstream commits since the 1.3.1 release, zstd wasn't ready to be used in the kernel as-is before a month ago, and not all upstream zstd commits build. The bulk update preserves bisectability because bugs can be bisected to the zstd version update. At that point the update can be reverted, and we can work with upstream to find and fix the bug.

Note that upstream zstd release 1.4.10 doesn't exist yet. I have cut a staging branch at 20821a46f412 [0] and will apply any changes requested to the staging branch. Once we're ready to merge this update I will cut a zstd release at the commit we merge, so we have a known zstd release in the kernel.

The implementation of the kernel API is contained in zstd_compress_module.c and zstd_decompress_module.c.

[0] 20821a46f4
[1] e0fa481d0e
Signed-off-by: Nick Terrell <terrelln@fb.com>
Tested-by: Paul Jones <paul@pauljones.id.au>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64
Tested-by: Jean-Denis Girard <jd.girard@sysnux.pf>
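Benefit 4 above refers to the kernel-specific wrapper implemented in zstd_compress_module.c and zstd_decompress_module.c. As a rough illustration only (this sketch is not part of the patch; the zstd_dctx_workspace_bound(), zstd_init_dctx(), zstd_decompress_dctx(), and zstd_is_error() names are assumed from that wrapper API, and the example function is hypothetical), a caller's decompression path looks roughly like this:

#include <linux/zstd.h>
#include <linux/vmalloc.h>

/*
 * Illustrative sketch only: decompress src into dst via the kernel wrapper.
 * Returns the number of decompressed bytes, or 0 on any failure.
 */
static size_t example_zstd_decompress(void *dst, size_t dst_capacity,
                                      const void *src, size_t src_size)
{
    size_t wksp_size = zstd_dctx_workspace_bound();
    void *wksp = vmalloc(wksp_size);   /* workspace backing the dctx */
    zstd_dctx *dctx;
    size_t ret = 0;

    if (!wksp)
        return 0;

    dctx = zstd_init_dctx(wksp, wksp_size);
    if (dctx) {
        size_t n = zstd_decompress_dctx(dctx, dst, dst_capacity,
                                        src, src_size);
        if (!zstd_is_error(n))
            ret = n;
    }

    vfree(wksp);
    return ret;
}

Because the wrapper keeps this shape across zstd version updates, callers such as BtrFS, SquashFS, and ZRAM pick up the new upstream code without source changes. The header reproduced below, zstd's FSE histogram helper (hist.h), is one example of the translated upstream code imported by this patch.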
/* ******************************************************************
 * hist : Histogram functions
 * part of Finite State Entropy project
 * Copyright (c) Yann Collet, Facebook, Inc.
 *
 *  You can contact the author at :
 *  - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
 *  - Public forum : https://groups.google.com/forum/#!forum/lz4c
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
****************************************************************** */

/* --- dependencies --- */
#include "../common/zstd_deps.h"   /* size_t */


/* --- simple histogram functions --- */

/*! HIST_count():
 *  Provides the precise count of each byte within a table 'count'.
 *  'count' is a table of unsigned int, of minimum size (*maxSymbolValuePtr+1).
 *  Updates *maxSymbolValuePtr with actual largest symbol value detected.
 * @return : count of the most frequent symbol (which isn't identified).
 *           or an error code, which can be tested using HIST_isError().
 *           note : if return == srcSize, there is only one symbol.
 */
size_t HIST_count(unsigned* count, unsigned* maxSymbolValuePtr,
                  const void* src, size_t srcSize);

unsigned HIST_isError(size_t code);  /*< tells if a return value is an error code */


/* --- advanced histogram functions --- */

#define HIST_WKSP_SIZE_U32 1024
#define HIST_WKSP_SIZE    (HIST_WKSP_SIZE_U32 * sizeof(unsigned))
/* HIST_count_wksp() :
 * Same as HIST_count(), but using an externally provided scratch buffer.
 * Benefit is this function will use very little stack space.
 * `workSpace` is a writable buffer which must be 4-bytes aligned,
 * `workSpaceSize` must be >= HIST_WKSP_SIZE
 */
size_t HIST_count_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
                       const void* src, size_t srcSize,
                       void* workSpace, size_t workSpaceSize);

/* HIST_countFast() :
 *  same as HIST_count(), but blindly trusts that all byte values within src are <= *maxSymbolValuePtr.
 *  This function is unsafe, and will segfault if any value within `src` is `> *maxSymbolValuePtr`
 */
size_t HIST_countFast(unsigned* count, unsigned* maxSymbolValuePtr,
                      const void* src, size_t srcSize);

/* HIST_countFast_wksp() :
 * Same as HIST_countFast(), but using an externally provided scratch buffer.
 * `workSpace` is a writable buffer which must be 4-bytes aligned,
 * `workSpaceSize` must be >= HIST_WKSP_SIZE
 */
size_t HIST_countFast_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
                           const void* src, size_t srcSize,
                           void* workSpace, size_t workSpaceSize);

/*! HIST_count_simple() :
 *  Same as HIST_countFast(), this function is unsafe,
 *  and will segfault if any value within `src` is `> *maxSymbolValuePtr`.
 *  It is also a bit slower for large inputs.
 *  However, it does not need any additional memory (not even on stack).
 * @return : count of the most frequent symbol.
 *  Note this function doesn't produce any error (i.e. it must succeed).
 */
unsigned HIST_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
                           const void* src, size_t srcSize);
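For context, here is a minimal, hypothetical caller of the workspace variant declared above (the wrapper function is mine, not from the kernel tree; the buffer size and 4-byte alignment requirements come straight from the comments in this header, which is assumed to be zstd's hist.h):

#include "hist.h"   /* assumed name of the header shown above */

/*
 * Hypothetical example: count every byte value of `src` into count[0..255].
 * `workSpace` must be at least HIST_WKSP_SIZE bytes and 4-byte aligned;
 * passing a caller-provided buffer keeps stack usage low, which is the
 * point of the _wksp variants.
 * Returns the count of the most frequent byte, or 0 on error.
 */
static size_t example_byte_histogram(unsigned count[256],
                                     const void* src, size_t srcSize,
                                     void* workSpace, size_t workSpaceSize)
{
    unsigned maxSymbolValue = 255;   /* in: upper bound; out: largest byte seen */
    size_t largestCount = HIST_count_wksp(count, &maxSymbolValue,
                                          src, srcSize,
                                          workSpace, workSpaceSize);

    return HIST_isError(largestCount) ? 0 : largestCount;
}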