author     Linus Torvalds <torvalds@linux-foundation.org>  2024-09-21 17:29:05 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2024-09-21 17:29:05 +0300
commit     617a814f14b8914271f7a70366d72c6196d17663 (patch)
tree       31d32f73bef107862101ded103a76b314cea3705 /drivers/block/zram
parent     1868f9d0260e9afaf7c6436d14923ae12eaea465 (diff)
parent     684826f8271ad97580b138b9ffd462005e470b99 (diff)
download   linux-617a814f14b8914271f7a70366d72c6196d17663.tar.xz
Merge tag 'mm-stable-2024-09-20-02-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"Along with the usual shower of singleton patches, notable patch series
in this pull request are:
- "Align kvrealloc() with krealloc()" from Danilo Krummrich. Adds
consistency to the APIs and behaviour of these two core allocation
functions. This also simplifies/enables Rustification.
- "Some cleanups for shmem" from Baolin Wang. No functional changes -
more code reuse, better function naming, logic simplifications.
- "mm: some small page fault cleanups" from Josef Bacik. No
functional changes - code cleanups only.
- "Various memory tiering fixes" from Zi Yan. A small fix and a
little cleanup.
- "mm/swap: remove boilerplate" from Yu Zhao. Code cleanups and
simplifications and .text shrinkage.
- "Kernel stack usage histogram" from Pasha Tatashin and Shakeel
Butt. This is a feature; it adds new fields to /proc/vmstat such as
$ grep kstack /proc/vmstat
kstack_1k 3
kstack_2k 188
kstack_4k 11391
kstack_8k 243
kstack_16k 0
which tells us that 11391 processes used 4k of stack while none at
all used 16k. Useful for some system tuning things, but
particularly useful for "the dynamic kernel stack project".
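The histogram is plain `key value` text, so finding the dominant stack size is a one-liner; a sketch with the sample counters from the example above inlined:

```shell
# Find the kstack_* bucket with the highest process count. The sample
# mirrors the /proc/vmstat excerpt above; on a live system pipe
# `grep kstack /proc/vmstat` into the same awk program instead.
sample='kstack_1k 3
kstack_2k 188
kstack_4k 11391
kstack_8k 243
kstack_16k 0'

echo "$sample" | awk '$2 > max { max = $2; best = $1 } END { print best, max }'
# -> kstack_4k 11391
```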
- "kmemleak: support for percpu memory leak detect" from Pavel
Tikhomirov. Teaches kmemleak to detect leakage of percpu memory.
- "mm: memcg: page counters optimizations" from Roman Gushchin. "3
independent small optimizations of page counters".
- "mm: split PTE/PMD PT table Kconfig cleanups+clarifications" from
David Hildenbrand. Improves PTE/PMD splitlock detection, makes
powerpc/8xx work correctly by design rather than by accident.
- "mm: remove arch_make_page_accessible()" from David Hildenbrand.
Some folio conversions which make arch_make_page_accessible()
unneeded.
- "mm, memcg: cg2 memory{.swap,}.peak write handlers" from David
Finkel. Cleans up and fixes our handling of the resetting of the
cgroup/process peak-memory-use detector.
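With the new write handlers, the peak watermark can be read and reset from userspace roughly as follows. This is a sketch: it assumes cgroup v2 mounted at /sys/fs/cgroup and a pre-existing cgroup named `test` (both hypothetical here), and per this series any non-empty write resets the watermark as observed through that open file:

```shell
# Hypothetical cgroup; requires cgroup v2 and an existing group.
CG=/sys/fs/cgroup/test

cat "$CG/memory.peak"            # historical peak memory usage, in bytes
echo reset > "$CG/memory.peak"   # any non-empty write resets the watermark
cat "$CG/memory.peak"            # now reflects usage since the reset

# memory.swap.peak gains the same write handler in this series.
```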
- "Make core VMA operations internal and testable" from Lorenzo
Stoakes. Rationalization and encapsulation of the VMA manipulation
APIs. With a view to better enable testing of the VMA functions,
even from a userspace-only harness.
- "mm: zswap: fixes for global shrinker" from Takero Funaki. Fix
issues in the zswap global shrinker, resulting in improved
performance.
- "mm: print the promo watermark in zoneinfo" from Kaiyang Zhao. Fill
in some missing info in /proc/zoneinfo.
- "mm: replace follow_page() by folio_walk" from David Hildenbrand.
Code cleanups and rationalizations (conversion to folio_walk())
resulting in the removal of follow_page().
- "improving dynamic zswap shrinker protection scheme" from Nhat
Pham. Some tuning to improve zswap's dynamic shrinker. Significant
reductions in swapin and improvements in performance are shown.
- "mm: Fix several issues with unaccepted memory" from Kirill
Shutemov. Improvements to the new unaccepted memory feature.
- "mm/mprotect: Fix dax puds" from Peter Xu. Implements mprotect on
DAX PUDs. This was missing, although nobody seems to have noticed
yet.
- "Introduce a store type enum for the Maple tree" from Sidhartha
Kumar. Cleanups and modest performance improvements for the maple
tree library code.
- "memcg: further decouple v1 code from v2" from Shakeel Butt. Move
more cgroup v1 remnants away from the v2 memcg code.
- "memcg: initiate deprecation of v1 features" from Shakeel Butt.
Adds various warnings telling users that memcg v1 features are
deprecated.
- "mm: swap: mTHP swap allocator base on swap cluster order" from
Chris Li. Greatly improves the success rate of the mTHP swap
allocation.
- "mm: introduce numa_memblks" from Mike Rapoport. Moves various
disparate per-arch implementations of numa_memblk code into generic
code.
- "mm: batch free swaps for zap_pte_range()" from Barry Song. Greatly
improves the performance of munmap() of swap-filled ptes.
- "support large folio swap-out and swap-in for shmem" from Baolin
Wang. With this series we no longer split shmem large folios into
single-page folios when swapping out shmem.
- "mm/hugetlb: alloc/free gigantic folios" from Yu Zhao. Nice
performance improvements and code reductions for gigantic folios.
- "support shmem mTHP collapse" from Baolin Wang. Adds support for
khugepaged's collapsing of shmem mTHP folios.
- "mm: Optimize mseal checks" from Pedro Falcato. Fixes an mprotect()
performance regression due to the addition of mseal().
- "Increase the number of bits available in page_type" from Matthew
Wilcox. Increases the number of bits available in page_type!
- "Simplify the page flags a little" from Matthew Wilcox. Many legacy
page flags are now folio flags, so the page-based flags and their
accessors/mutators can be removed.
- "mm: store zero pages to be swapped out in a bitmap" from Usama
Arif. An optimization which permits us to avoid writing/reading
zero-filled zswap pages to backing store.
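The trick is easy to demonstrate in userspace: a page that compares equal to /dev/zero needs only a bitmap entry, not a write to the backing store. A toy illustration (not the kernel code; the file names are made up):

```shell
# Classify 4 KiB "pages": all-zero pages can be recorded in a bitmap
# and synthesized on read, so they never touch the backing store.
page_is_zero() {
    cmp -s -n 4096 "$1" /dev/zero
}

head -c 4096 /dev/zero > page0                          # all zeroes
printf 'data' | cat - /dev/zero | head -c 4096 > page1  # non-zero prefix

page_is_zero page0 && echo "page0: bitmap only, no I/O"
page_is_zero page1 || echo "page1: written to backing store"
```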
- "Avoid MAP_FIXED gap exposure" from Liam Howlett. Fixes a race
window which occurs when a MAP_FIXED operation is occurring during
an unrelated vma tree walk.
- "mm: remove vma_merge()" from Lorenzo Stoakes. Major rotorooting of
the vma_merge() functionality, making it cleaner, more testable and
better tested.
- "misc fixups for DAMON {self,kunit} tests" from SeongJae Park.
Minor fixups of DAMON selftests and kunit tests.
- "mm: memory_hotplug: improve do_migrate_range()" from Kefeng Wang.
Code cleanups and folio conversions.
- "Shmem mTHP controls and stats improvements" from Ryan Roberts.
Cleanups for shmem controls and stats.
- "mm: count the number of anonymous THPs per size" from Barry Song.
Expose additional anon THP stats to userspace for improved tuning.
- "mm: finish isolate/putback_lru_page()" from Kefeng Wang: more
folio conversions and removal of now-unused page-based APIs.
- "replace per-quota region priorities histogram buffer with
per-context one" from SeongJae Park. DAMON histogram
rationalization.
- "Docs/damon: update GitHub repo URLs and maintainer-profile" from
SeongJae Park. DAMON documentation updates.
- "mm/vdpa: correct misuse of non-direct-reclaim __GFP_NOFAIL and
improve related doc and warn" from Jason Wang: fixes usage of page
allocator __GFP_NOFAIL and GFP_ATOMIC flags.
- "mm: split underused THPs" from Yu Zhao. Improve THP=always policy.
This was overprovisioning THPs in sparsely accessed memory areas.
- "zram: introduce custom comp backends API" from Sergey Senozhatsky.
Add support for zram run-time compression algorithm tuning.
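For context, zram's compression algorithm has long been selectable through sysfs, and this series adds run-time parameter tuning on top. A sketch: `comp_algorithm` is the long-standing attribute, while `algorithm_params` and its `algo=`/`level=` syntax come from this series; check Documentation/admin-guide/blockdev/zram.rst on your kernel for the exact form:

```shell
# Select the compression backend (device must be unused/reset first).
echo zstd > /sys/block/zram0/comp_algorithm
cat /sys/block/zram0/comp_algorithm

# New with this series: tune per-algorithm parameters, e.g. a
# compression level (illustrative syntax).
echo "algo=zstd level=8" > /sys/block/zram0/algorithm_params
```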
- "mm: Care about shadow stack guard gap when getting an unmapped
area" from Mark Brown. Fix up the various arch_get_unmapped_area()
implementations to better respect guard areas.
- "Improve mem_cgroup_iter()" from Kinsey Ho. Improve the reliability
of mem_cgroup_iter() and various code cleanups.
- "mm: Support huge pfnmaps" from Peter Xu. Extends the usage of huge
pfnmap support.
- "resource: Fix region_intersects() vs add_memory_driver_managed()"
from Huang Ying. Fix a bug in region_intersects() for systems with
CXL memory.
- "mm: hwpoison: two more poison recovery" from Kefeng Wang. Teaches
a couple more code paths to recover correctly from encountering
poisoned memory.
- "mm: enable large folios swap-in support" from Barry Song. Support
the swapin of mTHP memory into appropriately-sized folios, rather
than into single-page folios"
* tag 'mm-stable-2024-09-20-02-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (416 commits)
zram: free secondary algorithms names
uprobes: turn xol_area->pages[2] into xol_area->page
uprobes: introduce the global struct vm_special_mapping xol_mapping
Revert "uprobes: use vm_special_mapping close() functionality"
mm: support large folios swap-in for sync io devices
mm: add nr argument in mem_cgroup_swapin_uncharge_swap() helper to support large folios
mm: fix swap_read_folio_zeromap() for large folios with partial zeromap
mm/debug_vm_pgtable: Use pxdp_get() for accessing page table entries
set_memory: add __must_check to generic stubs
mm/vma: return the exact errno in vms_gather_munmap_vmas()
memcg: cleanup with !CONFIG_MEMCG_V1
mm/show_mem.c: report alloc tags in human readable units
mm: support poison recovery from copy_present_page()
mm: support poison recovery from do_cow_fault()
resource, kunit: add test case for region_intersects()
resource: make alloc_free_mem_region() works for iomem_resource
mm: z3fold: deprecate CONFIG_Z3FOLD
vfio/pci: implement huge_fault support
mm/arm64: support large pfn mappings
mm/x86: support large pfn mappings
...
Diffstat (limited to 'drivers/block/zram')
 drivers/block/zram/Kconfig           |  77
 drivers/block/zram/Makefile          |   8
 drivers/block/zram/backend_842.c     |  61
 drivers/block/zram/backend_842.h     |  10
 drivers/block/zram/backend_deflate.c | 146
 drivers/block/zram/backend_deflate.h |  10
 drivers/block/zram/backend_lz4.c     | 127
 drivers/block/zram/backend_lz4.h     |  10
 drivers/block/zram/backend_lz4hc.c   | 128
 drivers/block/zram/backend_lz4hc.h   |  10
 drivers/block/zram/backend_lzo.c     |  59
 drivers/block/zram/backend_lzo.h     |  10
 drivers/block/zram/backend_lzorle.c  |  59
 drivers/block/zram/backend_lzorle.h  |  10
 drivers/block/zram/backend_zstd.c    | 226
 drivers/block/zram/backend_zstd.h    |  10
 drivers/block/zram/zcomp.c           | 194
 drivers/block/zram/zcomp.h           |  71
 drivers/block/zram/zram_drv.c        | 141
 drivers/block/zram/zram_drv.h        |   1
 20 files changed, 1241 insertions(+), 127 deletions(-)
diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig index eacf1cba7bf4..6aea609b795c 100644 --- a/drivers/block/zram/Kconfig +++ b/drivers/block/zram/Kconfig @@ -2,8 +2,6 @@ config ZRAM tristate "Compressed RAM block device support" depends on BLOCK && SYSFS && MMU - depends on HAVE_ZSMALLOC - depends on CRYPTO_LZO || CRYPTO_ZSTD || CRYPTO_LZ4 || CRYPTO_LZ4HC || CRYPTO_842 select ZSMALLOC help Creates virtual block devices called /dev/zramX (X = 0, 1, ...). @@ -16,6 +14,49 @@ config ZRAM See Documentation/admin-guide/blockdev/zram.rst for more information. +config ZRAM_BACKEND_LZ4 + bool "lz4 compression support" + depends on ZRAM + select LZ4_COMPRESS + select LZ4_DECOMPRESS + +config ZRAM_BACKEND_LZ4HC + bool "lz4hc compression support" + depends on ZRAM + select LZ4HC_COMPRESS + select LZ4_DECOMPRESS + +config ZRAM_BACKEND_ZSTD + bool "zstd compression support" + depends on ZRAM + select ZSTD_COMPRESS + select ZSTD_DECOMPRESS + +config ZRAM_BACKEND_DEFLATE + bool "deflate compression support" + depends on ZRAM + select ZLIB_DEFLATE + select ZLIB_INFLATE + +config ZRAM_BACKEND_842 + bool "842 compression support" + depends on ZRAM + select 842_COMPRESS + select 842_DECOMPRESS + +config ZRAM_BACKEND_FORCE_LZO + depends on ZRAM + def_bool !ZRAM_BACKEND_LZ4 && !ZRAM_BACKEND_LZ4HC && \ + !ZRAM_BACKEND_ZSTD && !ZRAM_BACKEND_DEFLATE && \ + !ZRAM_BACKEND_842 + +config ZRAM_BACKEND_LZO + bool "lzo and lzo-rle compression support" if !ZRAM_BACKEND_FORCE_LZO + depends on ZRAM + default ZRAM_BACKEND_FORCE_LZO + select LZO_COMPRESS + select LZO_DECOMPRESS + choice prompt "Default zram compressor" default ZRAM_DEF_COMP_LZORLE @@ -23,38 +64,44 @@ choice config ZRAM_DEF_COMP_LZORLE bool "lzo-rle" - depends on CRYPTO_LZO + depends on ZRAM_BACKEND_LZO -config ZRAM_DEF_COMP_ZSTD - bool "zstd" - depends on CRYPTO_ZSTD +config ZRAM_DEF_COMP_LZO + bool "lzo" + depends on ZRAM_BACKEND_LZO config ZRAM_DEF_COMP_LZ4 bool "lz4" - depends on CRYPTO_LZ4 - -config 
ZRAM_DEF_COMP_LZO - bool "lzo" - depends on CRYPTO_LZO + depends on ZRAM_BACKEND_LZ4 config ZRAM_DEF_COMP_LZ4HC bool "lz4hc" - depends on CRYPTO_LZ4HC + depends on ZRAM_BACKEND_LZ4HC + +config ZRAM_DEF_COMP_ZSTD + bool "zstd" + depends on ZRAM_BACKEND_ZSTD + +config ZRAM_DEF_COMP_DEFLATE + bool "deflate" + depends on ZRAM_BACKEND_DEFLATE config ZRAM_DEF_COMP_842 bool "842" - depends on CRYPTO_842 + depends on ZRAM_BACKEND_842 endchoice config ZRAM_DEF_COMP string default "lzo-rle" if ZRAM_DEF_COMP_LZORLE - default "zstd" if ZRAM_DEF_COMP_ZSTD - default "lz4" if ZRAM_DEF_COMP_LZ4 default "lzo" if ZRAM_DEF_COMP_LZO + default "lz4" if ZRAM_DEF_COMP_LZ4 default "lz4hc" if ZRAM_DEF_COMP_LZ4HC + default "zstd" if ZRAM_DEF_COMP_ZSTD + default "deflate" if ZRAM_DEF_COMP_DEFLATE default "842" if ZRAM_DEF_COMP_842 + default "unset-value" config ZRAM_WRITEBACK bool "Write back incompressible or idle page to backing device" diff --git a/drivers/block/zram/Makefile b/drivers/block/zram/Makefile index de9e457907b1..0fdefd576691 100644 --- a/drivers/block/zram/Makefile +++ b/drivers/block/zram/Makefile @@ -1,4 +1,12 @@ # SPDX-License-Identifier: GPL-2.0-only + zram-y := zcomp.o zram_drv.o +zram-$(CONFIG_ZRAM_BACKEND_LZO) += backend_lzorle.o backend_lzo.o +zram-$(CONFIG_ZRAM_BACKEND_LZ4) += backend_lz4.o +zram-$(CONFIG_ZRAM_BACKEND_LZ4HC) += backend_lz4hc.o +zram-$(CONFIG_ZRAM_BACKEND_ZSTD) += backend_zstd.o +zram-$(CONFIG_ZRAM_BACKEND_DEFLATE) += backend_deflate.o +zram-$(CONFIG_ZRAM_BACKEND_842) += backend_842.o + obj-$(CONFIG_ZRAM) += zram.o diff --git a/drivers/block/zram/backend_842.c b/drivers/block/zram/backend_842.c new file mode 100644 index 000000000000..10d9d5c60f53 --- /dev/null +++ b/drivers/block/zram/backend_842.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/sw842.h> +#include <linux/vmalloc.h> + +#include "backend_842.h" + +static void release_params_842(struct zcomp_params 
*params) +{ +} + +static int setup_params_842(struct zcomp_params *params) +{ + return 0; +} + +static void destroy_842(struct zcomp_ctx *ctx) +{ + kfree(ctx->context); +} + +static int create_842(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + ctx->context = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL); + if (!ctx->context) + return -ENOMEM; + return 0; +} + +static int compress_842(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + unsigned int dlen = req->dst_len; + int ret; + + ret = sw842_compress(req->src, req->src_len, req->dst, &dlen, + ctx->context); + if (ret == 0) + req->dst_len = dlen; + return ret; +} + +static int decompress_842(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + unsigned int dlen = req->dst_len; + + return sw842_decompress(req->src, req->src_len, req->dst, &dlen); +} + +const struct zcomp_ops backend_842 = { + .compress = compress_842, + .decompress = decompress_842, + .create_ctx = create_842, + .destroy_ctx = destroy_842, + .setup_params = setup_params_842, + .release_params = release_params_842, + .name = "842", +}; diff --git a/drivers/block/zram/backend_842.h b/drivers/block/zram/backend_842.h new file mode 100644 index 000000000000..4dc85c188799 --- /dev/null +++ b/drivers/block/zram/backend_842.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#ifndef __BACKEND_842_H__ +#define __BACKEND_842_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_842; + +#endif /* __BACKEND_842_H__ */ diff --git a/drivers/block/zram/backend_deflate.c b/drivers/block/zram/backend_deflate.c new file mode 100644 index 000000000000..0f7f252c12f4 --- /dev/null +++ b/drivers/block/zram/backend_deflate.c @@ -0,0 +1,146 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> +#include <linux/zlib.h> + +#include "backend_deflate.h" + +/* Use the same value as crypto API */ +#define 
DEFLATE_DEF_WINBITS 11 +#define DEFLATE_DEF_MEMLEVEL MAX_MEM_LEVEL + +struct deflate_ctx { + struct z_stream_s cctx; + struct z_stream_s dctx; +}; + +static void deflate_release_params(struct zcomp_params *params) +{ +} + +static int deflate_setup_params(struct zcomp_params *params) +{ + if (params->level == ZCOMP_PARAM_NO_LEVEL) + params->level = Z_DEFAULT_COMPRESSION; + + return 0; +} + +static void deflate_destroy(struct zcomp_ctx *ctx) +{ + struct deflate_ctx *zctx = ctx->context; + + if (!zctx) + return; + + if (zctx->cctx.workspace) { + zlib_deflateEnd(&zctx->cctx); + vfree(zctx->cctx.workspace); + } + if (zctx->dctx.workspace) { + zlib_inflateEnd(&zctx->dctx); + vfree(zctx->dctx.workspace); + } + kfree(zctx); +} + +static int deflate_create(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + struct deflate_ctx *zctx; + size_t sz; + int ret; + + zctx = kzalloc(sizeof(*zctx), GFP_KERNEL); + if (!zctx) + return -ENOMEM; + + ctx->context = zctx; + sz = zlib_deflate_workspacesize(-DEFLATE_DEF_WINBITS, MAX_MEM_LEVEL); + zctx->cctx.workspace = vzalloc(sz); + if (!zctx->cctx.workspace) + goto error; + + ret = zlib_deflateInit2(&zctx->cctx, params->level, Z_DEFLATED, + -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL, + Z_DEFAULT_STRATEGY); + if (ret != Z_OK) + goto error; + + sz = zlib_inflate_workspacesize(); + zctx->dctx.workspace = vzalloc(sz); + if (!zctx->dctx.workspace) + goto error; + + ret = zlib_inflateInit2(&zctx->dctx, -DEFLATE_DEF_WINBITS); + if (ret != Z_OK) + goto error; + + return 0; + +error: + deflate_destroy(ctx); + return -EINVAL; +} + +static int deflate_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct deflate_ctx *zctx = ctx->context; + struct z_stream_s *deflate; + int ret; + + deflate = &zctx->cctx; + ret = zlib_deflateReset(deflate); + if (ret != Z_OK) + return -EINVAL; + + deflate->next_in = (u8 *)req->src; + deflate->avail_in = req->src_len; + deflate->next_out = (u8 *)req->dst; + 
deflate->avail_out = req->dst_len; + + ret = zlib_deflate(deflate, Z_FINISH); + if (ret != Z_STREAM_END) + return -EINVAL; + + req->dst_len = deflate->total_out; + return 0; +} + +static int deflate_decompress(struct zcomp_params *params, + struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct deflate_ctx *zctx = ctx->context; + struct z_stream_s *inflate; + int ret; + + inflate = &zctx->dctx; + + ret = zlib_inflateReset(inflate); + if (ret != Z_OK) + return -EINVAL; + + inflate->next_in = (u8 *)req->src; + inflate->avail_in = req->src_len; + inflate->next_out = (u8 *)req->dst; + inflate->avail_out = req->dst_len; + + ret = zlib_inflate(inflate, Z_SYNC_FLUSH); + if (ret != Z_STREAM_END) + return -EINVAL; + + return 0; +} + +const struct zcomp_ops backend_deflate = { + .compress = deflate_compress, + .decompress = deflate_decompress, + .create_ctx = deflate_create, + .destroy_ctx = deflate_destroy, + .setup_params = deflate_setup_params, + .release_params = deflate_release_params, + .name = "deflate", +}; diff --git a/drivers/block/zram/backend_deflate.h b/drivers/block/zram/backend_deflate.h new file mode 100644 index 000000000000..a39ac12b114c --- /dev/null +++ b/drivers/block/zram/backend_deflate.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#ifndef __BACKEND_DEFLATE_H__ +#define __BACKEND_DEFLATE_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_deflate; + +#endif /* __BACKEND_DEFLATE_H__ */ diff --git a/drivers/block/zram/backend_lz4.c b/drivers/block/zram/backend_lz4.c new file mode 100644 index 000000000000..847f3334eb38 --- /dev/null +++ b/drivers/block/zram/backend_lz4.c @@ -0,0 +1,127 @@ +#include <linux/kernel.h> +#include <linux/lz4.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> + +#include "backend_lz4.h" + +struct lz4_ctx { + void *mem; + + LZ4_streamDecode_t *dstrm; + LZ4_stream_t *cstrm; +}; + +static void lz4_release_params(struct zcomp_params *params) +{ +} + +static int lz4_setup_params(struct 
zcomp_params *params) +{ + if (params->level == ZCOMP_PARAM_NO_LEVEL) + params->level = LZ4_ACCELERATION_DEFAULT; + + return 0; +} + +static void lz4_destroy(struct zcomp_ctx *ctx) +{ + struct lz4_ctx *zctx = ctx->context; + + if (!zctx) + return; + + vfree(zctx->mem); + kfree(zctx->dstrm); + kfree(zctx->cstrm); + kfree(zctx); +} + +static int lz4_create(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + struct lz4_ctx *zctx; + + zctx = kzalloc(sizeof(*zctx), GFP_KERNEL); + if (!zctx) + return -ENOMEM; + + ctx->context = zctx; + if (params->dict_sz == 0) { + zctx->mem = vmalloc(LZ4_MEM_COMPRESS); + if (!zctx->mem) + goto error; + } else { + zctx->dstrm = kzalloc(sizeof(*zctx->dstrm), GFP_KERNEL); + if (!zctx->dstrm) + goto error; + + zctx->cstrm = kzalloc(sizeof(*zctx->cstrm), GFP_KERNEL); + if (!zctx->cstrm) + goto error; + } + + return 0; + +error: + lz4_destroy(ctx); + return -ENOMEM; +} + +static int lz4_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct lz4_ctx *zctx = ctx->context; + int ret; + + if (!zctx->cstrm) { + ret = LZ4_compress_fast(req->src, req->dst, req->src_len, + req->dst_len, params->level, + zctx->mem); + } else { + /* Cstrm needs to be reset */ + ret = LZ4_loadDict(zctx->cstrm, params->dict, params->dict_sz); + if (ret != params->dict_sz) + return -EINVAL; + ret = LZ4_compress_fast_continue(zctx->cstrm, req->src, + req->dst, req->src_len, + req->dst_len, params->level); + } + if (!ret) + return -EINVAL; + req->dst_len = ret; + return 0; +} + +static int lz4_decompress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct lz4_ctx *zctx = ctx->context; + int ret; + + if (!zctx->dstrm) { + ret = LZ4_decompress_safe(req->src, req->dst, req->src_len, + req->dst_len); + } else { + /* Dstrm needs to be reset */ + ret = LZ4_setStreamDecode(zctx->dstrm, params->dict, + params->dict_sz); + if (!ret) + return -EINVAL; + ret = LZ4_decompress_safe_continue(zctx->dstrm, 
req->src, + req->dst, req->src_len, + req->dst_len); + } + if (ret < 0) + return -EINVAL; + return 0; +} + +const struct zcomp_ops backend_lz4 = { + .compress = lz4_compress, + .decompress = lz4_decompress, + .create_ctx = lz4_create, + .destroy_ctx = lz4_destroy, + .setup_params = lz4_setup_params, + .release_params = lz4_release_params, + .name = "lz4", +}; diff --git a/drivers/block/zram/backend_lz4.h b/drivers/block/zram/backend_lz4.h new file mode 100644 index 000000000000..c11fa602a703 --- /dev/null +++ b/drivers/block/zram/backend_lz4.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#ifndef __BACKEND_LZ4_H__ +#define __BACKEND_LZ4_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_lz4; + +#endif /* __BACKEND_LZ4_H__ */ diff --git a/drivers/block/zram/backend_lz4hc.c b/drivers/block/zram/backend_lz4hc.c new file mode 100644 index 000000000000..5f37d5abcaeb --- /dev/null +++ b/drivers/block/zram/backend_lz4hc.c @@ -0,0 +1,128 @@ +#include <linux/kernel.h> +#include <linux/lz4.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> + +#include "backend_lz4hc.h" + +struct lz4hc_ctx { + void *mem; + + LZ4_streamDecode_t *dstrm; + LZ4_streamHC_t *cstrm; +}; + +static void lz4hc_release_params(struct zcomp_params *params) +{ +} + +static int lz4hc_setup_params(struct zcomp_params *params) +{ + if (params->level == ZCOMP_PARAM_NO_LEVEL) + params->level = LZ4HC_DEFAULT_CLEVEL; + + return 0; +} + +static void lz4hc_destroy(struct zcomp_ctx *ctx) +{ + struct lz4hc_ctx *zctx = ctx->context; + + if (!zctx) + return; + + kfree(zctx->dstrm); + kfree(zctx->cstrm); + vfree(zctx->mem); + kfree(zctx); +} + +static int lz4hc_create(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + struct lz4hc_ctx *zctx; + + zctx = kzalloc(sizeof(*zctx), GFP_KERNEL); + if (!zctx) + return -ENOMEM; + + ctx->context = zctx; + if (params->dict_sz == 0) { + zctx->mem = vmalloc(LZ4HC_MEM_COMPRESS); + if (!zctx->mem) + goto error; + } else { + zctx->dstrm = 
kzalloc(sizeof(*zctx->dstrm), GFP_KERNEL); + if (!zctx->dstrm) + goto error; + + zctx->cstrm = kzalloc(sizeof(*zctx->cstrm), GFP_KERNEL); + if (!zctx->cstrm) + goto error; + } + + return 0; + +error: + lz4hc_destroy(ctx); + return -EINVAL; +} + +static int lz4hc_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct lz4hc_ctx *zctx = ctx->context; + int ret; + + if (!zctx->cstrm) { + ret = LZ4_compress_HC(req->src, req->dst, req->src_len, + req->dst_len, params->level, + zctx->mem); + } else { + /* Cstrm needs to be reset */ + LZ4_resetStreamHC(zctx->cstrm, params->level); + ret = LZ4_loadDictHC(zctx->cstrm, params->dict, + params->dict_sz); + if (ret != params->dict_sz) + return -EINVAL; + ret = LZ4_compress_HC_continue(zctx->cstrm, req->src, req->dst, + req->src_len, req->dst_len); + } + if (!ret) + return -EINVAL; + req->dst_len = ret; + return 0; +} + +static int lz4hc_decompress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + struct lz4hc_ctx *zctx = ctx->context; + int ret; + + if (!zctx->dstrm) { + ret = LZ4_decompress_safe(req->src, req->dst, req->src_len, + req->dst_len); + } else { + /* Dstrm needs to be reset */ + ret = LZ4_setStreamDecode(zctx->dstrm, params->dict, + params->dict_sz); + if (!ret) + return -EINVAL; + ret = LZ4_decompress_safe_continue(zctx->dstrm, req->src, + req->dst, req->src_len, + req->dst_len); + } + if (ret < 0) + return -EINVAL; + return 0; +} + +const struct zcomp_ops backend_lz4hc = { + .compress = lz4hc_compress, + .decompress = lz4hc_decompress, + .create_ctx = lz4hc_create, + .destroy_ctx = lz4hc_destroy, + .setup_params = lz4hc_setup_params, + .release_params = lz4hc_release_params, + .name = "lz4hc", +}; diff --git a/drivers/block/zram/backend_lz4hc.h b/drivers/block/zram/backend_lz4hc.h new file mode 100644 index 000000000000..6de03551ed4d --- /dev/null +++ b/drivers/block/zram/backend_lz4hc.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: 
GPL-2.0-or-later + +#ifndef __BACKEND_LZ4HC_H__ +#define __BACKEND_LZ4HC_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_lz4hc; + +#endif /* __BACKEND_LZ4HC_H__ */ diff --git a/drivers/block/zram/backend_lzo.c b/drivers/block/zram/backend_lzo.c new file mode 100644 index 000000000000..4c906beaae6b --- /dev/null +++ b/drivers/block/zram/backend_lzo.c @@ -0,0 +1,59 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/lzo.h> + +#include "backend_lzo.h" + +static void lzo_release_params(struct zcomp_params *params) +{ +} + +static int lzo_setup_params(struct zcomp_params *params) +{ + return 0; +} + +static int lzo_create(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + ctx->context = kzalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL); + if (!ctx->context) + return -ENOMEM; + return 0; +} + +static void lzo_destroy(struct zcomp_ctx *ctx) +{ + kfree(ctx->context); +} + +static int lzo_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + int ret; + + ret = lzo1x_1_compress(req->src, req->src_len, req->dst, + &req->dst_len, ctx->context); + return ret == LZO_E_OK ? 0 : ret; +} + +static int lzo_decompress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + int ret; + + ret = lzo1x_decompress_safe(req->src, req->src_len, + req->dst, &req->dst_len); + return ret == LZO_E_OK ? 
0 : ret; +} + +const struct zcomp_ops backend_lzo = { + .compress = lzo_compress, + .decompress = lzo_decompress, + .create_ctx = lzo_create, + .destroy_ctx = lzo_destroy, + .setup_params = lzo_setup_params, + .release_params = lzo_release_params, + .name = "lzo", +}; diff --git a/drivers/block/zram/backend_lzo.h b/drivers/block/zram/backend_lzo.h new file mode 100644 index 000000000000..93d54749e63c --- /dev/null +++ b/drivers/block/zram/backend_lzo.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#ifndef __BACKEND_LZO_H__ +#define __BACKEND_LZO_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_lzo; + +#endif /* __BACKEND_LZO_H__ */ diff --git a/drivers/block/zram/backend_lzorle.c b/drivers/block/zram/backend_lzorle.c new file mode 100644 index 000000000000..10640c96cbfc --- /dev/null +++ b/drivers/block/zram/backend_lzorle.c @@ -0,0 +1,59 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/lzo.h> + +#include "backend_lzorle.h" + +static void lzorle_release_params(struct zcomp_params *params) +{ +} + +static int lzorle_setup_params(struct zcomp_params *params) +{ + return 0; +} + +static int lzorle_create(struct zcomp_params *params, struct zcomp_ctx *ctx) +{ + ctx->context = kzalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL); + if (!ctx->context) + return -ENOMEM; + return 0; +} + +static void lzorle_destroy(struct zcomp_ctx *ctx) +{ + kfree(ctx->context); +} + +static int lzorle_compress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + int ret; + + ret = lzorle1x_1_compress(req->src, req->src_len, req->dst, + &req->dst_len, ctx->context); + return ret == LZO_E_OK ? 0 : ret; +} + +static int lzorle_decompress(struct zcomp_params *params, struct zcomp_ctx *ctx, + struct zcomp_req *req) +{ + int ret; + + ret = lzo1x_decompress_safe(req->src, req->src_len, + req->dst, &req->dst_len); + return ret == LZO_E_OK ? 
0 : ret; +} + +const struct zcomp_ops backend_lzorle = { + .compress = lzorle_compress, + .decompress = lzorle_decompress, + .create_ctx = lzorle_create, + .destroy_ctx = lzorle_destroy, + .setup_params = lzorle_setup_params, + .release_params = lzorle_release_params, + .name = "lzo-rle", +}; diff --git a/drivers/block/zram/backend_lzorle.h b/drivers/block/zram/backend_lzorle.h new file mode 100644 index 000000000000..6ecb163b09f1 --- /dev/null +++ b/drivers/block/zram/backend_lzorle.h @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#ifndef __BACKEND_LZORLE_H__ +#define __BACKEND_LZORLE_H__ + +#include "zcomp.h" + +extern const struct zcomp_ops backend_lzorle; + +#endif /* __BACKEND_LZORLE_H__ */ diff --git a/drivers/block/zram/backend_zstd.c b/drivers/block/zram/backend_zstd.c new file mode 100644 index 000000000000..1184c0036f44 --- /dev/null +++ b/drivers/block/zram/backend_zstd.c @@ -0,0 +1,226 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> +#include <linux/zstd.h> + +#include "backend_zstd.h" + +struct zstd_ctx { + zstd_cctx *cctx; + zstd_dctx *dctx; + void *cctx_mem; + void *dctx_mem; +}; + +struct zstd_params { + zstd_custom_mem custom_mem; + zstd_cdict *cdict; + zstd_ddict *ddict; + zstd_parameters cprm; +}; + +/* + * For C/D dictionaries we need to provide zstd with zstd_custom_mem, + * which zstd uses internally to allocate/free memory when needed. + * + * This means that allocator.customAlloc() can be called from zcomp_compress() + * under local-lock (per-CPU compression stream), in which case we must use + * GFP_ATOMIC. + * + * Another complication here is that we can be configured as a swap device. 
+ */ +static void *zstd_custom_alloc(void *opaque, size_t size) +{ + if (!preemptible()) + return kvzalloc(size, GFP_ATOMIC); + + return kvzalloc(size, __GFP_KSWAPD_RECLAIM | __GFP_NOWARN); +} + +static void zstd_custom_free(void *opaque, void *address) +{ + kvfree(address); +} + +static void zstd_release_params(struct zcomp_params *params) +{ + struct zstd_params *zp = params->drv_data; + + params->drv_data = NULL; + if (!zp) + return; + + zstd_free_cdict(zp->cdict); + zstd_free_ddict(zp->ddict); + kfree(zp); +} + +static int zstd_setup_params(struct zcomp_params *params) +{ + zstd_compression_parameters prm; + struct zstd_params *zp; + + zp = kzalloc(sizeof(*zp), GFP_KERNEL); + if (!zp) + return -ENOMEM; + + params->drv_data = zp; + if (params->level == ZCOMP_PARAM_NO_LEVEL) + params->level = zstd_default_clevel(); + + zp->cprm = zstd_get_params(params->level, PAGE_SIZE); + + zp->custom_mem.customAlloc = zstd_custom_alloc; + zp->custom_mem.customFree = zstd_custom_free; + + prm = zstd_get_cparams(params->level, PAGE_SIZE, + params->dict_sz); + + zp->cdict = zstd_create_cdict_byreference(params->dict, + params->dict_sz, + prm, + zp->custom_mem); + if (!zp->cdict) + goto error; + + zp->ddict = zstd_create_ddict_byreference(params->dict, + params->dict_sz, + zp->custom_mem); + if (!zp->ddict) + goto error; + + return 0; + +error: + zstd_release_params(params); + return -EINVAL; +} + +static void zstd_destroy(struct zcomp_ctx *ctx) +{ + struct zstd_ctx *zctx = ctx->context; + + if (!zctx) + return; + + /* + * If ->cctx_mem and ->dctx_mem were allocated then we didn't use + * C/D dictionary and ->cctx / ->dctx were "embedded" into these + * buffers. + * + * If otherwise then we need to explicitly release ->cctx / ->dctx. 
+	 */
+	if (zctx->cctx_mem)
+		vfree(zctx->cctx_mem);
+	else
+		zstd_free_cctx(zctx->cctx);
+
+	if (zctx->dctx_mem)
+		vfree(zctx->dctx_mem);
+	else
+		zstd_free_dctx(zctx->dctx);
+
+	kfree(zctx);
+}
+
+static int zstd_create(struct zcomp_params *params, struct zcomp_ctx *ctx)
+{
+	struct zstd_ctx *zctx;
+	zstd_parameters prm;
+	size_t sz;
+
+	zctx = kzalloc(sizeof(*zctx), GFP_KERNEL);
+	if (!zctx)
+		return -ENOMEM;
+
+	ctx->context = zctx;
+	if (params->dict_sz == 0) {
+		prm = zstd_get_params(params->level, PAGE_SIZE);
+		sz = zstd_cctx_workspace_bound(&prm.cParams);
+		zctx->cctx_mem = vzalloc(sz);
+		if (!zctx->cctx_mem)
+			goto error;
+
+		zctx->cctx = zstd_init_cctx(zctx->cctx_mem, sz);
+		if (!zctx->cctx)
+			goto error;
+
+		sz = zstd_dctx_workspace_bound();
+		zctx->dctx_mem = vzalloc(sz);
+		if (!zctx->dctx_mem)
+			goto error;
+
+		zctx->dctx = zstd_init_dctx(zctx->dctx_mem, sz);
+		if (!zctx->dctx)
+			goto error;
+	} else {
+		struct zstd_params *zp = params->drv_data;
+
+		zctx->cctx = zstd_create_cctx_advanced(zp->custom_mem);
+		if (!zctx->cctx)
+			goto error;
+
+		zctx->dctx = zstd_create_dctx_advanced(zp->custom_mem);
+		if (!zctx->dctx)
+			goto error;
+	}
+
+	return 0;
+
+error:
+	zstd_release_params(params);
+	zstd_destroy(ctx);
+	return -EINVAL;
+}
+
+static int zstd_compress(struct zcomp_params *params, struct zcomp_ctx *ctx,
+			 struct zcomp_req *req)
+{
+	struct zstd_params *zp = params->drv_data;
+	struct zstd_ctx *zctx = ctx->context;
+	size_t ret;
+
+	if (params->dict_sz == 0)
+		ret = zstd_compress_cctx(zctx->cctx, req->dst, req->dst_len,
+					 req->src, req->src_len, &zp->cprm);
+	else
+		ret = zstd_compress_using_cdict(zctx->cctx, req->dst,
+					       req->dst_len, req->src,
+					       req->src_len,
+					       zp->cdict);
+	if (zstd_is_error(ret))
+		return -EINVAL;
+	req->dst_len = ret;
+	return 0;
+}
+
+static int zstd_decompress(struct zcomp_params *params, struct zcomp_ctx *ctx,
+			   struct zcomp_req *req)
+{
+	struct zstd_params *zp = params->drv_data;
+	struct zstd_ctx *zctx = ctx->context;
+	size_t ret;
+
+	if (params->dict_sz == 0)
+		ret = zstd_decompress_dctx(zctx->dctx, req->dst, req->dst_len,
+					   req->src, req->src_len);
+	else
+		ret = zstd_decompress_using_ddict(zctx->dctx, req->dst,
+						  req->dst_len, req->src,
+						  req->src_len, zp->ddict);
+	if (zstd_is_error(ret))
+		return -EINVAL;
+	return 0;
+}
+
+const struct zcomp_ops backend_zstd = {
+	.compress = zstd_compress,
+	.decompress = zstd_decompress,
+	.create_ctx = zstd_create,
+	.destroy_ctx = zstd_destroy,
+	.setup_params = zstd_setup_params,
+	.release_params = zstd_release_params,
+	.name = "zstd",
+};
diff --git a/drivers/block/zram/backend_zstd.h b/drivers/block/zram/backend_zstd.h
new file mode 100644
index 000000000000..10fdfff1ec1c
--- /dev/null
+++ b/drivers/block/zram/backend_zstd.h
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#ifndef __BACKEND_ZSTD_H__
+#define __BACKEND_ZSTD_H__
+
+#include "zcomp.h"
+
+extern const struct zcomp_ops backend_zstd;
+
+#endif /* __BACKEND_ZSTD_H__ */
diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 8237b08c49d8..bb514403e305 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -1,7 +1,4 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Copyright (C) 2014 Sergey Senozhatsky.
- */
 
 #include <linux/kernel.h>
 #include <linux/string.h>
@@ -15,91 +12,97 @@
 
 #include "zcomp.h"
 
-static const char * const backends[] = {
-#if IS_ENABLED(CONFIG_CRYPTO_LZO)
-	"lzo",
-	"lzo-rle",
+#include "backend_lzo.h"
+#include "backend_lzorle.h"
+#include "backend_lz4.h"
+#include "backend_lz4hc.h"
+#include "backend_zstd.h"
+#include "backend_deflate.h"
+#include "backend_842.h"
+
+static const struct zcomp_ops *backends[] = {
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_LZO)
+	&backend_lzorle,
+	&backend_lzo,
 #endif
-#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
-	"lz4",
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_LZ4)
+	&backend_lz4,
 #endif
-#if IS_ENABLED(CONFIG_CRYPTO_LZ4HC)
-	"lz4hc",
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_LZ4HC)
+	&backend_lz4hc,
 #endif
-#if IS_ENABLED(CONFIG_CRYPTO_842)
-	"842",
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_ZSTD)
+	&backend_zstd,
 #endif
-#if IS_ENABLED(CONFIG_CRYPTO_ZSTD)
-	"zstd",
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_DEFLATE)
+	&backend_deflate,
 #endif
+#if IS_ENABLED(CONFIG_ZRAM_BACKEND_842)
+	&backend_842,
+#endif
+	NULL
 };
 
-static void zcomp_strm_free(struct zcomp_strm *zstrm)
+static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm)
 {
-	if (!IS_ERR_OR_NULL(zstrm->tfm))
-		crypto_free_comp(zstrm->tfm);
+	comp->ops->destroy_ctx(&zstrm->ctx);
 	vfree(zstrm->buffer);
-	zstrm->tfm = NULL;
 	zstrm->buffer = NULL;
 }
 
-/*
- * Initialize zcomp_strm structure with ->tfm initialized by backend, and
- * ->buffer. Return a negative value on error.
- */
-static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
+static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm)
 {
-	zstrm->tfm = crypto_alloc_comp(comp->name, 0, 0);
+	int ret;
+
+	ret = comp->ops->create_ctx(comp->params, &zstrm->ctx);
+	if (ret)
+		return ret;
+
 	/*
	 * allocate 2 pages. 1 for compressed data, plus 1 extra for the
	 * case when compressed size is larger than the original one
	 */
 	zstrm->buffer = vzalloc(2 * PAGE_SIZE);
-	if (IS_ERR_OR_NULL(zstrm->tfm) || !zstrm->buffer) {
-		zcomp_strm_free(zstrm);
+	if (!zstrm->buffer) {
+		zcomp_strm_free(comp, zstrm);
 		return -ENOMEM;
 	}
 	return 0;
 }
 
+static const struct zcomp_ops *lookup_backend_ops(const char *comp)
+{
+	int i = 0;
+
+	while (backends[i]) {
+		if (sysfs_streq(comp, backends[i]->name))
+			break;
+		i++;
+	}
+	return backends[i];
+}
+
 bool zcomp_available_algorithm(const char *comp)
 {
-	/*
-	 * Crypto does not ignore a trailing new line symbol,
-	 * so make sure you don't supply a string containing
-	 * one.
-	 * This also means that we permit zcomp initialisation
-	 * with any compressing algorithm known to crypto api.
-	 */
-	return crypto_has_comp(comp, 0, 0) == 1;
+	return lookup_backend_ops(comp) != NULL;
 }
 
 /* show available compressors */
 ssize_t zcomp_available_show(const char *comp, char *buf)
 {
-	bool known_algorithm = false;
 	ssize_t sz = 0;
 	int i;
 
-	for (i = 0; i < ARRAY_SIZE(backends); i++) {
-		if (!strcmp(comp, backends[i])) {
-			known_algorithm = true;
+	for (i = 0; i < ARRAY_SIZE(backends) - 1; i++) {
+		if (!strcmp(comp, backends[i]->name)) {
 			sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2,
-					"[%s] ", backends[i]);
+					"[%s] ", backends[i]->name);
 		} else {
 			sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2,
-					"%s ", backends[i]);
+					"%s ", backends[i]->name);
 		}
 	}
 
-	/*
-	 * Out-of-tree module known to crypto api or a missing
-	 * entry in `backends'.
-	 */
-	if (!known_algorithm && crypto_has_comp(comp, 0, 0) == 1)
-		sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2,
-				"[%s] ", comp);
-
 	sz += scnprintf(buf + sz, PAGE_SIZE - sz, "\n");
 	return sz;
 }
@@ -115,38 +118,34 @@ void zcomp_stream_put(struct zcomp *comp)
 	local_unlock(&comp->stream->lock);
 }
 
-int zcomp_compress(struct zcomp_strm *zstrm,
-		   const void *src, unsigned int *dst_len)
+int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
+		   const void *src, unsigned int *dst_len)
 {
-	/*
-	 * Our dst memory (zstrm->buffer) is always `2 * PAGE_SIZE' sized
-	 * because sometimes we can endup having a bigger compressed data
-	 * due to various reasons: for example compression algorithms tend
-	 * to add some padding to the compressed buffer. Speaking of padding,
-	 * comp algorithm `842' pads the compressed length to multiple of 8
-	 * and returns -ENOSP when the dst memory is not big enough, which
-	 * is not something that ZRAM wants to see. We can handle the
-	 * `compressed_size > PAGE_SIZE' case easily in ZRAM, but when we
-	 * receive -ERRNO from the compressing backend we can't help it
-	 * anymore. To make `842' happy we need to tell the exact size of
-	 * the dst buffer, zram_drv will take care of the fact that
-	 * compressed buffer is too big.
-	 */
-	*dst_len = PAGE_SIZE * 2;
+	struct zcomp_req req = {
+		.src = src,
+		.dst = zstrm->buffer,
+		.src_len = PAGE_SIZE,
+		.dst_len = 2 * PAGE_SIZE,
+	};
+	int ret;
 
-	return crypto_comp_compress(zstrm->tfm,
-				    src, PAGE_SIZE,
-				    zstrm->buffer, dst_len);
+	ret = comp->ops->compress(comp->params, &zstrm->ctx, &req);
+	if (!ret)
+		*dst_len = req.dst_len;
+	return ret;
 }
 
-int zcomp_decompress(struct zcomp_strm *zstrm,
-		     const void *src, unsigned int src_len, void *dst)
+int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
+		     const void *src, unsigned int src_len, void *dst)
 {
-	unsigned int dst_len = PAGE_SIZE;
-
-	return crypto_comp_decompress(zstrm->tfm,
-				      src, src_len,
-				      dst, &dst_len);
+	struct zcomp_req req = {
+		.src = src,
+		.dst = dst,
+		.src_len = src_len,
+		.dst_len = PAGE_SIZE,
+	};
+
+	return comp->ops->decompress(comp->params, &zstrm->ctx, &req);
 }
 
 int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
@@ -158,7 +157,7 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 	zstrm = per_cpu_ptr(comp->stream, cpu);
 	local_lock_init(&zstrm->lock);
 
-	ret = zcomp_strm_init(zstrm, comp);
+	ret = zcomp_strm_init(comp, zstrm);
 	if (ret)
 		pr_err("Can't allocate a compression stream\n");
 	return ret;
@@ -170,11 +169,11 @@ int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 	struct zcomp_strm *zstrm;
 
 	zstrm = per_cpu_ptr(comp->stream, cpu);
-	zcomp_strm_free(zstrm);
+	zcomp_strm_free(comp, zstrm);
 	return 0;
 }
 
-static int zcomp_init(struct zcomp *comp)
+static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 {
 	int ret;
 
@@ -182,12 +181,19 @@ static int zcomp_init(struct zcomp *comp)
 	if (!comp->stream)
 		return -ENOMEM;
 
+	comp->params = params;
+	ret = comp->ops->setup_params(comp->params);
+	if (ret)
+		goto cleanup;
+
 	ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
 	if (ret < 0)
 		goto cleanup;
+
 	return 0;
 
 cleanup:
+	comp->ops->release_params(comp->params);
 	free_percpu(comp->stream);
 	return ret;
 }
@@ -195,37 +201,35 @@ cleanup:
 void zcomp_destroy(struct zcomp *comp)
 {
 	cpuhp_state_remove_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
+	comp->ops->release_params(comp->params);
 	free_percpu(comp->stream);
 	kfree(comp);
 }
 
-/*
- * search available compressors for requested algorithm.
- * allocate new zcomp and initialize it. return compressing
- * backend pointer or ERR_PTR if things went bad. ERR_PTR(-EINVAL)
- * if requested algorithm is not supported, ERR_PTR(-ENOMEM) in
- * case of allocation error, or any other error potentially
- * returned by zcomp_init().
- */
-struct zcomp *zcomp_create(const char *alg)
+struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params)
 {
 	struct zcomp *comp;
 	int error;
 
 	/*
-	 * Crypto API will execute /sbin/modprobe if the compression module
-	 * is not loaded yet. We must do it here, otherwise we are about to
-	 * call /sbin/modprobe under CPU hot-plug lock.
+	 * The backends array has a sentinel NULL value, so the minimum
+	 * size is 1. In order to be valid the array, apart from the
+	 * sentinel NULL element, should have at least one compression
+	 * backend selected.
 	 */
-	if (!zcomp_available_algorithm(alg))
-		return ERR_PTR(-EINVAL);
+	BUILD_BUG_ON(ARRAY_SIZE(backends) <= 1);
 
 	comp = kzalloc(sizeof(struct zcomp), GFP_KERNEL);
 	if (!comp)
 		return ERR_PTR(-ENOMEM);
 
-	comp->name = alg;
-	error = zcomp_init(comp);
+	comp->ops = lookup_backend_ops(alg);
+	if (!comp->ops) {
+		kfree(comp);
+		return ERR_PTR(-EINVAL);
+	}
+
+	error = zcomp_init(comp, params);
 	if (error) {
 		kfree(comp);
 		return ERR_PTR(error);
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index e9fe63da0e9b..ad5762813842 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -1,24 +1,70 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Copyright (C) 2014 Sergey Senozhatsky.
- */
 
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
+
 #include <linux/local_lock.h>
 
+#define ZCOMP_PARAM_NO_LEVEL	INT_MIN
+
+/*
+ * Immutable driver (backend) parameters. The driver may attach private
+ * data to it (e.g. driver representation of the dictionary, etc.).
+ *
+ * This data is kept per-comp and is shared among execution contexts.
+ */
+struct zcomp_params {
+	void *dict;
+	size_t dict_sz;
+	s32 level;
+
+	void *drv_data;
+};
+
+/*
+ * Run-time driver context - scratch buffers, etc. It is modified during
+ * request execution (compression/decompression), cannot be shared, so
+ * it's in per-CPU area.
+ */
+struct zcomp_ctx {
+	void *context;
+};
+
 struct zcomp_strm {
-	/* The members ->buffer and ->tfm are protected by ->lock. */
 	local_lock_t lock;
-	/* compression/decompression buffer */
+	/* compression buffer */
 	void *buffer;
-	struct crypto_comp *tfm;
+	struct zcomp_ctx ctx;
+};
+
+struct zcomp_req {
+	const unsigned char *src;
+	const size_t src_len;
+
+	unsigned char *dst;
+	size_t dst_len;
+};
+
+struct zcomp_ops {
+	int (*compress)(struct zcomp_params *params, struct zcomp_ctx *ctx,
+			struct zcomp_req *req);
+	int (*decompress)(struct zcomp_params *params, struct zcomp_ctx *ctx,
+			  struct zcomp_req *req);
+
+	int (*create_ctx)(struct zcomp_params *params, struct zcomp_ctx *ctx);
+	void (*destroy_ctx)(struct zcomp_ctx *ctx);
+
+	int (*setup_params)(struct zcomp_params *params);
+	void (*release_params)(struct zcomp_params *params);
+
+	const char *name;
 };
 
 /* dynamic per-device compression frontend */
 struct zcomp {
 	struct zcomp_strm __percpu *stream;
-	const char *name;
+	const struct zcomp_ops *ops;
+	struct zcomp_params *params;
 	struct hlist_node node;
 };
 
@@ -27,16 +73,15 @@ int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node);
 ssize_t zcomp_available_show(const char *comp, char *buf);
 bool zcomp_available_algorithm(const char *comp);
 
-struct zcomp *zcomp_create(const char *alg);
+struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
 void zcomp_stream_put(struct zcomp *comp);
 
-int zcomp_compress(struct zcomp_strm *zstrm,
-		   const void *src, unsigned int *dst_len);
-
-int zcomp_decompress(struct zcomp_strm *zstrm,
-		     const void *src, unsigned int src_len, void *dst);
+int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
+		   const void *src, unsigned int *dst_len);
+int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
+		     const void *src, unsigned int src_len, void *dst);
 
 #endif /* _ZCOMP_H_ */
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 84cd62efac0f..c3d245617083 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -33,6 +33,7 @@
 #include <linux/debugfs.h>
 #include <linux/cpuhotplug.h>
 #include <linux/part_stat.h>
+#include <linux/kernel_read_file.h>
 
 #include "zram_drv.h"
 
@@ -998,6 +999,103 @@ static int __comp_algorithm_store(struct zram *zram, u32 prio, const char *buf)
 	return 0;
 }
 
+static void comp_params_reset(struct zram *zram, u32 prio)
+{
+	struct zcomp_params *params = &zram->params[prio];
+
+	vfree(params->dict);
+	params->level = ZCOMP_PARAM_NO_LEVEL;
+	params->dict_sz = 0;
+	params->dict = NULL;
+}
+
+static int comp_params_store(struct zram *zram, u32 prio, s32 level,
+			     const char *dict_path)
+{
+	ssize_t sz = 0;
+
+	comp_params_reset(zram, prio);
+
+	if (dict_path) {
+		sz = kernel_read_file_from_path(dict_path, 0,
+						&zram->params[prio].dict,
+						INT_MAX,
+						NULL,
+						READING_POLICY);
+		if (sz < 0)
+			return -EINVAL;
+	}
+
+	zram->params[prio].dict_sz = sz;
+	zram->params[prio].level = level;
+	return 0;
+}
+
+static ssize_t algorithm_params_store(struct device *dev,
+				      struct device_attribute *attr,
+				      const char *buf,
+				      size_t len)
+{
+	s32 prio = ZRAM_PRIMARY_COMP, level = ZCOMP_PARAM_NO_LEVEL;
+	char *args, *param, *val, *algo = NULL, *dict_path = NULL;
+	struct zram *zram = dev_to_zram(dev);
+	int ret;
+
+	args = skip_spaces(buf);
+	while (*args) {
+		args = next_arg(args, &param, &val);
+
+		if (!val || !*val)
+			return -EINVAL;
+
+		if (!strcmp(param, "priority")) {
+			ret = kstrtoint(val, 10, &prio);
+			if (ret)
+				return ret;
+			continue;
+		}
+
+		if (!strcmp(param, "level")) {
+			ret = kstrtoint(val, 10, &level);
+			if (ret)
+				return ret;
+			continue;
+		}
+
+		if (!strcmp(param, "algo")) {
+			algo = val;
+			continue;
+		}
+
+		if (!strcmp(param, "dict")) {
+			dict_path = val;
+			continue;
+		}
+	}
+
+	/* Lookup priority by algorithm name */
+	if (algo) {
+		s32 p;
+
+		prio = -EINVAL;
+		for (p = ZRAM_PRIMARY_COMP; p < ZRAM_MAX_COMPS; p++) {
+			if (!zram->comp_algs[p])
+				continue;
+
+			if (!strcmp(zram->comp_algs[p], algo)) {
+				prio = p;
+				break;
+			}
+		}
+	}
+
+	if (prio < ZRAM_PRIMARY_COMP || prio >= ZRAM_MAX_COMPS)
+		return -EINVAL;
+
+	ret = comp_params_store(zram, prio, level, dict_path);
+	return ret ? ret : len;
+}
+
 static ssize_t comp_algorithm_show(struct device *dev,
 				   struct device_attribute *attr,
 				   char *buf)
@@ -1330,7 +1428,8 @@ static int zram_read_from_zspool(struct zram *zram, struct page *page,
 		ret = 0;
 	} else {
 		dst = kmap_local_page(page);
-		ret = zcomp_decompress(zstrm, src, size, dst);
+		ret = zcomp_decompress(zram->comps[prio], zstrm,
+				       src, size, dst);
 		kunmap_local(dst);
 		zcomp_stream_put(zram->comps[prio]);
 	}
@@ -1417,7 +1516,8 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
 	src = kmap_local_page(page);
-	ret = zcomp_compress(zstrm, src, &comp_len);
+	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
+			     src, &comp_len);
 	kunmap_local(src);
 
 	if (unlikely(ret)) {
@@ -1604,7 +1704,8 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
 	num_recomps++;
 	zstrm = zcomp_stream_get(zram->comps[prio]);
 	src = kmap_local_page(page);
-	ret = zcomp_compress(zstrm, src, &comp_len_new);
+	ret = zcomp_compress(zram->comps[prio], zstrm,
+			     src, &comp_len_new);
 	kunmap_local(src);
 
 	if (ret) {
@@ -1757,6 +1858,18 @@ static ssize_t recompress_store(struct device *dev,
 			algo = val;
 			continue;
 		}
+
+		if (!strcmp(param, "priority")) {
+			ret = kstrtouint(val, 10, &prio);
+			if (ret)
+				return ret;
+
+			if (prio == ZRAM_PRIMARY_COMP)
+				prio = ZRAM_SECONDARY_COMP;
+
+			prio_max = min(prio + 1, ZRAM_MAX_COMPS);
+			continue;
+		}
 	}
 
 	if (threshold >= huge_class_size)
@@ -1979,6 +2092,15 @@ static void zram_slot_free_notify(struct block_device *bdev,
 	zram_slot_unlock(zram, index);
 }
 
+static void zram_comp_params_reset(struct zram *zram)
+{
+	u32 prio;
+
+	for (prio = ZRAM_PRIMARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
+		comp_params_reset(zram, prio);
+	}
+}
+
 static void zram_destroy_comps(struct zram *zram)
 {
 	u32 prio;
@@ -1992,6 +2114,13 @@ static void zram_destroy_comps(struct zram *zram)
 		zcomp_destroy(comp);
 		zram->num_active_comps--;
 	}
+
+	for (prio = ZRAM_SECONDARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
+		kfree(zram->comp_algs[prio]);
+		zram->comp_algs[prio] = NULL;
+	}
+
+	zram_comp_params_reset(zram);
 }
 
 static void zram_reset_device(struct zram *zram)
@@ -2049,7 +2178,8 @@ static ssize_t disksize_store(struct device *dev,
 		if (!zram->comp_algs[prio])
 			continue;
 
-		comp = zcomp_create(zram->comp_algs[prio]);
+		comp = zcomp_create(zram->comp_algs[prio],
+				    &zram->params[prio]);
 		if (IS_ERR(comp)) {
 			pr_err("Cannot initialise %s compressing backend\n",
 			       zram->comp_algs[prio]);
@@ -2152,6 +2282,7 @@ static DEVICE_ATTR_RW(writeback_limit_enable);
 static DEVICE_ATTR_RW(recomp_algorithm);
 static DEVICE_ATTR_WO(recompress);
 #endif
+static DEVICE_ATTR_WO(algorithm_params);
 
 static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_disksize.attr,
@@ -2179,6 +2310,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_recomp_algorithm.attr,
 	&dev_attr_recompress.attr,
 #endif
+	&dev_attr_algorithm_params.attr,
 	NULL,
 };
 
@@ -2254,6 +2386,7 @@ static int zram_add(void)
 	if (ret)
 		goto out_cleanup_disk;
 
+	zram_comp_params_reset(zram);
 	comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
 
 	zram_debugfs_register(zram);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 531cefc66668..cfc8c059db63 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -106,6 +106,7 @@ struct zram {
 	struct zram_table_entry *table;
 	struct zs_pool *mem_pool;
 	struct zcomp *comps[ZRAM_MAX_COMPS];
+	struct zcomp_params params[ZRAM_MAX_COMPS];
 	struct gendisk *disk;
 	/* Prevent concurrent execution of device init */
 	struct rw_semaphore init_lock;