author    Linus Torvalds <torvalds@linux-foundation.org>  2026-04-14 01:42:19 +0300
committer Linus Torvalds <torvalds@linux-foundation.org>  2026-04-14 01:42:19 +0300
commit    b8f82cb0d84d00c04cdbdce42f67df71b8507e8b (patch)
tree      23d83d0ecb4fbff65459886c7a4e19008a820c49
parent    b206a6fb9a105be198cf2dc435ffa4ad7c75ddc2 (diff)
parent    3457a5ccacd34fdd5ebd3a4745e721b5a1239690 (diff)
download  linux-b8f82cb0d84d00c04cdbdce42f67df71b8507e8b.tar.xz
Merge tag 'landlock-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux
Pull Landlock update from Mickaël Salaün:
 "This adds a new Landlock access right for pathname UNIX domain sockets
  thanks to a new LSM hook, and a few fixes"

* tag 'landlock-7.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux: (23 commits)
  landlock: Document fallocate(2) as another truncation corner case
  landlock: Document FS access right for pathname UNIX sockets
  selftests/landlock: Simplify ruleset creation and enforcement in fs_test
  selftests/landlock: Check that coredump sockets stay unrestricted
  selftests/landlock: Audit test for LANDLOCK_ACCESS_FS_RESOLVE_UNIX
  selftests/landlock: Test LANDLOCK_ACCESS_FS_RESOLVE_UNIX
  selftests/landlock: Replace access_fs_16 with ACCESS_ALL in fs_test
  samples/landlock: Add support for named UNIX domain socket restrictions
  landlock: Clarify BUILD_BUG_ON check in scoping logic
  landlock: Control pathname UNIX domain socket resolution by path
  landlock: Use mem_is_zero() in is_layer_masks_allowed()
  lsm: Add LSM hook security_unix_find
  landlock: Fix kernel-doc warning for pointer-to-array parameters
  landlock: Fix formatting in tsync.c
  landlock: Improve kernel-doc "Return:" section consistency
  landlock: Add missing kernel-doc "Return:" sections
  selftests/landlock: Fix format warning for __u64 in net_test
  selftests/landlock: Skip stale records in audit_match_record()
  selftests/landlock: Drain stale audit records on init
  selftests/landlock: Fix socket file descriptor leaks in audit helpers
  ...
-rw-r--r--  Documentation/security/landlock.rst  42
-rw-r--r--  Documentation/userspace-api/landlock.rst  22
-rw-r--r--  include/linux/lsm_hook_defs.h  5
-rw-r--r--  include/linux/security.h  11
-rw-r--r--  include/uapi/linux/landlock.h  25
-rw-r--r--  net/unix/af_unix.c  10
-rw-r--r--  samples/landlock/sandboxer.c  12
-rw-r--r--  security/landlock/access.h  4
-rw-r--r--  security/landlock/audit.c  1
-rw-r--r--  security/landlock/cred.c  6
-rw-r--r--  security/landlock/cred.h  2
-rw-r--r--  security/landlock/domain.c  6
-rw-r--r--  security/landlock/fs.c  163
-rw-r--r--  security/landlock/id.c  2
-rw-r--r--  security/landlock/limits.h  2
-rw-r--r--  security/landlock/ruleset.c  14
-rw-r--r--  security/landlock/ruleset.h  2
-rw-r--r--  security/landlock/syscalls.c  33
-rw-r--r--  security/landlock/task.c  22
-rw-r--r--  security/landlock/tsync.c  124
-rw-r--r--  security/security.c  20
-rw-r--r--  tools/testing/selftests/landlock/audit.h  133
-rw-r--r--  tools/testing/selftests/landlock/audit_test.c  357
-rw-r--r--  tools/testing/selftests/landlock/base_test.c  2
-rw-r--r--  tools/testing/selftests/landlock/fs_test.c  1343
-rw-r--r--  tools/testing/selftests/landlock/net_test.c  2
-rw-r--r--  tools/testing/selftests/landlock/ptrace_test.c  1
-rw-r--r--  tools/testing/selftests/landlock/scoped_abstract_unix_test.c  1
-rw-r--r--  tools/testing/selftests/landlock/tsync_test.c  77
29 files changed, 1650 insertions, 794 deletions
diff --git a/Documentation/security/landlock.rst b/Documentation/security/landlock.rst
index 3e4d4d04cfae..c5186526e76f 100644
--- a/Documentation/security/landlock.rst
+++ b/Documentation/security/landlock.rst
@@ -7,7 +7,7 @@ Landlock LSM: kernel documentation
==================================
:Author: Mickaël Salaün
-:Date: September 2025
+:Date: March 2026
Landlock's goal is to create scoped access-control (i.e. sandboxing). To
harden a whole system, this feature should be available to any process,
@@ -89,6 +89,46 @@ this is required to keep access controls consistent over the whole system, and
this avoids unattended bypasses through file descriptor passing (i.e. confused
deputy attack).
+.. _scoped-flags-interaction:
+
+Interaction between scoped flags and other access rights
+--------------------------------------------------------
+
+The ``scoped`` flags in &struct landlock_ruleset_attr restrict the
+use of *outgoing* IPC from the created Landlock domain, while they
+permit reaching out to IPC endpoints *within* the created Landlock
+domain.
+
+In the future, scoped flags *may* interact with other access rights,
+e.g. so that abstract UNIX sockets can be allow-listed by name, or so
+that signals can be allow-listed by signal number or target process.
+
+When introducing ``LANDLOCK_ACCESS_FS_RESOLVE_UNIX``, we defined it to
+implicitly have the same scoping semantics as a
+``LANDLOCK_SCOPE_PATHNAME_UNIX_SOCKET`` flag would have: connecting to
+UNIX sockets within the same domain (where
+``LANDLOCK_ACCESS_FS_RESOLVE_UNIX`` is used) is unconditionally
+allowed.
+
+The reasoning is:
+
+* Like other IPC mechanisms, connecting to named UNIX sockets in the
+ same domain should be expected and harmless. (If needed, users can
+ further refine their Landlock policies with nested domains or by
+ restricting ``LANDLOCK_ACCESS_FS_MAKE_SOCK``.)
+* We reserve the option to still introduce
+ ``LANDLOCK_SCOPE_PATHNAME_UNIX_SOCKET`` in the future. (This would
+ be useful if we wanted to have a Landlock rule to permit IPC access
+ to other Landlock domains.)
+* But we can postpone the point in time when users have to deal with
+ two interacting flags visible in the userspace API. (In particular,
+ it is possible that it won't be needed in practice, in which case we
+ can avoid the second flag altogether.)
+* If we *do* introduce ``LANDLOCK_SCOPE_PATHNAME_UNIX_SOCKET`` in the
+ future, setting this scoped flag in a ruleset does *not reduce* the
+ restrictions, because access within the same scope is already
+ allowed based on ``LANDLOCK_ACCESS_FS_RESOLVE_UNIX``.
+
Tests
=====
diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst
index 7f86d7a37dc2..fd8b78c31f2f 100644
--- a/Documentation/userspace-api/landlock.rst
+++ b/Documentation/userspace-api/landlock.rst
@@ -77,7 +77,8 @@ to be explicit about the denied-by-default access rights.
LANDLOCK_ACCESS_FS_MAKE_SYM |
LANDLOCK_ACCESS_FS_REFER |
LANDLOCK_ACCESS_FS_TRUNCATE |
- LANDLOCK_ACCESS_FS_IOCTL_DEV,
+ LANDLOCK_ACCESS_FS_IOCTL_DEV |
+ LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
.handled_access_net =
LANDLOCK_ACCESS_NET_BIND_TCP |
LANDLOCK_ACCESS_NET_CONNECT_TCP,
@@ -127,6 +128,10 @@ version, and only use the available subset of access rights:
/* Removes LANDLOCK_SCOPE_* for ABI < 6 */
ruleset_attr.scoped &= ~(LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET |
LANDLOCK_SCOPE_SIGNAL);
+ __attribute__((fallthrough));
+ case 6 ... 8:
+ /* Removes LANDLOCK_ACCESS_FS_RESOLVE_UNIX for ABI < 9 */
+ ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_RESOLVE_UNIX;
}
This enables the creation of an inclusive ruleset that will contain our rules.
@@ -378,8 +383,8 @@ Truncating files
The operations covered by ``LANDLOCK_ACCESS_FS_WRITE_FILE`` and
``LANDLOCK_ACCESS_FS_TRUNCATE`` both change the contents of a file and sometimes
-overlap in non-intuitive ways. It is recommended to always specify both of
-these together.
+overlap in non-intuitive ways. It is strongly recommended to always specify
+both of these together (either granting both, or granting none).
A particularly surprising example is :manpage:`creat(2)`. The name suggests
that this system call requires the rights to create and write files. However,
@@ -391,6 +396,10 @@ It should also be noted that truncating files does not require the
system call, this can also be done through :manpage:`open(2)` with the flags
``O_RDONLY | O_TRUNC``.
+At the same time, on some filesystems, :manpage:`fallocate(2)` offers a way to
+shorten file contents with ``FALLOC_FL_COLLAPSE_RANGE`` when the file is opened
+for writing, sidestepping the ``LANDLOCK_ACCESS_FS_TRUNCATE`` right.
+
The truncate right is associated with the opened file (see below).
Rights associated with file descriptors
@@ -700,6 +709,13 @@ enforce Landlock rulesets across all threads of the calling process
using the ``LANDLOCK_RESTRICT_SELF_TSYNC`` flag passed to
sys_landlock_restrict_self().
+Pathname UNIX sockets (ABI < 9)
+-------------------------------
+
+Starting with the Landlock ABI version 9, it is possible to restrict
+connections to pathname UNIX domain sockets (:manpage:`unix(7)`) using
+the new ``LANDLOCK_ACCESS_FS_RESOLVE_UNIX`` right.
+
.. _kernel_support:
Kernel support
diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
index b4958167e381..2b8dfb35caed 100644
--- a/include/linux/lsm_hook_defs.h
+++ b/include/linux/lsm_hook_defs.h
@@ -322,6 +322,11 @@ LSM_HOOK(int, 0, post_notification, const struct cred *w_cred,
LSM_HOOK(int, 0, watch_key, struct key *key)
#endif /* CONFIG_SECURITY && CONFIG_KEY_NOTIFICATIONS */
+#if defined(CONFIG_SECURITY_NETWORK) && defined(CONFIG_SECURITY_PATH)
+LSM_HOOK(int, 0, unix_find, const struct path *path, struct sock *other,
+ int flags)
+#endif /* CONFIG_SECURITY_NETWORK && CONFIG_SECURITY_PATH */
+
#ifdef CONFIG_SECURITY_NETWORK
LSM_HOOK(int, 0, unix_stream_connect, struct sock *sock, struct sock *other,
struct sock *newsk)
diff --git a/include/linux/security.h b/include/linux/security.h
index 8d2d4856934e..41d7367cf403 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -1954,6 +1954,17 @@ static inline int security_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
}
#endif /* CONFIG_SECURITY_NETWORK */
+#if defined(CONFIG_SECURITY_NETWORK) && defined(CONFIG_SECURITY_PATH)
+
+int security_unix_find(const struct path *path, struct sock *other, int flags);
+
+#else /* CONFIG_SECURITY_NETWORK && CONFIG_SECURITY_PATH */
+static inline int security_unix_find(const struct path *path, struct sock *other, int flags)
+{
+ return 0;
+}
+#endif /* CONFIG_SECURITY_NETWORK && CONFIG_SECURITY_PATH */
+
#ifdef CONFIG_SECURITY_INFINIBAND
int security_ib_pkey_access(void *sec, u64 subnet_prefix, u16 pkey);
int security_ib_endport_manage_subnet(void *sec, const char *name, u8 port_num);
diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
index f88fa1f68b77..10a346e55e95 100644
--- a/include/uapi/linux/landlock.h
+++ b/include/uapi/linux/landlock.h
@@ -116,7 +116,9 @@ struct landlock_ruleset_attr {
* ``LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF``, this flag only affects
* future nested domains, not the one being created. It can also be used
* with a @ruleset_fd value of -1 to mute subdomain logs without creating a
- * domain.
+ * domain. When combined with %LANDLOCK_RESTRICT_SELF_TSYNC and a
+ * @ruleset_fd value of -1, this configuration is propagated to all threads
+ * of the current process.
*
* The following flag supports policy enforcement in multithreaded processes:
*
@@ -248,6 +250,26 @@ struct landlock_net_port_attr {
*
* This access right is available since the fifth version of the Landlock
* ABI.
+ * - %LANDLOCK_ACCESS_FS_RESOLVE_UNIX: Look up pathname UNIX domain sockets
+ * (:manpage:`unix(7)`). On UNIX domain sockets, this restricts both calls to
+ * :manpage:`connect(2)` as well as calls to :manpage:`sendmsg(2)` with an
+ * explicit recipient address.
+ *
+ * This access right only applies to connections to UNIX server sockets which
+ * were created outside of the newly created Landlock domain (e.g. from within
+ * a parent domain or from an unrestricted process). Newly created UNIX
+ * servers within the same Landlock domain continue to be accessible. In this
+ * regard, %LANDLOCK_ACCESS_FS_RESOLVE_UNIX has the same semantics as the
+ * ``LANDLOCK_SCOPE_*`` flags.
+ *
+ * If a resolve attempt is denied, the operation returns an ``EACCES`` error,
+ * in line with other filesystem access rights (but different to denials for
+ * abstract UNIX domain sockets).
+ *
+ * This access right is available since the ninth version of the Landlock ABI.
+ *
+ * The rationale for this design is described in
+ * :ref:`Documentation/security/landlock.rst <scoped-flags-interaction>`.
*
* Whether an opened file can be truncated with :manpage:`ftruncate(2)` or used
* with `ioctl(2)` is determined during :manpage:`open(2)`, in the same way as
@@ -333,6 +355,7 @@ struct landlock_net_port_attr {
#define LANDLOCK_ACCESS_FS_REFER (1ULL << 13)
#define LANDLOCK_ACCESS_FS_TRUNCATE (1ULL << 14)
#define LANDLOCK_ACCESS_FS_IOCTL_DEV (1ULL << 15)
+#define LANDLOCK_ACCESS_FS_RESOLVE_UNIX (1ULL << 16)
/* clang-format on */
/**
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 0b0e58dd75fc..662ddb91f735 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1231,11 +1231,15 @@ static struct sock *unix_find_bsd(struct sockaddr_un *sunaddr, int addr_len,
goto path_put;
err = -EPROTOTYPE;
- if (sk->sk_type == type)
- touch_atime(&path);
- else
+ if (sk->sk_type != type)
goto sock_put;
+ err = security_unix_find(&path, sk, flags);
+ if (err)
+ goto sock_put;
+
+ touch_atime(&path);
+
path_put(&path);
return sk;
diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
index 9f21088c0855..66e56ae275c6 100644
--- a/samples/landlock/sandboxer.c
+++ b/samples/landlock/sandboxer.c
@@ -111,7 +111,8 @@ static int parse_path(char *env_path, const char ***const path_list)
LANDLOCK_ACCESS_FS_WRITE_FILE | \
LANDLOCK_ACCESS_FS_READ_FILE | \
LANDLOCK_ACCESS_FS_TRUNCATE | \
- LANDLOCK_ACCESS_FS_IOCTL_DEV)
+ LANDLOCK_ACCESS_FS_IOCTL_DEV | \
+ LANDLOCK_ACCESS_FS_RESOLVE_UNIX)
/* clang-format on */
@@ -295,11 +296,12 @@ out_unset:
LANDLOCK_ACCESS_FS_MAKE_SYM | \
LANDLOCK_ACCESS_FS_REFER | \
LANDLOCK_ACCESS_FS_TRUNCATE | \
- LANDLOCK_ACCESS_FS_IOCTL_DEV)
+ LANDLOCK_ACCESS_FS_IOCTL_DEV | \
+ LANDLOCK_ACCESS_FS_RESOLVE_UNIX)
/* clang-format on */
-#define LANDLOCK_ABI_LAST 8
+#define LANDLOCK_ABI_LAST 9
#define XSTR(s) #s
#define STR(s) XSTR(s)
@@ -438,6 +440,10 @@ int main(const int argc, char *const argv[], char *const *const envp)
~LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON;
__attribute__((fallthrough));
case 7:
+ case 8:
+ /* Removes LANDLOCK_ACCESS_FS_RESOLVE_UNIX for ABI < 9 */
+ ruleset_attr.handled_access_fs &=
+ ~LANDLOCK_ACCESS_FS_RESOLVE_UNIX;
/* Must be printed for any ABI < LANDLOCK_ABI_LAST. */
fprintf(stderr,
"Hint: You should update the running kernel "
diff --git a/security/landlock/access.h b/security/landlock/access.h
index 42c95747d7bd..c19d5bc13944 100644
--- a/security/landlock/access.h
+++ b/security/landlock/access.h
@@ -34,7 +34,7 @@
LANDLOCK_ACCESS_FS_IOCTL_DEV)
/* clang-format on */
-typedef u16 access_mask_t;
+typedef u32 access_mask_t;
/* Makes sure all filesystem access rights can be stored. */
static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS);
@@ -50,7 +50,7 @@ struct access_masks {
access_mask_t fs : LANDLOCK_NUM_ACCESS_FS;
access_mask_t net : LANDLOCK_NUM_ACCESS_NET;
access_mask_t scope : LANDLOCK_NUM_SCOPE;
-};
+} __packed __aligned(sizeof(u32));
union access_masks_all {
struct access_masks masks;
diff --git a/security/landlock/audit.c b/security/landlock/audit.c
index 60ff217ab95b..8d0edf94037d 100644
--- a/security/landlock/audit.c
+++ b/security/landlock/audit.c
@@ -37,6 +37,7 @@ static const char *const fs_access_strings[] = {
[BIT_INDEX(LANDLOCK_ACCESS_FS_REFER)] = "fs.refer",
[BIT_INDEX(LANDLOCK_ACCESS_FS_TRUNCATE)] = "fs.truncate",
[BIT_INDEX(LANDLOCK_ACCESS_FS_IOCTL_DEV)] = "fs.ioctl_dev",
+ [BIT_INDEX(LANDLOCK_ACCESS_FS_RESOLVE_UNIX)] = "fs.resolve_unix",
};
static_assert(ARRAY_SIZE(fs_access_strings) == LANDLOCK_NUM_ACCESS_FS);
diff --git a/security/landlock/cred.c b/security/landlock/cred.c
index 0cb3edde4d18..cc419de75cd6 100644
--- a/security/landlock/cred.c
+++ b/security/landlock/cred.c
@@ -22,10 +22,8 @@ static void hook_cred_transfer(struct cred *const new,
const struct landlock_cred_security *const old_llcred =
landlock_cred(old);
- if (old_llcred->domain) {
- landlock_get_ruleset(old_llcred->domain);
- *landlock_cred(new) = *old_llcred;
- }
+ landlock_get_ruleset(old_llcred->domain);
+ *landlock_cred(new) = *old_llcred;
}
static int hook_cred_prepare(struct cred *const new,
diff --git a/security/landlock/cred.h b/security/landlock/cred.h
index c10a06727eb1..f287c56b5fd4 100644
--- a/security/landlock/cred.h
+++ b/security/landlock/cred.h
@@ -115,7 +115,7 @@ static inline bool landlocked(const struct task_struct *const task)
* @handle_layer: returned youngest layer handling a subset of @masks. Not set
* if the function returns NULL.
*
- * Returns: landlock_cred(@cred) if any access rights specified in @masks is
+ * Return: landlock_cred(@cred) if any access right specified in @masks is
* handled, or NULL otherwise.
*/
static inline const struct landlock_cred_security *
diff --git a/security/landlock/domain.c b/security/landlock/domain.c
index f0d83f43afa1..06b6bd845060 100644
--- a/security/landlock/domain.c
+++ b/security/landlock/domain.c
@@ -34,7 +34,7 @@
* @exe_size: Returned size of @exe_str (including the trailing null
* character), if any.
*
- * Returns: A pointer to an allocated buffer where @exe_str point to, %NULL if
+ * Return: A pointer to an allocated buffer where @exe_str points to, %NULL if
* there is no executable path, or an error otherwise.
*/
static const void *get_current_exe(const char **const exe_str,
@@ -73,7 +73,7 @@ static const void *get_current_exe(const char **const exe_str,
}
/*
- * Returns: A newly allocated object describing a domain, or an error
+ * Return: A newly allocated object describing a domain, or an error
* otherwise.
*/
static struct landlock_details *get_current_details(void)
@@ -114,6 +114,8 @@ static struct landlock_details *get_current_details(void)
* restriction. The subjective credentials must not be in an overridden state.
*
* @hierarchy->parent and @hierarchy->usage should already be set.
+ *
+ * Return: 0 on success, -errno on failure.
*/
int landlock_init_hierarchy_log(struct landlock_hierarchy *const hierarchy)
{
diff --git a/security/landlock/fs.c b/security/landlock/fs.c
index e764470f588c..c1ecfe239032 100644
--- a/security/landlock/fs.c
+++ b/security/landlock/fs.c
@@ -27,6 +27,7 @@
#include <linux/lsm_hooks.h>
#include <linux/mount.h>
#include <linux/namei.h>
+#include <linux/net.h>
#include <linux/path.h>
#include <linux/pid.h>
#include <linux/rcupdate.h>
@@ -36,6 +37,7 @@
#include <linux/types.h>
#include <linux/wait_bit.h>
#include <linux/workqueue.h>
+#include <net/af_unix.h>
#include <uapi/linux/fiemap.h>
#include <uapi/linux/landlock.h>
@@ -119,8 +121,8 @@ static const struct landlock_object_underops landlock_fs_underops = {
* Any new IOCTL commands that are implemented in fs/ioctl.c's do_vfs_ioctl()
* should be considered for inclusion here.
*
- * Returns: true if the IOCTL @cmd can not be restricted with Landlock for
- * device files.
+ * Return: True if the IOCTL @cmd can not be restricted with Landlock for
+ * device files, false otherwise.
*/
static __attribute_const__ bool is_masked_device_ioctl(const unsigned int cmd)
{
@@ -314,7 +316,8 @@ retry:
LANDLOCK_ACCESS_FS_WRITE_FILE | \
LANDLOCK_ACCESS_FS_READ_FILE | \
LANDLOCK_ACCESS_FS_TRUNCATE | \
- LANDLOCK_ACCESS_FS_IOCTL_DEV)
+ LANDLOCK_ACCESS_FS_IOCTL_DEV | \
+ LANDLOCK_ACCESS_FS_RESOLVE_UNIX)
/* clang-format on */
/*
@@ -428,10 +431,10 @@ static bool may_refer(const struct layer_access_masks *const src_parent,
* Check that a destination file hierarchy has more restrictions than a source
* file hierarchy. This is only used for link and rename actions.
*
- * Returns: true if child1 may be moved from parent1 to parent2 without
- * increasing its access rights. If child2 is set, an additional condition is
+ * Return: True if child1 may be moved from parent1 to parent2 without
+ * increasing its access rights (if child2 is set, an additional condition is
* that child2 may be used from parent2 to parent1 without increasing its access
- * rights.
+ * rights), false otherwise.
*/
static bool no_more_access(const struct layer_access_masks *const parent1,
const struct layer_access_masks *const child1,
@@ -564,7 +567,7 @@ static void test_no_more_access(struct kunit *const test)
static bool is_layer_masks_allowed(const struct layer_access_masks *masks)
{
- return !memchr_inv(&masks->access, 0, sizeof(masks->access));
+ return mem_is_zero(&masks->access, sizeof(masks->access));
}
/*
@@ -734,9 +737,7 @@ static void test_is_eacces_with_write(struct kunit *const test)
* checks that the collected accesses and the remaining ones are enough to
* allow the request.
*
- * Returns:
- * - true if the access request is granted;
- * - false otherwise.
+ * Return: True if the access request is granted, false otherwise.
*/
static bool
is_access_to_paths_allowed(const struct landlock_ruleset *const domain,
@@ -1022,9 +1023,8 @@ static access_mask_t maybe_remove(const struct dentry *const dentry)
* only handles walking on the same mount point and only checks one set of
* accesses.
*
- * Returns:
- * - true if all the domain access rights are allowed for @dir;
- * - false if the walk reached @mnt_root.
+ * Return: True if all the domain access rights are allowed for @dir, false if
+ * the walk reached @mnt_root.
*/
static bool collect_domain_accesses(const struct landlock_ruleset *const domain,
const struct dentry *const mnt_root,
@@ -1120,10 +1120,9 @@ static bool collect_domain_accesses(const struct landlock_ruleset *const domain,
* ephemeral matrices take some space on the stack, which limits the number of
* layers to a deemed reasonable number: 16.
*
- * Returns:
- * - 0 if access is allowed;
- * - -EXDEV if @old_dentry would inherit new access rights from @new_dir;
- * - -EACCES if file removal or creation is denied.
+ * Return: 0 if access is allowed, -EXDEV if @old_dentry would inherit new
+ * access rights from @new_dir, or -EACCES if file removal or creation is
+ * denied.
*/
static int current_check_refer_path(struct dentry *const old_dentry,
const struct path *const new_dir,
@@ -1561,6 +1560,133 @@ static int hook_path_truncate(const struct path *const path)
return current_check_access_path(path, LANDLOCK_ACCESS_FS_TRUNCATE);
}
+/**
+ * unmask_scoped_access - Remove access right bits in @masks in all layers
+ * where @client and @server have the same domain
+ *
+ * This does the same as domain_is_scoped(), but unmasks bits in @masks.
+ * It can not return early as domain_is_scoped() does.
+ *
+ * A scoped access for a given access right bit is allowed iff, for all layer
+ * depths where the access bit is set, the client and server domain are the
+ * same. This function clears the access rights @access in @masks at all layer
+ * depths where the client and server domain are the same, so that, when they
+ * are all cleared, the access is allowed.
+ *
+ * @client: Client domain
+ * @server: Server domain
+ * @masks: Layer access masks to unmask
+ * @access: Access bits that control scoping
+ */
+static void unmask_scoped_access(const struct landlock_ruleset *const client,
+ const struct landlock_ruleset *const server,
+ struct layer_access_masks *const masks,
+ const access_mask_t access)
+{
+ int client_layer, server_layer;
+ const struct landlock_hierarchy *client_walker, *server_walker;
+
+ /* This should not happen. */
+ if (WARN_ON_ONCE(!client))
+ return;
+
+ /* Server has no Landlock domain; nothing to clear. */
+ if (!server)
+ return;
+
+ /*
+ * client_layer must be able to represent all numbers from
+ * LANDLOCK_MAX_NUM_LAYERS - 1 to -1 for the loop below to terminate.
+ * (It must be large enough, and it must be signed.)
+ */
+ BUILD_BUG_ON(!is_signed_type(typeof(client_layer)));
+ BUILD_BUG_ON(LANDLOCK_MAX_NUM_LAYERS - 1 >
+ type_max(typeof(client_layer)));
+
+ client_layer = client->num_layers - 1;
+ client_walker = client->hierarchy;
+ server_layer = server->num_layers - 1;
+ server_walker = server->hierarchy;
+
+ /*
+ * Clears the access bits at all layers where the client domain is the
+ * same as the server domain. We start the walk at min(client_layer,
+ * server_layer). The layer bits until there can not be cleared because
+ * either the client or the server domain is missing.
+ */
+ for (; client_layer > server_layer; client_layer--)
+ client_walker = client_walker->parent;
+
+ for (; server_layer > client_layer; server_layer--)
+ server_walker = server_walker->parent;
+
+ for (; client_layer >= 0; client_layer--) {
+ if (masks->access[client_layer] & access &&
+ client_walker == server_walker)
+ masks->access[client_layer] &= ~access;
+
+ client_walker = client_walker->parent;
+ server_walker = server_walker->parent;
+ }
+}
+
+static int hook_unix_find(const struct path *const path, struct sock *other,
+ int flags)
+{
+ const struct landlock_ruleset *dom_other;
+ const struct landlock_cred_security *subject;
+ struct layer_access_masks layer_masks;
+ struct landlock_request request = {};
+ static const struct access_masks fs_resolve_unix = {
+ .fs = LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ };
+
+ /* Lookup for the purpose of saving coredumps is OK. */
+ if (unlikely(flags & SOCK_COREDUMP))
+ return 0;
+
+ subject = landlock_get_applicable_subject(current_cred(),
+ fs_resolve_unix, NULL);
+
+ if (!subject)
+ return 0;
+
+ /*
+ * Ignoring return value: that the domains apply was already checked in
+ * landlock_get_applicable_subject() above.
+ */
+ landlock_init_layer_masks(subject->domain, fs_resolve_unix.fs,
+ &layer_masks, LANDLOCK_KEY_INODE);
+
+ /* Checks the layers in which we are connecting within the same domain. */
+ unix_state_lock(other);
+ if (unlikely(sock_flag(other, SOCK_DEAD) || !other->sk_socket ||
+ !other->sk_socket->file)) {
+ unix_state_unlock(other);
+ /*
+ * We rely on the caller to catch the (non-reversible) SOCK_DEAD
+ * condition and retry the lookup. If we returned an error
+ * here, the lookup would not get retried.
+ */
+ return 0;
+ }
+ dom_other = landlock_cred(other->sk_socket->file->f_cred)->domain;
+
+ /* Access to the same (or a lower) domain is always allowed. */
+ unmask_scoped_access(subject->domain, dom_other, &layer_masks,
+ fs_resolve_unix.fs);
+ unix_state_unlock(other);
+
+ /* Checks the connections to allow-listed paths. */
+ if (is_access_to_paths_allowed(subject->domain, path,
+ fs_resolve_unix.fs, &layer_masks,
+ &request, NULL, 0, NULL, NULL, NULL))
+ return 0;
+
+ landlock_log_denial(subject, &request);
+ return -EACCES;
+}
+
/* File hooks */
/**
@@ -1568,7 +1694,7 @@ static int hook_path_truncate(const struct path *const path)
*
* @file: File being opened.
*
- * Returns the access rights that are required for opening the given file,
+ * Return: The access rights that are required for opening the given file,
* depending on the file type and open mode.
*/
static access_mask_t
@@ -1838,6 +1964,7 @@ static struct security_hook_list landlock_hooks[] __ro_after_init = {
LSM_HOOK_INIT(path_unlink, hook_path_unlink),
LSM_HOOK_INIT(path_rmdir, hook_path_rmdir),
LSM_HOOK_INIT(path_truncate, hook_path_truncate),
+ LSM_HOOK_INIT(unix_find, hook_unix_find),
LSM_HOOK_INIT(file_alloc_security, hook_file_alloc_security),
LSM_HOOK_INIT(file_open, hook_file_open),
diff --git a/security/landlock/id.c b/security/landlock/id.c
index 838c3ed7bb82..6c8769777fdc 100644
--- a/security/landlock/id.c
+++ b/security/landlock/id.c
@@ -258,7 +258,7 @@ static void test_range2_rand16(struct kunit *const test)
*
* @number_of_ids: Number of IDs to hold. Must be greater than one.
*
- * Returns: The first ID in the range.
+ * Return: The first ID in the range.
*/
u64 landlock_get_id_range(size_t number_of_ids)
{
diff --git a/security/landlock/limits.h b/security/landlock/limits.h
index eb584f47288d..b454ad73b15e 100644
--- a/security/landlock/limits.h
+++ b/security/landlock/limits.h
@@ -19,7 +19,7 @@
#define LANDLOCK_MAX_NUM_LAYERS 16
#define LANDLOCK_MAX_NUM_RULES U32_MAX
-#define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_IOCTL_DEV
+#define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_RESOLVE_UNIX
#define LANDLOCK_MASK_ACCESS_FS ((LANDLOCK_LAST_ACCESS_FS << 1) - 1)
#define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS)
diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c
index 73018dc8d6c7..181df7736bb9 100644
--- a/security/landlock/ruleset.c
+++ b/security/landlock/ruleset.c
@@ -107,7 +107,7 @@ static bool is_object_pointer(const enum landlock_key_type key_type)
static struct landlock_rule *
create_rule(const struct landlock_id id,
- const struct landlock_layer (*const layers)[], const u32 num_layers,
+ const struct landlock_layer (*layers)[], const u32 num_layers,
const struct landlock_layer *const new_layer)
{
struct landlock_rule *new_rule;
@@ -201,10 +201,12 @@ static void build_check_ruleset(void)
* When merging a ruleset in a domain, or copying a domain, @layers will be
* added to @ruleset as new constraints, similarly to a boolean AND between
* access rights.
+ *
+ * Return: 0 on success, -errno on failure.
*/
static int insert_rule(struct landlock_ruleset *const ruleset,
const struct landlock_id id,
- const struct landlock_layer (*const layers)[],
+ const struct landlock_layer (*layers)[],
const size_t num_layers)
{
struct rb_node **walker_node;
@@ -530,8 +532,8 @@ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset)
* The current task is requesting to be restricted. The subjective credentials
* must not be in an overridden state. cf. landlock_init_hierarchy_log().
*
- * Returns the intersection of @parent and @ruleset, or returns @parent if
- * @ruleset is empty, or returns a duplicate of @ruleset if @parent is empty.
+ * Return: A new domain merging @parent and @ruleset on success, or ERR_PTR()
+ * on failure. If @parent is NULL, the new domain duplicates @ruleset.
*/
struct landlock_ruleset *
landlock_merge_ruleset(struct landlock_ruleset *const parent,
@@ -622,7 +624,7 @@ landlock_find_rule(const struct landlock_ruleset *const ruleset,
* @rule: A rule that grants a set of access rights for each layer
* @masks: A matrix of unfulfilled access rights for each layer
*
- * Returns true if the request is allowed (i.e. the access rights granted all
+ * Return: True if the request is allowed (i.e. the access rights granted all
* remaining unfulfilled access rights and masks has no leftover set bits).
*/
bool landlock_unmask_layers(const struct landlock_rule *const rule,
@@ -672,7 +674,7 @@ get_access_mask_t(const struct landlock_ruleset *const ruleset,
* @masks: Layer access masks to populate.
* @key_type: The key type to switch between access masks of different types.
*
- * Returns: An access mask where each access right bit is set which is handled
+ * Return: An access mask where each access right bit is set which is handled
* in any of the active layers in @domain.
*/
access_mask_t
diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h
index 9d6dc632684c..889f4b30301a 100644
--- a/security/landlock/ruleset.h
+++ b/security/landlock/ruleset.h
@@ -224,7 +224,7 @@ static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset)
*
* @domain: Landlock ruleset (used as a domain)
*
- * Returns: an access_masks result of the OR of all the domain's access masks.
+ * Return: An access_masks result of the OR of all the domain's access masks.
*/
static inline struct access_masks
landlock_union_access_masks(const struct landlock_ruleset *const domain)
diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
index 0d66a68677b7..accfd2e5a0cd 100644
--- a/security/landlock/syscalls.c
+++ b/security/landlock/syscalls.c
@@ -60,6 +60,8 @@ static bool is_initialized(void)
* @ksize_min: Minimal required size to be copied.
* @src: User space pointer or NULL.
* @usize: (Alleged) size of the data pointed to by @src.
+ *
+ * Return: 0 on success, -errno on failure.
*/
static __always_inline int
copy_min_struct_from_user(void *const dst, const size_t ksize,
@@ -164,7 +166,7 @@ static const struct file_operations ruleset_fops = {
* If the change involves a fix that requires userspace awareness, also update
* the errata documentation in Documentation/userspace-api/landlock.rst .
*/
-const int landlock_abi_version = 8;
+const int landlock_abi_version = 9;
/**
* sys_landlock_create_ruleset - Create a new ruleset
@@ -178,16 +180,19 @@ const int landlock_abi_version = 8;
* - %LANDLOCK_CREATE_RULESET_VERSION
* - %LANDLOCK_CREATE_RULESET_ERRATA
*
- * This system call enables to create a new Landlock ruleset, and returns the
- * related file descriptor on success.
+ * This system call enables the creation of a new Landlock ruleset.
*
* If %LANDLOCK_CREATE_RULESET_VERSION or %LANDLOCK_CREATE_RULESET_ERRATA is
* set, then @attr must be NULL and @size must be 0.
*
- * Possible returned errors are:
+ * Return: The ruleset file descriptor on success, the Landlock ABI version if
+ * %LANDLOCK_CREATE_RULESET_VERSION is set, the errata value if
+ * %LANDLOCK_CREATE_RULESET_ERRATA is set, or -errno on failure. Possible
+ * returned errors are:
*
* - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
- * - %EINVAL: unknown @flags, or unknown access, or unknown scope, or too small @size;
+ * - %EINVAL: unknown @flags, or unknown access, or unknown scope, or too small
+ * @size;
* - %E2BIG: @attr or @size inconsistencies;
* - %EFAULT: @attr or @size inconsistencies;
* - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs.
@@ -398,7 +403,7 @@ static int add_rule_net_port(struct landlock_ruleset *ruleset,
 * This system call enables defining a new rule and adding it to an existing
 * ruleset.
*
- * Possible returned errors are:
+ * Return: 0 on success, or -errno on failure. Possible returned errors are:
*
* - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
* - %EAFNOSUPPORT: @rule_type is %LANDLOCK_RULE_NET_PORT but TCP/IP is not
@@ -464,7 +469,7 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
* namespace or is running with no_new_privs. This avoids scenarios where
* unprivileged tasks can affect the behavior of privileged children.
*
- * Possible returned errors are:
+ * Return: 0 on success, or -errno on failure. Possible returned errors are:
*
* - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
* - %EINVAL: @flags contains an unknown bit.
@@ -512,10 +517,13 @@ SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
/*
* It is allowed to set LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF with
- * -1 as ruleset_fd, but no other flag must be set.
+ * -1 as ruleset_fd, optionally combined with
+ * LANDLOCK_RESTRICT_SELF_TSYNC to propagate this configuration to all
+ * threads. No other flag must be set.
*/
if (!(ruleset_fd == -1 &&
- flags == LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF)) {
+ (flags & ~LANDLOCK_RESTRICT_SELF_TSYNC) ==
+ LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF)) {
/* Gets and checks the ruleset. */
ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_READ);
if (IS_ERR(ruleset))
@@ -537,9 +545,10 @@ SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
/*
* The only case when a ruleset may not be set is if
- * LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF is set and ruleset_fd is -1.
- * We could optimize this case by not calling commit_creds() if this flag
- * was already set, but it is not worth the complexity.
+ * LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF is set (optionally with
+ * LANDLOCK_RESTRICT_SELF_TSYNC) and ruleset_fd is -1. We could
+ * optimize this case by not calling commit_creds() if this flag was
+ * already set, but it is not worth the complexity.
*/
if (ruleset) {
/*
diff --git a/security/landlock/task.c b/security/landlock/task.c
index 833bc0cfe5c9..6d46042132ce 100644
--- a/security/landlock/task.c
+++ b/security/landlock/task.c
@@ -37,6 +37,9 @@
*
 * Checks if the @parent domain is less than or equal to (i.e. an ancestor, which
* means a subset of) the @child domain.
+ *
+ * Return: True if @parent is an ancestor of or equal to @child, false
+ * otherwise.
*/
static bool domain_scope_le(const struct landlock_ruleset *const parent,
const struct landlock_ruleset *const child)
@@ -79,8 +82,7 @@ static int domain_ptrace(const struct landlock_ruleset *const parent,
* If the current task has Landlock rules, then the child must have at least
* the same rules. Else denied.
*
- * Determines whether a process may access another, returning 0 if permission
- * granted, -errno if denied.
+ * Return: 0 if permission is granted, -errno if denied.
*/
static int hook_ptrace_access_check(struct task_struct *const child,
const unsigned int mode)
@@ -129,8 +131,7 @@ static int hook_ptrace_access_check(struct task_struct *const child,
* If the parent has Landlock rules, then the current task must have the same
* or more rules. Else denied.
*
- * Determines whether the nominated task is permitted to trace the current
- * process, returning 0 if permission is granted, -errno if denied.
+ * Return: 0 if permission is granted, -errno if denied.
*/
static int hook_ptrace_traceme(struct task_struct *const parent)
{
@@ -173,8 +174,8 @@ static int hook_ptrace_traceme(struct task_struct *const parent)
* @server: IPC receiver domain.
* @scope: The scope restriction criteria.
*
- * Returns: True if @server is in a different domain from @client, and @client
- * is scoped to access @server (i.e. access should be denied).
+ * Return: True if @server is in a different domain from @client and @client
+ * is scoped to access @server (i.e. access should be denied), false otherwise.
*/
static bool domain_is_scoped(const struct landlock_ruleset *const client,
const struct landlock_ruleset *const server,
@@ -190,10 +191,13 @@ static bool domain_is_scoped(const struct landlock_ruleset *const client,
client_layer = client->num_layers - 1;
client_walker = client->hierarchy;
/*
- * client_layer must be a signed integer with greater capacity
- * than client->num_layers to ensure the following loop stops.
+ * client_layer must be able to represent all numbers from
+ * LANDLOCK_MAX_NUM_LAYERS - 1 to -1 for the loop below to terminate.
+ * (It must be large enough, and it must be signed.)
*/
- BUILD_BUG_ON(sizeof(client_layer) > sizeof(client->num_layers));
+ BUILD_BUG_ON(!is_signed_type(typeof(client_layer)));
+ BUILD_BUG_ON(LANDLOCK_MAX_NUM_LAYERS - 1 >
+ type_max(typeof(client_layer)));
server_layer = server ? (server->num_layers - 1) : -1;
server_walker = server ? server->hierarchy : NULL;
diff --git a/security/landlock/tsync.c b/security/landlock/tsync.c
index 4d4427ba8d93..c5730bbd9ed3 100644
--- a/security/landlock/tsync.c
+++ b/security/landlock/tsync.c
@@ -85,12 +85,14 @@ static void restrict_one_thread(struct tsync_shared_context *ctx)
/*
* Switch out old_cred with new_cred, if possible.
*
- * In the common case, where all threads initially point to the same
- * struct cred, this optimization avoids creating separate redundant
- * credentials objects for each, which would all have the same contents.
+ * In the common case, where all threads initially point to the
+ * same struct cred, this optimization avoids creating separate
+ * redundant credentials objects for each, which would all have
+ * the same contents.
*
- * Note: We are intentionally dropping the const qualifier here, because
- * it is required by commit_creds() and abort_creds().
+ * Note: We are intentionally dropping the const qualifier
+ * here, because it is required by commit_creds() and
+ * abort_creds().
*/
cred = (struct cred *)get_cred(ctx->new_cred);
} else {
@@ -101,8 +103,8 @@ static void restrict_one_thread(struct tsync_shared_context *ctx)
atomic_set(&ctx->preparation_error, -ENOMEM);
/*
- * Even on error, we need to adhere to the protocol and coordinate
- * with concurrently running invocations.
+ * Even on error, we need to adhere to the protocol and
+ * coordinate with concurrently running invocations.
*/
if (atomic_dec_return(&ctx->num_preparing) == 0)
complete_all(&ctx->all_prepared);
@@ -135,9 +137,9 @@ static void restrict_one_thread(struct tsync_shared_context *ctx)
}
/*
- * Make sure that all sibling tasks fulfill the no_new_privs prerequisite.
- * (This is in line with Seccomp's SECCOMP_FILTER_FLAG_TSYNC logic in
- * kernel/seccomp.c)
+ * Make sure that all sibling tasks fulfill the no_new_privs
+ * prerequisite. (This is in line with Seccomp's
+ * SECCOMP_FILTER_FLAG_TSYNC logic in kernel/seccomp.c)
*/
if (ctx->set_no_new_privs)
task_set_no_new_privs(current);
@@ -183,10 +185,8 @@ struct tsync_works {
* capacity. This can legitimately happen if new threads get started after we
* grew the capacity.
*
- * Returns:
- * A pointer to the preallocated context struct, with task filled in.
- *
- * NULL, if we ran out of preallocated context structs.
+ * Return: A pointer to the preallocated context struct with task filled in, or
+ * NULL if we ran out of preallocated context structs.
*/
static struct tsync_work *tsync_works_provide(struct tsync_works *s,
struct task_struct *task)
@@ -223,16 +223,17 @@ static void tsync_works_trim(struct tsync_works *s)
ctx = s->works[s->size - 1];
/*
- * For consistency, remove the task from ctx so that it does not look like
- * we handed it a task_work.
+ * For consistency, remove the task from ctx so that it does not look
+ * like we handed it a task_work.
*/
put_task_struct(ctx->task);
*ctx = (typeof(*ctx)){};
/*
- * Cancel the tsync_works_provide() change to recycle the reserved memory
- * for the next thread, if any. This also ensures that cancel_tsync_works()
- * and tsync_works_release() do not see any NULL task pointers.
+ * Cancel the tsync_works_provide() change to recycle the reserved
+ * memory for the next thread, if any. This also ensures that
+ * cancel_tsync_works() and tsync_works_release() do not see any NULL
+ * task pointers.
*/
s->size--;
}
@@ -243,11 +244,8 @@ static void tsync_works_trim(struct tsync_works *s)
* On a successful return, the subsequent n calls to tsync_works_provide() are
* guaranteed to succeed. (size + n <= capacity)
*
- * Returns:
- * -ENOMEM if the (re)allocation fails
-
- * 0 if the allocation succeeds, partially succeeds, or no reallocation
- * was needed
+ * Return: 0 if sufficient space for n more elements could be provided, -ENOMEM
+ * on allocation errors, -EOVERFLOW in case of integer overflow.
*/
static int tsync_works_grow_by(struct tsync_works *s, size_t n, gfp_t flags)
{
@@ -363,8 +361,8 @@ static size_t count_additional_threads(const struct tsync_works *works)
* For each added task_work, atomically increments shared_ctx->num_preparing and
* shared_ctx->num_unfinished.
*
- * Returns:
- * true, if at least one eligible sibling thread was found
+ * Return: True if at least one eligible sibling thread was found, false
+ * otherwise.
*/
static bool schedule_task_work(struct tsync_works *works,
struct tsync_shared_context *shared_ctx)
@@ -393,17 +391,17 @@ static bool schedule_task_work(struct tsync_works *works,
continue;
/*
- * We found a sibling thread that is not doing its task_work yet, and
- * which might spawn new threads before our task work runs, so we need
- * at least one more round in the outer loop.
+ * We found a sibling thread that is not doing its task_work
+ * yet, and which might spawn new threads before our task work
+ * runs, so we need at least one more round in the outer loop.
*/
found_more_threads = true;
ctx = tsync_works_provide(works, thread);
if (!ctx) {
/*
- * We ran out of preallocated contexts -- we need to try again with
- * this thread at a later time!
+ * We ran out of preallocated contexts -- we need to
+ * try again with this thread at a later time!
* found_more_threads is already true at this point.
*/
break;
@@ -418,10 +416,10 @@ static bool schedule_task_work(struct tsync_works *works,
err = task_work_add(thread, &ctx->work, TWA_SIGNAL);
if (unlikely(err)) {
/*
- * task_work_add() only fails if the task is about to exit. We
- * checked that earlier, but it can happen as a race. Resume
- * without setting an error, as the task is probably gone in the
- * next loop iteration.
+ * task_work_add() only fails if the task is about to
+ * exit. We checked that earlier, but it can happen as
+ * a race. Resume without setting an error, as the
+ * task is probably gone in the next loop iteration.
*/
tsync_works_trim(works);
@@ -512,24 +510,25 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
* After this barrier is reached, it's safe to read
* shared_ctx.preparation_error.
*
- * 4) reads shared_ctx.preparation_error and then either does commit_creds()
- * or abort_creds().
+ * 4) reads shared_ctx.preparation_error and then either does
+ * commit_creds() or abort_creds().
*
* 5) signals that it's done altogether (barrier synchronization
* "all_finished")
*
- * Unlike seccomp, which modifies sibling tasks directly, we do not need to
- * acquire the cred_guard_mutex and sighand->siglock:
+ * Unlike seccomp, which modifies sibling tasks directly, we do not
+ * need to acquire the cred_guard_mutex and sighand->siglock:
*
- * - As in our case, all threads are themselves exchanging their own struct
- * cred through the credentials API, no locks are needed for that.
+ * - As in our case, all threads are themselves exchanging their own
+ * struct cred through the credentials API, no locks are needed for
+ * that.
* - Our for_each_thread() loops are protected by RCU.
- * - We do not acquire a lock to keep the list of sibling threads stable
- * between our for_each_thread loops. If the list of available sibling
- * threads changes between these for_each_thread loops, we make up for
- * that by continuing to look for threads until they are all discovered
- * and have entered their task_work, where they are unable to spawn new
- * threads.
+ * - We do not acquire a lock to keep the list of sibling threads
+ * stable between our for_each_thread loops. If the list of
+ * available sibling threads changes between these for_each_thread
+ * loops, we make up for that by continuing to look for threads until
+ * they are all discovered and have entered their task_work, where
+ * they are unable to spawn new threads.
*/
do {
/* In RCU read-lock, count the threads we need. */
@@ -546,31 +545,36 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
}
/*
- * The "all_prepared" barrier is used locally to the loop body, this use
- * of for_each_thread(). We can reset it on each loop iteration because
- * all previous loop iterations are done with it already.
+ * The "all_prepared" barrier is local to the loop body, i.e. to
+ * this use of for_each_thread(). We can reset it on each loop
+ * iteration because all previous loop iterations are done with
+ * it already.
*
- * num_preparing is initialized to 1 so that the counter can not go to 0
- * and mark the completion as done before all task works are registered.
- * We decrement it at the end of the loop body.
+ * num_preparing is initialized to 1 so that the counter can
+ * not go to 0 and mark the completion as done before all task
+ * works are registered. We decrement it at the end of the
+ * loop body.
*/
atomic_set(&shared_ctx.num_preparing, 1);
reinit_completion(&shared_ctx.all_prepared);
/*
- * In RCU read-lock, schedule task work on newly discovered sibling
- * tasks.
+ * In RCU read-lock, schedule task work on newly discovered
+ * sibling tasks.
*/
found_more_threads = schedule_task_work(&works, &shared_ctx);
/*
- * Decrement num_preparing for current, to undo that we initialized it
- * to 1 a few lines above.
+ * Decrement num_preparing for current, to undo that we
+ * initialized it to 1 a few lines above.
*/
if (atomic_dec_return(&shared_ctx.num_preparing) > 0) {
if (wait_for_completion_interruptible(
&shared_ctx.all_prepared)) {
- /* In case of interruption, we need to retry the system call. */
+ /*
+ * In case of interruption, we need to retry
+ * the system call.
+ */
atomic_set(&shared_ctx.preparation_error,
-ERESTARTNOINTR);
@@ -603,8 +607,8 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
complete_all(&shared_ctx.ready_to_commit);
/*
- * Decrement num_unfinished for current, to undo that we initialized it to 1
- * at the beginning.
+ * Decrement num_unfinished for current, to undo that we initialized it
+ * to 1 at the beginning.
*/
if (atomic_dec_return(&shared_ctx.num_unfinished) > 0)
wait_for_completion(&shared_ctx.all_finished);
diff --git a/security/security.c b/security/security.c
index 048560ef6a1a..4e999f023651 100644
--- a/security/security.c
+++ b/security/security.c
@@ -4834,6 +4834,26 @@ int security_mptcp_add_subflow(struct sock *sk, struct sock *ssk)
#endif /* CONFIG_SECURITY_NETWORK */
+#if defined(CONFIG_SECURITY_NETWORK) && defined(CONFIG_SECURITY_PATH)
+/**
+ * security_unix_find() - Check if a named AF_UNIX socket can connect
+ * @path: path of the socket being connected to
+ * @other: peer sock
+ * @flags: flags associated with the socket
+ *
+ * This hook is called to check permissions before connecting to a named
+ * AF_UNIX socket. The caller does not hold any locks on @other.
+ *
+ * Return: Returns 0 if permission is granted.
+ */
+int security_unix_find(const struct path *path, struct sock *other, int flags)
+{
+ return call_int_hook(unix_find, path, other, flags);
+}
+EXPORT_SYMBOL(security_unix_find);
+
+#endif /* CONFIG_SECURITY_NETWORK && CONFIG_SECURITY_PATH */
+
#ifdef CONFIG_SECURITY_INFINIBAND
/**
* security_ib_pkey_access() - Check if access to an IB pkey is allowed
diff --git a/tools/testing/selftests/landlock/audit.h b/tools/testing/selftests/landlock/audit.h
index 44eb433e9666..834005b2b0f0 100644
--- a/tools/testing/selftests/landlock/audit.h
+++ b/tools/testing/selftests/landlock/audit.h
@@ -249,9 +249,9 @@ static __maybe_unused char *regex_escape(const char *const src, char *dst,
static int audit_match_record(int audit_fd, const __u16 type,
const char *const pattern, __u64 *domain_id)
{
- struct audit_message msg;
+ struct audit_message msg, last_mismatch = {};
int ret, err = 0;
- bool matches_record = !type;
+ int num_type_match = 0;
regmatch_t matches[2];
regex_t regex;
@@ -259,21 +259,35 @@ static int audit_match_record(int audit_fd, const __u16 type,
if (ret)
return -EINVAL;
- do {
+ /*
+ * Reads records until one matches both the expected type and the
+ * pattern. Type-matching records with non-matching content are
+ * silently consumed, which handles stale domain deallocation records
+ * from a previous test emitted asynchronously by kworker threads.
+ */
+ while (true) {
memset(&msg, 0, sizeof(msg));
err = audit_recv(audit_fd, &msg);
- if (err)
+ if (err) {
+ if (num_type_match) {
+ printf("DATA: %s\n", last_mismatch.data);
+ printf("ERROR: %d record(s) matched type %u"
+ " but not pattern: %s\n",
+ num_type_match, type, pattern);
+ }
goto out;
+ }
- if (msg.header.nlmsg_type == type)
- matches_record = true;
- } while (!matches_record);
+ if (type && msg.header.nlmsg_type != type)
+ continue;
- ret = regexec(&regex, msg.data, ARRAY_SIZE(matches), matches, 0);
- if (ret) {
- printf("DATA: %s\n", msg.data);
- printf("ERROR: no match for pattern: %s\n", pattern);
- err = -ENOENT;
+ ret = regexec(&regex, msg.data, ARRAY_SIZE(matches), matches,
+ 0);
+ if (!ret)
+ break;
+
+ num_type_match++;
+ last_mismatch = msg;
}
if (domain_id) {
@@ -309,28 +323,56 @@ static int __maybe_unused matches_log_domain_allocated(int audit_fd, pid_t pid,
log_match_len =
snprintf(log_match, sizeof(log_match), log_template, pid);
- if (log_match_len > sizeof(log_match))
+ if (log_match_len >= sizeof(log_match))
return -E2BIG;
return audit_match_record(audit_fd, AUDIT_LANDLOCK_DOMAIN, log_match,
domain_id);
}
-static int __maybe_unused matches_log_domain_deallocated(
- int audit_fd, unsigned int num_denials, __u64 *domain_id)
+/*
+ * Matches a domain deallocation record. When expected_domain_id is non-zero,
+ * the pattern includes the specific domain ID so that stale deallocation
+ * records from a previous test (with a different domain ID) are skipped by
+ * audit_match_record(), and the socket timeout is temporarily increased to
+ * audit_tv_dom_drop to wait for the asynchronous kworker deallocation.
+ */
+static int __maybe_unused
+matches_log_domain_deallocated(int audit_fd, unsigned int num_denials,
+ __u64 expected_domain_id, __u64 *domain_id)
{
static const char log_template[] = REGEX_LANDLOCK_PREFIX
" status=deallocated denials=%u$";
- char log_match[sizeof(log_template) + 10];
- int log_match_len;
-
- log_match_len = snprintf(log_match, sizeof(log_match), log_template,
- num_denials);
- if (log_match_len > sizeof(log_match))
+ static const char log_template_with_id[] =
+ "^audit([0-9.:]\\+): domain=\\(%llx\\)"
+ " status=deallocated denials=%u$";
+ char log_match[sizeof(log_template_with_id) + 32];
+ int log_match_len, err;
+
+ if (expected_domain_id)
+ log_match_len = snprintf(log_match, sizeof(log_match),
+ log_template_with_id,
+ (unsigned long long)expected_domain_id,
+ num_denials);
+ else
+ log_match_len = snprintf(log_match, sizeof(log_match),
+ log_template, num_denials);
+
+ if (log_match_len >= sizeof(log_match))
return -E2BIG;
- return audit_match_record(audit_fd, AUDIT_LANDLOCK_DOMAIN, log_match,
- domain_id);
+ if (expected_domain_id)
+ setsockopt(audit_fd, SOL_SOCKET, SO_RCVTIMEO,
+ &audit_tv_dom_drop, sizeof(audit_tv_dom_drop));
+
+ err = audit_match_record(audit_fd, AUDIT_LANDLOCK_DOMAIN, log_match,
+ domain_id);
+
+ if (expected_domain_id)
+ setsockopt(audit_fd, SOL_SOCKET, SO_RCVTIMEO, &audit_tv_default,
+ sizeof(audit_tv_default));
+
+ return err;
}
struct audit_records {
@@ -338,6 +380,15 @@ struct audit_records {
size_t domain;
};
+/*
+ * WARNING: Do not assert records.domain == 0 without a preceding
+ * audit_match_record() call. Domain deallocation records are emitted
+ * asynchronously from kworker threads and can arrive after the drain in
+ * audit_init(), corrupting the domain count. A preceding audit_match_record()
+ * call consumes stale records while scanning, making the assertion safe in
+ * practice because stale deallocation records arrive before the expected access
+ * records.
+ */
static int audit_count_records(int audit_fd, struct audit_records *records)
{
struct audit_message msg;
@@ -379,19 +430,35 @@ static int audit_init(void)
err = audit_set_status(fd, AUDIT_STATUS_ENABLED, 1);
if (err)
- return err;
+ goto err_close;
err = audit_set_status(fd, AUDIT_STATUS_PID, getpid());
if (err)
- return err;
+ goto err_close;
/* Sets a timeout for negative tests. */
err = setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &audit_tv_default,
sizeof(audit_tv_default));
- if (err)
- return -errno;
+ if (err) {
+ err = -errno;
+ goto err_close;
+ }
+
+ /*
+ * Drains stale audit records that accumulated in the kernel backlog
+ * while no audit daemon socket was open. This happens when non-audit
+ * Landlock tests generate records while audit_enabled is non-zero (e.g.
+ * from boot configuration), or when domain deallocation records arrive
+ * asynchronously after a previous test's socket was closed.
+ */
+ while (audit_recv(fd, NULL) == 0)
+ ;
return fd;
+
+err_close:
+ close(fd);
+ return err;
}
static int audit_init_filter_exe(struct audit_filter *filter, const char *path)
@@ -441,8 +508,10 @@ static int audit_cleanup(int audit_fd, struct audit_filter *filter)
filter = &new_filter;
err = audit_init_filter_exe(filter, NULL);
- if (err)
+ if (err) {
+ close(audit_fd);
return err;
+ }
}
/* Filters might not be in place. */
@@ -468,11 +537,15 @@ static int audit_init_with_exe_filter(struct audit_filter *filter)
err = audit_init_filter_exe(filter, NULL);
if (err)
- return err;
+ goto err_close;
err = audit_filter_exe(fd, filter, AUDIT_ADD_RULE);
if (err)
- return err;
+ goto err_close;
return fd;
+
+err_close:
+ close(fd);
+ return err;
}
diff --git a/tools/testing/selftests/landlock/audit_test.c b/tools/testing/selftests/landlock/audit_test.c
index 46d02d49835a..93ae5bd0dcce 100644
--- a/tools/testing/selftests/landlock/audit_test.c
+++ b/tools/testing/selftests/landlock/audit_test.c
@@ -139,29 +139,31 @@ TEST_F(audit, layers)
WEXITSTATUS(status) != EXIT_SUCCESS)
_metadata->exit_code = KSFT_FAIL;
- /* Purges log from deallocated domains. */
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_dom_drop, sizeof(audit_tv_dom_drop)));
+ /*
+ * Purges log from deallocated domains. Records arrive in LIFO order
+ * (innermost domain first) because landlock_put_hierarchy() walks the
+ * chain sequentially in a single kworker context.
+ */
for (i = ARRAY_SIZE(*domain_stack) - 1; i >= 0; i--) {
__u64 deallocated_dom = 2;
EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1,
+ (*domain_stack)[i],
&deallocated_dom));
EXPECT_EQ((*domain_stack)[i], deallocated_dom)
{
TH_LOG("Failed to match domain %llx (#%d)",
- (*domain_stack)[i], i);
+ (unsigned long long)(*domain_stack)[i], i);
}
}
EXPECT_EQ(0, munmap(domain_stack, sizeof(*domain_stack)));
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_default, sizeof(audit_tv_default)));
EXPECT_EQ(0, close(ruleset_fd));
}
struct thread_data {
pid_t parent_pid;
int ruleset_fd, pipe_child, pipe_parent;
+ bool mute_subdomains;
};
static void *thread_audit_test(void *arg)
@@ -270,13 +272,329 @@ TEST_F(audit, thread)
EXPECT_EQ(0, close(pipe_parent[1]));
ASSERT_EQ(0, pthread_join(thread, NULL));
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_dom_drop, sizeof(audit_tv_dom_drop)));
- EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1,
- &deallocated_dom));
+ EXPECT_EQ(0, matches_log_domain_deallocated(
+ self->audit_fd, 1, denial_dom, &deallocated_dom));
EXPECT_EQ(denial_dom, deallocated_dom);
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_default, sizeof(audit_tv_default)));
+}
+
+/*
+ * Verifies that log_subdomains_off set via the ruleset_fd=-1 path (without
+ * creating a domain) is inherited by children across fork(). This exercises
+ * the hook_cred_transfer() fix: the Landlock credential blob must be copied
+ * even when the source credential has no domain.
+ *
+ * Phase 1 (baseline): a child without muting creates a domain and triggers a
+ * denial that IS logged.
+ *
+ * Phase 2 (after muting): the parent mutes subdomain logs, forks another child
+ * who creates a domain and triggers a denial that is NOT logged.
+ */
+TEST_F(audit, log_subdomains_off_fork)
+{
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .scoped = LANDLOCK_SCOPE_SIGNAL,
+ };
+ struct audit_records records;
+ int ruleset_fd, status;
+ pid_t child;
+
+ ruleset_fd =
+ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+
+ /*
+ * Phase 1: forks a child that creates a domain and triggers a denial
+ * before any muting. This proves the audit path works.
+ */
+ child = fork();
+ ASSERT_LE(0, child);
+ if (child == 0) {
+ ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
+ ASSERT_EQ(-1, kill(getppid(), 0));
+ ASSERT_EQ(EPERM, errno);
+ _exit(0);
+ return;
+ }
+
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(true, WIFEXITED(status));
+ ASSERT_EQ(0, WEXITSTATUS(status));
+
+ /* The denial must be logged (baseline). */
+ EXPECT_EQ(0, matches_log_signal(_metadata, self->audit_fd, getpid(),
+ NULL));
+
+ /* Drains any remaining records (e.g. domain allocation). */
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+
+ /*
+ * Mutes subdomain logs without creating a domain. The parent's
+ * credential has domain=NULL and log_subdomains_off=1.
+ */
+ ASSERT_EQ(0, landlock_restrict_self(
+ -1, LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF));
+
+ /*
+ * Phase 2: forks a child that creates a domain and triggers a denial.
+ * Because log_subdomains_off was inherited via fork(), the child's
+ * domain has log_status=LANDLOCK_LOG_DISABLED.
+ */
+ child = fork();
+ ASSERT_LE(0, child);
+ if (child == 0) {
+ ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
+ ASSERT_EQ(-1, kill(getppid(), 0));
+ ASSERT_EQ(EPERM, errno);
+ _exit(0);
+ return;
+ }
+
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(true, WIFEXITED(status));
+ ASSERT_EQ(0, WEXITSTATUS(status));
+
+ /* No denial record should appear. */
+ EXPECT_EQ(-EAGAIN, matches_log_signal(_metadata, self->audit_fd,
+ getpid(), NULL));
+
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+ EXPECT_EQ(0, records.access);
+
+ EXPECT_EQ(0, close(ruleset_fd));
+}
+
+/*
+ * Thread function: runs two rounds of (create domain, trigger denial, signal
+ * back), waiting for the main thread before each round. When mute_subdomains
+ * is set, phase 1 also mutes subdomain logs via the fd=-1 path before creating
+ * the domain. The ruleset_fd is kept open across both rounds so each
+ * restrict_self call stacks a new domain layer.
+ */
+static void *thread_sandbox_deny_twice(void *arg)
+{
+ const struct thread_data *data = (struct thread_data *)arg;
+ uintptr_t err = 0;
+ char buffer;
+
+ /* Phase 1: optionally mutes, creates a domain, and triggers a denial. */
+ if (read(data->pipe_parent, &buffer, 1) != 1) {
+ err = 1;
+ goto out;
+ }
+
+ if (data->mute_subdomains &&
+ landlock_restrict_self(-1,
+ LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF)) {
+ err = 2;
+ goto out;
+ }
+
+ if (landlock_restrict_self(data->ruleset_fd, 0)) {
+ err = 3;
+ goto out;
+ }
+
+ if (kill(data->parent_pid, 0) != -1 || errno != EPERM) {
+ err = 4;
+ goto out;
+ }
+
+ if (write(data->pipe_child, ".", 1) != 1) {
+ err = 5;
+ goto out;
+ }
+
+ /* Phase 2: stacks another domain and triggers a denial. */
+ if (read(data->pipe_parent, &buffer, 1) != 1) {
+ err = 6;
+ goto out;
+ }
+
+ if (landlock_restrict_self(data->ruleset_fd, 0)) {
+ err = 7;
+ goto out;
+ }
+
+ if (kill(data->parent_pid, 0) != -1 || errno != EPERM) {
+ err = 8;
+ goto out;
+ }
+
+ if (write(data->pipe_child, ".", 1) != 1) {
+ err = 9;
+ goto out;
+ }
+
+out:
+ close(data->ruleset_fd);
+ close(data->pipe_child);
+ close(data->pipe_parent);
+ return (void *)err;
+}
+
+/*
+ * Verifies that LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF with
+ * LANDLOCK_RESTRICT_SELF_TSYNC and ruleset_fd=-1 propagates log_subdomains_off
+ * to a sibling thread, suppressing audit logging on domains it subsequently
+ * creates.
+ *
+ * Phase 1 (before TSYNC) acts as an inline baseline: the sibling creates a
+ * domain and triggers a denial that IS logged.
+ *
+ * Phase 2 (after TSYNC) verifies suppression: the sibling stacks another domain
+ * and triggers a denial that is NOT logged.
+ */
+TEST_F(audit, log_subdomains_off_tsync)
+{
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .scoped = LANDLOCK_SCOPE_SIGNAL,
+ };
+ struct audit_records records;
+ struct thread_data child_data = {};
+ int pipe_child[2], pipe_parent[2];
+ char buffer;
+ pthread_t thread;
+ void *thread_ret;
+
+ child_data.parent_pid = getppid();
+ ASSERT_EQ(0, pipe2(pipe_child, O_CLOEXEC));
+ child_data.pipe_child = pipe_child[1];
+ ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC));
+ child_data.pipe_parent = pipe_parent[0];
+ child_data.ruleset_fd =
+ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, child_data.ruleset_fd);
+
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+
+ /* Creates the sibling thread. */
+ ASSERT_EQ(0, pthread_create(&thread, NULL, thread_sandbox_deny_twice,
+ &child_data));
+
+ /*
+ * Phase 1: the sibling creates a domain and triggers a denial before
+ * any log muting. This proves the audit path works.
+ */
+ ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ ASSERT_EQ(1, read(pipe_child[0], &buffer, 1));
+
+ /* The denial must be logged. */
+ EXPECT_EQ(0, matches_log_signal(_metadata, self->audit_fd,
+ child_data.parent_pid, NULL));
+
+ /* Drains any remaining records (e.g. domain allocation). */
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+
+ /*
+ * Mutes subdomain logs and propagates to the sibling thread via TSYNC,
+ * without creating a domain.
+ */
+ ASSERT_EQ(0, landlock_restrict_self(
+ -1, LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF |
+ LANDLOCK_RESTRICT_SELF_TSYNC));
+
+ /*
+ * Phase 2: the sibling stacks another domain and triggers a denial.
+ * Because log_subdomains_off was propagated via TSYNC, the new domain
+ * has log_status=LANDLOCK_LOG_DISABLED.
+ */
+ ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ ASSERT_EQ(1, read(pipe_child[0], &buffer, 1));
+
+ /* No denial record should appear. */
+ EXPECT_EQ(-EAGAIN, matches_log_signal(_metadata, self->audit_fd,
+ child_data.parent_pid, NULL));
+
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+ EXPECT_EQ(0, records.access);
+
+ EXPECT_EQ(0, close(pipe_child[0]));
+ EXPECT_EQ(0, close(pipe_parent[1]));
+ ASSERT_EQ(0, pthread_join(thread, &thread_ret));
+ EXPECT_EQ(NULL, thread_ret);
+}
+
+/*
+ * Verifies that LANDLOCK_RESTRICT_SELF_TSYNC without
+ * LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF overrides a sibling thread's
+ * log_subdomains_off, re-enabling audit logging on domains the sibling
+ * subsequently creates.
+ *
+ * Phase 1: the sibling sets log_subdomains_off, creates a muted domain, and
+ * triggers a denial that is NOT logged.
+ *
+ * Phase 2 (after TSYNC without LOG_SUBDOMAINS_OFF): the sibling stacks another
+ * domain and triggers a denial that IS logged, proving the muting was
+ * overridden.
+ */
+TEST_F(audit, tsync_override_log_subdomains_off)
+{
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .scoped = LANDLOCK_SCOPE_SIGNAL,
+ };
+ struct audit_records records;
+ struct thread_data child_data = {};
+ int pipe_child[2], pipe_parent[2];
+ char buffer;
+ pthread_t thread;
+ void *thread_ret;
+
+ child_data.parent_pid = getppid();
+ ASSERT_EQ(0, pipe2(pipe_child, O_CLOEXEC));
+ child_data.pipe_child = pipe_child[1];
+ ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC));
+ child_data.pipe_parent = pipe_parent[0];
+ child_data.ruleset_fd =
+ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, child_data.ruleset_fd);
+
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+
+ child_data.mute_subdomains = true;
+
+ /* Creates the sibling thread. */
+ ASSERT_EQ(0, pthread_create(&thread, NULL, thread_sandbox_deny_twice,
+ &child_data));
+
+ /*
+ * Phase 1: the sibling mutes subdomain logs, creates a domain, and
+ * triggers a denial. The denial must not be logged.
+ */
+ ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ ASSERT_EQ(1, read(pipe_child[0], &buffer, 1));
+
+ EXPECT_EQ(-EAGAIN, matches_log_signal(_metadata, self->audit_fd,
+ child_data.parent_pid, NULL));
+
+ /* Drains any remaining records. */
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+ EXPECT_EQ(0, records.access);
+
+ /*
+ * Overrides the sibling's log_subdomains_off by calling TSYNC without
+ * LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF.
+ */
+ ASSERT_EQ(0, landlock_restrict_self(child_data.ruleset_fd,
+ LANDLOCK_RESTRICT_SELF_TSYNC));
+
+ /*
+ * Phase 2: the sibling stacks another domain and triggers a denial.
+ * Because TSYNC replaced its log_subdomains_off with 0, the new domain
+ * has log_status=LANDLOCK_LOG_PENDING.
+ */
+ ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ ASSERT_EQ(1, read(pipe_child[0], &buffer, 1));
+
+ /* The denial must be logged. */
+ EXPECT_EQ(0, matches_log_signal(_metadata, self->audit_fd,
+ child_data.parent_pid, NULL));
+
+ EXPECT_EQ(0, close(pipe_child[0]));
+ EXPECT_EQ(0, close(pipe_parent[1]));
+ ASSERT_EQ(0, pthread_join(thread, &thread_ret));
+ EXPECT_EQ(NULL, thread_ret);
}
FIXTURE(audit_flags)
@@ -412,7 +730,6 @@ TEST_F(audit_flags, signal)
} else {
EXPECT_EQ(1, records.access);
}
- EXPECT_EQ(0, records.domain);
/* Updates filter rules to match the drop record. */
set_cap(_metadata, CAP_AUDIT_CONTROL);
@@ -433,22 +750,21 @@ TEST_F(audit_flags, signal)
if (variant->restrict_flags &
LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF) {
+ /*
+ * No deallocation record: denials=0 never matches a real
+ * record.
+ */
EXPECT_EQ(-EAGAIN,
- matches_log_domain_deallocated(self->audit_fd, 0,
+ matches_log_domain_deallocated(self->audit_fd, 0, 0,
&deallocated_dom));
EXPECT_EQ(deallocated_dom, 2);
} else {
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_dom_drop,
- sizeof(audit_tv_dom_drop)));
EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 2,
+ *self->domain_id,
&deallocated_dom));
EXPECT_NE(deallocated_dom, 2);
EXPECT_NE(deallocated_dom, 0);
EXPECT_EQ(deallocated_dom, *self->domain_id);
- EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO,
- &audit_tv_default,
- sizeof(audit_tv_default)));
}
}
@@ -601,7 +917,6 @@ TEST_F(audit_exec, signal_and_open)
/* Tests that there was no denial until now. */
EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
EXPECT_EQ(0, records.access);
- EXPECT_EQ(0, records.domain);
/*
* Wait for the child to do a first denied action by layer1 and
diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
index 0fea236ef4bd..30d37234086c 100644
--- a/tools/testing/selftests/landlock/base_test.c
+++ b/tools/testing/selftests/landlock/base_test.c
@@ -76,7 +76,7 @@ TEST(abi_version)
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
};
- ASSERT_EQ(8, landlock_create_ruleset(NULL, 0,
+ ASSERT_EQ(9, landlock_create_ruleset(NULL, 0,
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
index 968a91c927a4..cdb47fc1fc0a 100644
--- a/tools/testing/selftests/landlock/fs_test.c
+++ b/tools/testing/selftests/landlock/fs_test.c
@@ -22,6 +22,7 @@
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <sys/prctl.h>
+#include <sys/resource.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/stat.h>
@@ -575,9 +576,10 @@ TEST_F_FORK(layout1, inval)
LANDLOCK_ACCESS_FS_WRITE_FILE | \
LANDLOCK_ACCESS_FS_READ_FILE | \
LANDLOCK_ACCESS_FS_TRUNCATE | \
- LANDLOCK_ACCESS_FS_IOCTL_DEV)
+ LANDLOCK_ACCESS_FS_IOCTL_DEV | \
+ LANDLOCK_ACCESS_FS_RESOLVE_UNIX)
-#define ACCESS_LAST LANDLOCK_ACCESS_FS_IOCTL_DEV
+#define ACCESS_LAST LANDLOCK_ACCESS_FS_RESOLVE_UNIX
#define ACCESS_ALL ( \
ACCESS_FILE | \
@@ -765,15 +767,6 @@ static int create_ruleset(struct __test_metadata *const _metadata,
.handled_access_fs = handled_access_fs,
};
- ASSERT_NE(NULL, rules)
- {
- TH_LOG("No rule list");
- }
- ASSERT_NE(NULL, rules[0].path)
- {
- TH_LOG("Empty rule list");
- }
-
ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_LE(0, ruleset_fd)
@@ -781,16 +774,26 @@ static int create_ruleset(struct __test_metadata *const _metadata,
TH_LOG("Failed to create a ruleset: %s", strerror(errno));
}
- for (i = 0; rules[i].path; i++) {
- if (!rules[i].access)
- continue;
+ if (rules)
+ for (i = 0; rules[i].path; i++) {
+ if (!rules[i].access)
+ continue;
- add_path_beneath(_metadata, ruleset_fd, rules[i].access,
- rules[i].path);
- }
+ add_path_beneath(_metadata, ruleset_fd, rules[i].access,
+ rules[i].path);
+ }
return ruleset_fd;
}
+static void enforce_fs(struct __test_metadata *const _metadata,
+ const __u64 access_fs, const struct rule rules[])
+{
+ const int ruleset_fd = create_ruleset(_metadata, access_fs, rules);
+
+ enforce_ruleset(_metadata, ruleset_fd);
+ EXPECT_EQ(0, close(ruleset_fd));
+}
+
TEST_F_FORK(layout0, proc_nsfs)
{
const struct rule rules[] = {
@@ -877,13 +880,10 @@ TEST_F_FORK(layout1, effective_access)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
char buf;
int reg_fd;
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
/* Tests on a directory (with or without O_PATH). */
ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
@@ -930,12 +930,9 @@ TEST_F_FORK(layout1, unhandled_access)
},
{},
};
- /* Here, we only handle read accesses, not write accesses. */
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RO, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ /* Here, we only handle read accesses, not write accesses. */
+ enforce_fs(_metadata, ACCESS_RO, rules);
/*
* Because the policy does not handle LANDLOCK_ACCESS_FS_WRITE_FILE,
@@ -964,11 +961,8 @@ TEST_F_FORK(layout1, ruleset_overlap)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
/* Checks s1d1 hierarchy. */
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
@@ -1020,11 +1014,8 @@ TEST_F_FORK(layout1, layer_rule_unions)
},
{},
};
- int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer1);
/* Checks s1d1 hierarchy with layer1. */
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
@@ -1046,10 +1037,7 @@ TEST_F_FORK(layout1, layer_rule_unions)
ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
/* Doesn't change anything from layer1. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer2);
/* Checks s1d1 hierarchy with layer2. */
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
@@ -1071,10 +1059,7 @@ TEST_F_FORK(layout1, layer_rule_unions)
ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
/* Only allows write (but not read) to dir_s1d3. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer3);
/* Checks s1d1 hierarchy with layer3. */
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
@@ -1112,27 +1097,18 @@ TEST_F_FORK(layout1, non_overlapping_accesses)
},
{},
};
- int ruleset_fd;
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file1_s1d2));
- ruleset_fd =
- create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG, layer1);
ASSERT_EQ(-1, mknod(file1_s1d1, S_IFREG | 0700, 0));
ASSERT_EQ(EACCES, errno);
ASSERT_EQ(0, mknod(file1_s1d2, S_IFREG | 0700, 0));
ASSERT_EQ(0, unlink(file1_s1d2));
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_REMOVE_FILE,
- layer2);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_REMOVE_FILE, layer2);
/* Unchanged accesses for file creation. */
ASSERT_EQ(-1, mknod(file1_s1d1, S_IFREG | 0700, 0));
@@ -1236,37 +1212,24 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
},
{},
};
- int ruleset_fd;
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
- layer1_read);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE, layer1_read);
/* Checks that read access is granted for file1_s1d3 with layer 1. */
ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
- ruleset_fd = create_ruleset(_metadata,
- LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_WRITE_FILE,
- layer2_read_write);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
+ layer2_read_write);
/* Checks that previous access rights are unchanged with layer 2. */
ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
- layer3_read);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE, layer3_read);
/* Checks that previous access rights are unchanged with layer 3. */
ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
@@ -1274,13 +1237,9 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
/* This time, denies write access for the file hierarchy. */
- ruleset_fd = create_ruleset(_metadata,
- LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_WRITE_FILE,
- layer4_read_write);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
+ layer4_read_write);
/*
* Checks that the only change with layer 4 is that write access is
@@ -1291,11 +1250,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
- layer5_read);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE, layer5_read);
/* Checks that previous access rights are unchanged with layer 5. */
ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
@@ -1303,11 +1258,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_EXECUTE,
- layer6_execute);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_EXECUTE, layer6_execute);
/* Checks that previous access rights are unchanged with layer 6. */
ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
@@ -1315,13 +1266,9 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
- ruleset_fd = create_ruleset(_metadata,
- LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_WRITE_FILE,
- layer7_read_write);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
+ layer7_read_write);
/* Checks read access is now denied with layer 7. */
ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDONLY));
@@ -1342,7 +1289,6 @@ TEST_F_FORK(layout1, inherit_subset)
};
const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
enforce_ruleset(_metadata, ruleset_fd);
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
@@ -1458,7 +1404,6 @@ TEST_F_FORK(layout1, inherit_superset)
};
const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
enforce_ruleset(_metadata, ruleset_fd);
/* Readdir access is denied for dir_s1d2. */
@@ -1474,7 +1419,7 @@ TEST_F_FORK(layout1, inherit_superset)
LANDLOCK_ACCESS_FS_READ_DIR,
dir_s1d2);
enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ EXPECT_EQ(0, close(ruleset_fd));
/* Readdir access is still denied for dir_s1d2. */
ASSERT_EQ(EACCES, test_open(dir_s1d2, O_RDONLY | O_DIRECTORY));
@@ -1496,7 +1441,6 @@ TEST_F_FORK(layout0, max_layers)
};
const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
for (i = 0; i < 16; i++)
enforce_ruleset(_metadata, ruleset_fd);
@@ -1505,7 +1449,7 @@ TEST_F_FORK(layout0, max_layers)
ASSERT_EQ(-1, err);
ASSERT_EQ(E2BIG, errno);
}
- ASSERT_EQ(0, close(ruleset_fd));
+ EXPECT_EQ(0, close(ruleset_fd));
}
TEST_F_FORK(layout1, empty_or_same_ruleset)
@@ -1519,20 +1463,15 @@ TEST_F_FORK(layout1, empty_or_same_ruleset)
ASSERT_LE(-1, ruleset_fd);
ASSERT_EQ(ENOMSG, errno);
- /* Enforces policy which deny read access to all files. */
- ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE;
- ruleset_fd =
- landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
+ /* Enforces policy which denies read access to all files. */
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE, NULL);
+
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY));
- /* Nests a policy which deny read access to all directories. */
- ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_DIR;
+ /* Nests a policy which denies read access to all directories. */
ruleset_fd =
- landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
- ASSERT_LE(0, ruleset_fd);
+ create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_DIR, NULL);
enforce_ruleset(_metadata, ruleset_fd);
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY));
@@ -1556,11 +1495,8 @@ TEST_F_FORK(layout1, rule_on_mountpoint)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY));
@@ -1585,11 +1521,8 @@ TEST_F_FORK(layout1, rule_over_mountpoint)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY));
@@ -1613,21 +1546,15 @@ TEST_F_FORK(layout1, rule_over_root_allow_then_deny)
},
{},
};
- int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
/* Checks allowed access. */
ASSERT_EQ(0, test_open("/", O_RDONLY));
ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY));
rules[0].access = LANDLOCK_ACCESS_FS_READ_FILE;
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
/* Checks denied access (on a directory). */
ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
@@ -1643,11 +1570,8 @@ TEST_F_FORK(layout1, rule_over_root_deny)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
/* Checks denied access (on a directory). */
ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
@@ -1663,7 +1587,6 @@ TEST_F_FORK(layout1, rule_inside_mount_ns)
},
{},
};
- int ruleset_fd;
set_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_EQ(0, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3))
@@ -1673,10 +1596,7 @@ TEST_F_FORK(layout1, rule_inside_mount_ns)
ASSERT_EQ(0, chdir("/"));
clear_cap(_metadata, CAP_SYS_ADMIN);
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
ASSERT_EQ(0, test_open("s3d3", O_RDONLY));
ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
@@ -1691,11 +1611,8 @@ TEST_F_FORK(layout1, mount_and_pivot)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
set_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_EQ(-1, mount(NULL, dir_s3d2, NULL, MS_RDONLY, NULL));
@@ -1714,9 +1631,6 @@ TEST_F_FORK(layout1, move_mount)
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
-
- ASSERT_LE(0, ruleset_fd);
set_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
@@ -1729,8 +1643,7 @@ TEST_F_FORK(layout1, move_mount)
dir_s3d2, 0));
clear_cap(_metadata, CAP_SYS_ADMIN);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, rules);
set_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_EQ(-1, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
@@ -1745,14 +1658,9 @@ TEST_F_FORK(layout1, topology_changes_with_net_only)
.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
LANDLOCK_ACCESS_NET_CONNECT_TCP,
};
- int ruleset_fd;
/* Add network restrictions. */
- ruleset_fd =
- landlock_create_ruleset(&ruleset_net, sizeof(ruleset_net), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ drop_access_rights(_metadata, &ruleset_net);
/* Mount, remount, move_mount, umount, and pivot_root checks. */
set_cap(_metadata, CAP_SYS_ADMIN);
@@ -1773,14 +1681,9 @@ TEST_F_FORK(layout1, topology_changes_with_net_and_fs)
LANDLOCK_ACCESS_NET_CONNECT_TCP,
.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
};
- int ruleset_fd;
/* Add network and filesystem restrictions. */
- ruleset_fd = landlock_create_ruleset(&ruleset_net_fs,
- sizeof(ruleset_net_fs), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ drop_access_rights(_metadata, &ruleset_net_fs);
/* Mount, remount, move_mount, umount, and pivot_root checks. */
set_cap(_metadata, CAP_SYS_ADMIN);
@@ -1817,14 +1720,13 @@ TEST_F_FORK(layout1, release_inodes)
};
const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
- ASSERT_LE(0, ruleset_fd);
/* Unmount a file hierarchy while it is being used by a ruleset. */
set_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_EQ(0, umount(dir_s3d2));
clear_cap(_metadata, CAP_SYS_ADMIN);
enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ EXPECT_EQ(0, close(ruleset_fd));
ASSERT_EQ(0, test_open(file1_s1d1, O_RDONLY));
ASSERT_EQ(EACCES, test_open(dir_s3d2, O_RDONLY));
@@ -1856,7 +1758,6 @@ TEST_F_FORK(layout1, covered_rule)
/* Creates a ruleset with the future hidden directory. */
ruleset_fd =
create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_DIR, layer1);
- ASSERT_LE(0, ruleset_fd);
/* Covers with a new mount point. */
set_cap(_metadata, CAP_SYS_ADMIN);
@@ -1906,10 +1807,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
};
int dirfd, ruleset_fd;
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1_base);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer1_base);
ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2_subs);
@@ -2090,10 +1988,7 @@ TEST_F_FORK(layout1, execute)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
- ASSERT_LE(0, ruleset_fd);
copy_file(_metadata, bin_true, file1_s1d1);
copy_file(_metadata, bin_true, file1_s1d2);
copy_file(_metadata, bin_true, file1_s1d3);
@@ -2102,8 +1997,7 @@ TEST_F_FORK(layout1, execute)
test_execute(_metadata, 0, file1_s1d1);
test_check_exec(_metadata, 0, file1_s1d1);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY));
ASSERT_EQ(0, test_open(file1_s1d1, O_RDONLY));
@@ -2214,16 +2108,12 @@ TEST_F_FORK(layout1, link)
},
{},
};
- int ruleset_fd = create_ruleset(_metadata, layer1[0].access, layer1);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file1_s1d2));
ASSERT_EQ(0, unlink(file1_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, layer1[0].access, layer1);
ASSERT_EQ(-1, link(file2_s1d1, file1_s1d1));
ASSERT_EQ(EACCES, errno);
@@ -2243,10 +2133,7 @@ TEST_F_FORK(layout1, link)
ASSERT_EQ(0, unlink(file2_s1d2));
ASSERT_EQ(0, unlink(file2_s1d3));
- ruleset_fd = create_ruleset(_metadata, layer2[0].access, layer2);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, layer2[0].access, layer2);
	/* Checks that linking doesn't require the ability to delete a file. */
ASSERT_EQ(0, link(file1_s1d2, file2_s1d2));
@@ -2296,15 +2183,10 @@ TEST_F_FORK(layout1, rename_file)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d2));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
/*
* Tries to replace a file, from a directory that allows file removal,
@@ -2378,17 +2260,12 @@ TEST_F_FORK(layout1, rename_dir)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
-
- ASSERT_LE(0, ruleset_fd);
/* Empties dir_s1d3 to allow renaming. */
ASSERT_EQ(0, unlink(file1_s1d3));
ASSERT_EQ(0, unlink(file2_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
/* Exchanges and renames directory to a different parent. */
ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s2d3, AT_FDCWD, dir_s1d3,
@@ -2442,12 +2319,8 @@ TEST_F_FORK(layout1, reparent_refer)
},
{},
};
- int ruleset_fd =
- create_ruleset(_metadata, LANDLOCK_ACCESS_FS_REFER, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_REFER, layer1);
ASSERT_EQ(-1, rename(dir_s1d2, dir_s2d1));
ASSERT_EQ(EXDEV, errno);
@@ -2477,14 +2350,9 @@ static void refer_denied_by_default(struct __test_metadata *const _metadata,
const int layer1_err,
const struct rule layer2[])
{
- int ruleset_fd;
-
ASSERT_EQ(0, unlink(file1_s1d2));
- ruleset_fd = create_ruleset(_metadata, layer1[0].access, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, layer1[0].access, layer1);
/*
* If the first layer handles LANDLOCK_ACCESS_FS_REFER (according to
@@ -2496,10 +2364,7 @@ static void refer_denied_by_default(struct __test_metadata *const _metadata,
ASSERT_EQ(layer1_err, test_exchange(file2_s1d1, file2_s1d2));
ASSERT_EQ(layer1_err, test_exchange(file2_s1d2, file2_s1d1));
- ruleset_fd = create_ruleset(_metadata, layer2[0].access, layer2);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, layer2[0].access, layer2);
/*
* Now, either the first or the second layer does not handle
@@ -2585,10 +2450,7 @@ TEST_F_FORK(layout1, refer_denied_by_default4)
*/
TEST_F_FORK(layout1, refer_mount_root_deny)
{
- const struct landlock_ruleset_attr ruleset_attr = {
- .handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_DIR,
- };
- int root_fd, ruleset_fd;
+ int root_fd;
/* Creates a mount object from a non-mount point. */
set_cap(_metadata, CAP_SYS_ADMIN);
@@ -2598,13 +2460,7 @@ TEST_F_FORK(layout1, refer_mount_root_deny)
clear_cap(_metadata, CAP_SYS_ADMIN);
ASSERT_LE(0, root_fd);
- ruleset_fd =
- landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
- ASSERT_LE(0, ruleset_fd);
-
- ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
- ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
- EXPECT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_MAKE_DIR, NULL);
/* Link denied by Landlock: EACCES. */
EXPECT_EQ(-1, linkat(root_fd, ".", root_fd, "does_not_exist", 0));
@@ -2639,18 +2495,12 @@ TEST_F_FORK(layout1, refer_part_mount_tree_is_allowed)
},
{},
};
- int ruleset_fd;
ASSERT_EQ(0, unlink(file1_s3d3));
- ruleset_fd = create_ruleset(_metadata,
- LANDLOCK_ACCESS_FS_REFER |
- LANDLOCK_ACCESS_FS_MAKE_REG |
- LANDLOCK_ACCESS_FS_REMOVE_FILE,
- layer1);
-
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_MAKE_REG |
+ LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ layer1);
ASSERT_EQ(0, rename(file1_s3d4, file1_s3d3));
}
@@ -2676,13 +2526,10 @@ TEST_F_FORK(layout1, reparent_link)
},
{},
};
- const int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER,
+ layer1);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file1_s1d2));
@@ -2754,13 +2601,10 @@ TEST_F_FORK(layout1, reparent_rename)
},
{},
};
- const int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER,
+ layer1);
ASSERT_EQ(0, unlink(file1_s1d2));
ASSERT_EQ(0, unlink(file1_s1d3));
@@ -2900,13 +2744,9 @@ reparent_exdev_layers_enforce1(struct __test_metadata *const _metadata)
},
{},
};
- const int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER, layer1);
-
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER,
+ layer1);
}
static void
@@ -2923,12 +2763,7 @@ reparent_exdev_layers_enforce2(struct __test_metadata *const _metadata)
* Same checks as before but with a second layer and a new MAKE_DIR
* rule (and no explicit handling of REFER).
*/
- const int ruleset_fd =
- create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_DIR, layer2);
-
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_MAKE_DIR, layer2);
}
TEST_F_FORK(layout1, reparent_exdev_layers_rename1)
@@ -3197,15 +3032,11 @@ TEST_F_FORK(layout1, reparent_remove)
},
{},
};
- const int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_REMOVE_DIR |
- LANDLOCK_ACCESS_FS_REMOVE_FILE,
- layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_REMOVE_DIR |
+ LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ layer1);
/* Access denied because of wrong/swapped remove file/dir. */
ASSERT_EQ(-1, rename(file1_s1d1, dir_s2d2));
@@ -3269,17 +3100,13 @@ TEST_F_FORK(layout1, reparent_dom_superset)
},
{},
};
- int ruleset_fd = create_ruleset(_metadata,
- LANDLOCK_ACCESS_FS_REFER |
- LANDLOCK_ACCESS_FS_EXECUTE |
- LANDLOCK_ACCESS_FS_MAKE_SOCK |
- LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_MAKE_FIFO,
- layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_EXECUTE |
+ LANDLOCK_ACCESS_FS_MAKE_SOCK |
+ LANDLOCK_ACCESS_FS_READ_FILE |
+ LANDLOCK_ACCESS_FS_MAKE_FIFO,
+ layer1);
ASSERT_EQ(-1, rename(file1_s1d2, file1_s2d1));
ASSERT_EQ(EXDEV, errno);
@@ -3342,18 +3169,13 @@ TEST_F_FORK(layout1, remove_dir)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file1_s1d2));
ASSERT_EQ(0, unlink(file1_s1d3));
ASSERT_EQ(0, unlink(file2_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
ASSERT_EQ(0, rmdir(dir_s1d3));
ASSERT_EQ(0, mkdir(dir_s1d3, 0700));
@@ -3379,12 +3201,8 @@ TEST_F_FORK(layout1, remove_file)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
ASSERT_EQ(-1, unlink(file1_s1d1));
ASSERT_EQ(EACCES, errno);
@@ -3405,9 +3223,6 @@ static void test_make_file(struct __test_metadata *const _metadata,
},
{},
};
- const int ruleset_fd = create_ruleset(_metadata, access, rules);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file2_s1d1));
@@ -3423,8 +3238,7 @@ static void test_make_file(struct __test_metadata *const _metadata,
ASSERT_EQ(0, unlink(file1_s1d3));
ASSERT_EQ(0, unlink(file2_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, access, rules);
ASSERT_EQ(-1, mknod(file1_s1d1, mode | 0400, dev));
ASSERT_EQ(EACCES, errno);
@@ -3493,10 +3307,6 @@ TEST_F_FORK(layout1, make_sym)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file2_s1d1));
@@ -3508,8 +3318,7 @@ TEST_F_FORK(layout1, make_sym)
ASSERT_EQ(0, unlink(file1_s1d3));
ASSERT_EQ(0, unlink(file2_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
ASSERT_EQ(-1, symlink("none", file1_s1d1));
ASSERT_EQ(EACCES, errno);
@@ -3538,17 +3347,12 @@ TEST_F_FORK(layout1, make_dir)
},
{},
};
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
-
- ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(0, unlink(file1_s1d1));
ASSERT_EQ(0, unlink(file1_s1d2));
ASSERT_EQ(0, unlink(file1_s1d3));
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, rules[0].access, rules);
/* Uses file_* as directory names. */
ASSERT_EQ(-1, mkdir(file1_s1d1, 0700));
@@ -3579,14 +3383,10 @@ TEST_F_FORK(layout1, proc_unlinked_file)
{},
};
int reg_fd, proc_fd;
- const int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
- rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
+ rules);
ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
@@ -3622,13 +3422,9 @@ TEST_F_FORK(layout1, proc_pipe)
},
{},
};
- /* Limits read and write access to files tied to the filesystem. */
- const int ruleset_fd =
- create_ruleset(_metadata, rules[0].access, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ /* Limits read and write access to files tied to the filesystem. */
+ enforce_fs(_metadata, rules[0].access, rules);
/* Checks enforcement for normal files. */
ASSERT_EQ(0, test_open(file1_s1d2, O_RDWR));
@@ -3718,16 +3514,10 @@ TEST_F_FORK(layout1, truncate_unhandled)
{},
};
- const __u64 handled = LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_WRITE_FILE;
- int ruleset_fd;
-
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, handled, rules);
-
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
+ rules);
/*
* Checks read right: truncate and open with O_TRUNC work, unless the
@@ -3800,17 +3590,13 @@ TEST_F_FORK(layout1, truncate)
},
{},
};
- const __u64 handled = LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_WRITE_FILE |
- LANDLOCK_ACCESS_FS_TRUNCATE;
- int ruleset_fd;
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, handled, rules);
-
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_READ_FILE |
+ LANDLOCK_ACCESS_FS_WRITE_FILE |
+ LANDLOCK_ACCESS_FS_TRUNCATE,
+ rules);
/* Checks read, write and truncate rights: truncation works. */
EXPECT_EQ(0, test_truncate(file_rwt));
@@ -3910,34 +3696,25 @@ TEST_F_FORK(layout1, ftruncate)
},
{},
};
- int fd_layer0, fd_layer1, fd_layer2, fd_layer3, ruleset_fd;
+ int fd_layer0, fd_layer1, fd_layer2, fd_layer3;
fd_layer0 = open(path, O_WRONLY);
EXPECT_EQ(0, test_ftruncate(fd_layer0));
- ruleset_fd = create_ruleset(_metadata, handled1, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, handled1, layer1);
fd_layer1 = open(path, O_WRONLY);
EXPECT_EQ(0, test_ftruncate(fd_layer0));
EXPECT_EQ(0, test_ftruncate(fd_layer1));
- ruleset_fd = create_ruleset(_metadata, handled2, layer2);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, handled2, layer2);
fd_layer2 = open(path, O_WRONLY);
EXPECT_EQ(0, test_ftruncate(fd_layer0));
EXPECT_EQ(0, test_ftruncate(fd_layer1));
EXPECT_EQ(0, test_ftruncate(fd_layer2));
- ruleset_fd = create_ruleset(_metadata, handled3, layer3);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, handled3, layer3);
fd_layer3 = open(path, O_WRONLY);
EXPECT_EQ(0, test_ftruncate(fd_layer0));
@@ -4029,13 +3806,10 @@ TEST_F_FORK(ftruncate, open_and_ftruncate)
},
{},
};
- int fd, ruleset_fd;
+ int fd;
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, variant->handled, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, variant->handled, rules);
fd = open(path, O_WRONLY);
EXPECT_EQ(variant->expected_open_result, (fd < 0 ? errno : 0));
@@ -4070,12 +3844,9 @@ TEST_F_FORK(ftruncate, open_and_ftruncate_in_different_processes)
},
{},
};
- int fd, ruleset_fd;
+ int fd;
- ruleset_fd = create_ruleset(_metadata, variant->handled, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, variant->handled, rules);
fd = open(path, O_WRONLY);
ASSERT_EQ(variant->expected_open_result, (fd < 0 ? errno : 0));
@@ -4120,10 +3891,7 @@ static int test_fs_ioc_getflags_ioctl(int fd)
TEST(memfd_ftruncate_and_ioctl)
{
- const struct landlock_ruleset_attr attr = {
- .handled_access_fs = ACCESS_ALL,
- };
- int ruleset_fd, fd, i;
+ int fd, i;
/*
* We exercise the same test both with and without Landlock enabled, to
@@ -4145,10 +3913,7 @@ TEST(memfd_ftruncate_and_ioctl)
ASSERT_EQ(0, close(fd));
/* Enables Landlock. */
- ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
}
}
@@ -4163,10 +3928,7 @@ static int test_fionread_ioctl(int fd)
TEST_F_FORK(layout1, o_path_ftruncate_and_ioctl)
{
- const struct landlock_ruleset_attr attr = {
- .handled_access_fs = ACCESS_ALL,
- };
- int ruleset_fd, fd;
+ int fd;
/*
* Checks that for files opened with O_PATH, both ioctl(2) and
@@ -4182,10 +3944,7 @@ TEST_F_FORK(layout1, o_path_ftruncate_and_ioctl)
ASSERT_EQ(0, close(fd));
/* Enables Landlock. */
- ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
/*
* Checks that after enabling Landlock,
@@ -4259,16 +4018,10 @@ struct space_resv {
*/
TEST_F_FORK(layout1, blanket_permitted_ioctls)
{
- const struct landlock_ruleset_attr attr = {
- .handled_access_fs = LANDLOCK_ACCESS_FS_IOCTL_DEV,
- };
- int ruleset_fd, fd;
+ int fd;
/* Enables Landlock. */
- ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_IOCTL_DEV, NULL);
fd = open("/dev/null", O_RDWR | O_CLOEXEC);
ASSERT_LE(0, fd);
@@ -4321,20 +4074,14 @@ TEST_F_FORK(layout1, blanket_permitted_ioctls)
TEST_F_FORK(layout1, named_pipe_ioctl)
{
pid_t child_pid;
- int fd, ruleset_fd;
+ int fd;
const char *const path = file1_s1d1;
- const struct landlock_ruleset_attr attr = {
- .handled_access_fs = LANDLOCK_ACCESS_FS_IOCTL_DEV,
- };
ASSERT_EQ(0, unlink(path));
ASSERT_EQ(0, mkfifo(path, 0600));
/* Enables Landlock. */
- ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_IOCTL_DEV, NULL);
/* The child process opens the pipe for writing. */
child_pid = fork();
@@ -4357,44 +4104,72 @@ TEST_F_FORK(layout1, named_pipe_ioctl)
ASSERT_EQ(child_pid, waitpid(child_pid, NULL, 0));
}
-/* For named UNIX domain sockets, no IOCTL restrictions apply. */
-TEST_F_FORK(layout1, named_unix_domain_socket_ioctl)
+/*
+ * set_up_named_unix_server - Create a pathname UNIX domain socket
+ *
+ * If the socket type is not SOCK_DGRAM, also invoke listen(2).
+ *
+ * Return: The listening FD; it is the caller's responsibility to close it.
+ */
+static int set_up_named_unix_server(struct __test_metadata *const _metadata,
+ int type, const char *const path)
{
- const char *const path = file1_s1d1;
- int srv_fd, cli_fd, ruleset_fd;
- struct sockaddr_un srv_un = {
+ int fd;
+ struct sockaddr_un addr = {
.sun_family = AF_UNIX,
};
- struct sockaddr_un cli_un = {
+
+ fd = socket(AF_UNIX, type, 0);
+ ASSERT_LE(0, fd);
+
+ ASSERT_LT(strlen(path), sizeof(addr.sun_path));
+ strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
+
+ ASSERT_EQ(0, bind(fd, (struct sockaddr *)&addr, sizeof(addr)));
+
+ if (type != SOCK_DGRAM)
+ ASSERT_EQ(0, listen(fd, 10 /* qlen */));
+ return fd;
+}
+
+/*
+ * test_connect_named_unix - Connect to the given named UNIX socket
+ *
+ * Return: The errno from connect(), or 0 on success
+ */
+static int test_connect_named_unix(struct __test_metadata *const _metadata,
+ int fd, const char *const path)
+{
+ struct sockaddr_un addr = {
.sun_family = AF_UNIX,
};
- const struct landlock_ruleset_attr attr = {
- .handled_access_fs = LANDLOCK_ACCESS_FS_IOCTL_DEV,
- };
- /* Sets up a server */
- ASSERT_EQ(0, unlink(path));
- srv_fd = socket(AF_UNIX, SOCK_STREAM, 0);
- ASSERT_LE(0, srv_fd);
+ ASSERT_LT(strlen(path), sizeof(addr.sun_path));
+ strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
- strncpy(srv_un.sun_path, path, sizeof(srv_un.sun_path));
- ASSERT_EQ(0, bind(srv_fd, (struct sockaddr *)&srv_un, sizeof(srv_un)));
+ if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1)
+ return errno;
+ return 0;
+}
- ASSERT_EQ(0, listen(srv_fd, 10 /* qlen */));
+/* For named UNIX domain sockets, no IOCTL restrictions apply. */
+TEST_F_FORK(layout1, named_unix_domain_socket_ioctl)
+{
+ const char *const path = file1_s1d1;
+ int srv_fd, cli_fd;
+
+ /* Sets up a server */
+ ASSERT_EQ(0, unlink(path));
+ srv_fd = set_up_named_unix_server(_metadata, SOCK_STREAM, path);
/* Enables Landlock. */
- ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_IOCTL_DEV, NULL);
/* Sets up a client connection to it */
cli_fd = socket(AF_UNIX, SOCK_STREAM, 0);
ASSERT_LE(0, cli_fd);
- strncpy(cli_un.sun_path, path, sizeof(cli_un.sun_path));
- ASSERT_EQ(0,
- connect(cli_fd, (struct sockaddr *)&cli_un, sizeof(cli_un)));
+ ASSERT_EQ(0, test_connect_named_unix(_metadata, cli_fd, path));
/* FIONREAD and other IOCTLs should not be forbidden. */
EXPECT_EQ(0, test_fionread_ioctl(cli_fd));
@@ -4461,29 +4236,25 @@ TEST_F_FORK(ioctl, handle_dir_access_file)
},
{},
};
- int file_fd, ruleset_fd;
+ int fd;
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, variant->handled, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, variant->handled, rules);
- file_fd = open("/dev/zero", variant->open_mode);
- ASSERT_LE(0, file_fd);
+ fd = open("/dev/zero", variant->open_mode);
+ ASSERT_LE(0, fd);
/* Checks that IOCTL commands return the expected errors. */
- EXPECT_EQ(variant->expected_fionread_result,
- test_fionread_ioctl(file_fd));
+ EXPECT_EQ(variant->expected_fionread_result, test_fionread_ioctl(fd));
/* Checks that unrestrictable commands are unrestricted. */
- EXPECT_EQ(0, ioctl(file_fd, FIOCLEX));
- EXPECT_EQ(0, ioctl(file_fd, FIONCLEX));
- EXPECT_EQ(0, ioctl(file_fd, FIONBIO, &flag));
- EXPECT_EQ(0, ioctl(file_fd, FIOASYNC, &flag));
- EXPECT_EQ(0, ioctl(file_fd, FIGETBSZ, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIOCLEX));
+ EXPECT_EQ(0, ioctl(fd, FIONCLEX));
+ EXPECT_EQ(0, ioctl(fd, FIONBIO, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIOASYNC, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIGETBSZ, &flag));
- ASSERT_EQ(0, close(file_fd));
+ ASSERT_EQ(0, close(fd));
}
TEST_F_FORK(ioctl, handle_dir_access_dir)
@@ -4496,13 +4267,10 @@ TEST_F_FORK(ioctl, handle_dir_access_dir)
},
{},
};
- int dir_fd, ruleset_fd;
+ int dir_fd;
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, variant->handled, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, variant->handled, rules);
/*
* Ignore variant->open_mode for this test, as we intend to open a
@@ -4541,32 +4309,478 @@ TEST_F_FORK(ioctl, handle_file_access_file)
},
{},
};
- int file_fd, ruleset_fd;
+ int fd;
/* Enables Landlock. */
- ruleset_fd = create_ruleset(_metadata, variant->handled, rules);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, variant->handled, rules);
- file_fd = open("/dev/zero", variant->open_mode);
- ASSERT_LE(0, file_fd)
+ fd = open("/dev/zero", variant->open_mode);
+ ASSERT_LE(0, fd)
{
TH_LOG("Failed to open /dev/zero: %s", strerror(errno));
}
/* Checks that IOCTL commands return the expected errors. */
- EXPECT_EQ(variant->expected_fionread_result,
- test_fionread_ioctl(file_fd));
+ EXPECT_EQ(variant->expected_fionread_result, test_fionread_ioctl(fd));
/* Checks that unrestrictable commands are unrestricted. */
- EXPECT_EQ(0, ioctl(file_fd, FIOCLEX));
- EXPECT_EQ(0, ioctl(file_fd, FIONCLEX));
- EXPECT_EQ(0, ioctl(file_fd, FIONBIO, &flag));
- EXPECT_EQ(0, ioctl(file_fd, FIOASYNC, &flag));
- EXPECT_EQ(0, ioctl(file_fd, FIGETBSZ, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIOCLEX));
+ EXPECT_EQ(0, ioctl(fd, FIONCLEX));
+ EXPECT_EQ(0, ioctl(fd, FIONBIO, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIOASYNC, &flag));
+ EXPECT_EQ(0, ioctl(fd, FIGETBSZ, &flag));
+
+ ASSERT_EQ(0, close(fd));
+}
+
+/*
+ * test_sendto_named_unix - Send a datagram to the given named UNIX socket
+ *
+ * With respect to Landlock restrictions, sendto() is equivalent to
+ * sendmsg().
+ *
+ * Return: The errno from sendto(), or 0 on success
+ */
+static int test_sendto_named_unix(struct __test_metadata *const _metadata,
+ int fd, const char *const path)
+{
+ static const char buf[] = "dummy";
+ struct sockaddr_un addr = {
+ .sun_family = AF_UNIX,
+ };
+
+ ASSERT_LT(strlen(path), sizeof(addr.sun_path));
+ strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
+
+ if (sendto(fd, buf, sizeof(buf), 0, (struct sockaddr *)&addr,
+ sizeof(addr)) == -1)
+ return errno;
+ return 0;
+}
+
+/* clang-format off */
+FIXTURE(scoped_domains) {};
+/* clang-format on */
+
+#include "scoped_base_variants.h"
+
+FIXTURE_SETUP(scoped_domains)
+{
+ drop_caps(_metadata);
+}
+
+FIXTURE_TEARDOWN(scoped_domains)
+{
+}
+
+/*
+ * Flags for test_connect_to_parent and test_connect_to_child:
+ *
+ * USE_SENDTO: Use sendto() instead of connect() (for SOCK_DGRAM only)
+ * ENFORCE_ALL: Enforce a Landlock domain even when the variant says
+ * we shouldn't. We enforce a domain where the path is allow-listed,
+ * and expect the behavior to be the same as if no domain were enforced.
+ */
+#define USE_SENDTO (1 << 0)
+#define ENFORCE_ALL (1 << 1)
+
+static void test_connect_to_parent(struct __test_metadata *const _metadata,
+ const FIXTURE_VARIANT(scoped_domains) *
+ variant,
+ int sock_type, int flags)
+{
+ const char *const path = "sock";
+ const struct rule rules[] = {
+ {
+ .path = ".",
+ .access = LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ },
+ {},
+ };
+ int cli_fd, srv_fd, res, status;
+ pid_t child_pid;
+ int readiness_pipe[2];
+ char buf[1];
+
+ if (variant->domain_both)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, rules);
+
+ unlink(path);
+ ASSERT_EQ(0, pipe2(readiness_pipe, O_CLOEXEC));
+
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+
+ if (child_pid == 0) {
+ if (variant->domain_child)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ rules);
+
+ /* Wait for server to be available. */
+ EXPECT_EQ(0, close(readiness_pipe[1]));
+ EXPECT_EQ(1, read(readiness_pipe[0], &buf, 1));
+ EXPECT_EQ(0, close(readiness_pipe[0]));
+
+ /* Talk to server. */
+ cli_fd = socket(AF_UNIX, sock_type, 0);
+ ASSERT_LE(0, cli_fd);
+
+ if (flags & USE_SENDTO)
+ res = test_sendto_named_unix(_metadata, cli_fd, path);
+ else
+ res = test_connect_named_unix(_metadata, cli_fd, path);
+
+ EXPECT_EQ(variant->domain_child ? EACCES : 0, res);
+
+ /* Clean up. */
+ EXPECT_EQ(0, close(cli_fd));
+
+ _exit(_metadata->exit_code);
+ return;
+ }
+
+ if (variant->domain_parent)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, rules);
+
+ srv_fd = set_up_named_unix_server(_metadata, sock_type, path);
+
+ /* Tell the child that it can connect. */
+ EXPECT_EQ(0, close(readiness_pipe[0]));
+ EXPECT_EQ(sizeof(buf), write(readiness_pipe[1], buf, sizeof(buf)));
+ EXPECT_EQ(0, close(readiness_pipe[1]));
+
+ /* Wait for child. */
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+ EXPECT_EQ(1, WIFEXITED(status));
+ EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status));
+
+ /* Clean up. */
+ EXPECT_EQ(0, close(srv_fd));
+ EXPECT_EQ(0, unlink(path));
+}
+
+static void test_connect_to_child(struct __test_metadata *const _metadata,
+ const FIXTURE_VARIANT(scoped_domains) *
+ variant,
+ int sock_type, int flags)
+{
+ const char *const path = "sock";
+ const struct rule rules[] = {
+ {
+ .path = ".",
+ .access = LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ },
+ {},
+ };
+ int readiness_pipe[2];
+ int shutdown_pipe[2];
+ int cli_fd, srv_fd, res, status;
+ pid_t child_pid;
+ char buf[1];
+
+ if (variant->domain_both)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, rules);
+
+ unlink(path);
+ ASSERT_EQ(0, pipe2(readiness_pipe, O_CLOEXEC));
+ ASSERT_EQ(0, pipe2(shutdown_pipe, O_CLOEXEC));
+
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+
+ if (child_pid == 0) {
+ if (variant->domain_child)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX,
+ rules);
+
+ srv_fd = set_up_named_unix_server(_metadata, sock_type, path);
+
+ /* Tell the parent that it can connect. */
+ EXPECT_EQ(0, close(readiness_pipe[0]));
+ EXPECT_EQ(sizeof(buf),
+ write(readiness_pipe[1], buf, sizeof(buf)));
+ EXPECT_EQ(0, close(readiness_pipe[1]));
+
+ /* Wait until it is time to shut down. */
+ EXPECT_EQ(0, close(shutdown_pipe[1]));
+ EXPECT_EQ(1, read(shutdown_pipe[0], &buf, 1));
+ EXPECT_EQ(0, close(shutdown_pipe[0]));
+
+ /* Clean up. */
+ EXPECT_EQ(0, close(srv_fd));
+ EXPECT_EQ(0, unlink(path));
+
+ _exit(_metadata->exit_code);
+ return;
+ }
+
+ if (variant->domain_parent)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, NULL);
+ else if (flags & ENFORCE_ALL)
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, rules);
+
+ /* Wait for server to be available. */
+ EXPECT_EQ(0, close(readiness_pipe[1]));
+ EXPECT_EQ(1, read(readiness_pipe[0], &buf, 1));
+ EXPECT_EQ(0, close(readiness_pipe[0]));
+
+ /* Talk to server. */
+ cli_fd = socket(AF_UNIX, sock_type, 0);
+ ASSERT_LE(0, cli_fd);
+
+ if (flags & USE_SENDTO)
+ res = test_sendto_named_unix(_metadata, cli_fd, path);
+ else
+ res = test_connect_named_unix(_metadata, cli_fd, path);
+
+ EXPECT_EQ(variant->domain_parent ? EACCES : 0, res);
+
+ /* Clean up. */
+ EXPECT_EQ(0, close(cli_fd));
+
+ /* Tell the server to shut down. */
+ EXPECT_EQ(0, close(shutdown_pipe[0]));
+ EXPECT_EQ(sizeof(buf), write(shutdown_pipe[1], buf, sizeof(buf)));
+ EXPECT_EQ(0, close(shutdown_pipe[1]));
+
+ /* Wait for child. */
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+ EXPECT_EQ(1, WIFEXITED(status));
+ EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status));
+}
+
+TEST_F(scoped_domains, unix_stream_connect_to_parent)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_STREAM, 0);
+}
+
+TEST_F(scoped_domains, unix_dgram_connect_to_parent)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_DGRAM, 0);
+}
+
+TEST_F(scoped_domains, unix_dgram_sendmsg_to_parent)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_DGRAM, USE_SENDTO);
+}
+
+TEST_F(scoped_domains, unix_seqpacket_connect_to_parent)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_SEQPACKET, 0);
+}
+
+TEST_F(scoped_domains, unix_stream_connect_to_parent_full)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_STREAM, ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_dgram_connect_to_parent_full)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_DGRAM, ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_dgram_sendmsg_to_parent_full)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_DGRAM,
+ USE_SENDTO | ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_seqpacket_connect_to_parent_full)
+{
+ test_connect_to_parent(_metadata, variant, SOCK_SEQPACKET, ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_stream_connect_to_child)
+{
+ test_connect_to_child(_metadata, variant, SOCK_STREAM, 0);
+}
+
+TEST_F(scoped_domains, unix_dgram_connect_to_child)
+{
+ test_connect_to_child(_metadata, variant, SOCK_DGRAM, 0);
+}
+
+TEST_F(scoped_domains, unix_dgram_sendmsg_to_child)
+{
+ test_connect_to_child(_metadata, variant, SOCK_DGRAM, USE_SENDTO);
+}
+
+TEST_F(scoped_domains, unix_seqpacket_connect_to_child)
+{
+ test_connect_to_child(_metadata, variant, SOCK_SEQPACKET, 0);
+}
+
+TEST_F(scoped_domains, unix_stream_connect_to_child_full)
+{
+ test_connect_to_child(_metadata, variant, SOCK_STREAM, ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_dgram_connect_to_child_full)
+{
+ test_connect_to_child(_metadata, variant, SOCK_DGRAM, ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_dgram_sendmsg_to_child_full)
+{
+ test_connect_to_child(_metadata, variant, SOCK_DGRAM,
+ USE_SENDTO | ENFORCE_ALL);
+}
+
+TEST_F(scoped_domains, unix_seqpacket_connect_to_child_full)
+{
+ test_connect_to_child(_metadata, variant, SOCK_SEQPACKET, ENFORCE_ALL);
+}
+
+#undef USE_SENDTO
+#undef ENFORCE_ALL
+
+static void read_core_pattern(struct __test_metadata *const _metadata,
+ char *buf, size_t buf_size)
+{
+ int fd;
+ ssize_t ret;
+
+ fd = open("/proc/sys/kernel/core_pattern", O_RDONLY | O_CLOEXEC);
+ ASSERT_LE(0, fd);
+
+ ret = read(fd, buf, buf_size - 1);
+ ASSERT_LE(0, ret);
+ EXPECT_EQ(0, close(fd));
+
+ buf[ret] = '\0';
+}
+
+static void set_core_pattern(struct __test_metadata *const _metadata,
+ const char *pattern)
+{
+ int fd;
+ size_t len = strlen(pattern);
+
+ /*
+ * Writing to /proc/sys/kernel/core_pattern requires EUID 0 because
+ * sysctl_perm() checks the effective UID directly, ignoring
+ * capabilities such as CAP_SYS_ADMIN or CAP_DAC_OVERRIDE.
+ *
+ * Switching EUID clears the dumpable flag, which must be restored
+ * afterwards to allow coredumps.
+ */
+ set_cap(_metadata, CAP_SETUID);
+ ASSERT_EQ(0, seteuid(0));
+ clear_cap(_metadata, CAP_SETUID);
+
+ fd = open("/proc/sys/kernel/core_pattern", O_WRONLY | O_CLOEXEC);
+ ASSERT_LE(0, fd)
+ {
+ TH_LOG("Failed to open core_pattern for writing: %s",
+ strerror(errno));
+ }
+
+ ASSERT_EQ(len, write(fd, pattern, len));
+ EXPECT_EQ(0, close(fd));
+
+ set_cap(_metadata, CAP_SETUID);
+ ASSERT_EQ(0, seteuid(getuid()));
+ clear_cap(_metadata, CAP_SETUID);
- ASSERT_EQ(0, close(file_fd));
+ /* Restore dumpable flag cleared by seteuid(). */
+ ASSERT_EQ(0, prctl(PR_SET_DUMPABLE, 1, 0, 0, 0));
+}
+
+FIXTURE(coredump)
+{
+ char original_core_pattern[256];
+};
+
+FIXTURE_SETUP(coredump)
+{
+ disable_caps(_metadata);
+ read_core_pattern(_metadata, self->original_core_pattern,
+ sizeof(self->original_core_pattern));
+}
+
+FIXTURE_TEARDOWN_PARENT(coredump)
+{
+ set_core_pattern(_metadata, self->original_core_pattern);
+}
+
+/*
+ * Test that even when a process is restricted with
+ * LANDLOCK_ACCESS_FS_RESOLVE_UNIX, the kernel can still initiate a connection
+ * to the coredump socket on the process's behalf.
+ */
+TEST_F_FORK(coredump, socket_not_restricted)
+{
+ static const char core_pattern[] = "@/tmp/landlock_coredump_test.sock";
+ const char *const sock_path = core_pattern + 1;
+ int srv_fd, conn_fd, status;
+ pid_t child_pid;
+ struct ucred cred;
+ socklen_t cred_len = sizeof(cred);
+ char buf[4096];
+
+ /* Set up the coredump server socket. */
+ unlink(sock_path);
+ srv_fd = set_up_named_unix_server(_metadata, SOCK_STREAM, sock_path);
+
+ /* Point coredumps at our socket. */
+ set_core_pattern(_metadata, core_pattern);
+
+ /* Restrict LANDLOCK_ACCESS_FS_RESOLVE_UNIX. */
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_RESOLVE_UNIX, NULL);
+
+ /* Fork a child that crashes. */
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+ if (child_pid == 0) {
+ struct rlimit rl = {
+ .rlim_cur = RLIM_INFINITY,
+ .rlim_max = RLIM_INFINITY,
+ };
+
+ ASSERT_EQ(0, setrlimit(RLIMIT_CORE, &rl));
+
+ /* Crash on purpose. */
+ kill(getpid(), SIGSEGV);
+ _exit(1);
+ }
+
+ /*
+ * Accept the coredump connection. If Landlock incorrectly denies the
+ * kernel's coredump connect, accept() will block forever and the
+ * test will time out.
+ */
+ conn_fd = accept(srv_fd, NULL, NULL);
+ ASSERT_LE(0, conn_fd);
+
+ /* Check that the connection came from the crashing child. */
+ ASSERT_EQ(0, getsockopt(conn_fd, SOL_SOCKET, SO_PEERCRED, &cred,
+ &cred_len));
+ EXPECT_EQ(child_pid, cred.pid);
+
+ /* Drain the coredump data so the kernel can finish. */
+ while (read(conn_fd, buf, sizeof(buf)) > 0)
+ ;
+
+ EXPECT_EQ(0, close(conn_fd));
+
+ /* Wait for the child and verify it coredumped. */
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+ ASSERT_TRUE(WIFSIGNALED(status));
+ ASSERT_TRUE(WCOREDUMP(status));
+
+ EXPECT_EQ(0, close(srv_fd));
+ EXPECT_EQ(0, unlink(sock_path));
}
/* clang-format off */
@@ -4710,13 +4924,9 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
},
{},
};
- int ruleset_fd;
/* Sets rules for the parent directories. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1_parent);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer1_parent);
/* Checks source hierarchy. */
ASSERT_EQ(0, test_open(file1_s1d1, O_RDONLY));
@@ -4735,10 +4945,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
ASSERT_EQ(0, test_open(dir_s2d2, O_RDONLY | O_DIRECTORY));
/* Sets rules for the mount points. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2_mount_point);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer2_mount_point);
/* Checks source hierarchy. */
ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
@@ -4759,10 +4966,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
ASSERT_EQ(0, test_open(bind_dir_s1d3, O_RDONLY | O_DIRECTORY));
/* Sets a (shared) rule only on the source. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3_source);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer3_source);
/* Checks source hierarchy. */
ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDONLY));
@@ -4783,10 +4987,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
ASSERT_EQ(EACCES, test_open(bind_dir_s1d3, O_RDONLY | O_DIRECTORY));
/* Sets a (shared) rule only on the destination. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer4_destination);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer4_destination);
/* Checks source hierarchy. */
ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDONLY));
@@ -4811,13 +5012,10 @@ TEST_F_FORK(layout1_bind, reparent_cross_mount)
},
{},
};
- int ruleset_fd = create_ruleset(
- _metadata,
- LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_EXECUTE, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_EXECUTE,
+ layer1);
/* Checks basic denied move. */
ASSERT_EQ(-1, rename(file1_s1d1, file1_s1d2));
@@ -4874,10 +5072,6 @@ TEST_F_FORK(layout1_bind, path_disconnected)
create_ruleset(_metadata, ACCESS_RW, layer3_only_s1d2);
int bind_s1d3_fd;
- ASSERT_LE(0, ruleset_fd_l1);
- ASSERT_LE(0, ruleset_fd_l2);
- ASSERT_LE(0, ruleset_fd_l3);
-
enforce_ruleset(_metadata, ruleset_fd_l1);
EXPECT_EQ(0, close(ruleset_fd_l1));
@@ -4981,8 +5175,6 @@ TEST_F_FORK(layout1_bind, path_disconnected_rename)
ruleset_fd_l1 = create_ruleset(_metadata, ACCESS_ALL, layer1);
ruleset_fd_l2 = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
layer2_only_s1d2);
- ASSERT_LE(0, ruleset_fd_l1);
- ASSERT_LE(0, ruleset_fd_l2);
enforce_ruleset(_metadata, ruleset_fd_l1);
EXPECT_EQ(0, close(ruleset_fd_l1));
@@ -5128,7 +5320,7 @@ TEST_F_FORK(layout1_bind, path_disconnected_link)
},
{}
};
- int ruleset_fd, bind_s1d3_fd;
+ int bind_s1d3_fd;
/* Removes unneeded files created by layout1, otherwise it will EEXIST. */
ASSERT_EQ(0, unlink(file1_s1d2));
@@ -5151,10 +5343,7 @@ TEST_F_FORK(layout1_bind, path_disconnected_link)
TH_LOG("Failed to create %s: %s", dir_s4d2, strerror(errno));
}
- ruleset_fd = create_ruleset(_metadata, ACCESS_ALL, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- EXPECT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_ALL, layer1);
/* From disconnected to connected. */
ASSERT_EQ(0, linkat(bind_s1d3_fd, file1_name, AT_FDCWD, file1_s2d2, 0))
@@ -5692,7 +5881,6 @@ TEST_F_FORK(layout4_disconnected_leafs, read_rename_exchange)
int ruleset_fd, s1d41_bind_fd, s1d42_bind_fd;
ruleset_fd = create_ruleset(_metadata, handled_access, rules);
- ASSERT_LE(0, ruleset_fd);
/* Adds rule for the covered directory. */
if (variant->allowed_s2d2) {
@@ -6625,7 +6813,6 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
},
{},
};
- int ruleset_fd;
size_t i;
const char *path_entry;
@@ -6633,10 +6820,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
SKIP(return, "overlayfs is not supported (test)");
/* Sets rules on base directories (i.e. outside overlay scope). */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1_base);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer1_base);
/* Checks lower layer. */
for_each_path(lower_base_files, path_entry, i) {
@@ -6681,10 +6865,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
}
/* Sets rules on data directories (i.e. inside overlay scope). */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2_data);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer2_data);
/* Checks merge. */
for_each_path(merge_base_files, path_entry, i) {
@@ -6698,10 +6879,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
}
/* Same checks with tighter rules. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3_subdirs);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer3_subdirs);
/* Checks changes for lower layer. */
for_each_path(lower_base_files, path_entry, i) {
@@ -6723,10 +6901,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
}
/* Sets rules directly on overlayed files. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer4_files);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer4_files);
/* Checks unchanged accesses on lower layer. */
for_each_path(lower_sub_files, path_entry, i) {
@@ -6751,10 +6926,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
}
/* Only allows access to the merge hierarchy. */
- ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer5_merge_only);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, ACCESS_RW, layer5_merge_only);
/* Checks new accesses on lower layer. */
for_each_path(lower_sub_files, path_entry, i) {
@@ -6940,11 +7112,7 @@ static void layer3_fs_tag_inode(struct __test_metadata *const _metadata,
},
{},
};
- const struct landlock_ruleset_attr layer2_deny_everything_attr = {
- .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
- };
const char *const dev_null_path = "/dev/null";
- int ruleset_fd;
if (self->skip_test)
SKIP(return, "this filesystem is not supported (test)");
@@ -6953,22 +7121,14 @@ static void layer3_fs_tag_inode(struct __test_metadata *const _metadata,
EXPECT_EQ(0, test_open(dev_null_path, O_RDONLY | O_CLOEXEC));
EXPECT_EQ(0, test_open(variant->file_path, O_RDONLY | O_CLOEXEC));
- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
- layer1_allow_read_file);
- EXPECT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- EXPECT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+ layer1_allow_read_file);
EXPECT_EQ(EACCES, test_open(dev_null_path, O_RDONLY | O_CLOEXEC));
EXPECT_EQ(0, test_open(variant->file_path, O_RDONLY | O_CLOEXEC));
/* Forbids directory reading. */
- ruleset_fd =
- landlock_create_ruleset(&layer2_deny_everything_attr,
- sizeof(layer2_deny_everything_attr), 0);
- EXPECT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- EXPECT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_READ_FILE, NULL);
/* Checks with Landlock and forbidden access. */
EXPECT_EQ(EACCES, test_open(dev_null_path, O_RDONLY | O_CLOEXEC));
@@ -7030,7 +7190,6 @@ TEST_F_FORK(layout3_fs, release_inodes)
ruleset_fd =
create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_DIR, layer1);
- ASSERT_LE(0, ruleset_fd);
/* Unmount the filesystem while it is being used by a ruleset. */
set_cap(_metadata, CAP_SYS_ADMIN);
@@ -7137,11 +7296,7 @@ TEST_F(audit_layout1, execute_make)
test_execute(_metadata, 0, file1_s1d1);
test_check_exec(_metadata, 0, file1_s1d1);
- drop_access_rights(_metadata,
- &(struct landlock_ruleset_attr){
- .handled_access_fs =
- LANDLOCK_ACCESS_FS_EXECUTE,
- });
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_EXECUTE, NULL);
test_execute(_metadata, EACCES, file1_s1d1);
EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.execute",
@@ -7160,26 +7315,6 @@ TEST_F(audit_layout1, execute_make)
* only the blocked ones are logged.
*/
-/* clang-format off */
-static const __u64 access_fs_16 =
- LANDLOCK_ACCESS_FS_EXECUTE |
- LANDLOCK_ACCESS_FS_WRITE_FILE |
- LANDLOCK_ACCESS_FS_READ_FILE |
- LANDLOCK_ACCESS_FS_READ_DIR |
- LANDLOCK_ACCESS_FS_REMOVE_DIR |
- LANDLOCK_ACCESS_FS_REMOVE_FILE |
- LANDLOCK_ACCESS_FS_MAKE_CHAR |
- LANDLOCK_ACCESS_FS_MAKE_DIR |
- LANDLOCK_ACCESS_FS_MAKE_REG |
- LANDLOCK_ACCESS_FS_MAKE_SOCK |
- LANDLOCK_ACCESS_FS_MAKE_FIFO |
- LANDLOCK_ACCESS_FS_MAKE_BLOCK |
- LANDLOCK_ACCESS_FS_MAKE_SYM |
- LANDLOCK_ACCESS_FS_REFER |
- LANDLOCK_ACCESS_FS_TRUNCATE |
- LANDLOCK_ACCESS_FS_IOCTL_DEV;
-/* clang-format on */
-
TEST_F(audit_layout1, execute_read)
{
struct audit_records records;
@@ -7188,9 +7323,7 @@ TEST_F(audit_layout1, execute_read)
test_execute(_metadata, 0, file1_s1d1);
test_check_exec(_metadata, 0, file1_s1d1);
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
/*
 * The only difference with the previous audit_layout1.execute_make test is
@@ -7212,9 +7345,7 @@ TEST_F(audit_layout1, write_file)
{
struct audit_records records;
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd,
@@ -7229,9 +7360,7 @@ TEST_F(audit_layout1, read_file)
{
struct audit_records records;
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.read_file",
@@ -7246,9 +7375,7 @@ TEST_F(audit_layout1, read_dir)
{
struct audit_records records;
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(EACCES, test_open(dir_s1d1, O_DIRECTORY));
EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.read_dir",
@@ -7266,9 +7393,7 @@ TEST_F(audit_layout1, remove_dir)
EXPECT_EQ(0, unlink(file1_s1d3));
EXPECT_EQ(0, unlink(file2_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, rmdir(dir_s1d3));
EXPECT_EQ(EACCES, errno);
@@ -7289,9 +7414,7 @@ TEST_F(audit_layout1, remove_file)
{
struct audit_records records;
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, unlink(file1_s1d3));
EXPECT_EQ(EACCES, errno);
@@ -7309,9 +7432,7 @@ TEST_F(audit_layout1, make_char)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mknod(file1_s1d3, S_IFCHR | 0644, 0));
EXPECT_EQ(EACCES, errno);
@@ -7329,9 +7450,7 @@ TEST_F(audit_layout1, make_dir)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mkdir(file1_s1d3, 0755));
EXPECT_EQ(EACCES, errno);
@@ -7349,9 +7468,7 @@ TEST_F(audit_layout1, make_reg)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mknod(file1_s1d3, S_IFREG | 0644, 0));
EXPECT_EQ(EACCES, errno);
@@ -7369,9 +7486,7 @@ TEST_F(audit_layout1, make_sock)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mknod(file1_s1d3, S_IFSOCK | 0644, 0));
EXPECT_EQ(EACCES, errno);
@@ -7389,9 +7504,7 @@ TEST_F(audit_layout1, make_fifo)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mknod(file1_s1d3, S_IFIFO | 0644, 0));
EXPECT_EQ(EACCES, errno);
@@ -7409,9 +7522,7 @@ TEST_F(audit_layout1, make_block)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, mknod(file1_s1d3, S_IFBLK | 0644, 0));
EXPECT_EQ(EACCES, errno);
@@ -7429,9 +7540,7 @@ TEST_F(audit_layout1, make_sym)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, symlink("target", file1_s1d3));
EXPECT_EQ(EACCES, errno);
@@ -7449,10 +7558,7 @@ TEST_F(audit_layout1, refer_handled)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs =
- LANDLOCK_ACCESS_FS_REFER,
- });
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_REFER, NULL);
EXPECT_EQ(-1, link(file1_s1d1, file1_s1d3));
EXPECT_EQ(EXDEV, errno);
@@ -7474,12 +7580,9 @@ TEST_F(audit_layout1, refer_make)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata,
- &(struct landlock_ruleset_attr){
- .handled_access_fs =
- LANDLOCK_ACCESS_FS_MAKE_REG |
- LANDLOCK_ACCESS_FS_REFER,
- });
+ enforce_fs(_metadata,
+ LANDLOCK_ACCESS_FS_MAKE_REG | LANDLOCK_ACCESS_FS_REFER,
+ NULL);
EXPECT_EQ(-1, link(file1_s1d1, file1_s1d3));
EXPECT_EQ(EACCES, errno);
@@ -7499,9 +7602,7 @@ TEST_F(audit_layout1, refer_rename)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(EACCES, test_rename(file1_s1d2, file1_s2d3));
EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd,
@@ -7521,9 +7622,7 @@ TEST_F(audit_layout1, refer_exchange)
EXPECT_EQ(0, unlink(file1_s1d3));
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
/*
* The only difference with the previous audit_layout1.refer_rename test is
@@ -7561,12 +7660,8 @@ TEST_F(audit_layout1, refer_rename_half)
},
{},
};
- int ruleset_fd =
- create_ruleset(_metadata, LANDLOCK_ACCESS_FS_REFER, layer1);
- ASSERT_LE(0, ruleset_fd);
- enforce_ruleset(_metadata, ruleset_fd);
- ASSERT_EQ(0, close(ruleset_fd));
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_REFER, layer1);
ASSERT_EQ(-1, rename(dir_s1d2, dir_s2d3));
ASSERT_EQ(EXDEV, errno);
@@ -7584,9 +7679,7 @@ TEST_F(audit_layout1, truncate)
{
struct audit_records records;
- drop_access_rights(_metadata, &(struct landlock_ruleset_attr){
- .handled_access_fs = access_fs_16,
- });
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
EXPECT_EQ(-1, truncate(file1_s1d3, 0));
EXPECT_EQ(EACCES, errno);
@@ -7603,12 +7696,7 @@ TEST_F(audit_layout1, ioctl_dev)
struct audit_records records;
int fd;
- drop_access_rights(_metadata,
- &(struct landlock_ruleset_attr){
- .handled_access_fs =
- access_fs_16 &
- ~LANDLOCK_ACCESS_FS_READ_FILE,
- });
+ enforce_fs(_metadata, ACCESS_ALL & ~LANDLOCK_ACCESS_FS_READ_FILE, NULL);
fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
ASSERT_LE(0, fd);
@@ -7622,15 +7710,48 @@ TEST_F(audit_layout1, ioctl_dev)
EXPECT_EQ(1, records.domain);
}
+TEST_F(audit_layout1, resolve_unix)
+{
+ struct audit_records records;
+ const char *const path = "sock";
+ int srv_fd, cli_fd, status;
+ pid_t child_pid;
+
+ srv_fd = set_up_named_unix_server(_metadata, SOCK_STREAM, path);
+
+ child_pid = fork();
+ ASSERT_LE(0, child_pid);
+ if (!child_pid) {
+ enforce_fs(_metadata, ACCESS_ALL, NULL);
+
+ cli_fd = socket(AF_UNIX, SOCK_STREAM, 0);
+ ASSERT_LE(0, cli_fd);
+ EXPECT_EQ(EACCES,
+ test_connect_named_unix(_metadata, cli_fd, path));
+
+ EXPECT_EQ(0, close(cli_fd));
+ _exit(_metadata->exit_code);
+ }
+
+ ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0));
+ EXPECT_EQ(1, WIFEXITED(status));
+ EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status));
+
+ EXPECT_EQ(0, matches_log_fs_extra(_metadata, self->audit_fd,
+ "fs\\.resolve_unix", path, NULL));
+
+ EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
+ EXPECT_EQ(0, records.access);
+ EXPECT_EQ(1, records.domain);
+
+ EXPECT_EQ(0, close(srv_fd));
+}
+
TEST_F(audit_layout1, mount)
{
struct audit_records records;
- drop_access_rights(_metadata,
- &(struct landlock_ruleset_attr){
- .handled_access_fs =
- LANDLOCK_ACCESS_FS_EXECUTE,
- });
+ enforce_fs(_metadata, LANDLOCK_ACCESS_FS_EXECUTE, NULL);
set_cap(_metadata, CAP_SYS_ADMIN);
EXPECT_EQ(-1, mount(NULL, dir_s3d2, NULL, MS_RDONLY, NULL));
diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c
index b34b139b3f89..4c528154ea92 100644
--- a/tools/testing/selftests/landlock/net_test.c
+++ b/tools/testing/selftests/landlock/net_test.c
@@ -1356,7 +1356,7 @@ TEST_F(mini, network_access_rights)
&net_port, 0))
{
TH_LOG("Failed to add rule with access 0x%llx: %s",
- access, strerror(errno));
+ (unsigned long long)access, strerror(errno));
}
}
EXPECT_EQ(0, close(ruleset_fd));
diff --git a/tools/testing/selftests/landlock/ptrace_test.c b/tools/testing/selftests/landlock/ptrace_test.c
index 4f64c90583cd..1b6c8b53bf33 100644
--- a/tools/testing/selftests/landlock/ptrace_test.c
+++ b/tools/testing/selftests/landlock/ptrace_test.c
@@ -342,7 +342,6 @@ TEST_F(audit, trace)
/* Makes sure there are no superfluous logged records. */
EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
EXPECT_EQ(0, records.access);
- EXPECT_EQ(0, records.domain);
yama_ptrace_scope = get_yama_ptrace_scope();
ASSERT_LE(0, yama_ptrace_scope);
diff --git a/tools/testing/selftests/landlock/scoped_abstract_unix_test.c b/tools/testing/selftests/landlock/scoped_abstract_unix_test.c
index 72f97648d4a7..c47491d2d1c1 100644
--- a/tools/testing/selftests/landlock/scoped_abstract_unix_test.c
+++ b/tools/testing/selftests/landlock/scoped_abstract_unix_test.c
@@ -312,7 +312,6 @@ TEST_F(scoped_audit, connect_to_child)
/* Makes sure there are no superfluous logged records. */
EXPECT_EQ(0, audit_count_records(self->audit_fd, &records));
EXPECT_EQ(0, records.access);
- EXPECT_EQ(0, records.domain);
ASSERT_EQ(0, pipe2(pipe_child, O_CLOEXEC));
ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC));
diff --git a/tools/testing/selftests/landlock/tsync_test.c b/tools/testing/selftests/landlock/tsync_test.c
index 2b9ad4f154f4..9cf1491bbaaf 100644
--- a/tools/testing/selftests/landlock/tsync_test.c
+++ b/tools/testing/selftests/landlock/tsync_test.c
@@ -247,4 +247,81 @@ TEST(tsync_interrupt)
EXPECT_EQ(0, close(ruleset_fd));
}
+/* clang-format off */
+FIXTURE(tsync_without_ruleset) {};
+/* clang-format on */
+
+FIXTURE_VARIANT(tsync_without_ruleset)
+{
+ const __u32 flags;
+ const int expected_errno;
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(tsync_without_ruleset, tsync_only) {
+ /* clang-format on */
+ .flags = LANDLOCK_RESTRICT_SELF_TSYNC,
+ .expected_errno = EBADF,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(tsync_without_ruleset, subdomains_off_same_exec_off) {
+ /* clang-format on */
+ .flags = LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF |
+ LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF |
+ LANDLOCK_RESTRICT_SELF_TSYNC,
+ .expected_errno = EBADF,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(tsync_without_ruleset, subdomains_off_new_exec_on) {
+ /* clang-format on */
+ .flags = LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF |
+ LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON |
+ LANDLOCK_RESTRICT_SELF_TSYNC,
+ .expected_errno = EBADF,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(tsync_without_ruleset, all_flags) {
+ /* clang-format on */
+ .flags = LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF |
+ LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON |
+ LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF |
+ LANDLOCK_RESTRICT_SELF_TSYNC,
+ .expected_errno = EBADF,
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(tsync_without_ruleset, subdomains_off) {
+ /* clang-format on */
+ .flags = LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF |
+ LANDLOCK_RESTRICT_SELF_TSYNC,
+ .expected_errno = 0,
+};
+
+FIXTURE_SETUP(tsync_without_ruleset)
+{
+ disable_caps(_metadata);
+}
+
+FIXTURE_TEARDOWN(tsync_without_ruleset)
+{
+}
+
+TEST_F(tsync_without_ruleset, check)
+{
+ int ret;
+
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+
+ ret = landlock_restrict_self(-1, variant->flags);
+ if (variant->expected_errno) {
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(variant->expected_errno, errno);
+ } else {
+ EXPECT_EQ(0, ret);
+ }
+}
+
TEST_HARNESS_MAIN