 -rw-r--r--   Documentation/userspace-api/landlock.rst      |   99
 -rw-r--r--   include/uapi/linux/landlock.h                 |   55
 -rw-r--r--   samples/landlock/sandboxer.c                  |  115
 -rw-r--r--   security/landlock/Kconfig                     |    1
 -rw-r--r--   security/landlock/Makefile                    |    2
 -rw-r--r--   security/landlock/fs.c                        |  232
 -rw-r--r--   security/landlock/limits.h                    |    6
 -rw-r--r--   security/landlock/net.c                       |  200
 -rw-r--r--   security/landlock/net.h                       |   33
 -rw-r--r--   security/landlock/ruleset.c                   |  405
 -rw-r--r--   security/landlock/ruleset.h                   |  185
 -rw-r--r--   security/landlock/setup.c                     |    2
 -rw-r--r--   security/landlock/syscalls.c                  |  158
 -rw-r--r--   tools/testing/selftests/landlock/base_test.c  |    2
 -rw-r--r--   tools/testing/selftests/landlock/common.h     |   13
 -rw-r--r--   tools/testing/selftests/landlock/config       |    4
 -rw-r--r--   tools/testing/selftests/landlock/fs_test.c    |   69
 -rw-r--r--   tools/testing/selftests/landlock/net_test.c   | 1738
18 files changed, 2967 insertions(+), 352 deletions(-)
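
Taken together, this patch adds a TCP access-control API to Landlock: a ruleset can now handle LANDLOCK_ACCESS_NET_BIND_TCP and LANDLOCK_ACCESS_NET_CONNECT_TCP, and LANDLOCK_RULE_NET_PORT rules allow specific ports. As orientation before the per-file diffs, here is a minimal end-to-end sketch of the intended usage; error handling is trimmed, raw syscall(2) calls stand in for the wrappers from samples/landlock/sandboxer.c, and the helper name is illustrative, not part of the patch:

    #define _GNU_SOURCE
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Illustrative helper, not part of the patch. */
    static int restrict_to_https_client(void)
    {
            /* Denies all TCP bind/connect actions not explicitly allowed. */
            const struct landlock_ruleset_attr ruleset_attr = {
                    .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
                                          LANDLOCK_ACCESS_NET_CONNECT_TCP,
            };
            /* Allows TCP connections to port 443 only. */
            const struct landlock_net_port_attr net_port = {
                    .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP,
                    .port = 443,
            };
            const int ruleset_fd = syscall(__NR_landlock_create_ruleset,
                                           &ruleset_attr, sizeof(ruleset_attr), 0);

            if (ruleset_fd < 0)
                    return -1;
            if (syscall(__NR_landlock_add_rule, ruleset_fd,
                        LANDLOCK_RULE_NET_PORT, &net_port, 0))
                    goto err;
            /* Mandatory before landlock_restrict_self(). */
            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                    goto err;
            if (syscall(__NR_landlock_restrict_self, ruleset_fd, 0))
                    goto err;
            close(ruleset_fd);
            return 0;
    err:
            close(ruleset_fd);
            return -1;
    }

Leaving .handled_access_fs at zero is valid here: as the ruleset.c change below shows, landlock_create_ruleset() only rejects a ruleset when both access masks are empty.
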
diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst index d8cd8cd9ce25..2e3822677061 100644 --- a/Documentation/userspace-api/landlock.rst +++ b/Documentation/userspace-api/landlock.rst @@ -8,13 +8,13 @@ Landlock: unprivileged access control ===================================== :Author: Mickaël Salaün -:Date: October 2022 +:Date: October 2023 The goal of Landlock is to enable to restrict ambient rights (e.g. global -filesystem access) for a set of processes. Because Landlock is a stackable -LSM, it makes possible to create safe security sandboxes as new security layers -in addition to the existing system-wide access-controls. This kind of sandbox -is expected to help mitigate the security impact of bugs or +filesystem or network access) for a set of processes. Because Landlock +is a stackable LSM, it makes possible to create safe security sandboxes as new +security layers in addition to the existing system-wide access-controls. This +kind of sandbox is expected to help mitigate the security impact of bugs or unexpected/malicious behaviors in user space applications. Landlock empowers any process, including unprivileged ones, to securely restrict themselves. @@ -28,20 +28,34 @@ appropriately <kernel_support>`. Landlock rules ============== -A Landlock rule describes an action on an object. An object is currently a -file hierarchy, and the related filesystem actions are defined with `access -rights`_. A set of rules is aggregated in a ruleset, which can then restrict +A Landlock rule describes an action on an object which the process intends to +perform. A set of rules is aggregated in a ruleset, which can then restrict the thread enforcing it, and its future children. +The two existing types of rules are: + +Filesystem rules + For these rules, the object is a file hierarchy, + and the related filesystem actions are defined with + `filesystem access rights`. + +Network rules (since ABI v4) + For these rules, the object is a TCP port, + and the related actions are defined with `network access rights`. + Defining and enforcing a security policy ---------------------------------------- -We first need to define the ruleset that will contain our rules. For this -example, the ruleset will contain rules that only allow read actions, but write -actions will be denied. The ruleset then needs to handle both of these kind of -actions. This is required for backward and forward compatibility (i.e. the -kernel and user space may not know each other's supported restrictions), hence -the need to be explicit about the denied-by-default access rights. +We first need to define the ruleset that will contain our rules. + +For this example, the ruleset will contain rules that only allow filesystem +read actions and establish a specific TCP connection. Filesystem write +actions and other TCP actions will be denied. + +The ruleset then needs to handle both these kinds of actions. This is +required for backward and forward compatibility (i.e. the kernel and user +space may not know each other's supported restrictions), hence the need +to be explicit about the denied-by-default access rights. .. code-block:: c @@ -62,6 +76,9 @@ the need to be explicit about the denied-by-default access rights. 
LANDLOCK_ACCESS_FS_MAKE_SYM | LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_TRUNCATE, + .handled_access_net = + LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, }; Because we may not know on which kernel version an application will be @@ -70,9 +87,7 @@ should try to protect users as much as possible whatever the kernel they are using. To avoid binary enforcement (i.e. either all security features or none), we can leverage a dedicated Landlock command to get the current version of the Landlock ABI and adapt the handled accesses. Let's check if we should -remove the ``LANDLOCK_ACCESS_FS_REFER`` or ``LANDLOCK_ACCESS_FS_TRUNCATE`` -access rights, which are only supported starting with the second and third -version of the ABI. +remove access rights which are only supported in higher versions of the ABI. .. code-block:: c @@ -92,6 +107,12 @@ version of the ABI. case 2: /* Removes LANDLOCK_ACCESS_FS_TRUNCATE for ABI < 3 */ ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_TRUNCATE; + __attribute__((fallthrough)); + case 3: + /* Removes network support for ABI < 4 */ + ruleset_attr.handled_access_net &= + ~(LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP); } This enables to create an inclusive ruleset that will contain our rules. @@ -143,10 +164,23 @@ for the ruleset creation, by filtering access rights according to the Landlock ABI version. In this example, this is not required because all of the requested ``allowed_access`` rights are already available in ABI 1. -We now have a ruleset with one rule allowing read access to ``/usr`` while -denying all other handled accesses for the filesystem. The next step is to -restrict the current thread from gaining more privileges (e.g. thanks to a SUID -binary). +For network access-control, we can add a set of rules that allow to use a port +number for a specific action: HTTPS connections. + +.. code-block:: c + + struct landlock_net_port_attr net_port = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = 443, + }; + + err = landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &net_port, 0); + +The next step is to restrict the current thread from gaining more privileges +(e.g. through a SUID binary). We now have a ruleset with the first rule +allowing read access to ``/usr`` while denying all other handled accesses for +the filesystem, and a second rule allowing HTTPS connections. .. code-block:: c @@ -355,7 +389,7 @@ Access rights ------------- .. kernel-doc:: include/uapi/linux/landlock.h - :identifiers: fs_access + :identifiers: fs_access net_access Creating a new ruleset ---------------------- @@ -374,6 +408,7 @@ Extending a ruleset .. kernel-doc:: include/uapi/linux/landlock.h :identifiers: landlock_rule_type landlock_path_beneath_attr + landlock_net_port_attr Enforcing a ruleset ------------------- @@ -387,9 +422,9 @@ Current limitations Filesystem topology modification -------------------------------- -As for file renaming and linking, a sandboxed thread cannot modify its -filesystem topology, whether via :manpage:`mount(2)` or -:manpage:`pivot_root(2)`. However, :manpage:`chroot(2)` calls are not denied. +Threads sandboxed with filesystem restrictions cannot modify filesystem +topology, whether via :manpage:`mount(2)` or :manpage:`pivot_root(2)`. +However, :manpage:`chroot(2)` calls are not denied. Special filesystems ------------------- @@ -451,6 +486,14 @@ always allowed when using a kernel that only supports the first or second ABI. 
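
The compatibility switches shown earlier act on an ABI version obtained from the dedicated probe command. A minimal sketch, assuming the landlock_create_ruleset() wrapper and the ruleset_attr variable from the examples above:

    int abi;

    abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);
    if (abi < 0) {
            /* Landlock is unsupported or disabled: skip sandboxing. */
            return 0;
    }
    if (abi < 4) {
            /* ABI < 4: no network support, clear the handled net accesses. */
            ruleset_attr.handled_access_net = 0;
    }
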
Starting with the Landlock ABI version 3, it is now possible to securely control truncation thanks to the new ``LANDLOCK_ACCESS_FS_TRUNCATE`` access right. +Network support (ABI < 4) +------------------------- + +Starting with the Landlock ABI version 4, it is now possible to restrict TCP +bind and connect actions to only a set of allowed ports thanks to the new +``LANDLOCK_ACCESS_NET_BIND_TCP`` and ``LANDLOCK_ACCESS_NET_CONNECT_TCP`` +access rights. + .. _kernel_support: Kernel support @@ -469,6 +512,12 @@ still enable it by adding ``lsm=landlock,[...]`` to Documentation/admin-guide/kernel-parameters.rst thanks to the bootloader configuration. +To be able to explicitly allow TCP operations (e.g., adding a network rule with +``LANDLOCK_ACCESS_NET_BIND_TCP``), the kernel must support TCP +(``CONFIG_INET=y``). Otherwise, sys_landlock_add_rule() returns an +``EAFNOSUPPORT`` error, which can safely be ignored because this kind of TCP +operation is already not possible. + Questions and answers ===================== diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h index 81d09ef9aa50..25c8d7677539 100644 --- a/include/uapi/linux/landlock.h +++ b/include/uapi/linux/landlock.h @@ -31,6 +31,12 @@ struct landlock_ruleset_attr { * this access right. */ __u64 handled_access_fs; + /** + * @handled_access_net: Bitmask of actions (cf. `Network flags`_) + * that is handled by this ruleset and should then be forbidden if no + * rule explicitly allow them. + */ + __u64 handled_access_net; }; /* @@ -54,6 +60,11 @@ enum landlock_rule_type { * landlock_path_beneath_attr . */ LANDLOCK_RULE_PATH_BENEATH = 1, + /** + * @LANDLOCK_RULE_NET_PORT: Type of a &struct + * landlock_net_port_attr . + */ + LANDLOCK_RULE_NET_PORT, }; /** @@ -80,6 +91,31 @@ struct landlock_path_beneath_attr { } __attribute__((packed)); /** + * struct landlock_net_port_attr - Network port definition + * + * Argument of sys_landlock_add_rule(). + */ +struct landlock_net_port_attr { + /** + * @allowed_access: Bitmask of allowed access network for a port + * (cf. `Network flags`_). + */ + __u64 allowed_access; + /** + * @port: Network port in host endianness. + * + * It should be noted that port 0 passed to :manpage:`bind(2)` will + * bind to an available port from a specific port range. This can be + * configured thanks to the ``/proc/sys/net/ipv4/ip_local_port_range`` + * sysctl (also used for IPv6). A Landlock rule with port 0 and the + * ``LANDLOCK_ACCESS_NET_BIND_TCP`` right means that requesting to bind + * on port 0 is allowed and it will automatically translate to binding + * on the related port range. + */ + __u64 port; +}; + +/** * DOC: fs_access * * A set of actions on kernel objects may be defined by an attribute (e.g. @@ -189,4 +225,23 @@ struct landlock_path_beneath_attr { #define LANDLOCK_ACCESS_FS_TRUNCATE (1ULL << 14) /* clang-format on */ +/** + * DOC: net_access + * + * Network flags + * ~~~~~~~~~~~~~~~~ + * + * These flags enable to restrict a sandboxed process to a set of network + * actions. This is supported since the Landlock ABI version 4. + * + * TCP sockets with allowed actions: + * + * - %LANDLOCK_ACCESS_NET_BIND_TCP: Bind a TCP socket to a local port. + * - %LANDLOCK_ACCESS_NET_CONNECT_TCP: Connect an active TCP socket to + * a remote port. 
+ */ +/* clang-format off */ +#define LANDLOCK_ACCESS_NET_BIND_TCP (1ULL << 0) +#define LANDLOCK_ACCESS_NET_CONNECT_TCP (1ULL << 1) +/* clang-format on */ #endif /* _UAPI_LINUX_LANDLOCK_H */ diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c index e2056c8b902c..08596c0ef070 100644 --- a/samples/landlock/sandboxer.c +++ b/samples/landlock/sandboxer.c @@ -8,6 +8,8 @@ */ #define _GNU_SOURCE +#define __SANE_USERSPACE_TYPES__ +#include <arpa/inet.h> #include <errno.h> #include <fcntl.h> #include <linux/landlock.h> @@ -51,7 +53,9 @@ static inline int landlock_restrict_self(const int ruleset_fd, #define ENV_FS_RO_NAME "LL_FS_RO" #define ENV_FS_RW_NAME "LL_FS_RW" -#define ENV_PATH_TOKEN ":" +#define ENV_TCP_BIND_NAME "LL_TCP_BIND" +#define ENV_TCP_CONNECT_NAME "LL_TCP_CONNECT" +#define ENV_DELIMITER ":" static int parse_path(char *env_path, const char ***const path_list) { @@ -60,13 +64,13 @@ static int parse_path(char *env_path, const char ***const path_list) if (env_path) { num_paths++; for (i = 0; env_path[i]; i++) { - if (env_path[i] == ENV_PATH_TOKEN[0]) + if (env_path[i] == ENV_DELIMITER[0]) num_paths++; } } *path_list = malloc(num_paths * sizeof(**path_list)); for (i = 0; i < num_paths; i++) - (*path_list)[i] = strsep(&env_path, ENV_PATH_TOKEN); + (*path_list)[i] = strsep(&env_path, ENV_DELIMITER); return num_paths; } @@ -81,8 +85,8 @@ static int parse_path(char *env_path, const char ***const path_list) /* clang-format on */ -static int populate_ruleset(const char *const env_var, const int ruleset_fd, - const __u64 allowed_access) +static int populate_ruleset_fs(const char *const env_var, const int ruleset_fd, + const __u64 allowed_access) { int num_paths, i, ret = 1; char *env_path_name; @@ -143,6 +147,39 @@ out_free_name: return ret; } +static int populate_ruleset_net(const char *const env_var, const int ruleset_fd, + const __u64 allowed_access) +{ + int ret = 1; + char *env_port_name, *strport; + struct landlock_net_port_attr net_port = { + .allowed_access = allowed_access, + .port = 0, + }; + + env_port_name = getenv(env_var); + if (!env_port_name) + return 0; + env_port_name = strdup(env_port_name); + unsetenv(env_var); + + while ((strport = strsep(&env_port_name, ENV_DELIMITER))) { + net_port.port = atoi(strport); + if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &net_port, 0)) { + fprintf(stderr, + "Failed to update the ruleset with port \"%llu\": %s\n", + net_port.port, strerror(errno)); + goto out_free_name; + } + } + ret = 0; + +out_free_name: + free(env_port_name); + return ret; +} + /* clang-format off */ #define ACCESS_FS_ROUGHLY_READ ( \ @@ -166,39 +203,58 @@ out_free_name: /* clang-format on */ -#define LANDLOCK_ABI_LAST 3 +#define LANDLOCK_ABI_LAST 4 int main(const int argc, char *const argv[], char *const *const envp) { const char *cmd_path; char *const *cmd_argv; int ruleset_fd, abi; + char *env_port_name; __u64 access_fs_ro = ACCESS_FS_ROUGHLY_READ, access_fs_rw = ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE; + struct landlock_ruleset_attr ruleset_attr = { .handled_access_fs = access_fs_rw, + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, }; if (argc < 2) { fprintf(stderr, - "usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n", - ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]); + "usage: %s=\"...\" %s=\"...\" %s=\"...\" %s=\"...\"%s " + "<cmd> [args]...\n\n", + ENV_FS_RO_NAME, ENV_FS_RW_NAME, ENV_TCP_BIND_NAME, + ENV_TCP_CONNECT_NAME, argv[0]); fprintf(stderr, "Launch a command in a restricted 
environment.\n\n"); - fprintf(stderr, "Environment variables containing paths, " - "each separated by a colon:\n"); + fprintf(stderr, + "Environment variables containing paths and ports " + "each separated by a colon:\n"); fprintf(stderr, "* %s: list of paths allowed to be used in a read-only way.\n", ENV_FS_RO_NAME); fprintf(stderr, - "* %s: list of paths allowed to be used in a read-write way.\n", + "* %s: list of paths allowed to be used in a read-write way.\n\n", ENV_FS_RW_NAME); fprintf(stderr, + "Environment variables containing ports are optional " + "and could be skipped.\n"); + fprintf(stderr, + "* %s: list of ports allowed to bind (server).\n", + ENV_TCP_BIND_NAME); + fprintf(stderr, + "* %s: list of ports allowed to connect (client).\n", + ENV_TCP_CONNECT_NAME); + fprintf(stderr, "\nexample:\n" "%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" " "%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" " + "%s=\"9418\" " + "%s=\"80:443\" " "%s bash -i\n\n", - ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]); + ENV_FS_RO_NAME, ENV_FS_RW_NAME, ENV_TCP_BIND_NAME, + ENV_TCP_CONNECT_NAME, argv[0]); fprintf(stderr, "This sandboxer can use Landlock features " "up to ABI version %d.\n", @@ -255,7 +311,12 @@ int main(const int argc, char *const argv[], char *const *const envp) case 2: /* Removes LANDLOCK_ACCESS_FS_TRUNCATE for ABI < 3 */ ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_TRUNCATE; - + __attribute__((fallthrough)); + case 3: + /* Removes network support for ABI < 4 */ + ruleset_attr.handled_access_net &= + ~(LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP); fprintf(stderr, "Hint: You should update the running kernel " "to leverage Landlock features " @@ -274,18 +335,42 @@ int main(const int argc, char *const argv[], char *const *const envp) access_fs_ro &= ruleset_attr.handled_access_fs; access_fs_rw &= ruleset_attr.handled_access_fs; + /* Removes bind access attribute if not supported by a user. */ + env_port_name = getenv(ENV_TCP_BIND_NAME); + if (!env_port_name) { + ruleset_attr.handled_access_net &= + ~LANDLOCK_ACCESS_NET_BIND_TCP; + } + /* Removes connect access attribute if not supported by a user. 
*/ + env_port_name = getenv(ENV_TCP_CONNECT_NAME); + if (!env_port_name) { + ruleset_attr.handled_access_net &= + ~LANDLOCK_ACCESS_NET_CONNECT_TCP; + } + ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); if (ruleset_fd < 0) { perror("Failed to create a ruleset"); return 1; } - if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd, access_fs_ro)) { + + if (populate_ruleset_fs(ENV_FS_RO_NAME, ruleset_fd, access_fs_ro)) { + goto err_close_ruleset; + } + if (populate_ruleset_fs(ENV_FS_RW_NAME, ruleset_fd, access_fs_rw)) { goto err_close_ruleset; } - if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd, access_fs_rw)) { + + if (populate_ruleset_net(ENV_TCP_BIND_NAME, ruleset_fd, + LANDLOCK_ACCESS_NET_BIND_TCP)) { + goto err_close_ruleset; + } + if (populate_ruleset_net(ENV_TCP_CONNECT_NAME, ruleset_fd, + LANDLOCK_ACCESS_NET_CONNECT_TCP)) { goto err_close_ruleset; } + if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { perror("Failed to restrict privileges"); goto err_close_ruleset; diff --git a/security/landlock/Kconfig b/security/landlock/Kconfig index c1e862a38410..c4bf0d5eff39 100644 --- a/security/landlock/Kconfig +++ b/security/landlock/Kconfig @@ -3,6 +3,7 @@ config SECURITY_LANDLOCK bool "Landlock support" depends on SECURITY + select SECURITY_NETWORK select SECURITY_PATH help Landlock is a sandboxing mechanism that enables processes to restrict diff --git a/security/landlock/Makefile b/security/landlock/Makefile index 7bbd2f413b3e..c2e116f2a299 100644 --- a/security/landlock/Makefile +++ b/security/landlock/Makefile @@ -2,3 +2,5 @@ obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o landlock-y := setup.o syscalls.o object.o ruleset.o \ cred.o ptrace.o fs.o + +landlock-$(CONFIG_INET) += net.o diff --git a/security/landlock/fs.c b/security/landlock/fs.c index 1c0c198f6fdb..bc7c126deea2 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -151,16 +151,6 @@ retry: /* clang-format on */ /* - * All access rights that are denied by default whether they are handled or not - * by a ruleset/layer. This must be ORed with all ruleset->fs_access_masks[] - * entries when we need to get the absolute handled access masks. - */ -/* clang-format off */ -#define ACCESS_INITIALLY_DENIED ( \ - LANDLOCK_ACCESS_FS_REFER) -/* clang-format on */ - -/* * @path: Should have been checked by get_path_from_fd(). */ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, @@ -168,7 +158,9 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, access_mask_t access_rights) { int err; - struct landlock_object *object; + struct landlock_id id = { + .type = LANDLOCK_KEY_INODE, + }; /* Files only get access rights that make sense. */ if (!d_is_dir(path->dentry) && @@ -178,20 +170,19 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, return -EINVAL; /* Transforms relative access rights to absolute ones. 
*/ - access_rights |= - LANDLOCK_MASK_ACCESS_FS & - ~(ruleset->fs_access_masks[0] | ACCESS_INITIALLY_DENIED); - object = get_inode_object(d_backing_inode(path->dentry)); - if (IS_ERR(object)) - return PTR_ERR(object); + access_rights |= LANDLOCK_MASK_ACCESS_FS & + ~landlock_get_fs_access_mask(ruleset, 0); + id.key.object = get_inode_object(d_backing_inode(path->dentry)); + if (IS_ERR(id.key.object)) + return PTR_ERR(id.key.object); mutex_lock(&ruleset->lock); - err = landlock_insert_rule(ruleset, object, access_rights); + err = landlock_insert_rule(ruleset, id, access_rights); mutex_unlock(&ruleset->lock); /* * No need to check for an error because landlock_insert_rule() * increments the refcount for the new object if needed. */ - landlock_put_object(object); + landlock_put_object(id.key.object); return err; } @@ -208,6 +199,9 @@ find_rule(const struct landlock_ruleset *const domain, { const struct landlock_rule *rule; const struct inode *inode; + struct landlock_id id = { + .type = LANDLOCK_KEY_INODE, + }; /* Ignores nonexistent leafs. */ if (d_is_negative(dentry)) @@ -215,67 +209,13 @@ find_rule(const struct landlock_ruleset *const domain, inode = d_backing_inode(dentry); rcu_read_lock(); - rule = landlock_find_rule( - domain, rcu_dereference(landlock_inode(inode)->object)); + id.key.object = rcu_dereference(landlock_inode(inode)->object); + rule = landlock_find_rule(domain, id); rcu_read_unlock(); return rule; } /* - * @layer_masks is read and may be updated according to the access request and - * the matching rule. - * - * Returns true if the request is allowed (i.e. relevant layer masks for the - * request are empty). - */ -static inline bool -unmask_layers(const struct landlock_rule *const rule, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) -{ - size_t layer_level; - - if (!access_request || !layer_masks) - return true; - if (!rule) - return false; - - /* - * An access is granted if, for each policy layer, at least one rule - * encountered on the pathwalk grants the requested access, - * regardless of its position in the layer stack. We must then check - * the remaining layers for each inode, from the first added layer to - * the last one. When there is multiple requested accesses, for each - * policy layer, the full set of requested accesses may not be granted - * by only one rule, but by the union (binary OR) of multiple rules. - * E.g. /a/b <execute> + /a <read> => /a/b <execute + read> - */ - for (layer_level = 0; layer_level < rule->num_layers; layer_level++) { - const struct landlock_layer *const layer = - &rule->layers[layer_level]; - const layer_mask_t layer_bit = BIT_ULL(layer->level - 1); - const unsigned long access_req = access_request; - unsigned long access_bit; - bool is_empty; - - /* - * Records in @layer_masks which layer grants access to each - * requested access. - */ - is_empty = true; - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { - if (layer->access & BIT_ULL(access_bit)) - (*layer_masks)[access_bit] &= ~layer_bit; - is_empty = is_empty && !(*layer_masks)[access_bit]; - } - if (is_empty) - return true; - } - return false; -} - -/* * Allows access to pseudo filesystems that will never be mountable (e.g. 
* sockfs, pipefs), but can still be reachable through * /proc/<pid>/fd/<file-descriptor> @@ -287,64 +227,35 @@ static inline bool is_nouser_or_private(const struct dentry *dentry) unlikely(IS_PRIVATE(d_backing_inode(dentry)))); } -static inline access_mask_t -get_handled_accesses(const struct landlock_ruleset *const domain) +static access_mask_t +get_raw_handled_fs_accesses(const struct landlock_ruleset *const domain) { - access_mask_t access_dom = ACCESS_INITIALLY_DENIED; + access_mask_t access_dom = 0; size_t layer_level; for (layer_level = 0; layer_level < domain->num_layers; layer_level++) - access_dom |= domain->fs_access_masks[layer_level]; - return access_dom & LANDLOCK_MASK_ACCESS_FS; + access_dom |= + landlock_get_raw_fs_access_mask(domain, layer_level); + return access_dom; } -/** - * init_layer_masks - Initialize layer masks from an access request - * - * Populates @layer_masks such that for each access right in @access_request, - * the bits for all the layers are set where this access right is handled. - * - * @domain: The domain that defines the current restrictions. - * @access_request: The requested access rights to check. - * @layer_masks: The layer masks to populate. - * - * Returns: An access mask where each access right bit is set which is handled - * in any of the active layers in @domain. - */ -static inline access_mask_t -init_layer_masks(const struct landlock_ruleset *const domain, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) +static access_mask_t +get_handled_fs_accesses(const struct landlock_ruleset *const domain) { - access_mask_t handled_accesses = 0; - size_t layer_level; + /* Handles all initially denied by default access rights. */ + return get_raw_handled_fs_accesses(domain) | + LANDLOCK_ACCESS_FS_INITIALLY_DENIED; +} - memset(layer_masks, 0, sizeof(*layer_masks)); - /* An empty access request can happen because of O_WRONLY | O_RDWR. */ - if (!access_request) - return 0; +static const struct landlock_ruleset *get_current_fs_domain(void) +{ + const struct landlock_ruleset *const dom = + landlock_get_current_domain(); - /* Saves all handled accesses per layer. */ - for (layer_level = 0; layer_level < domain->num_layers; layer_level++) { - const unsigned long access_req = access_request; - unsigned long access_bit; + if (!dom || !get_raw_handled_fs_accesses(dom)) + return NULL; - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { - /* - * Artificially handles all initially denied by default - * access rights. - */ - if (BIT_ULL(access_bit) & - (domain->fs_access_masks[layer_level] | - ACCESS_INITIALLY_DENIED)) { - (*layer_masks)[access_bit] |= - BIT_ULL(layer_level); - handled_accesses |= BIT_ULL(access_bit); - } - } - } - return handled_accesses; + return dom; } /* @@ -519,7 +430,7 @@ static bool is_access_to_paths_allowed( * a superset of the meaningful requested accesses). 
*/ access_masked_parent1 = access_masked_parent2 = - get_handled_accesses(domain); + get_handled_fs_accesses(domain); is_dom_check = true; } else { if (WARN_ON_ONCE(dentry_child1 || dentry_child2)) @@ -531,18 +442,22 @@ static bool is_access_to_paths_allowed( } if (unlikely(dentry_child1)) { - unmask_layers(find_rule(domain, dentry_child1), - init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, - &_layer_masks_child1), - &_layer_masks_child1); + landlock_unmask_layers( + find_rule(domain, dentry_child1), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, + &_layer_masks_child1, LANDLOCK_KEY_INODE), + &_layer_masks_child1, ARRAY_SIZE(_layer_masks_child1)); layer_masks_child1 = &_layer_masks_child1; child1_is_directory = d_is_dir(dentry_child1); } if (unlikely(dentry_child2)) { - unmask_layers(find_rule(domain, dentry_child2), - init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, - &_layer_masks_child2), - &_layer_masks_child2); + landlock_unmask_layers( + find_rule(domain, dentry_child2), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, + &_layer_masks_child2, LANDLOCK_KEY_INODE), + &_layer_masks_child2, ARRAY_SIZE(_layer_masks_child2)); layer_masks_child2 = &_layer_masks_child2; child2_is_directory = d_is_dir(dentry_child2); } @@ -594,15 +509,16 @@ static bool is_access_to_paths_allowed( } rule = find_rule(domain, walker_path.dentry); - allowed_parent1 = unmask_layers(rule, access_masked_parent1, - layer_masks_parent1); - allowed_parent2 = unmask_layers(rule, access_masked_parent2, - layer_masks_parent2); + allowed_parent1 = landlock_unmask_layers( + rule, access_masked_parent1, layer_masks_parent1, + ARRAY_SIZE(*layer_masks_parent1)); + allowed_parent2 = landlock_unmask_layers( + rule, access_masked_parent2, layer_masks_parent2, + ARRAY_SIZE(*layer_masks_parent2)); /* Stops when a rule from each layer grants access. */ if (allowed_parent1 && allowed_parent2) break; - jump_up: if (walker_path.dentry == walker_path.mnt->mnt_root) { if (follow_up(&walker_path)) { @@ -641,7 +557,8 @@ static inline int check_access_path(const struct landlock_ruleset *const domain, { layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {}; - access_request = init_layer_masks(domain, access_request, &layer_masks); + access_request = landlock_init_layer_masks( + domain, access_request, &layer_masks, LANDLOCK_KEY_INODE); if (is_access_to_paths_allowed(domain, path, access_request, &layer_masks, NULL, 0, NULL, NULL)) return 0; @@ -651,8 +568,7 @@ static inline int check_access_path(const struct landlock_ruleset *const domain, static inline int current_check_access_path(const struct path *const path, const access_mask_t access_request) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; @@ -727,16 +643,18 @@ static bool collect_domain_accesses( if (is_nouser_or_private(dir)) return true; - access_dom = init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, - layer_masks_dom); + access_dom = landlock_init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, + layer_masks_dom, + LANDLOCK_KEY_INODE); dget(dir); while (true) { struct dentry *parent_dentry; /* Gets all layers allowing all domain accesses. */ - if (unmask_layers(find_rule(domain, dir), access_dom, - layer_masks_dom)) { + if (landlock_unmask_layers(find_rule(domain, dir), access_dom, + layer_masks_dom, + ARRAY_SIZE(*layer_masks_dom))) { /* * Stops when all handled accesses are allowed by at * least one rule in each layer. 
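
The pathwalk logic above and the landlock_unmask_layers()/landlock_init_layer_masks() helpers it now calls all revolve around the same bookkeeping: for each requested access right, a bitmask records which layers still deny it, and a matching rule clears the bit of its own layer. A standalone model of that core loop, with simplified types rather than the kernel's:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t layer_mask_t;

    /*
     * masks[i] holds the set of layers still denying requested access
     * bit i.  A rule granting rule_access at layer `level` clears that
     * layer's bit; all-zero masks mean the request is fully allowed.
     */
    static bool model_unmask(layer_mask_t *masks, size_t num_access,
                             uint64_t rule_access, unsigned int level)
    {
            bool all_empty = true;
            size_t i;

            for (i = 0; i < num_access; i++) {
                    if (rule_access & ((uint64_t)1 << i))
                            masks[i] &= ~(layer_mask_t)(1U << (level - 1));
                    if (masks[i])
                            all_empty = false;
            }
            return all_empty;
    }
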
@@ -815,8 +733,7 @@ static int current_check_refer_path(struct dentry *const old_dentry, struct dentry *const new_dentry, const bool removable, const bool exchange) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); bool allow_parent1, allow_parent2; access_mask_t access_request_parent1, access_request_parent2; struct path mnt_dir; @@ -850,9 +767,9 @@ static int current_check_refer_path(struct dentry *const old_dentry, * The LANDLOCK_ACCESS_FS_REFER access right is not required * for same-directory referer (i.e. no reparenting). */ - access_request_parent1 = init_layer_masks( + access_request_parent1 = landlock_init_layer_masks( dom, access_request_parent1 | access_request_parent2, - &layer_masks_parent1); + &layer_masks_parent1, LANDLOCK_KEY_INODE); if (is_access_to_paths_allowed( dom, new_dir, access_request_parent1, &layer_masks_parent1, NULL, 0, NULL, NULL)) @@ -1050,7 +967,7 @@ static int hook_sb_mount(const char *const dev_name, const struct path *const path, const char *const type, const unsigned long flags, void *const data) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1058,7 +975,7 @@ static int hook_sb_mount(const char *const dev_name, static int hook_move_mount(const struct path *const from_path, const struct path *const to_path) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1069,14 +986,14 @@ static int hook_move_mount(const struct path *const from_path, */ static int hook_sb_umount(struct vfsmount *const mnt, const int flags) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1092,7 +1009,7 @@ static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts) static int hook_sb_pivotroot(const struct path *const old_path, const struct path *const new_path) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1128,8 +1045,7 @@ static int hook_path_mknod(const struct path *const dir, struct dentry *const dentry, const umode_t mode, const unsigned int dev) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; @@ -1208,8 +1124,7 @@ static int hook_file_open(struct file *const file) layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {}; access_mask_t open_access_request, full_access_request, allowed_access; const access_mask_t optional_access = LANDLOCK_ACCESS_FS_TRUNCATE; - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; @@ -1229,7 +1144,8 @@ static int hook_file_open(struct file *const file) if (is_access_to_paths_allowed( dom, &file->f_path, - init_layer_masks(dom, full_access_request, &layer_masks), + landlock_init_layer_masks(dom, full_access_request, + &layer_masks, LANDLOCK_KEY_INODE), &layer_masks, NULL, 0, NULL, NULL)) { allowed_access = full_access_request; } else { diff --git a/security/landlock/limits.h b/security/landlock/limits.h index 82288f0e9e5e..93c9c6f91556 100644 --- a/security/landlock/limits.h +++ b/security/landlock/limits.h @@ -21,6 
+21,12 @@ #define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_TRUNCATE #define LANDLOCK_MASK_ACCESS_FS ((LANDLOCK_LAST_ACCESS_FS << 1) - 1) #define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS) +#define LANDLOCK_SHIFT_ACCESS_FS 0 + +#define LANDLOCK_LAST_ACCESS_NET LANDLOCK_ACCESS_NET_CONNECT_TCP +#define LANDLOCK_MASK_ACCESS_NET ((LANDLOCK_LAST_ACCESS_NET << 1) - 1) +#define LANDLOCK_NUM_ACCESS_NET __const_hweight64(LANDLOCK_MASK_ACCESS_NET) +#define LANDLOCK_SHIFT_ACCESS_NET LANDLOCK_NUM_ACCESS_FS /* clang-format on */ diff --git a/security/landlock/net.c b/security/landlock/net.c new file mode 100644 index 000000000000..aaa92c2b1f08 --- /dev/null +++ b/security/landlock/net.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock LSM - Network management and hooks + * + * Copyright © 2022-2023 Huawei Tech. Co., Ltd. + * Copyright © 2022-2023 Microsoft Corporation + */ + +#include <linux/in.h> +#include <linux/net.h> +#include <linux/socket.h> +#include <net/ipv6.h> + +#include "common.h" +#include "cred.h" +#include "limits.h" +#include "net.h" +#include "ruleset.h" + +int landlock_append_net_rule(struct landlock_ruleset *const ruleset, + const u16 port, access_mask_t access_rights) +{ + int err; + const struct landlock_id id = { + .key.data = (__force uintptr_t)htons(port), + .type = LANDLOCK_KEY_NET_PORT, + }; + + BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data)); + + /* Transforms relative access rights to absolute ones. */ + access_rights |= LANDLOCK_MASK_ACCESS_NET & + ~landlock_get_net_access_mask(ruleset, 0); + + mutex_lock(&ruleset->lock); + err = landlock_insert_rule(ruleset, id, access_rights); + mutex_unlock(&ruleset->lock); + + return err; +} + +static access_mask_t +get_raw_handled_net_accesses(const struct landlock_ruleset *const domain) +{ + access_mask_t access_dom = 0; + size_t layer_level; + + for (layer_level = 0; layer_level < domain->num_layers; layer_level++) + access_dom |= landlock_get_net_access_mask(domain, layer_level); + return access_dom; +} + +static const struct landlock_ruleset *get_current_net_domain(void) +{ + const struct landlock_ruleset *const dom = + landlock_get_current_domain(); + + if (!dom || !get_raw_handled_net_accesses(dom)) + return NULL; + + return dom; +} + +static int current_check_access_socket(struct socket *const sock, + struct sockaddr *const address, + const int addrlen, + const access_mask_t access_request) +{ + __be16 port; + layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_NET] = {}; + const struct landlock_rule *rule; + access_mask_t handled_access; + struct landlock_id id = { + .type = LANDLOCK_KEY_NET_PORT, + }; + const struct landlock_ruleset *const dom = get_current_net_domain(); + + if (!dom) + return 0; + if (WARN_ON_ONCE(dom->num_layers < 1)) + return -EACCES; + + /* Checks if it's a (potential) TCP socket. */ + if (sock->type != SOCK_STREAM) + return 0; + + /* Checks for minimal header length to safely read sa_family. */ + if (addrlen < offsetofend(typeof(*address), sa_family)) + return -EINVAL; + + switch (address->sa_family) { + case AF_UNSPEC: + case AF_INET: + if (addrlen < sizeof(struct sockaddr_in)) + return -EINVAL; + port = ((struct sockaddr_in *)address)->sin_port; + break; + +#if IS_ENABLED(CONFIG_IPV6) + case AF_INET6: + if (addrlen < SIN6_LEN_RFC2133) + return -EINVAL; + port = ((struct sockaddr_in6 *)address)->sin6_port; + break; +#endif /* IS_ENABLED(CONFIG_IPV6) */ + + default: + return 0; + } + + /* Specific AF_UNSPEC handling. 
*/ + if (address->sa_family == AF_UNSPEC) { + /* + * Connecting to an address with AF_UNSPEC dissolves the TCP + * association, which have the same effect as closing the + * connection while retaining the socket object (i.e., the file + * descriptor). As for dropping privileges, closing + * connections is always allowed. + * + * For a TCP access control system, this request is legitimate. + * Let the network stack handle potential inconsistencies and + * return -EINVAL if needed. + */ + if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP) + return 0; + + /* + * For compatibility reason, accept AF_UNSPEC for bind + * accesses (mapped to AF_INET) only if the address is + * INADDR_ANY (cf. __inet_bind). Checking the address is + * required to not wrongfully return -EACCES instead of + * -EAFNOSUPPORT. + * + * We could return 0 and let the network stack handle these + * checks, but it is safer to return a proper error and test + * consistency thanks to kselftest. + */ + if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) { + /* addrlen has already been checked for AF_UNSPEC. */ + const struct sockaddr_in *const sockaddr = + (struct sockaddr_in *)address; + + if (sock->sk->__sk_common.skc_family != AF_INET) + return -EINVAL; + + if (sockaddr->sin_addr.s_addr != htonl(INADDR_ANY)) + return -EAFNOSUPPORT; + } + } else { + /* + * Checks sa_family consistency to not wrongfully return + * -EACCES instead of -EINVAL. Valid sa_family changes are + * only (from AF_INET or AF_INET6) to AF_UNSPEC. + * + * We could return 0 and let the network stack handle this + * check, but it is safer to return a proper error and test + * consistency thanks to kselftest. + */ + if (address->sa_family != sock->sk->__sk_common.skc_family) + return -EINVAL; + } + + id.key.data = (__force uintptr_t)port; + BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data)); + + rule = landlock_find_rule(dom, id); + handled_access = landlock_init_layer_masks( + dom, access_request, &layer_masks, LANDLOCK_KEY_NET_PORT); + if (landlock_unmask_layers(rule, handled_access, &layer_masks, + ARRAY_SIZE(layer_masks))) + return 0; + + return -EACCES; +} + +static int hook_socket_bind(struct socket *const sock, + struct sockaddr *const address, const int addrlen) +{ + return current_check_access_socket(sock, address, addrlen, + LANDLOCK_ACCESS_NET_BIND_TCP); +} + +static int hook_socket_connect(struct socket *const sock, + struct sockaddr *const address, + const int addrlen) +{ + return current_check_access_socket(sock, address, addrlen, + LANDLOCK_ACCESS_NET_CONNECT_TCP); +} + +static struct security_hook_list landlock_hooks[] __ro_after_init = { + LSM_HOOK_INIT(socket_bind, hook_socket_bind), + LSM_HOOK_INIT(socket_connect, hook_socket_connect), +}; + +__init void landlock_add_net_hooks(void) +{ + security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks), + LANDLOCK_NAME); +} diff --git a/security/landlock/net.h b/security/landlock/net.h new file mode 100644 index 000000000000..09960c237a13 --- /dev/null +++ b/security/landlock/net.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Landlock LSM - Network management and hooks + * + * Copyright © 2022-2023 Huawei Tech. Co., Ltd. 
+ */ + +#ifndef _SECURITY_LANDLOCK_NET_H +#define _SECURITY_LANDLOCK_NET_H + +#include "common.h" +#include "ruleset.h" +#include "setup.h" + +#if IS_ENABLED(CONFIG_INET) +__init void landlock_add_net_hooks(void); + +int landlock_append_net_rule(struct landlock_ruleset *const ruleset, + const u16 port, access_mask_t access_rights); +#else /* IS_ENABLED(CONFIG_INET) */ +static inline void landlock_add_net_hooks(void) +{ +} + +static inline int +landlock_append_net_rule(struct landlock_ruleset *const ruleset, const u16 port, + access_mask_t access_rights) +{ + return -EAFNOSUPPORT; +} +#endif /* IS_ENABLED(CONFIG_INET) */ + +#endif /* _SECURITY_LANDLOCK_NET_H */ diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index 996484f98bfd..ffedc99f2b68 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -29,33 +29,43 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) struct landlock_ruleset *new_ruleset; new_ruleset = - kzalloc(struct_size(new_ruleset, fs_access_masks, num_layers), + kzalloc(struct_size(new_ruleset, access_masks, num_layers), GFP_KERNEL_ACCOUNT); if (!new_ruleset) return ERR_PTR(-ENOMEM); refcount_set(&new_ruleset->usage, 1); mutex_init(&new_ruleset->lock); - new_ruleset->root = RB_ROOT; + new_ruleset->root_inode = RB_ROOT; + +#if IS_ENABLED(CONFIG_INET) + new_ruleset->root_net_port = RB_ROOT; +#endif /* IS_ENABLED(CONFIG_INET) */ + new_ruleset->num_layers = num_layers; /* * hierarchy = NULL * num_rules = 0 - * fs_access_masks[] = 0 + * access_masks[] = 0 */ return new_ruleset; } struct landlock_ruleset * -landlock_create_ruleset(const access_mask_t fs_access_mask) +landlock_create_ruleset(const access_mask_t fs_access_mask, + const access_mask_t net_access_mask) { struct landlock_ruleset *new_ruleset; /* Informs about useless ruleset. */ - if (!fs_access_mask) + if (!fs_access_mask && !net_access_mask) return ERR_PTR(-ENOMSG); new_ruleset = create_ruleset(1); - if (!IS_ERR(new_ruleset)) - new_ruleset->fs_access_masks[0] = fs_access_mask; + if (IS_ERR(new_ruleset)) + return new_ruleset; + if (fs_access_mask) + landlock_add_fs_access_mask(new_ruleset, fs_access_mask, 0); + if (net_access_mask) + landlock_add_net_access_mask(new_ruleset, net_access_mask, 0); return new_ruleset; } @@ -68,8 +78,25 @@ static void build_check_rule(void) BUILD_BUG_ON(rule.num_layers < LANDLOCK_MAX_NUM_LAYERS); } +static bool is_object_pointer(const enum landlock_key_type key_type) +{ + switch (key_type) { + case LANDLOCK_KEY_INODE: + return true; + +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + return false; +#endif /* IS_ENABLED(CONFIG_INET) */ + + default: + WARN_ON_ONCE(1); + return false; + } +} + static struct landlock_rule * -create_rule(struct landlock_object *const object, +create_rule(const struct landlock_id id, const struct landlock_layer (*const layers)[], const u32 num_layers, const struct landlock_layer *const new_layer) { @@ -90,8 +117,13 @@ create_rule(struct landlock_object *const object, if (!new_rule) return ERR_PTR(-ENOMEM); RB_CLEAR_NODE(&new_rule->node); - landlock_get_object(object); - new_rule->object = object; + if (is_object_pointer(id.type)) { + /* This should be catched by insert_rule(). */ + WARN_ON_ONCE(!id.key.object); + landlock_get_object(id.key.object); + } + + new_rule->key = id.key; new_rule->num_layers = new_num_layers; /* Copies the original layer stack. 
*/ memcpy(new_rule->layers, layers, @@ -102,12 +134,32 @@ create_rule(struct landlock_object *const object, return new_rule; } -static void free_rule(struct landlock_rule *const rule) +static struct rb_root *get_root(struct landlock_ruleset *const ruleset, + const enum landlock_key_type key_type) +{ + switch (key_type) { + case LANDLOCK_KEY_INODE: + return &ruleset->root_inode; + +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + return &ruleset->root_net_port; +#endif /* IS_ENABLED(CONFIG_INET) */ + + default: + WARN_ON_ONCE(1); + return ERR_PTR(-EINVAL); + } +} + +static void free_rule(struct landlock_rule *const rule, + const enum landlock_key_type key_type) { might_sleep(); if (!rule) return; - landlock_put_object(rule->object); + if (is_object_pointer(key_type)) + landlock_put_object(rule->key.object); kfree(rule); } @@ -117,19 +169,21 @@ static void build_check_ruleset(void) .num_rules = ~0, .num_layers = ~0, }; - typeof(ruleset.fs_access_masks[0]) fs_access_mask = ~0; + typeof(ruleset.access_masks[0]) access_masks = ~0; BUILD_BUG_ON(ruleset.num_rules < LANDLOCK_MAX_NUM_RULES); BUILD_BUG_ON(ruleset.num_layers < LANDLOCK_MAX_NUM_LAYERS); - BUILD_BUG_ON(fs_access_mask < LANDLOCK_MASK_ACCESS_FS); + BUILD_BUG_ON(access_masks < + ((LANDLOCK_MASK_ACCESS_FS << LANDLOCK_SHIFT_ACCESS_FS) | + (LANDLOCK_MASK_ACCESS_NET << LANDLOCK_SHIFT_ACCESS_NET))); } /** * insert_rule - Create and insert a rule in a ruleset * * @ruleset: The ruleset to be updated. - * @object: The object to build the new rule with. The underlying kernel - * object must be held by the caller. + * @id: The ID to build the new rule with. The underlying kernel object, if + * any, must be held by the caller. * @layers: One or multiple layers to be copied into the new rule. * @num_layers: The number of @layers entries. * @@ -143,26 +197,35 @@ static void build_check_ruleset(void) * access rights. */ static int insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const struct landlock_layer (*const layers)[], - size_t num_layers) + const size_t num_layers) { struct rb_node **walker_node; struct rb_node *parent_node = NULL; struct landlock_rule *new_rule; + struct rb_root *root; might_sleep(); lockdep_assert_held(&ruleset->lock); - if (WARN_ON_ONCE(!object || !layers)) + if (WARN_ON_ONCE(!layers)) return -ENOENT; - walker_node = &(ruleset->root.rb_node); + + if (is_object_pointer(id.type) && WARN_ON_ONCE(!id.key.object)) + return -ENOENT; + + root = get_root(ruleset, id.type); + if (IS_ERR(root)) + return PTR_ERR(root); + + walker_node = &root->rb_node; while (*walker_node) { struct landlock_rule *const this = rb_entry(*walker_node, struct landlock_rule, node); - if (this->object != object) { + if (this->key.data != id.key.data) { parent_node = *walker_node; - if (this->object < object) + if (this->key.data < id.key.data) walker_node = &((*walker_node)->rb_right); else walker_node = &((*walker_node)->rb_left); @@ -194,24 +257,24 @@ static int insert_rule(struct landlock_ruleset *const ruleset, * Intersects access rights when it is a merge between a * ruleset and a domain. 
*/ - new_rule = create_rule(object, &this->layers, this->num_layers, + new_rule = create_rule(id, &this->layers, this->num_layers, &(*layers)[0]); if (IS_ERR(new_rule)) return PTR_ERR(new_rule); - rb_replace_node(&this->node, &new_rule->node, &ruleset->root); - free_rule(this); + rb_replace_node(&this->node, &new_rule->node, root); + free_rule(this, id.type); return 0; } - /* There is no match for @object. */ + /* There is no match for @id. */ build_check_ruleset(); if (ruleset->num_rules >= LANDLOCK_MAX_NUM_RULES) return -E2BIG; - new_rule = create_rule(object, layers, num_layers, NULL); + new_rule = create_rule(id, layers, num_layers, NULL); if (IS_ERR(new_rule)) return PTR_ERR(new_rule); rb_link_node(&new_rule->node, parent_node, walker_node); - rb_insert_color(&new_rule->node, &ruleset->root); + rb_insert_color(&new_rule->node, root); ruleset->num_rules++; return 0; } @@ -229,7 +292,7 @@ static void build_check_layer(void) /* @ruleset must be locked by the caller. */ int landlock_insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const access_mask_t access) { struct landlock_layer layers[] = { { @@ -239,7 +302,7 @@ int landlock_insert_rule(struct landlock_ruleset *const ruleset, } }; build_check_layer(); - return insert_rule(ruleset, object, &layers, ARRAY_SIZE(layers)); + return insert_rule(ruleset, id, &layers, ARRAY_SIZE(layers)); } static inline void get_hierarchy(struct landlock_hierarchy *const hierarchy) @@ -258,10 +321,51 @@ static void put_hierarchy(struct landlock_hierarchy *hierarchy) } } +static int merge_tree(struct landlock_ruleset *const dst, + struct landlock_ruleset *const src, + const enum landlock_key_type key_type) +{ + struct landlock_rule *walker_rule, *next_rule; + struct rb_root *src_root; + int err = 0; + + might_sleep(); + lockdep_assert_held(&dst->lock); + lockdep_assert_held(&src->lock); + + src_root = get_root(src, key_type); + if (IS_ERR(src_root)) + return PTR_ERR(src_root); + + /* Merges the @src tree. */ + rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, src_root, + node) { + struct landlock_layer layers[] = { { + .level = dst->num_layers, + } }; + const struct landlock_id id = { + .key = walker_rule->key, + .type = key_type, + }; + + if (WARN_ON_ONCE(walker_rule->num_layers != 1)) + return -EINVAL; + + if (WARN_ON_ONCE(walker_rule->layers[0].level != 0)) + return -EINVAL; + + layers[0].access = walker_rule->layers[0].access; + + err = insert_rule(dst, id, &layers, ARRAY_SIZE(layers)); + if (err) + return err; + } + return err; +} + static int merge_ruleset(struct landlock_ruleset *const dst, struct landlock_ruleset *const src) { - struct landlock_rule *walker_rule, *next_rule; int err = 0; might_sleep(); @@ -281,29 +385,19 @@ static int merge_ruleset(struct landlock_ruleset *const dst, err = -EINVAL; goto out_unlock; } - dst->fs_access_masks[dst->num_layers - 1] = src->fs_access_masks[0]; + dst->access_masks[dst->num_layers - 1] = src->access_masks[0]; - /* Merges the @src tree. */ - rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root, - node) { - struct landlock_layer layers[] = { { - .level = dst->num_layers, - } }; + /* Merges the @src inode tree. 
*/ + err = merge_tree(dst, src, LANDLOCK_KEY_INODE); + if (err) + goto out_unlock; - if (WARN_ON_ONCE(walker_rule->num_layers != 1)) { - err = -EINVAL; - goto out_unlock; - } - if (WARN_ON_ONCE(walker_rule->layers[0].level != 0)) { - err = -EINVAL; - goto out_unlock; - } - layers[0].access = walker_rule->layers[0].access; - err = insert_rule(dst, walker_rule->object, &layers, - ARRAY_SIZE(layers)); - if (err) - goto out_unlock; - } +#if IS_ENABLED(CONFIG_INET) + /* Merges the @src network port tree. */ + err = merge_tree(dst, src, LANDLOCK_KEY_NET_PORT); + if (err) + goto out_unlock; +#endif /* IS_ENABLED(CONFIG_INET) */ out_unlock: mutex_unlock(&src->lock); @@ -311,10 +405,41 @@ out_unlock: return err; } +static int inherit_tree(struct landlock_ruleset *const parent, + struct landlock_ruleset *const child, + const enum landlock_key_type key_type) +{ + struct landlock_rule *walker_rule, *next_rule; + struct rb_root *parent_root; + int err = 0; + + might_sleep(); + lockdep_assert_held(&parent->lock); + lockdep_assert_held(&child->lock); + + parent_root = get_root(parent, key_type); + if (IS_ERR(parent_root)) + return PTR_ERR(parent_root); + + /* Copies the @parent inode or network tree. */ + rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, + parent_root, node) { + const struct landlock_id id = { + .key = walker_rule->key, + .type = key_type, + }; + + err = insert_rule(child, id, &walker_rule->layers, + walker_rule->num_layers); + if (err) + return err; + } + return err; +} + static int inherit_ruleset(struct landlock_ruleset *const parent, struct landlock_ruleset *const child) { - struct landlock_rule *walker_rule, *next_rule; int err = 0; might_sleep(); @@ -325,23 +450,25 @@ static int inherit_ruleset(struct landlock_ruleset *const parent, mutex_lock(&child->lock); mutex_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); - /* Copies the @parent tree. */ - rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, - &parent->root, node) { - err = insert_rule(child, walker_rule->object, - &walker_rule->layers, - walker_rule->num_layers); - if (err) - goto out_unlock; - } + /* Copies the @parent inode tree. */ + err = inherit_tree(parent, child, LANDLOCK_KEY_INODE); + if (err) + goto out_unlock; + +#if IS_ENABLED(CONFIG_INET) + /* Copies the @parent network port tree. */ + err = inherit_tree(parent, child, LANDLOCK_KEY_NET_PORT); + if (err) + goto out_unlock; +#endif /* IS_ENABLED(CONFIG_INET) */ if (WARN_ON_ONCE(child->num_layers <= parent->num_layers)) { err = -EINVAL; goto out_unlock; } /* Copies the parent layer stack and leaves a space for the new layer. 
*/ - memcpy(child->fs_access_masks, parent->fs_access_masks, - flex_array_size(parent, fs_access_masks, parent->num_layers)); + memcpy(child->access_masks, parent->access_masks, + flex_array_size(parent, access_masks, parent->num_layers)); if (WARN_ON_ONCE(!parent->hierarchy)) { err = -EINVAL; @@ -361,8 +488,16 @@ static void free_ruleset(struct landlock_ruleset *const ruleset) struct landlock_rule *freeme, *next; might_sleep(); - rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root, node) - free_rule(freeme); + rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root_inode, + node) + free_rule(freeme, LANDLOCK_KEY_INODE); + +#if IS_ENABLED(CONFIG_INET) + rbtree_postorder_for_each_entry_safe(freeme, next, + &ruleset->root_net_port, node) + free_rule(freeme, LANDLOCK_KEY_NET_PORT); +#endif /* IS_ENABLED(CONFIG_INET) */ + put_hierarchy(ruleset->hierarchy); kfree(ruleset); } @@ -453,23 +588,151 @@ out_put_dom: */ const struct landlock_rule * landlock_find_rule(const struct landlock_ruleset *const ruleset, - const struct landlock_object *const object) + const struct landlock_id id) { + const struct rb_root *root; const struct rb_node *node; - if (!object) + root = get_root((struct landlock_ruleset *)ruleset, id.type); + if (IS_ERR(root)) return NULL; - node = ruleset->root.rb_node; + node = root->rb_node; + while (node) { struct landlock_rule *this = rb_entry(node, struct landlock_rule, node); - if (this->object == object) + if (this->key.data == id.key.data) return this; - if (this->object < object) + if (this->key.data < id.key.data) node = node->rb_right; else node = node->rb_left; } return NULL; } + +/* + * @layer_masks is read and may be updated according to the access request and + * the matching rule. + * @masks_array_size must be equal to ARRAY_SIZE(*layer_masks). + * + * Returns true if the request is allowed (i.e. relevant layer masks for the + * request are empty). + */ +bool landlock_unmask_layers(const struct landlock_rule *const rule, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const size_t masks_array_size) +{ + size_t layer_level; + + if (!access_request || !layer_masks) + return true; + if (!rule) + return false; + + /* + * An access is granted if, for each policy layer, at least one rule + * encountered on the pathwalk grants the requested access, + * regardless of its position in the layer stack. We must then check + * the remaining layers for each inode, from the first added layer to + * the last one. When there is multiple requested accesses, for each + * policy layer, the full set of requested accesses may not be granted + * by only one rule, but by the union (binary OR) of multiple rules. + * E.g. /a/b <execute> + /a <read> => /a/b <execute + read> + */ + for (layer_level = 0; layer_level < rule->num_layers; layer_level++) { + const struct landlock_layer *const layer = + &rule->layers[layer_level]; + const layer_mask_t layer_bit = BIT_ULL(layer->level - 1); + const unsigned long access_req = access_request; + unsigned long access_bit; + bool is_empty; + + /* + * Records in @layer_masks which layer grants access to each + * requested access. 
+ */ + is_empty = true; + for_each_set_bit(access_bit, &access_req, masks_array_size) { + if (layer->access & BIT_ULL(access_bit)) + (*layer_masks)[access_bit] &= ~layer_bit; + is_empty = is_empty && !(*layer_masks)[access_bit]; + } + if (is_empty) + return true; + } + return false; +} + +typedef access_mask_t +get_access_mask_t(const struct landlock_ruleset *const ruleset, + const u16 layer_level); + +/** + * landlock_init_layer_masks - Initialize layer masks from an access request + * + * Populates @layer_masks such that for each access right in @access_request, + * the bits for all the layers are set where this access right is handled. + * + * @domain: The domain that defines the current restrictions. + * @access_request: The requested access rights to check. + * @layer_masks: It must contain %LANDLOCK_NUM_ACCESS_FS or + * %LANDLOCK_NUM_ACCESS_NET elements according to @key_type. + * @key_type: The key type to switch between access masks of different types. + * + * Returns: An access mask where each access right bit is set which is handled + * in any of the active layers in @domain. + */ +access_mask_t +landlock_init_layer_masks(const struct landlock_ruleset *const domain, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const enum landlock_key_type key_type) +{ + access_mask_t handled_accesses = 0; + size_t layer_level, num_access; + get_access_mask_t *get_access_mask; + + switch (key_type) { + case LANDLOCK_KEY_INODE: + get_access_mask = landlock_get_fs_access_mask; + num_access = LANDLOCK_NUM_ACCESS_FS; + break; + +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + get_access_mask = landlock_get_net_access_mask; + num_access = LANDLOCK_NUM_ACCESS_NET; + break; +#endif /* IS_ENABLED(CONFIG_INET) */ + + default: + WARN_ON_ONCE(1); + return 0; + } + + memset(layer_masks, 0, + array_size(sizeof((*layer_masks)[0]), num_access)); + + /* An empty access request can happen because of O_WRONLY | O_RDWR. */ + if (!access_request) + return 0; + + /* Saves all handled accesses per layer. */ + for (layer_level = 0; layer_level < domain->num_layers; layer_level++) { + const unsigned long access_req = access_request; + unsigned long access_bit; + + for_each_set_bit(access_bit, &access_req, num_access) { + if (BIT_ULL(access_bit) & + get_access_mask(domain, layer_level)) { + (*layer_masks)[access_bit] |= + BIT_ULL(layer_level); + handled_accesses |= BIT_ULL(access_bit); + } + } + } + return handled_accesses; +} diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index 55b1df8f66a8..c7f1526784fd 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -15,16 +15,35 @@ #include <linux/rbtree.h> #include <linux/refcount.h> #include <linux/workqueue.h> +#include <uapi/linux/landlock.h> #include "limits.h" #include "object.h" +/* + * All access rights that are denied by default whether they are handled or not + * by a ruleset/layer. This must be ORed with all ruleset->access_masks[] + * entries when we need to get the absolute handled access masks. + */ +/* clang-format off */ +#define LANDLOCK_ACCESS_FS_INITIALLY_DENIED ( \ + LANDLOCK_ACCESS_FS_REFER) +/* clang-format on */ + typedef u16 access_mask_t; /* Makes sure all filesystem access rights can be stored. */ static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS); +/* Makes sure all network access rights can be stored. 
*/ +static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_NET); /* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */ static_assert(sizeof(unsigned long) >= sizeof(access_mask_t)); +/* Ruleset access masks. */ +typedef u32 access_masks_t; +/* Makes sure all ruleset access rights can be stored. */ +static_assert(BITS_PER_TYPE(access_masks_t) >= + LANDLOCK_NUM_ACCESS_FS + LANDLOCK_NUM_ACCESS_NET); + typedef u16 layer_mask_t; /* Makes sure all layers can be checked. */ static_assert(BITS_PER_TYPE(layer_mask_t) >= LANDLOCK_MAX_NUM_LAYERS); @@ -45,6 +64,52 @@ struct landlock_layer { }; /** + * union landlock_key - Key of a ruleset's red-black tree + */ +union landlock_key { + /** + * @object: Pointer to identify a kernel object (e.g. an inode). + */ + struct landlock_object *object; + /** + * @data: Raw data to identify an arbitrary 32-bit value + * (e.g. a TCP port). + */ + uintptr_t data; +}; + +/** + * enum landlock_key_type - Type of &union landlock_key + */ +enum landlock_key_type { + /** + * @LANDLOCK_KEY_INODE: Type of &landlock_ruleset.root_inode's node + * keys. + */ + LANDLOCK_KEY_INODE = 1, + /** + * @LANDLOCK_KEY_NET_PORT: Type of &landlock_ruleset.root_net_port's + * node keys. + */ + LANDLOCK_KEY_NET_PORT, +}; + +/** + * struct landlock_id - Unique rule identifier for a ruleset + */ +struct landlock_id { + /** + * @key: Identifies either a kernel object (e.g. an inode) or + * a raw value (e.g. a TCP port). + */ + union landlock_key key; + /** + * @type: Type of a landlock_ruleset's root tree. + */ + const enum landlock_key_type type; +}; + +/** * struct landlock_rule - Access rights tied to an object */ struct landlock_rule { @@ -53,12 +118,13 @@ struct landlock_rule { */ struct rb_node node; /** - * @object: Pointer to identify a kernel object (e.g. an inode). This - * is used as a key for this ruleset element. This pointer is set once - * and never modified. It always points to an allocated object because - * each rule increments the refcount of its object. + * @key: A union to identify either a kernel object (e.g. an inode) or + * a raw data value (e.g. a network socket port). This is used as a key + * for this ruleset element. The pointer is set once and never + * modified. It always points to an allocated object because each rule + * increments the refcount of its object. */ - struct landlock_object *object; + union landlock_key key; /** * @num_layers: Number of entries in @layers. */ @@ -94,11 +160,23 @@ struct landlock_hierarchy { */ struct landlock_ruleset { /** - * @root: Root of a red-black tree containing &struct landlock_rule - * nodes. Once a ruleset is tied to a process (i.e. as a domain), this - * tree is immutable until @usage reaches zero. + * @root_inode: Root of a red-black tree containing &struct + * landlock_rule nodes with inode object. Once a ruleset is tied to a + * process (i.e. as a domain), this tree is immutable until @usage + * reaches zero. */ - struct rb_root root; + struct rb_root root_inode; + +#if IS_ENABLED(CONFIG_INET) + /** + * @root_net_port: Root of a red-black tree containing &struct + * landlock_rule nodes with network port. Once a ruleset is tied to a + * process (i.e. as a domain), this tree is immutable until @usage + * reaches zero. + */ + struct rb_root root_net_port; +#endif /* IS_ENABLED(CONFIG_INET) */ + /** * @hierarchy: Enables hierarchy identification even when a parent * domain vanishes. This is needed for the ptrace protection. @@ -110,7 +188,7 @@ struct landlock_ruleset { * section. 
This is only used by * landlock_put_ruleset_deferred() when @usage reaches zero. * The fields @lock, @usage, @num_rules, @num_layers and - * @fs_access_masks are then unused. + * @access_masks are then unused. */ struct work_struct work_free; struct { @@ -137,30 +215,31 @@ struct landlock_ruleset { */ u32 num_layers; /** - * @fs_access_masks: Contains the subset of filesystem - * actions that are restricted by a ruleset. A domain - * saves all layers of merged rulesets in a stack - * (FAM), starting from the first layer to the last - * one. These layers are used when merging rulesets, - * for user space backward compatibility (i.e. - * future-proof), and to properly handle merged + * @access_masks: Contains the subset of filesystem and + * network actions that are restricted by a ruleset. + * A domain saves all layers of merged rulesets in a + * stack (FAM), starting from the first layer to the + * last one. These layers are used when merging + * rulesets, for user space backward compatibility + * (i.e. future-proof), and to properly handle merged * rulesets without overlapping access rights. These * layers are set once and never changed for the * lifetime of the ruleset. */ - access_mask_t fs_access_masks[]; + access_masks_t access_masks[]; }; }; }; struct landlock_ruleset * -landlock_create_ruleset(const access_mask_t fs_access_mask); +landlock_create_ruleset(const access_mask_t access_mask_fs, + const access_mask_t access_mask_net); void landlock_put_ruleset(struct landlock_ruleset *const ruleset); void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset); int landlock_insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const access_mask_t access); struct landlock_ruleset * @@ -169,7 +248,7 @@ landlock_merge_ruleset(struct landlock_ruleset *const parent, const struct landlock_rule * landlock_find_rule(const struct landlock_ruleset *const ruleset, - const struct landlock_object *const object); + const struct landlock_id id); static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset) { @@ -177,4 +256,68 @@ static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset) refcount_inc(&ruleset->usage); } +static inline void +landlock_add_fs_access_mask(struct landlock_ruleset *const ruleset, + const access_mask_t fs_access_mask, + const u16 layer_level) +{ + access_mask_t fs_mask = fs_access_mask & LANDLOCK_MASK_ACCESS_FS; + + /* Should already be checked in sys_landlock_create_ruleset(). */ + WARN_ON_ONCE(fs_access_mask != fs_mask); + ruleset->access_masks[layer_level] |= + (fs_mask << LANDLOCK_SHIFT_ACCESS_FS); +} + +static inline void +landlock_add_net_access_mask(struct landlock_ruleset *const ruleset, + const access_mask_t net_access_mask, + const u16 layer_level) +{ + access_mask_t net_mask = net_access_mask & LANDLOCK_MASK_ACCESS_NET; + + /* Should already be checked in sys_landlock_create_ruleset(). 
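+	 *
+	 * Both add_*_access_mask() helpers pack their 16-bit mask into the
+	 * same access_masks_t entry; the LANDLOCK_SHIFT_ACCESS_{FS,NET}
+	 * constants keep the two ranges disjoint.  As a rough sketch (the
+	 * exact bit positions are defined in limits.h):
+	 *
+	 *   access_masks[layer]:  [ ...net mask... | ...fs mask... ]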
 */
+	WARN_ON_ONCE(net_access_mask != net_mask);
+	ruleset->access_masks[layer_level] |=
+		(net_mask << LANDLOCK_SHIFT_ACCESS_NET);
+}
+
+static inline access_mask_t
+landlock_get_raw_fs_access_mask(const struct landlock_ruleset *const ruleset,
+				const u16 layer_level)
+{
+	return (ruleset->access_masks[layer_level] >>
+		LANDLOCK_SHIFT_ACCESS_FS) &
+	       LANDLOCK_MASK_ACCESS_FS;
+}
+
+static inline access_mask_t
+landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset,
+			    const u16 layer_level)
+{
+	/* Handles all access rights that are initially denied by default. */
+	return landlock_get_raw_fs_access_mask(ruleset, layer_level) |
+	       LANDLOCK_ACCESS_FS_INITIALLY_DENIED;
+}
+
+static inline access_mask_t
+landlock_get_net_access_mask(const struct landlock_ruleset *const ruleset,
+			     const u16 layer_level)
+{
+	return (ruleset->access_masks[layer_level] >>
+		LANDLOCK_SHIFT_ACCESS_NET) &
+	       LANDLOCK_MASK_ACCESS_NET;
+}
+
+bool landlock_unmask_layers(const struct landlock_rule *const rule,
+			    const access_mask_t access_request,
+			    layer_mask_t (*const layer_masks)[],
+			    const size_t masks_array_size);
+
+access_mask_t
+landlock_init_layer_masks(const struct landlock_ruleset *const domain,
+			  const access_mask_t access_request,
+			  layer_mask_t (*const layer_masks)[],
+			  const enum landlock_key_type key_type);
+
 #endif /* _SECURITY_LANDLOCK_RULESET_H */
diff --git a/security/landlock/setup.c b/security/landlock/setup.c
index 0f6113528fa4..3e11d303542f 100644
--- a/security/landlock/setup.c
+++ b/security/landlock/setup.c
@@ -12,6 +12,7 @@
 #include "common.h"
 #include "cred.h"
 #include "fs.h"
+#include "net.h"
 #include "ptrace.h"
 #include "setup.h"
@@ -29,6 +30,7 @@ static int __init landlock_init(void)
 	landlock_add_cred_hooks();
 	landlock_add_ptrace_hooks();
 	landlock_add_fs_hooks();
+	landlock_add_net_hooks();
 	landlock_initialized = true;
 	pr_info("Up and running.\n");
 	return 0;
diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
index 245cc650a4dc..898358f57fa0 100644
--- a/security/landlock/syscalls.c
+++ b/security/landlock/syscalls.c
@@ -29,6 +29,7 @@
 #include "cred.h"
 #include "fs.h"
 #include "limits.h"
+#include "net.h"
 #include "ruleset.h"
 #include "setup.h"
@@ -74,7 +75,8 @@ static void build_check_abi(void)
 {
 	struct landlock_ruleset_attr ruleset_attr;
 	struct landlock_path_beneath_attr path_beneath_attr;
-	size_t ruleset_size, path_beneath_size;
+	struct landlock_net_port_attr net_port_attr;
+	size_t ruleset_size, path_beneath_size, net_port_size;
 
 	/*
	 * For each user space ABI structure, first checks that there is no
@@ -82,13 +84,19 @@ static void build_check_abi(void)
	 * struct size.
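	 * For instance, if a field were appended to struct
	 * landlock_net_port_attr and the compiler inserted padding, sizeof()
	 * would exceed the sum of the member sizes computed below and the
	 * corresponding BUILD_BUG_ON() would fail at build time.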
*/ ruleset_size = sizeof(ruleset_attr.handled_access_fs); + ruleset_size += sizeof(ruleset_attr.handled_access_net); BUILD_BUG_ON(sizeof(ruleset_attr) != ruleset_size); - BUILD_BUG_ON(sizeof(ruleset_attr) != 8); + BUILD_BUG_ON(sizeof(ruleset_attr) != 16); path_beneath_size = sizeof(path_beneath_attr.allowed_access); path_beneath_size += sizeof(path_beneath_attr.parent_fd); BUILD_BUG_ON(sizeof(path_beneath_attr) != path_beneath_size); BUILD_BUG_ON(sizeof(path_beneath_attr) != 12); + + net_port_size = sizeof(net_port_attr.allowed_access); + net_port_size += sizeof(net_port_attr.port); + BUILD_BUG_ON(sizeof(net_port_attr) != net_port_size); + BUILD_BUG_ON(sizeof(net_port_attr) != 16); } /* Ruleset handling */ @@ -129,7 +137,7 @@ static const struct file_operations ruleset_fops = { .write = fop_dummy_write, }; -#define LANDLOCK_ABI_VERSION 3 +#define LANDLOCK_ABI_VERSION 4 /** * sys_landlock_create_ruleset - Create a new ruleset @@ -188,8 +196,14 @@ SYSCALL_DEFINE3(landlock_create_ruleset, LANDLOCK_MASK_ACCESS_FS) return -EINVAL; + /* Checks network content (and 32-bits cast). */ + if ((ruleset_attr.handled_access_net | LANDLOCK_MASK_ACCESS_NET) != + LANDLOCK_MASK_ACCESS_NET) + return -EINVAL; + /* Checks arguments and transforms to kernel struct. */ - ruleset = landlock_create_ruleset(ruleset_attr.handled_access_fs); + ruleset = landlock_create_ruleset(ruleset_attr.handled_access_fs, + ruleset_attr.handled_access_net); if (IS_ERR(ruleset)) return PTR_ERR(ruleset); @@ -274,13 +288,84 @@ out_fdput: return err; } +static int add_rule_path_beneath(struct landlock_ruleset *const ruleset, + const void __user *const rule_attr) +{ + struct landlock_path_beneath_attr path_beneath_attr; + struct path path; + int res, err; + access_mask_t mask; + + /* Copies raw user space buffer. */ + res = copy_from_user(&path_beneath_attr, rule_attr, + sizeof(path_beneath_attr)); + if (res) + return -EFAULT; + + /* + * Informs about useless rule: empty allowed_access (i.e. deny rules) + * are ignored in path walks. + */ + if (!path_beneath_attr.allowed_access) + return -ENOMSG; + + /* Checks that allowed_access matches the @ruleset constraints. */ + mask = landlock_get_raw_fs_access_mask(ruleset, 0); + if ((path_beneath_attr.allowed_access | mask) != mask) + return -EINVAL; + + /* Gets and checks the new rule. */ + err = get_path_from_fd(path_beneath_attr.parent_fd, &path); + if (err) + return err; + + /* Imports the new rule. */ + err = landlock_append_fs_rule(ruleset, &path, + path_beneath_attr.allowed_access); + path_put(&path); + return err; +} + +static int add_rule_net_port(struct landlock_ruleset *ruleset, + const void __user *const rule_attr) +{ + struct landlock_net_port_attr net_port_attr; + int res; + access_mask_t mask; + + /* Copies raw user space buffer. */ + res = copy_from_user(&net_port_attr, rule_attr, sizeof(net_port_attr)); + if (res) + return -EFAULT; + + /* + * Informs about useless rule: empty allowed_access (i.e. deny rules) + * are ignored by network actions. + */ + if (!net_port_attr.allowed_access) + return -ENOMSG; + + /* Checks that allowed_access matches the @ruleset constraints. */ + mask = landlock_get_net_access_mask(ruleset, 0); + if ((net_port_attr.allowed_access | mask) != mask) + return -EINVAL; + + /* Denies inserting a rule with port greater than 65535. */ + if (net_port_attr.port > U16_MAX) + return -EINVAL; + + /* Imports the new rule. 
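+	 * At this point net_port_attr.port is known to fit in a u16, so it
+	 * can be stored losslessly as the raw (non-pointer) key of the
+	 * ruleset's red-black tree.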
 */
+	return landlock_append_net_rule(ruleset, net_port_attr.port,
+					net_port_attr.allowed_access);
+}
+
 /**
  * sys_landlock_add_rule - Add a new rule to a ruleset
  *
  * @ruleset_fd: File descriptor tied to the ruleset that should be extended
  *		with the new rule.
- * @rule_type: Identify the structure type pointed to by @rule_attr (only
- *	       %LANDLOCK_RULE_PATH_BENEATH for now).
+ * @rule_type: Identify the structure type pointed to by @rule_attr:
+ *	       %LANDLOCK_RULE_PATH_BENEATH or %LANDLOCK_RULE_NET_PORT.
- * @rule_attr: Pointer to a rule (only of type &struct
- *	       landlock_path_beneath_attr for now).
+ * @rule_attr: Pointer to a rule (&struct landlock_path_beneath_attr or
+ *	       &struct landlock_net_port_attr, matching @rule_type).
  * @flags: Must be 0.
@@ -291,9 +376,13 @@ out_fdput:
  * Possible returned errors are:
  *
  * - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
+ * - %EAFNOSUPPORT: @rule_type is %LANDLOCK_RULE_NET_PORT but TCP/IP is not
+ *   supported by the running kernel;
  * - %EINVAL: @flags is not 0, or inconsistent access in the rule (i.e.
- *   &landlock_path_beneath_attr.allowed_access is not a subset of the
- *   ruleset handled accesses);
+ *   &landlock_path_beneath_attr.allowed_access or
+ *   &landlock_net_port_attr.allowed_access is not a subset of the
+ *   ruleset handled accesses), or &landlock_net_port_attr.port is
+ *   greater than 65535;
  * - %ENOMSG: Empty accesses (e.g. &landlock_path_beneath_attr.allowed_access);
  * - %EBADF: @ruleset_fd is not a file descriptor for the current thread, or a
  *   member of @rule_attr is not a file descriptor as expected;
@@ -306,10 +395,8 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
 		const enum landlock_rule_type, rule_type,
 		const void __user *const, rule_attr, const __u32, flags)
 {
-	struct landlock_path_beneath_attr path_beneath_attr;
-	struct path path;
 	struct landlock_ruleset *ruleset;
-	int res, err;
+	int err;
 
 	if (!landlock_initialized)
 		return -EOPNOTSUPP;
@@ -323,48 +410,17 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
 	if (IS_ERR(ruleset))
 		return PTR_ERR(ruleset);
 
-	if (rule_type != LANDLOCK_RULE_PATH_BENEATH) {
+	switch (rule_type) {
+	case LANDLOCK_RULE_PATH_BENEATH:
+		err = add_rule_path_beneath(ruleset, rule_attr);
+		break;
+	case LANDLOCK_RULE_NET_PORT:
+		err = add_rule_net_port(ruleset, rule_attr);
+		break;
+	default:
 		err = -EINVAL;
-		goto out_put_ruleset;
-	}
-
-	/* Copies raw user space buffer, only one type for now. */
-	res = copy_from_user(&path_beneath_attr, rule_attr,
-			     sizeof(path_beneath_attr));
-	if (res) {
-		err = -EFAULT;
-		goto out_put_ruleset;
+		break;
 	}
-
-	/*
-	 * Informs about useless rule: empty allowed_access (i.e. deny rules)
-	 * are ignored in path walks.
-	 */
-	if (!path_beneath_attr.allowed_access) {
-		err = -ENOMSG;
-		goto out_put_ruleset;
-	}
-	/*
-	 * Checks that allowed_access matches the @ruleset constraints
-	 * (ruleset->fs_access_masks[0] is automatically upgraded to 64-bits).
-	 */
-	if ((path_beneath_attr.allowed_access | ruleset->fs_access_masks[0]) !=
-	    ruleset->fs_access_masks[0]) {
-		err = -EINVAL;
-		goto out_put_ruleset;
-	}
-
-	/* Gets and checks the new rule. */
-	err = get_path_from_fd(path_beneath_attr.parent_fd, &path);
-	if (err)
-		goto out_put_ruleset;
-
-	/* Imports the new rule.
 */
-	err = landlock_append_fs_rule(ruleset, &path,
-				      path_beneath_attr.allowed_access);
-	path_put(&path);
-
-out_put_ruleset:
 	landlock_put_ruleset(ruleset);
 	return err;
 }
diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
index 792c3f0a59b4..646f778dfb1e 100644
--- a/tools/testing/selftests/landlock/base_test.c
+++ b/tools/testing/selftests/landlock/base_test.c
@@ -75,7 +75,7 @@ TEST(abi_version)
 	const struct landlock_ruleset_attr ruleset_attr = {
 		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
 	};
-	ASSERT_EQ(3, landlock_create_ruleset(NULL, 0,
+	ASSERT_EQ(4, landlock_create_ruleset(NULL, 0,
 					     LANDLOCK_CREATE_RULESET_VERSION));
 
 	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
index d7987ae8d7fc..5b79758cae62 100644
--- a/tools/testing/selftests/landlock/common.h
+++ b/tools/testing/selftests/landlock/common.h
@@ -112,10 +112,13 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
 	cap_t cap_p;
-	/* Only these three capabilities are useful for the tests. */
+	/* Only these capabilities are useful for the tests. */
 	const cap_value_t caps[] = {
+		/* clang-format off */
 		CAP_DAC_OVERRIDE,
 		CAP_MKNOD,
 		CAP_SYS_ADMIN,
 		CAP_SYS_CHROOT,
+		CAP_NET_BIND_SERVICE,
+		/* clang-format on */
 	};
 
 	cap_p = cap_get_proc();
@@ -256,3 +259,13 @@ static int __maybe_unused send_fd(int usock, int fd_tx)
 		return -errno;
 	return 0;
 }
+
+static void __maybe_unused
+enforce_ruleset(struct __test_metadata *const _metadata, const int ruleset_fd)
+{
+	ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0))
+	{
+		TH_LOG("Failed to enforce ruleset: %s", strerror(errno));
+	}
+}
diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config
index 3dc9e438eab1..0086efaa7b68 100644
--- a/tools/testing/selftests/landlock/config
+++ b/tools/testing/selftests/landlock/config
@@ -1,5 +1,9 @@
 CONFIG_CGROUPS=y
 CONFIG_CGROUP_SCHED=y
+CONFIG_INET=y
+CONFIG_IPV6=y
+CONFIG_NET=y
+CONFIG_NET_NS=y
 CONFIG_OVERLAY_FS=y
 CONFIG_PROC_FS=y
 CONFIG_SECURITY=y
diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
index 251594306d40..18e1f86a6234 100644
--- a/tools/testing/selftests/landlock/fs_test.c
+++ b/tools/testing/selftests/landlock/fs_test.c
@@ -677,16 +677,6 @@ static int create_ruleset(struct __test_metadata *const _metadata,
 	return ruleset_fd;
 }
 
-static void enforce_ruleset(struct __test_metadata *const _metadata,
-			    const int ruleset_fd)
-{
-	ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
-	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0))
-	{
-		TH_LOG("Failed to enforce ruleset: %s", strerror(errno));
-	}
-}
-
 TEST_F_FORK(layout0, proc_nsfs)
 {
 	const struct rule rules[] = {
@@ -1635,6 +1625,65 @@ TEST_F_FORK(layout1, move_mount)
 	clear_cap(_metadata, CAP_SYS_ADMIN);
 }
 
+TEST_F_FORK(layout1, topology_changes_with_net_only)
+{
+	const struct landlock_ruleset_attr ruleset_net = {
+		.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+				      LANDLOCK_ACCESS_NET_CONNECT_TCP,
+	};
+	int ruleset_fd;
+
+	/* Add network restrictions. */
+	ruleset_fd =
+		landlock_create_ruleset(&ruleset_net, sizeof(ruleset_net), 0);
+	ASSERT_LE(0, ruleset_fd);
+	enforce_ruleset(_metadata, ruleset_fd);
+	ASSERT_EQ(0, close(ruleset_fd));
+
+	/* Mount, remount, move_mount, umount, and pivot_root checks.
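+	 * All of them are expected to succeed: the enforced ruleset handles
+	 * network accesses only, so it must not deny filesystem topology
+	 * changes (compare with the next test, where handling any
+	 * filesystem access right makes them fail with EPERM).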
 */
+	set_cap(_metadata, CAP_SYS_ADMIN);
+	ASSERT_EQ(0, mount_opt(&mnt_tmp, dir_s1d2));
+	ASSERT_EQ(0, mount(NULL, dir_s1d2, NULL, MS_PRIVATE | MS_REC, NULL));
+	ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s1d2, AT_FDCWD,
+			     dir_s2d2, 0));
+	ASSERT_EQ(0, umount(dir_s2d2));
+	ASSERT_EQ(0, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3));
+	ASSERT_EQ(0, chdir("/"));
+	clear_cap(_metadata, CAP_SYS_ADMIN);
+}
+
+TEST_F_FORK(layout1, topology_changes_with_net_and_fs)
+{
+	const struct landlock_ruleset_attr ruleset_net_fs = {
+		.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+				      LANDLOCK_ACCESS_NET_CONNECT_TCP,
+		.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
+	};
+	int ruleset_fd;
+
+	/* Add network and filesystem restrictions. */
+	ruleset_fd = landlock_create_ruleset(&ruleset_net_fs,
+					     sizeof(ruleset_net_fs), 0);
+	ASSERT_LE(0, ruleset_fd);
+	enforce_ruleset(_metadata, ruleset_fd);
+	ASSERT_EQ(0, close(ruleset_fd));
+
+	/* Mount, remount, move_mount, umount, and pivot_root checks. */
+	set_cap(_metadata, CAP_SYS_ADMIN);
+	ASSERT_EQ(-1, mount_opt(&mnt_tmp, dir_s1d2));
+	ASSERT_EQ(EPERM, errno);
+	ASSERT_EQ(-1, mount(NULL, dir_s3d2, NULL, MS_PRIVATE | MS_REC, NULL));
+	ASSERT_EQ(EPERM, errno);
+	ASSERT_EQ(-1, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
+			      dir_s2d2, 0));
+	ASSERT_EQ(EPERM, errno);
+	ASSERT_EQ(-1, umount(dir_s3d2));
+	ASSERT_EQ(EPERM, errno);
+	ASSERT_EQ(-1, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3));
+	ASSERT_EQ(EPERM, errno);
+	clear_cap(_metadata, CAP_SYS_ADMIN);
+}
+
 TEST_F_FORK(layout1, release_inodes)
 {
 	const struct rule rules[] = {
diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c
new file mode 100644
index 000000000000..929e21c4db05
--- /dev/null
+++ b/tools/testing/selftests/landlock/net_test.c
@@ -0,0 +1,1738 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock tests - Network
+ *
+ * Copyright © 2022-2023 Huawei Tech. Co., Ltd.
+ * Copyright © 2023 Microsoft Corporation
+ */
+
+#define _GNU_SOURCE
+#include <arpa/inet.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <linux/landlock.h>
+#include <linux/in.h>
+#include <sched.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+
+#include "common.h"
+
+const short sock_port_start = (1 << 10);
+
+static const char loopback_ipv4[] = "127.0.0.1";
+static const char loopback_ipv6[] = "::1";
+
+/* Maximum number of pending connections held in the listen() queue. */
+const short backlog = 10;
+
+enum sandbox_type {
+	NO_SANDBOX,
+	/* This may be used to test rules that allow *and* deny accesses. */
+	TCP_SANDBOX,
+};
+
+struct protocol_variant {
+	int domain;
+	int type;
+};
+
+struct service_fixture {
+	struct protocol_variant protocol;
+	/* port is also stored in ipv4_addr.sin_port or ipv6_addr.sin6_port */
+	unsigned short port;
+	union {
+		struct sockaddr_in ipv4_addr;
+		struct sockaddr_in6 ipv6_addr;
+		struct {
+			struct sockaddr_un unix_addr;
+			socklen_t unix_addr_len;
+		};
+	};
+};
+
+static int set_service(struct service_fixture *const srv,
+		       const struct protocol_variant prot,
+		       const unsigned short index)
+{
+	memset(srv, 0, sizeof(*srv));
+
+	/*
+	 * Copies all protocol properties, in case the variant only contains
+	 * a subset of them.
+	 */
+	srv->protocol = prot;
+
+	/* Checks for port overflow.
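+	 * With sock_port_start == 1024, indexes 0, 1 and 2 map to ports
+	 * 1024, 4096 and 16384.  Index 3 would compute 1024 << 6 == 65536,
+	 * which no longer fits in 16 bits, hence the bound below.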
*/ + if (index > 2) + return 1; + srv->port = sock_port_start << (2 * index); + + switch (prot.domain) { + case AF_UNSPEC: + case AF_INET: + srv->ipv4_addr.sin_family = prot.domain; + srv->ipv4_addr.sin_port = htons(srv->port); + srv->ipv4_addr.sin_addr.s_addr = inet_addr(loopback_ipv4); + return 0; + + case AF_INET6: + srv->ipv6_addr.sin6_family = prot.domain; + srv->ipv6_addr.sin6_port = htons(srv->port); + inet_pton(AF_INET6, loopback_ipv6, &srv->ipv6_addr.sin6_addr); + return 0; + + case AF_UNIX: + srv->unix_addr.sun_family = prot.domain; + sprintf(srv->unix_addr.sun_path, + "_selftests-landlock-net-tid%d-index%d", gettid(), + index); + srv->unix_addr_len = SUN_LEN(&srv->unix_addr); + srv->unix_addr.sun_path[0] = '\0'; + return 0; + } + return 1; +} + +static void setup_loopback(struct __test_metadata *const _metadata) +{ + set_cap(_metadata, CAP_SYS_ADMIN); + ASSERT_EQ(0, unshare(CLONE_NEWNET)); + ASSERT_EQ(0, system("ip link set dev lo up")); + clear_cap(_metadata, CAP_SYS_ADMIN); +} + +static bool is_restricted(const struct protocol_variant *const prot, + const enum sandbox_type sandbox) +{ + switch (prot->domain) { + case AF_INET: + case AF_INET6: + switch (prot->type) { + case SOCK_STREAM: + return sandbox == TCP_SANDBOX; + } + break; + } + return false; +} + +static int socket_variant(const struct service_fixture *const srv) +{ + int ret; + + ret = socket(srv->protocol.domain, srv->protocol.type | SOCK_CLOEXEC, + 0); + if (ret < 0) + return -errno; + return ret; +} + +#ifndef SIN6_LEN_RFC2133 +#define SIN6_LEN_RFC2133 24 +#endif + +static socklen_t get_addrlen(const struct service_fixture *const srv, + const bool minimal) +{ + switch (srv->protocol.domain) { + case AF_UNSPEC: + case AF_INET: + return sizeof(srv->ipv4_addr); + + case AF_INET6: + if (minimal) + return SIN6_LEN_RFC2133; + return sizeof(srv->ipv6_addr); + + case AF_UNIX: + if (minimal) + return sizeof(srv->unix_addr) - + sizeof(srv->unix_addr.sun_path); + return srv->unix_addr_len; + + default: + return 0; + } +} + +static void set_port(struct service_fixture *const srv, uint16_t port) +{ + switch (srv->protocol.domain) { + case AF_UNSPEC: + case AF_INET: + srv->ipv4_addr.sin_port = htons(port); + return; + + case AF_INET6: + srv->ipv6_addr.sin6_port = htons(port); + return; + + default: + return; + } +} + +static uint16_t get_binded_port(int socket_fd, + const struct protocol_variant *const prot) +{ + struct sockaddr_in ipv4_addr; + struct sockaddr_in6 ipv6_addr; + socklen_t ipv4_addr_len, ipv6_addr_len; + + /* Gets binded port. 
 */
+	switch (prot->domain) {
+	case AF_UNSPEC:
+	case AF_INET:
+		ipv4_addr_len = sizeof(ipv4_addr);
+		getsockname(socket_fd, &ipv4_addr, &ipv4_addr_len);
+		return ntohs(ipv4_addr.sin_port);
+
+	case AF_INET6:
+		ipv6_addr_len = sizeof(ipv6_addr);
+		getsockname(socket_fd, &ipv6_addr, &ipv6_addr_len);
+		return ntohs(ipv6_addr.sin6_port);
+
+	default:
+		return 0;
+	}
+}
+
+static int bind_variant_addrlen(const int sock_fd,
+				const struct service_fixture *const srv,
+				const socklen_t addrlen)
+{
+	int ret;
+
+	switch (srv->protocol.domain) {
+	case AF_UNSPEC:
+	case AF_INET:
+		ret = bind(sock_fd, &srv->ipv4_addr, addrlen);
+		break;
+
+	case AF_INET6:
+		ret = bind(sock_fd, &srv->ipv6_addr, addrlen);
+		break;
+
+	case AF_UNIX:
+		ret = bind(sock_fd, &srv->unix_addr, addrlen);
+		break;
+
+	default:
+		errno = EAFNOSUPPORT;
+		return -errno;
+	}
+
+	if (ret < 0)
+		return -errno;
+	return ret;
+}
+
+static int bind_variant(const int sock_fd,
+			const struct service_fixture *const srv)
+{
+	return bind_variant_addrlen(sock_fd, srv, get_addrlen(srv, false));
+}
+
+static int connect_variant_addrlen(const int sock_fd,
+				   const struct service_fixture *const srv,
+				   const socklen_t addrlen)
+{
+	int ret;
+
+	switch (srv->protocol.domain) {
+	case AF_UNSPEC:
+	case AF_INET:
+		ret = connect(sock_fd, &srv->ipv4_addr, addrlen);
+		break;
+
+	case AF_INET6:
+		ret = connect(sock_fd, &srv->ipv6_addr, addrlen);
+		break;
+
+	case AF_UNIX:
+		ret = connect(sock_fd, &srv->unix_addr, addrlen);
+		break;
+
+	default:
+		errno = EAFNOSUPPORT;
+		return -errno;
+	}
+
+	if (ret < 0)
+		return -errno;
+	return ret;
+}
+
+static int connect_variant(const int sock_fd,
+			   const struct service_fixture *const srv)
+{
+	return connect_variant_addrlen(sock_fd, srv, get_addrlen(srv, false));
+}
+
+FIXTURE(protocol)
+{
+	struct service_fixture srv0, srv1, srv2, unspec_any0, unspec_srv0;
+};
+
+FIXTURE_VARIANT(protocol)
+{
+	const enum sandbox_type sandbox;
+	const struct protocol_variant prot;
+};
+
+FIXTURE_SETUP(protocol)
+{
+	const struct protocol_variant prot_unspec = {
+		.domain = AF_UNSPEC,
+		.type = SOCK_STREAM,
+	};
+
+	disable_caps(_metadata);
+
+	ASSERT_EQ(0, set_service(&self->srv0, variant->prot, 0));
+	ASSERT_EQ(0, set_service(&self->srv1, variant->prot, 1));
+	ASSERT_EQ(0, set_service(&self->srv2, variant->prot, 2));
+
+	ASSERT_EQ(0, set_service(&self->unspec_srv0, prot_unspec, 0));
+
+	ASSERT_EQ(0, set_service(&self->unspec_any0, prot_unspec, 0));
+	self->unspec_any0.ipv4_addr.sin_addr.s_addr = htonl(INADDR_ANY);
+
+	setup_loopback(_metadata);
+};
+
+FIXTURE_TEARDOWN(protocol)
+{
+}
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp) {
+	/* clang-format on */
+	.sandbox = NO_SANDBOX,
+	.prot = {
+		.domain = AF_INET,
+		.type = SOCK_STREAM,
+	},
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp) {
+	/* clang-format on */
+	.sandbox = NO_SANDBOX,
+	.prot = {
+		.domain = AF_INET6,
+		.type = SOCK_STREAM,
+	},
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_udp) {
+	/* clang-format on */
+	.sandbox = NO_SANDBOX,
+	.prot = {
+		.domain = AF_INET,
+		.type = SOCK_DGRAM,
+	},
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_udp) {
+	/* clang-format on */
+	.sandbox = NO_SANDBOX,
+	.prot = {
+		.domain = AF_INET6,
+		.type = SOCK_DGRAM,
+	},
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_stream) {
+	/* clang-format on */
+	.sandbox = NO_SANDBOX,
+	.prot = {
+		.domain =
AF_UNIX, + .type = SOCK_STREAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_datagram) { + /* clang-format on */ + .sandbox = NO_SANDBOX, + .prot = { + .domain = AF_UNIX, + .type = SOCK_DGRAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_INET, + .type = SOCK_STREAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_INET6, + .type = SOCK_STREAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_udp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_INET, + .type = SOCK_DGRAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_udp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_INET6, + .type = SOCK_DGRAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_stream) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_UNIX, + .type = SOCK_STREAM, + }, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_datagram) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .prot = { + .domain = AF_UNIX, + .type = SOCK_DGRAM, + }, +}; + +static void test_bind_and_connect(struct __test_metadata *const _metadata, + const struct service_fixture *const srv, + const bool deny_bind, const bool deny_connect) +{ + char buf = '\0'; + int inval_fd, bind_fd, client_fd, status, ret; + pid_t child; + + /* Starts invalid addrlen tests with bind. */ + inval_fd = socket_variant(srv); + ASSERT_LE(0, inval_fd) + { + TH_LOG("Failed to create socket: %s", strerror(errno)); + } + + /* Tries to bind with zero as addrlen. */ + EXPECT_EQ(-EINVAL, bind_variant_addrlen(inval_fd, srv, 0)); + + /* Tries to bind with too small addrlen. */ + EXPECT_EQ(-EINVAL, bind_variant_addrlen(inval_fd, srv, + get_addrlen(srv, true) - 1)); + + /* Tries to bind with minimal addrlen. */ + ret = bind_variant_addrlen(inval_fd, srv, get_addrlen(srv, true)); + if (deny_bind) { + EXPECT_EQ(-EACCES, ret); + } else { + EXPECT_EQ(0, ret) + { + TH_LOG("Failed to bind to socket: %s", strerror(errno)); + } + } + EXPECT_EQ(0, close(inval_fd)); + + /* Starts invalid addrlen tests with connect. */ + inval_fd = socket_variant(srv); + ASSERT_LE(0, inval_fd); + + /* Tries to connect with zero as addrlen. */ + EXPECT_EQ(-EINVAL, connect_variant_addrlen(inval_fd, srv, 0)); + + /* Tries to connect with too small addrlen. */ + EXPECT_EQ(-EINVAL, connect_variant_addrlen(inval_fd, srv, + get_addrlen(srv, true) - 1)); + + /* Tries to connect with minimal addrlen. */ + ret = connect_variant_addrlen(inval_fd, srv, get_addrlen(srv, true)); + if (srv->protocol.domain == AF_UNIX) { + EXPECT_EQ(-EINVAL, ret); + } else if (deny_connect) { + EXPECT_EQ(-EACCES, ret); + } else if (srv->protocol.type == SOCK_STREAM) { + /* No listening server, whatever the value of deny_bind. */ + EXPECT_EQ(-ECONNREFUSED, ret); + } else { + EXPECT_EQ(0, ret) + { + TH_LOG("Failed to connect to socket: %s", + strerror(errno)); + } + } + EXPECT_EQ(0, close(inval_fd)); + + /* Starts connection tests. 
*/ + bind_fd = socket_variant(srv); + ASSERT_LE(0, bind_fd); + + ret = bind_variant(bind_fd, srv); + if (deny_bind) { + EXPECT_EQ(-EACCES, ret); + } else { + EXPECT_EQ(0, ret); + + /* Creates a listening socket. */ + if (srv->protocol.type == SOCK_STREAM) + EXPECT_EQ(0, listen(bind_fd, backlog)); + } + + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + int connect_fd, ret; + + /* Closes listening socket for the child. */ + EXPECT_EQ(0, close(bind_fd)); + + /* Starts connection tests. */ + connect_fd = socket_variant(srv); + ASSERT_LE(0, connect_fd); + ret = connect_variant(connect_fd, srv); + if (deny_connect) { + EXPECT_EQ(-EACCES, ret); + } else if (deny_bind) { + /* No listening server. */ + EXPECT_EQ(-ECONNREFUSED, ret); + } else { + EXPECT_EQ(0, ret); + EXPECT_EQ(1, write(connect_fd, ".", 1)); + } + + EXPECT_EQ(0, close(connect_fd)); + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + + /* Accepts connection from the child. */ + client_fd = bind_fd; + if (!deny_bind && !deny_connect) { + if (srv->protocol.type == SOCK_STREAM) { + client_fd = accept(bind_fd, NULL, 0); + ASSERT_LE(0, client_fd); + } + + EXPECT_EQ(1, read(client_fd, &buf, 1)); + EXPECT_EQ('.', buf); + } + + EXPECT_EQ(child, waitpid(child, &status, 0)); + EXPECT_EQ(1, WIFEXITED(status)); + EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); + + /* Closes connection, if any. */ + if (client_fd != bind_fd) + EXPECT_LE(0, close(client_fd)); + + /* Closes listening socket. */ + EXPECT_EQ(0, close(bind_fd)); +} + +TEST_F(protocol, bind) +{ + if (variant->sandbox == TCP_SANDBOX) { + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr tcp_bind_connect_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv0.port, + }; + const struct landlock_net_port_attr tcp_connect_p1 = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv1.port, + }; + int ruleset_fd; + + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows connect and bind for the first port. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_connect_p0, 0)); + + /* Allows connect and denies bind for the second port. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_connect_p1, 0)); + + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + /* Binds a socket to the first port. */ + test_bind_and_connect(_metadata, &self->srv0, false, false); + + /* Binds a socket to the second port. */ + test_bind_and_connect(_metadata, &self->srv1, + is_restricted(&variant->prot, variant->sandbox), + false); + + /* Binds a socket to the third port. 
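+	 * No rule covers this port, so both bind and connect are expected
+	 * to fail with EACCES whenever the sandbox restricts this protocol.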
*/ + test_bind_and_connect(_metadata, &self->srv2, + is_restricted(&variant->prot, variant->sandbox), + is_restricted(&variant->prot, variant->sandbox)); +} + +TEST_F(protocol, connect) +{ + if (variant->sandbox == TCP_SANDBOX) { + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr tcp_bind_connect_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv0.port, + }; + const struct landlock_net_port_attr tcp_bind_p1 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->srv1.port, + }; + int ruleset_fd; + + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows connect and bind for the first port. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_connect_p0, 0)); + + /* Allows bind and denies connect for the second port. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_p1, 0)); + + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + test_bind_and_connect(_metadata, &self->srv0, false, false); + + test_bind_and_connect(_metadata, &self->srv1, false, + is_restricted(&variant->prot, variant->sandbox)); + + test_bind_and_connect(_metadata, &self->srv2, + is_restricted(&variant->prot, variant->sandbox), + is_restricted(&variant->prot, variant->sandbox)); +} + +TEST_F(protocol, bind_unspec) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP, + }; + const struct landlock_net_port_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->srv0.port, + }; + int bind_fd, ret; + + if (variant->sandbox == TCP_SANDBOX) { + const int ruleset_fd = landlock_create_ruleset( + &ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows bind. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind, 0)); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + bind_fd = socket_variant(&self->srv0); + ASSERT_LE(0, bind_fd); + + /* Allowed bind on AF_UNSPEC/INADDR_ANY. */ + ret = bind_variant(bind_fd, &self->unspec_any0); + if (variant->prot.domain == AF_INET) { + EXPECT_EQ(0, ret) + { + TH_LOG("Failed to bind to unspec/any socket: %s", + strerror(errno)); + } + } else { + EXPECT_EQ(-EINVAL, ret); + } + EXPECT_EQ(0, close(bind_fd)); + + if (variant->sandbox == TCP_SANDBOX) { + const int ruleset_fd = landlock_create_ruleset( + &ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Denies bind. */ + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + bind_fd = socket_variant(&self->srv0); + ASSERT_LE(0, bind_fd); + + /* Denied bind on AF_UNSPEC/INADDR_ANY. */ + ret = bind_variant(bind_fd, &self->unspec_any0); + if (variant->prot.domain == AF_INET) { + if (is_restricted(&variant->prot, variant->sandbox)) { + EXPECT_EQ(-EACCES, ret); + } else { + EXPECT_EQ(0, ret); + } + } else { + EXPECT_EQ(-EINVAL, ret); + } + EXPECT_EQ(0, close(bind_fd)); + + /* Checks bind with AF_UNSPEC and the loopback address. 
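+	 * Per inet_bind() behavior, an AF_UNSPEC bind is only accepted for
+	 * INADDR_ANY (it is then handled as AF_INET), so the IPv4 loopback
+	 * address is expected to fail with EAFNOSUPPORT, while an IPv6
+	 * socket rejects the family mismatch with EINVAL.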
*/ + bind_fd = socket_variant(&self->srv0); + ASSERT_LE(0, bind_fd); + ret = bind_variant(bind_fd, &self->unspec_srv0); + if (variant->prot.domain == AF_INET) { + EXPECT_EQ(-EAFNOSUPPORT, ret); + } else { + EXPECT_EQ(-EINVAL, ret) + { + TH_LOG("Wrong bind error: %s", strerror(errno)); + } + } + EXPECT_EQ(0, close(bind_fd)); +} + +TEST_F(protocol, connect_unspec) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr tcp_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv0.port, + }; + int bind_fd, client_fd, status; + pid_t child; + + /* Specific connection tests. */ + bind_fd = socket_variant(&self->srv0); + ASSERT_LE(0, bind_fd); + EXPECT_EQ(0, bind_variant(bind_fd, &self->srv0)); + if (self->srv0.protocol.type == SOCK_STREAM) + EXPECT_EQ(0, listen(bind_fd, backlog)); + + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + int connect_fd, ret; + + /* Closes listening socket for the child. */ + EXPECT_EQ(0, close(bind_fd)); + + connect_fd = socket_variant(&self->srv0); + ASSERT_LE(0, connect_fd); + EXPECT_EQ(0, connect_variant(connect_fd, &self->srv0)); + + /* Tries to connect again, or set peer. */ + ret = connect_variant(connect_fd, &self->srv0); + if (self->srv0.protocol.type == SOCK_STREAM) { + EXPECT_EQ(-EISCONN, ret); + } else { + EXPECT_EQ(0, ret); + } + + if (variant->sandbox == TCP_SANDBOX) { + const int ruleset_fd = landlock_create_ruleset( + &ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows connect. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_PORT, + &tcp_connect, 0)); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + /* Disconnects already connected socket, or set peer. */ + ret = connect_variant(connect_fd, &self->unspec_any0); + if (self->srv0.protocol.domain == AF_UNIX && + self->srv0.protocol.type == SOCK_STREAM) { + EXPECT_EQ(-EINVAL, ret); + } else { + EXPECT_EQ(0, ret); + } + + /* Tries to reconnect, or set peer. */ + ret = connect_variant(connect_fd, &self->srv0); + if (self->srv0.protocol.domain == AF_UNIX && + self->srv0.protocol.type == SOCK_STREAM) { + EXPECT_EQ(-EISCONN, ret); + } else { + EXPECT_EQ(0, ret); + } + + if (variant->sandbox == TCP_SANDBOX) { + const int ruleset_fd = landlock_create_ruleset( + &ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Denies connect. */ + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + ret = connect_variant(connect_fd, &self->unspec_any0); + if (self->srv0.protocol.domain == AF_UNIX && + self->srv0.protocol.type == SOCK_STREAM) { + EXPECT_EQ(-EINVAL, ret); + } else { + /* Always allowed to disconnect. */ + EXPECT_EQ(0, ret); + } + + EXPECT_EQ(0, close(connect_fd)); + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + + client_fd = bind_fd; + if (self->srv0.protocol.type == SOCK_STREAM) { + client_fd = accept(bind_fd, NULL, 0); + ASSERT_LE(0, client_fd); + } + + EXPECT_EQ(child, waitpid(child, &status, 0)); + EXPECT_EQ(1, WIFEXITED(status)); + EXPECT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); + + /* Closes connection, if any. */ + if (client_fd != bind_fd) + EXPECT_LE(0, close(client_fd)); + + /* Closes listening socket. 
*/ + EXPECT_EQ(0, close(bind_fd)); +} + +FIXTURE(ipv4) +{ + struct service_fixture srv0, srv1; +}; + +FIXTURE_VARIANT(ipv4) +{ + const enum sandbox_type sandbox; + const int type; +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(ipv4, no_sandbox_with_tcp) { + /* clang-format on */ + .sandbox = NO_SANDBOX, + .type = SOCK_STREAM, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(ipv4, tcp_sandbox_with_tcp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .type = SOCK_STREAM, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(ipv4, no_sandbox_with_udp) { + /* clang-format on */ + .sandbox = NO_SANDBOX, + .type = SOCK_DGRAM, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(ipv4, tcp_sandbox_with_udp) { + /* clang-format on */ + .sandbox = TCP_SANDBOX, + .type = SOCK_DGRAM, +}; + +FIXTURE_SETUP(ipv4) +{ + const struct protocol_variant prot = { + .domain = AF_INET, + .type = variant->type, + }; + + disable_caps(_metadata); + + set_service(&self->srv0, prot, 0); + set_service(&self->srv1, prot, 1); + + setup_loopback(_metadata); +}; + +FIXTURE_TEARDOWN(ipv4) +{ +} + +TEST_F(ipv4, from_unix_to_inet) +{ + int unix_stream_fd, unix_dgram_fd; + + if (variant->sandbox == TCP_SANDBOX) { + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr tcp_bind_connect_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv0.port, + }; + int ruleset_fd; + + /* Denies connect and bind to check errno value. */ + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows connect and bind for srv0. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_connect_p0, 0)); + + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + unix_stream_fd = socket(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0); + ASSERT_LE(0, unix_stream_fd); + + unix_dgram_fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0); + ASSERT_LE(0, unix_dgram_fd); + + /* Checks unix stream bind and connect for srv0. */ + EXPECT_EQ(-EINVAL, bind_variant(unix_stream_fd, &self->srv0)); + EXPECT_EQ(-EINVAL, connect_variant(unix_stream_fd, &self->srv0)); + + /* Checks unix stream bind and connect for srv1. */ + EXPECT_EQ(-EINVAL, bind_variant(unix_stream_fd, &self->srv1)) + { + TH_LOG("Wrong bind error: %s", strerror(errno)); + } + EXPECT_EQ(-EINVAL, connect_variant(unix_stream_fd, &self->srv1)); + + /* Checks unix datagram bind and connect for srv0. */ + EXPECT_EQ(-EINVAL, bind_variant(unix_dgram_fd, &self->srv0)); + EXPECT_EQ(-EINVAL, connect_variant(unix_dgram_fd, &self->srv0)); + + /* Checks unix datagram bind and connect for srv1. 
*/ + EXPECT_EQ(-EINVAL, bind_variant(unix_dgram_fd, &self->srv1)); + EXPECT_EQ(-EINVAL, connect_variant(unix_dgram_fd, &self->srv1)); +} + +FIXTURE(tcp_layers) +{ + struct service_fixture srv0, srv1; +}; + +FIXTURE_VARIANT(tcp_layers) +{ + const size_t num_layers; + const int domain; +}; + +FIXTURE_SETUP(tcp_layers) +{ + const struct protocol_variant prot = { + .domain = variant->domain, + .type = SOCK_STREAM, + }; + + disable_caps(_metadata); + + ASSERT_EQ(0, set_service(&self->srv0, prot, 0)); + ASSERT_EQ(0, set_service(&self->srv1, prot, 1)); + + setup_loopback(_metadata); +}; + +FIXTURE_TEARDOWN(tcp_layers) +{ +} + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, no_sandbox_with_ipv4) { + /* clang-format on */ + .domain = AF_INET, + .num_layers = 0, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, one_sandbox_with_ipv4) { + /* clang-format on */ + .domain = AF_INET, + .num_layers = 1, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, two_sandboxes_with_ipv4) { + /* clang-format on */ + .domain = AF_INET, + .num_layers = 2, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, three_sandboxes_with_ipv4) { + /* clang-format on */ + .domain = AF_INET, + .num_layers = 3, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, no_sandbox_with_ipv6) { + /* clang-format on */ + .domain = AF_INET6, + .num_layers = 0, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, one_sandbox_with_ipv6) { + /* clang-format on */ + .domain = AF_INET6, + .num_layers = 1, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, two_sandboxes_with_ipv6) { + /* clang-format on */ + .domain = AF_INET6, + .num_layers = 2, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(tcp_layers, three_sandboxes_with_ipv6) { + /* clang-format on */ + .domain = AF_INET6, + .num_layers = 3, +}; + +TEST_F(tcp_layers, ruleset_overlap) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->srv0.port, + }; + const struct landlock_net_port_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->srv0.port, + }; + + if (variant->num_layers >= 1) { + int ruleset_fd; + + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows bind. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind, 0)); + /* Also allows bind, but allows connect too. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_connect, 0)); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + if (variant->num_layers >= 2) { + int ruleset_fd; + + /* Creates another ruleset layer. */ + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Only allows bind. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind, 0)); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + if (variant->num_layers >= 3) { + int ruleset_fd; + + /* Creates another ruleset layer. */ + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Try to allow bind and connect. 
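+	 * Allowing connect here is a no-op: an extra layer can never grant
+	 * back an access right that an already-enforced layer (here, the
+	 * second one) denies.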
 */
+		ASSERT_EQ(0,
+			  landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+					    &tcp_bind_connect, 0));
+		enforce_ruleset(_metadata, ruleset_fd);
+		EXPECT_EQ(0, close(ruleset_fd));
+	}
+
+	/*
+	 * Connecting to the socket is forbidden because only one ruleset
+	 * layer allows connect.
+	 */
+	test_bind_and_connect(_metadata, &self->srv0, false,
+			      variant->num_layers >= 2);
+}
+
+TEST_F(tcp_layers, ruleset_expand)
+{
+	if (variant->num_layers >= 1) {
+		const struct landlock_ruleset_attr ruleset_attr = {
+			.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP,
+		};
+		/* Allows bind for srv0. */
+		const struct landlock_net_port_attr bind_srv0 = {
+			.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP,
+			.port = self->srv0.port,
+		};
+		int ruleset_fd;
+
+		ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+						     sizeof(ruleset_attr), 0);
+		ASSERT_LE(0, ruleset_fd);
+		ASSERT_EQ(0,
+			  landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+					    &bind_srv0, 0));
+		enforce_ruleset(_metadata, ruleset_fd);
+		EXPECT_EQ(0, close(ruleset_fd));
+	}
+
+	if (variant->num_layers >= 2) {
+		/* Expands network mask with connect action. */
+		const struct landlock_ruleset_attr ruleset_attr = {
+			.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+					      LANDLOCK_ACCESS_NET_CONNECT_TCP,
+		};
+		/* Allows bind for srv0 and connect to srv0. */
+		const struct landlock_net_port_attr tcp_bind_connect_p0 = {
+			.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP |
+					  LANDLOCK_ACCESS_NET_CONNECT_TCP,
+			.port = self->srv0.port,
+		};
+		/* Try to allow bind for srv1. */
+		const struct landlock_net_port_attr tcp_bind_p1 = {
+			.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP,
+			.port = self->srv1.port,
+		};
+		int ruleset_fd;
+
+		ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+						     sizeof(ruleset_attr), 0);
+		ASSERT_LE(0, ruleset_fd);
+		ASSERT_EQ(0,
+			  landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+					    &tcp_bind_connect_p0, 0));
+		ASSERT_EQ(0,
+			  landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+					    &tcp_bind_p1, 0));
+		enforce_ruleset(_metadata, ruleset_fd);
+		EXPECT_EQ(0, close(ruleset_fd));
+	}
+
+	if (variant->num_layers >= 3) {
+		const struct landlock_ruleset_attr ruleset_attr = {
+			.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+					      LANDLOCK_ACCESS_NET_CONNECT_TCP,
+		};
+		/* Allows bind for srv0, without a connect rule.
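+		 * Because this layer handles connect but grants it to no
+		 * port, connecting to srv0 is denied from three layers on,
+		 * as checked by the first test_bind_and_connect() call below.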
*/ + const struct landlock_net_port_attr tcp_bind_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->srv0.port, + }; + int ruleset_fd; + + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_p0, 0)); + enforce_ruleset(_metadata, ruleset_fd); + EXPECT_EQ(0, close(ruleset_fd)); + } + + test_bind_and_connect(_metadata, &self->srv0, false, + variant->num_layers >= 3); + + test_bind_and_connect(_metadata, &self->srv1, variant->num_layers >= 1, + variant->num_layers >= 2); +} + +/* clang-format off */ +FIXTURE(mini) {}; +/* clang-format on */ + +FIXTURE_SETUP(mini) +{ + disable_caps(_metadata); + + setup_loopback(_metadata); +}; + +FIXTURE_TEARDOWN(mini) +{ +} + +/* clang-format off */ + +#define ACCESS_LAST LANDLOCK_ACCESS_NET_CONNECT_TCP + +#define ACCESS_ALL ( \ + LANDLOCK_ACCESS_NET_BIND_TCP | \ + LANDLOCK_ACCESS_NET_CONNECT_TCP) + +/* clang-format on */ + +TEST_F(mini, network_access_rights) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = ACCESS_ALL, + }; + struct landlock_net_port_attr net_port = { + .port = sock_port_start, + }; + int ruleset_fd; + __u64 access; + + ruleset_fd = + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + for (access = 1; access <= ACCESS_LAST; access <<= 1) { + net_port.allowed_access = access; + EXPECT_EQ(0, + landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &net_port, 0)) + { + TH_LOG("Failed to add rule with access 0x%llx: %s", + access, strerror(errno)); + } + } + EXPECT_EQ(0, close(ruleset_fd)); +} + +/* Checks invalid attribute, out of landlock network access range. */ +TEST_F(mini, unknown_access_rights) +{ + __u64 access_mask; + + for (access_mask = 1ULL << 63; access_mask != ACCESS_LAST; + access_mask >>= 1) { + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = access_mask, + }; + + EXPECT_EQ(-1, landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0)); + EXPECT_EQ(EINVAL, errno); + } +} + +TEST_F(mini, inval) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP + }; + const struct landlock_net_port_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = sock_port_start, + }; + const struct landlock_net_port_attr tcp_denied = { + .allowed_access = 0, + .port = sock_port_start, + }; + const struct landlock_net_port_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = sock_port_start, + }; + int ruleset_fd; + + ruleset_fd = + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Checks unhandled allowed_access. */ + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind_connect, 0)); + EXPECT_EQ(EINVAL, errno); + + /* Checks zero access value. */ + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_denied, 0)); + EXPECT_EQ(ENOMSG, errno); + + /* Adds with legitimate values. 
*/ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &tcp_bind, 0)); +} + +TEST_F(mini, tcp_port_overflow) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr port_max_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX, + }; + const struct landlock_net_port_attr port_max_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = UINT16_MAX, + }; + const struct landlock_net_port_attr port_overflow1 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX + 1, + }; + const struct landlock_net_port_attr port_overflow2 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX + 2, + }; + const struct landlock_net_port_attr port_overflow3 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT32_MAX + 1UL, + }; + const struct landlock_net_port_attr port_overflow4 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT32_MAX + 2UL, + }; + const struct protocol_variant ipv4_tcp = { + .domain = AF_INET, + .type = SOCK_STREAM, + }; + struct service_fixture srv_denied, srv_max_allowed; + int ruleset_fd; + + ASSERT_EQ(0, set_service(&srv_denied, ipv4_tcp, 0)); + + /* Be careful to avoid port inconsistencies. */ + srv_max_allowed = srv_denied; + srv_max_allowed.port = port_max_bind.port; + srv_max_allowed.ipv4_addr.sin_port = htons(port_max_bind.port); + + ruleset_fd = + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_max_bind, 0)); + + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_overflow1, 0)); + EXPECT_EQ(EINVAL, errno); + + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_overflow2, 0)); + EXPECT_EQ(EINVAL, errno); + + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_overflow3, 0)); + EXPECT_EQ(EINVAL, errno); + + /* Interleaves with invalid rule additions. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_max_connect, 0)); + + EXPECT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT, + &port_overflow4, 0)); + EXPECT_EQ(EINVAL, errno); + + enforce_ruleset(_metadata, ruleset_fd); + + test_bind_and_connect(_metadata, &srv_denied, true, true); + test_bind_and_connect(_metadata, &srv_max_allowed, false, false); +} + +FIXTURE(ipv4_tcp) +{ + struct service_fixture srv0, srv1; +}; + +FIXTURE_SETUP(ipv4_tcp) +{ + const struct protocol_variant ipv4_tcp = { + .domain = AF_INET, + .type = SOCK_STREAM, + }; + + disable_caps(_metadata); + + ASSERT_EQ(0, set_service(&self->srv0, ipv4_tcp, 0)); + ASSERT_EQ(0, set_service(&self->srv1, ipv4_tcp, 1)); + + setup_loopback(_metadata); +}; + +FIXTURE_TEARDOWN(ipv4_tcp) +{ +} + +TEST_F(ipv4_tcp, port_endianness) +{ + const struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + const struct landlock_net_port_attr bind_host_endian_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + /* Host port format. */ + .port = self->srv0.port, + }; + const struct landlock_net_port_attr connect_big_endian_p0 = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + /* Big endian port format. 
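+	 * Worked example: srv0 uses port 1024 (0x0400), and htons(1024)
+	 * evaluates to 4 on little-endian CPUs.  This rule therefore covers
+	 * port 4 instead of 1024 there, and the connect check below is
+	 * denied; on big-endian CPUs htons() is a no-op and the rule
+	 * matches as intended.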
 */
+		.port = htons(self->srv0.port),
+	};
+	const struct landlock_net_port_attr bind_connect_host_endian_p1 = {
+		.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP |
+				  LANDLOCK_ACCESS_NET_CONNECT_TCP,
+		/* Host port format. */
+		.port = self->srv1.port,
+	};
+	const unsigned int one = 1;
+	const char little_endian = *(const char *)&one;
+	int ruleset_fd;
+
+	ruleset_fd =
+		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+	ASSERT_LE(0, ruleset_fd);
+	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+				       &bind_host_endian_p0, 0));
+	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+				       &connect_big_endian_p0, 0));
+	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+				       &bind_connect_host_endian_p1, 0));
+	enforce_ruleset(_metadata, ruleset_fd);
+
+	/* No restriction for big endian CPUs. */
+	test_bind_and_connect(_metadata, &self->srv0, false, little_endian);
+
+	/* No restriction for any CPU. */
+	test_bind_and_connect(_metadata, &self->srv1, false, false);
+}
+
+TEST_F(ipv4_tcp, with_fs)
+{
+	const struct landlock_ruleset_attr ruleset_attr_fs_net = {
+		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_DIR,
+		.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP,
+	};
+	struct landlock_path_beneath_attr path_beneath = {
+		.allowed_access = LANDLOCK_ACCESS_FS_READ_DIR,
+		.parent_fd = -1,
+	};
+	struct landlock_net_port_attr tcp_bind = {
+		.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP,
+		.port = self->srv0.port,
+	};
+	int ruleset_fd, bind_fd, dir_fd;
+
+	/* Creates ruleset both for filesystem and network access. */
+	ruleset_fd = landlock_create_ruleset(&ruleset_attr_fs_net,
+					     sizeof(ruleset_attr_fs_net), 0);
+	ASSERT_LE(0, ruleset_fd);
+
+	/* Adds a filesystem rule. */
+	path_beneath.parent_fd = open("/dev", O_PATH | O_DIRECTORY | O_CLOEXEC);
+	ASSERT_LE(0, path_beneath.parent_fd);
+	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+				       &path_beneath, 0));
+	EXPECT_EQ(0, close(path_beneath.parent_fd));
+
+	/* Adds a network rule. */
+	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+				       &tcp_bind, 0));
+
+	enforce_ruleset(_metadata, ruleset_fd);
+	EXPECT_EQ(0, close(ruleset_fd));
+
+	/* Tests file access. */
+	dir_fd = open("/dev", O_RDONLY);
+	EXPECT_LE(0, dir_fd);
+	EXPECT_EQ(0, close(dir_fd));
+
+	dir_fd = open("/", O_RDONLY);
+	EXPECT_EQ(-1, dir_fd);
+	EXPECT_EQ(EACCES, errno);
+
+	/* Tests port binding.
+ bind_fd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
+ ASSERT_LE(0, bind_fd);
+ EXPECT_EQ(0, bind_variant(bind_fd, &self->srv0));
+ EXPECT_EQ(0, close(bind_fd));
+
+ bind_fd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
+ ASSERT_LE(0, bind_fd);
+ EXPECT_EQ(-EACCES, bind_variant(bind_fd, &self->srv1));
+}
+
+FIXTURE(port_specific)
+{
+ struct service_fixture srv0;
+};
+
+FIXTURE_VARIANT(port_specific)
+{
+ const enum sandbox_type sandbox;
+ const struct protocol_variant prot;
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(port_specific, no_sandbox_with_ipv4) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
+ },
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(port_specific, sandbox_with_ipv4) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
+ },
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(port_specific, no_sandbox_with_ipv6) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
+ },
+};
+
+/* clang-format off */
+FIXTURE_VARIANT_ADD(port_specific, sandbox_with_ipv6) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
+ },
+};
+
+FIXTURE_SETUP(port_specific)
+{
+ disable_caps(_metadata);
+
+ ASSERT_EQ(0, set_service(&self->srv0, variant->prot, 0));
+
+ setup_loopback(_metadata);
+};
+
+FIXTURE_TEARDOWN(port_specific)
+{
+}
+
+TEST_F(port_specific, bind_connect_zero)
+{
+ int bind_fd, connect_fd, ret;
+ uint16_t port;
+
+ /* Adds a rule layer with bind and connect actions. */
+ if (variant->sandbox == TCP_SANDBOX) {
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+ LANDLOCK_ACCESS_NET_CONNECT_TCP
+ };
+ const struct landlock_net_port_attr tcp_bind_connect_zero = {
+ .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP |
+ LANDLOCK_ACCESS_NET_CONNECT_TCP,
+ .port = 0,
+ };
+ int ruleset_fd;
+
+ ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+ sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+
+ /* Checks that a zero port value is accepted for bind and connect rules. */
+ EXPECT_EQ(0,
+ landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+ &tcp_bind_connect_zero, 0));
+
+ enforce_ruleset(_metadata, ruleset_fd);
+ EXPECT_EQ(0, close(ruleset_fd));
+ }
+
+ bind_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, bind_fd);
+
+ connect_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, connect_fd);
+
+ /* Sets address port to 0 for both protocol families. */
+ set_port(&self->srv0, 0);
+ /*
+ * Binds on port 0, which selects a random port within
+ * ip_local_port_range.
+ */
+ ret = bind_variant(bind_fd, &self->srv0);
+ EXPECT_EQ(0, ret);
+
+ EXPECT_EQ(0, listen(bind_fd, backlog));
+
+ /* Connects on port 0. */
+ ret = connect_variant(connect_fd, &self->srv0);
+ EXPECT_EQ(-ECONNREFUSED, ret);
+
+ /* Gets the bound port and sets it for both protocol families. */
+ port = get_binded_port(bind_fd, &variant->prot);
+ EXPECT_NE(0, port);
+ set_port(&self->srv0, port);
+ /* Connects on the bound port. */
+ ret = connect_variant(connect_fd, &self->srv0);
+ if (is_restricted(&variant->prot, variant->sandbox)) {
+ /* Denied by Landlock. */
+ EXPECT_EQ(-EACCES, ret);
+ } else {
+ EXPECT_EQ(0, ret);
+ }
+
+ EXPECT_EQ(0, close(connect_fd));
+ EXPECT_EQ(0, close(bind_fd));
+}
+
+TEST_F(port_specific, bind_connect_1023)
+{
+ int bind_fd, connect_fd, ret;
+
+ /* Adds a rule layer with bind and connect actions. */
+ if (variant->sandbox == TCP_SANDBOX) {
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
+ LANDLOCK_ACCESS_NET_CONNECT_TCP
+ };
+ /* A rule with a port value less than 1024. */
+ const struct landlock_net_port_attr tcp_bind_connect_low_range = {
+ .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP |
+ LANDLOCK_ACCESS_NET_CONNECT_TCP,
+ .port = 1023,
+ };
+ /* A rule with port 1024. */
+ const struct landlock_net_port_attr tcp_bind_connect = {
+ .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP |
+ LANDLOCK_ACCESS_NET_CONNECT_TCP,
+ .port = 1024,
+ };
+ int ruleset_fd;
+
+ ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+ sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+
+ ASSERT_EQ(0,
+ landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+ &tcp_bind_connect_low_range, 0));
+ ASSERT_EQ(0,
+ landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_PORT,
+ &tcp_bind_connect, 0));
+
+ enforce_ruleset(_metadata, ruleset_fd);
+ EXPECT_EQ(0, close(ruleset_fd));
+ }
+
+ bind_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, bind_fd);
+
+ connect_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, connect_fd);
+
+ /* Sets address port to 1023 for both protocol families. */
+ set_port(&self->srv0, 1023);
+ /* Binds on the privileged port 1023 without CAP_NET_BIND_SERVICE. */
+ ret = bind_variant(bind_fd, &self->srv0);
+ /* Denied by the system, not by Landlock. */
+ EXPECT_EQ(-EACCES, ret);
+
+ /* Binds on port 1023 with CAP_NET_BIND_SERVICE. */
+ set_cap(_metadata, CAP_NET_BIND_SERVICE);
+ ret = bind_variant(bind_fd, &self->srv0);
+ clear_cap(_metadata, CAP_NET_BIND_SERVICE);
+ EXPECT_EQ(0, ret);
+ EXPECT_EQ(0, listen(bind_fd, backlog));
+
+ /* Connects on the bound port 1023. */
+ ret = connect_variant(connect_fd, &self->srv0);
+ EXPECT_EQ(0, ret);
+
+ EXPECT_EQ(0, close(connect_fd));
+ EXPECT_EQ(0, close(bind_fd));
+
+ bind_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, bind_fd);
+
+ connect_fd = socket_variant(&self->srv0);
+ ASSERT_LE(0, connect_fd);
+
+ /* Sets address port to 1024 for both protocol families. */
+ set_port(&self->srv0, 1024);
+ /* Binds on port 1024. */
+ ret = bind_variant(bind_fd, &self->srv0);
+ EXPECT_EQ(0, ret);
+ EXPECT_EQ(0, listen(bind_fd, backlog));
+
+ /* Connects on the bound port 1024. */
+ ret = connect_variant(connect_fd, &self->srv0);
+ EXPECT_EQ(0, ret);
+
+ EXPECT_EQ(0, close(connect_fd));
+ EXPECT_EQ(0, close(bind_fd));
+}
+
+TEST_HARNESS_MAIN