-rw-r--r--  Documentation/device-mapper/dm-log.txt  2
-rw-r--r--  Documentation/device-mapper/persistent-data.txt  84
-rw-r--r--  Documentation/device-mapper/thin-provisioning.txt  285
-rw-r--r--  drivers/md/Kconfig  36
-rw-r--r--  drivers/md/Makefile  4
-rw-r--r--  drivers/md/dm-bufio.c  1699
-rw-r--r--  drivers/md/dm-bufio.h  112
-rw-r--r--  drivers/md/dm-ioctl.c  11
-rw-r--r--  drivers/md/dm-kcopyd.c  31
-rw-r--r--  drivers/md/dm-log-userspace-base.c  37
-rw-r--r--  drivers/md/dm-raid.c  48
-rw-r--r--  drivers/md/dm-table.c  73
-rw-r--r--  drivers/md/dm-thin-metadata.c  1391
-rw-r--r--  drivers/md/dm-thin-metadata.h  156
-rw-r--r--  drivers/md/dm-thin.c  2428
-rw-r--r--  drivers/md/dm.c  21
-rw-r--r--  drivers/md/dm.h  2
-rw-r--r--  drivers/md/persistent-data/Kconfig  8
-rw-r--r--  drivers/md/persistent-data/Makefile  11
-rw-r--r--  drivers/md/persistent-data/dm-block-manager.c  620
-rw-r--r--  drivers/md/persistent-data/dm-block-manager.h  123
-rw-r--r--  drivers/md/persistent-data/dm-btree-internal.h  137
-rw-r--r--  drivers/md/persistent-data/dm-btree-remove.c  566
-rw-r--r--  drivers/md/persistent-data/dm-btree-spine.c  244
-rw-r--r--  drivers/md/persistent-data/dm-btree.c  805
-rw-r--r--  drivers/md/persistent-data/dm-btree.h  145
-rw-r--r--  drivers/md/persistent-data/dm-persistent-data-internal.h  19
-rw-r--r--  drivers/md/persistent-data/dm-space-map-checker.c  437
-rw-r--r--  drivers/md/persistent-data/dm-space-map-checker.h  26
-rw-r--r--  drivers/md/persistent-data/dm-space-map-common.c  705
-rw-r--r--  drivers/md/persistent-data/dm-space-map-common.h  126
-rw-r--r--  drivers/md/persistent-data/dm-space-map-disk.c  335
-rw-r--r--  drivers/md/persistent-data/dm-space-map-disk.h  25
-rw-r--r--  drivers/md/persistent-data/dm-space-map-metadata.c  596
-rw-r--r--  drivers/md/persistent-data/dm-space-map-metadata.h  33
-rw-r--r--  drivers/md/persistent-data/dm-space-map.h  134
-rw-r--r--  drivers/md/persistent-data/dm-transaction-manager.c  400
-rw-r--r--  drivers/md/persistent-data/dm-transaction-manager.h  130
-rw-r--r--  include/linux/device-mapper.h  45
-rw-r--r--  include/linux/dm-ioctl.h  4
-rw-r--r--  include/linux/dm-kcopyd.h  4
-rw-r--r--  include/linux/dm-log-userspace.h  18
42 files changed, 12079 insertions, 37 deletions
diff --git a/Documentation/device-mapper/dm-log.txt b/Documentation/device-mapper/dm-log.txt
index 994dd75475a6..c155ac569c44 100644
--- a/Documentation/device-mapper/dm-log.txt
+++ b/Documentation/device-mapper/dm-log.txt
@@ -48,7 +48,7 @@ kernel and userspace, 'connector' is used as the interface for
communication.
There are currently two userspace log implementations that leverage this
-framework - "clustered_disk" and "clustered_core". These implementations
+framework - "clustered-disk" and "clustered-core". These implementations
provide a cluster-coherent log for shared-storage. Device-mapper mirroring
can be used in a shared-storage environment when the cluster log implementations
are employed.
diff --git a/Documentation/device-mapper/persistent-data.txt b/Documentation/device-mapper/persistent-data.txt
new file mode 100644
index 000000000000..0e5df9b04ad2
--- /dev/null
+++ b/Documentation/device-mapper/persistent-data.txt
@@ -0,0 +1,84 @@
+Introduction
+============
+
+The more-sophisticated device-mapper targets require complex metadata
+that is managed in kernel. In late 2010 we were seeing that various
+different targets were rolling their own data structures, for example:
+
+- Mikulas Patocka's multisnap implementation
+- Heinz Mauelshagen's thin provisioning target
+- Another btree-based caching target posted to dm-devel
+- Another multi-snapshot target based on a design of Daniel Phillips
+
+Maintaining these data structures takes a lot of work, so if possible
+we'd like to reduce the number.
+
+The persistent-data library is an attempt to provide a re-usable
+framework for people who want to store metadata in device-mapper
+targets. It's currently used by the thin-provisioning target and an
+upcoming hierarchical storage target.
+
+Overview
+========
+
+The main documentation is in the header files which can all be found
+under drivers/md/persistent-data.
+
+The block manager
+-----------------
+
+dm-block-manager.[hc]
+
+This provides access to the data on disk in fixed-sized blocks. There
+is a read/write locking interface to prevent concurrent accesses, and
+data that is being used is kept in the cache.
+
+Clients of persistent-data are unlikely to use this directly.
+
+The transaction manager
+-----------------------
+
+dm-transaction-manager.[hc]
+
+This restricts access to blocks and enforces copy-on-write semantics.
+The only way you can get hold of a writable block through the
+transaction manager is by shadowing an existing block (i.e. doing
+copy-on-write) or allocating a fresh one. Shadowing is elided within
+the same transaction so performance is reasonable. The commit method
+ensures that all data is flushed before it writes the superblock.
+On power failure your metadata will be as it was when last committed.
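+
+As a rough sketch of the usage pattern (the names follow
+dm-transaction-manager.h; treat this as illustrative rather than a
+definitive recipe):
+
+  struct dm_block *b;
+  int inc_children;
+
+  /* Copy-on-write: get a writable shadow of block 'orig'. */
+  r = dm_tm_shadow_block(tm, orig, &validator, &b, &inc_children);
+  /* ... modify dm_block_data(b) ... */
+  dm_tm_unlock(tm, b);
+
+  /* Flush all outstanding data, then write the superblock. */
+  r = dm_tm_commit(tm, superblock);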
+
+The Space Maps
+--------------
+
+dm-space-map.h
+dm-space-map-metadata.[hc]
+dm-space-map-disk.[hc]
+
+On-disk data structures that keep track of reference counts of blocks.
+Also acts as the allocator of new blocks. Currently two
+implementations: a simpler one for managing blocks on a different
+device (eg. thinly-provisioned data blocks); and one for managing
+the metadata space. The latter is complicated by the need to store
+its own data within the space it's managing.
+
+The data structures
+-------------------
+
+dm-btree.[hc]
+dm-btree-remove.c
+dm-btree-spine.c
+dm-btree-internal.h
+
+Currently there is only one data structure, a hierarchical btree.
+There are plans to add more. For example, something with an
+array-like interface would see a lot of use.
+
+The btree is 'hierarchical' in that you can define it to be composed
+of nested btrees, and take multiple keys. For example, the
+thin-provisioning target uses a btree with two levels of nesting.
+The first maps a device id to a mapping tree, and that in turn maps a
+virtual block to a physical block.
+
+Values stored in the btrees can have arbitrary size. Keys are always
+64bits, although nesting allows you to use multiple keys.
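+
+For example, with the two-level mapping described above, a lookup
+passes both keys in one call (a sketch using the interface in
+dm-btree.h; the variable names are illustrative):
+
+  u64 keys[2] = { dev_id, virtual_block };
+  __le64 phys_block;
+
+  r = dm_btree_lookup(&info, mapping_root, keys, &phys_block);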
diff --git a/Documentation/device-mapper/thin-provisioning.txt b/Documentation/device-mapper/thin-provisioning.txt
new file mode 100644
index 000000000000..801d9d1cf82b
--- /dev/null
+++ b/Documentation/device-mapper/thin-provisioning.txt
@@ -0,0 +1,285 @@
+Introduction
+============
+
+This document describes a collection of device-mapper targets that
+between them implement thin-provisioning and snapshots.
+
+The main highlight of this implementation, compared to the previous
+implementation of snapshots, is that it allows many virtual devices to
+be stored on the same data volume. This simplifies administration and
+allows the sharing of data between volumes, thus reducing disk usage.
+
+Another significant feature is support for an arbitrary depth of
+recursive snapshots (snapshots of snapshots of snapshots ...). The
+previous implementation of snapshots did this by chaining together
+lookup tables, and so performance was O(depth). This new
+implementation uses a single data structure to avoid this degradation
+with depth. Fragmentation may still be an issue, however, in some
+scenarios.
+
+Metadata is stored on a separate device from data, giving the
+administrator some freedom, for example to:
+
+- Improve metadata resilience by storing metadata on a mirrored volume
+ but data on a non-mirrored one.
+
+- Improve performance by storing the metadata on SSD.
+
+Status
+======
+
+These targets are very much still in the EXPERIMENTAL state. Please
+do not yet rely on them in production. But do experiment and offer us
+feedback. Different use cases will have different performance
+characteristics, for example due to fragmentation of the data volume.
+
+If you find this software is not performing as expected please mail
+dm-devel@redhat.com with details and we'll try our best to improve
+things for you.
+
+Userspace tools for checking and repairing the metadata are under
+development.
+
+Cookbook
+========
+
+This section describes some quick recipes for using thin provisioning.
+They use the dmsetup program to control the device-mapper driver
+directly. End users will be advised to use a higher-level volume
+manager such as LVM2 once support has been added.
+
+Pool device
+-----------
+
+The pool device ties together the metadata volume and the data volume.
+It maps I/O linearly to the data volume and updates the metadata via
+two mechanisms:
+
+- Function calls from the thin targets
+
+- Device-mapper 'messages' from userspace which control the creation of new
+ virtual devices amongst other things.
+
+Setting up a fresh pool device
+------------------------------
+
+Setting up a pool device requires a valid metadata device, and a
+data device. If you do not have an existing metadata device you can
+make one by zeroing the first 4k to indicate empty metadata.
+
+ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
+
+The amount of metadata you need will vary according to how many blocks
+are shared between thin devices (i.e. through snapshots). If you have
+less sharing than average you'll need a larger-than-average metadata device.
+
+As a guide, we suggest you calculate the number of bytes to use in the
+metadata device as 48 * $data_dev_size / $data_block_size but round it up
+to 2MB if the answer is smaller. The largest size supported is 16GB.
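+
+As a rough worked example: a 1TiB data device divided into 512KiB
+blocks holds about 2 million blocks, so the guide above suggests
+roughly 48 * 2M = 96MiB of metadata.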
+
+If you're creating large numbers of snapshots which are recording large
+amounts of change, you may find you need to increase this.
+
+Reloading a pool table
+----------------------
+
+You may reload a pool's table; indeed, this is how the pool is resized
+if it runs out of space. (N.B. While specifying a different metadata
+device when reloading is not forbidden at the moment, things will go
+wrong if it does not route I/O to exactly the same on-disk location as
+previously.)
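+
+For example (a sketch; the device size is illustrative):
+
+  dmsetup suspend pool
+  dmsetup reload pool --table "0 41943040 thin-pool $metadata_dev \
+    $data_dev $data_block_size $low_water_mark"
+  dmsetup resume pool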
+
+Using an existing pool device
+-----------------------------
+
+ dmsetup create pool \
+ --table "0 20971520 thin-pool $metadata_dev $data_dev \
+ $data_block_size $low_water_mark"
+
+$data_block_size gives the smallest unit of disk space that can be
+allocated at a time expressed in units of 512-byte sectors. People
+primarily interested in thin provisioning may want to use a value such
+as 1024 (512KB). People doing lots of snapshotting may want a smaller value
+such as 128 (64KB). If you are not zeroing newly-allocated data,
+a larger $data_block_size in the region of 256000 (128MB) is suggested.
+$data_block_size must be the same for the lifetime of the
+metadata device.
+
+$low_water_mark is expressed in blocks of size $data_block_size. If
+free space on the data device drops below this level then a dm event
+will be triggered which a userspace daemon should catch allowing it to
+extend the pool device. Only one such event will be sent.
+Resuming a device with a new table itself triggers an event so the
+userspace daemon can use this to detect a situation where a new table
+already exceeds the threshold.
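+
+A userspace daemon can block until the next event with, for example,
+"dmsetup wait pool", then extend the data device and reload the pool
+table as described above.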
+
+Thin provisioning
+-----------------
+
+i) Creating a new thinly-provisioned volume.
+
+ To create a new thinly-provisioned volume you must send a message to an
+ active pool device, /dev/mapper/pool in this example.
+
+ dmsetup message /dev/mapper/pool 0 "create_thin 0"
+
+ Here '0' is an identifier for the volume, a 24-bit number. It's up
+ to the caller to allocate and manage these identifiers. If the
+ identifier is already in use, the message will fail with -EEXIST.
+
+ii) Using a thinly-provisioned volume.
+
+ Thinly-provisioned volumes are activated using the 'thin' target:
+
+ dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"
+
+ The last parameter is the identifier for the thinp device.
+
+Internal snapshots
+------------------
+
+i) Creating an internal snapshot.
+
+ Snapshots are created with another message to the pool.
+
+ N.B. If the origin device that you wish to snapshot is active, you
+ must suspend it before creating the snapshot to avoid corruption.
+ This is NOT enforced at the moment, so please be careful!
+
+ dmsetup suspend /dev/mapper/thin
+ dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
+ dmsetup resume /dev/mapper/thin
+
+ Here '1' is the identifier for the volume, a 24-bit number. '0' is the
+ identifier for the origin device.
+
+ii) Using an internal snapshot.
+
+ Once created, the user doesn't have to worry about any connection
+ between the origin and the snapshot. Indeed the snapshot is no
+ different from any other thinly-provisioned device and can be
+ snapshotted itself via the same method. It's perfectly legal to
+ have only one of them active, and there's no ordering requirement on
+ activating or removing them both. (This differs from conventional
+ device-mapper snapshots.)
+
+ Activate it exactly the same way as any other thinly-provisioned volume:
+
+ dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"
+
+Deactivation
+------------
+
+All devices using a pool must be deactivated before the pool itself
+can be.
+
+ dmsetup remove thin
+ dmsetup remove snap
+ dmsetup remove pool
+
+Reference
+=========
+
+'thin-pool' target
+------------------
+
+i) Constructor
+
+ thin-pool <metadata dev> <data dev> <data block size (sectors)> \
+ <low water mark (blocks)> [<number of feature args> [<arg>]*]
+
+ Optional feature arguments:
+ - 'skip_block_zeroing': skips the zeroing of newly-provisioned blocks.
+
+ Data block size must be between 64KB (128 sectors) and 1GB
+ (2097152 sectors) inclusive.
+
+
+ii) Status
+
+ <transaction id> <used metadata blocks>/<total metadata blocks>
+ <used data blocks>/<total data blocks> <held metadata root>
+
+
+ transaction id:
+ A 64-bit number used by userspace to help synchronise with metadata
+ from volume managers.
+
+ used data blocks / total data blocks:
+ If the number of free blocks drops below the pool's low water mark a
+ dm event will be sent to userspace. This event is edge-triggered and
+ it will occur only once after each resume so volume manager writers
+ should register for the event and then check the target's status.
+
+ held metadata root:
+ The location, in sectors, of the metadata root that has been
+ 'held' for userspace read access. '-' indicates there is no
+ held root. This feature is not yet implemented so '-' is
+ always returned.
+
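+  A status line from a new, largely-unused pool might look like
+  this (the values are illustrative):
+
+  0 19/4096 0/2621440 -
+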
+iii) Messages
+
+ create_thin <dev id>
+
+ Create a new thinly-provisioned device.
+ <dev id> is an arbitrary unique 24-bit identifier chosen by
+ the caller.
+
+ create_snap <dev id> <origin id>
+
+ Create a new snapshot of another thinly-provisioned device.
+ <dev id> is an arbitrary unique 24-bit identifier chosen by
+ the caller.
+ <origin id> is the identifier of the thinly-provisioned device
+ of which the new device will be a snapshot.
+
+ delete <dev id>
+
+ Deletes a thin device. Irreversible.
+
+ trim <dev id> <new size in sectors>
+
+ Delete mappings from the end of a thin device. Irreversible.
+ You might want to use this if you're reducing the size of
+ your thinly-provisioned device. In many cases, due to the
+ sharing of blocks between devices, it is not possible to
+ determine in advance how much space 'trim' will release. (In
+ future a userspace tool might be able to perform this
+ calculation.)
+
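+  For example, to trim device 0 down to 1GiB (2097152 sectors):
+
+  dmsetup message /dev/mapper/pool 0 "trim 0 2097152"
+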
+ set_transaction_id <current id> <new id>
+
+ Userland volume managers, such as LVM, need a way to
+ synchronise their external metadata with the internal metadata of the
+ pool target. The thin-pool target offers to store an
+ arbitrary 64-bit transaction id and return it on the target's
+ status line. To avoid races you must provide what you think
+ the current transaction id is when you change it with this
+ compare-and-swap message.
+
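+  For example, moving from transaction id 0 to 1:
+
+  dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"
+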
+'thin' target
+-------------
+
+i) Constructor
+
+ thin <pool dev> <dev id>
+
+ pool dev:
+ the thin-pool device, e.g. /dev/mapper/my_pool or 253:0
+
+ dev id:
+ the internal device identifier of the device to be
+ activated.
+
+The pool doesn't store any size against the thin devices. If you
+load a thin target that is smaller than you've been using previously,
+then you'll have no access to blocks mapped beyond the end. If you
+load a target that is bigger than before, then extra blocks will be
+provisioned as and when needed.
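+
+For example, to grow a thin device, load a larger table and resume
+(a sketch; sizes illustrative):
+
+  dmsetup suspend thin
+  dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
+  dmsetup resume thin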
+
+If you wish to reduce the size of your thin device and potentially
+regain some space then send the 'trim' message to the pool.
+
+ii) Status
+
+ <nr mapped sectors> <highest mapped sector>
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index f75a66e7d312..faa4741df6d3 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -208,6 +208,16 @@ config DM_DEBUG
If unsure, say N.
+config DM_BUFIO
+ tristate
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ ---help---
+ This interface allows you to do buffered I/O on a device and acts
+ as a cache, holding recently-read blocks in memory and performing
+ delayed writes.
+
+source "drivers/md/persistent-data/Kconfig"
+
config DM_CRYPT
tristate "Crypt target support"
depends on BLK_DEV_DM
@@ -233,6 +243,32 @@ config DM_SNAPSHOT
---help---
Allow volume managers to take writable snapshots of a device.
+config DM_THIN_PROVISIONING
+ tristate "Thin provisioning target (EXPERIMENTAL)"
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select DM_PERSISTENT_DATA
+ ---help---
+ Provides thin provisioning and snapshots that share a data store.
+
+config DM_DEBUG_BLOCK_STACK_TRACING
+ boolean "Keep stack trace of thin provisioning block lock holders"
+ depends on STACKTRACE_SUPPORT && DM_THIN_PROVISIONING
+ select STACKTRACE
+ ---help---
+ Enable this for messages that may help debug problems with the
+ block manager locking used by thin provisioning.
+
+ If unsure, say N.
+
+config DM_DEBUG_SPACE_MAPS
+ boolean "Extra validation for thin provisioning space maps"
+ depends on DM_THIN_PROVISIONING
+ ---help---
+ Enable this for messages that may help debug problems with the
+ space maps used by thin provisioning.
+
+ If unsure, say N.
+
config DM_MIRROR
tristate "Mirror target"
depends on BLK_DEV_DM
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 448838b1f92a..046860c7a166 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -10,6 +10,7 @@ dm-snapshot-y += dm-snap.o dm-exception-store.o dm-snap-transient.o \
dm-mirror-y += dm-raid1.o
dm-log-userspace-y \
+= dm-log-userspace-base.o dm-log-userspace-transfer.o
+dm-thin-pool-y += dm-thin.o dm-thin-metadata.o
md-mod-y += md.o bitmap.o
raid456-y += raid5.o
@@ -27,6 +28,7 @@ obj-$(CONFIG_MD_MULTIPATH) += multipath.o
obj-$(CONFIG_MD_FAULTY) += faulty.o
obj-$(CONFIG_BLK_DEV_MD) += md-mod.o
obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
+obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
@@ -34,10 +36,12 @@ obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o
obj-$(CONFIG_DM_MULTIPATH_QL) += dm-queue-length.o
obj-$(CONFIG_DM_MULTIPATH_ST) += dm-service-time.o
obj-$(CONFIG_DM_SNAPSHOT) += dm-snapshot.o
+obj-$(CONFIG_DM_PERSISTENT_DATA) += persistent-data/
obj-$(CONFIG_DM_MIRROR) += dm-mirror.o dm-log.o dm-region-hash.o
obj-$(CONFIG_DM_LOG_USERSPACE) += dm-log-userspace.o
obj-$(CONFIG_DM_ZERO) += dm-zero.o
obj-$(CONFIG_DM_RAID) += dm-raid.o
+obj-$(CONFIG_DM_THIN_PROVISIONING) += dm-thin-pool.o
ifeq ($(CONFIG_DM_UEVENT),y)
dm-mod-objs += dm-uevent.o
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
new file mode 100644
index 000000000000..cb246667dd52
--- /dev/null
+++ b/drivers/md/dm-bufio.c
@@ -0,0 +1,1699 @@
+/*
+ * Copyright (C) 2009-2011 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-bufio.h"
+
+#include <linux/device-mapper.h>
+#include <linux/dm-io.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/version.h>
+#include <linux/shrinker.h>
+
+#define DM_MSG_PREFIX "bufio"
+
+/*
+ * Memory management policy:
+ * Limit the number of buffers to DM_BUFIO_MEMORY_PERCENT of main memory
+ * or DM_BUFIO_VMALLOC_PERCENT of vmalloc memory (whichever is lower).
+ * Always allocate at least DM_BUFIO_MIN_BUFFERS buffers.
+ * Start background writeback when there are DM_BUFIO_WRITEBACK_PERCENT
+ * dirty buffers.
+ */
+#define DM_BUFIO_MIN_BUFFERS 8
+
+#define DM_BUFIO_MEMORY_PERCENT 2
+#define DM_BUFIO_VMALLOC_PERCENT 25
+#define DM_BUFIO_WRITEBACK_PERCENT 75
+
+/*
+ * Check buffer ages in this interval (seconds)
+ */
+#define DM_BUFIO_WORK_TIMER_SECS 10
+
+/*
+ * Free buffers when they are older than this (seconds)
+ */
+#define DM_BUFIO_DEFAULT_AGE_SECS 60
+
+/*
+ * The number of bvec entries that are embedded directly in the buffer.
+ * If the chunk size is larger, dm-io is used to do the io.
+ */
+#define DM_BUFIO_INLINE_VECS 16
+
+/*
+ * Buffer hash
+ */
+#define DM_BUFIO_HASH_BITS 20
+#define DM_BUFIO_HASH(block) \
+ ((((block) >> DM_BUFIO_HASH_BITS) ^ (block)) & \
+ ((1 << DM_BUFIO_HASH_BITS) - 1))
+
+/*
+ * Don't try to use kmem_cache_alloc for blocks larger than this.
+ * For explanation, see alloc_buffer_data below.
+ */
+#define DM_BUFIO_BLOCK_SIZE_SLAB_LIMIT (PAGE_SIZE >> 1)
+#define DM_BUFIO_BLOCK_SIZE_GFP_LIMIT (PAGE_SIZE << (MAX_ORDER - 1))
+
+/*
+ * dm_buffer->list_mode
+ */
+#define LIST_CLEAN 0
+#define LIST_DIRTY 1
+#define LIST_SIZE 2
+
+/*
+ * Linking of buffers:
+ * All buffers are linked to cache_hash with their hash_list field.
+ *
+ * Clean buffers that are not being written (B_WRITING not set)
+ * are linked to lru[LIST_CLEAN] with their lru_list field.
+ *
+ * Dirty and clean buffers that are being written are linked to
+ * lru[LIST_DIRTY] with their lru_list field. When the write
+ * finishes, the buffer cannot be relinked immediately (because we
+ * are in an interrupt context and relinking requires process
+ * context), so some clean-not-writing buffers can be held on
+ * dirty_lru too. They are later added to lru in the process
+ * context.
+ */
+struct dm_bufio_client {
+ struct mutex lock;
+
+ struct list_head lru[LIST_SIZE];
+ unsigned long n_buffers[LIST_SIZE];
+
+ struct block_device *bdev;
+ unsigned block_size;
+ unsigned char sectors_per_block_bits;
+ unsigned char pages_per_block_bits;
+ unsigned char blocks_per_page_bits;
+ unsigned aux_size;
+ void (*alloc_callback)(struct dm_buffer *);
+ void (*write_callback)(struct dm_buffer *);
+
+ struct dm_io_client *dm_io;
+
+ struct list_head reserved_buffers;
+ unsigned need_reserved_buffers;
+
+ struct hlist_head *cache_hash;
+ wait_queue_head_t free_buffer_wait;
+
+ int async_write_error;
+
+ struct list_head client_list;
+ struct shrinker shrinker;
+};
+
+/*
+ * Buffer state bits.
+ */
+#define B_READING 0
+#define B_WRITING 1
+#define B_DIRTY 2
+
+/*
+ * Describes how the block was allocated:
+ * kmem_cache_alloc(), __get_free_pages() or vmalloc().
+ * See the comment at alloc_buffer_data.
+ */
+enum data_mode {
+ DATA_MODE_SLAB = 0,
+ DATA_MODE_GET_FREE_PAGES = 1,
+ DATA_MODE_VMALLOC = 2,
+ DATA_MODE_LIMIT = 3
+};
+
+struct dm_buffer {
+ struct hlist_node hash_list;
+ struct list_head lru_list;
+ sector_t block;
+ void *data;
+ enum data_mode data_mode;
+ unsigned char list_mode; /* LIST_* */
+ unsigned hold_count;
+ int read_error;
+ int write_error;
+ unsigned long state;
+ unsigned long last_accessed;
+ struct dm_bufio_client *c;
+ struct bio bio;
+ struct bio_vec bio_vec[DM_BUFIO_INLINE_VECS];
+};
+
+/*----------------------------------------------------------------*/
+
+static struct kmem_cache *dm_bufio_caches[PAGE_SHIFT - SECTOR_SHIFT];
+static char *dm_bufio_cache_names[PAGE_SHIFT - SECTOR_SHIFT];
+
+static inline int dm_bufio_cache_index(struct dm_bufio_client *c)
+{
+ unsigned ret = c->blocks_per_page_bits - 1;
+
+ BUG_ON(ret >= ARRAY_SIZE(dm_bufio_caches));
+
+ return ret;
+}
+
+#define DM_BUFIO_CACHE(c) (dm_bufio_caches[dm_bufio_cache_index(c)])
+#define DM_BUFIO_CACHE_NAME(c) (dm_bufio_cache_names[dm_bufio_cache_index(c)])
+
+#define dm_bufio_in_request() (!!current->bio_list)
+
+static void dm_bufio_lock(struct dm_bufio_client *c)
+{
+ mutex_lock_nested(&c->lock, dm_bufio_in_request());
+}
+
+static int dm_bufio_trylock(struct dm_bufio_client *c)
+{
+ return mutex_trylock(&c->lock);
+}
+
+static void dm_bufio_unlock(struct dm_bufio_client *c)
+{
+ mutex_unlock(&c->lock);
+}
+
+/*
+ * FIXME Move to sched.h?
+ */
+#ifdef CONFIG_PREEMPT_VOLUNTARY
+# define dm_bufio_cond_resched() \
+do { \
+ if (unlikely(need_resched())) \
+ _cond_resched(); \
+} while (0)
+#else
+# define dm_bufio_cond_resched() do { } while (0)
+#endif
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Default cache size: available memory divided by the ratio.
+ */
+static unsigned long dm_bufio_default_cache_size;
+
+/*
+ * Total cache size set by the user.
+ */
+static unsigned long dm_bufio_cache_size;
+
+/*
+ * A copy of dm_bufio_cache_size because dm_bufio_cache_size can change
+ * at any time. If it disagrees, the user has changed cache size.
+ */
+static unsigned long dm_bufio_cache_size_latch;
+
+static DEFINE_SPINLOCK(param_spinlock);
+
+/*
+ * Buffers are freed after this timeout
+ */
+static unsigned dm_bufio_max_age = DM_BUFIO_DEFAULT_AGE_SECS;
+
+static unsigned long dm_bufio_peak_allocated;
+static unsigned long dm_bufio_allocated_kmem_cache;
+static unsigned long dm_bufio_allocated_get_free_pages;
+static unsigned long dm_bufio_allocated_vmalloc;
+static unsigned long dm_bufio_current_allocated;
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Per-client cache: dm_bufio_cache_size / dm_bufio_client_count
+ */
+static unsigned long dm_bufio_cache_size_per_client;
+
+/*
+ * The current number of clients.
+ */
+static int dm_bufio_client_count;
+
+/*
+ * The list of all clients.
+ */
+static LIST_HEAD(dm_bufio_all_clients);
+
+/*
+ * This mutex protects dm_bufio_cache_size_latch,
+ * dm_bufio_cache_size_per_client and dm_bufio_client_count
+ */
+static DEFINE_MUTEX(dm_bufio_clients_lock);
+
+/*----------------------------------------------------------------*/
+
+static void adjust_total_allocated(enum data_mode data_mode, long diff)
+{
+ static unsigned long * const class_ptr[DATA_MODE_LIMIT] = {
+ &dm_bufio_allocated_kmem_cache,
+ &dm_bufio_allocated_get_free_pages,
+ &dm_bufio_allocated_vmalloc,
+ };
+
+ spin_lock(&param_spinlock);
+
+ *class_ptr[data_mode] += diff;
+
+ dm_bufio_current_allocated += diff;
+
+ if (dm_bufio_current_allocated > dm_bufio_peak_allocated)
+ dm_bufio_peak_allocated = dm_bufio_current_allocated;
+
+ spin_unlock(&param_spinlock);
+}
+
+/*
+ * Change the number of clients and recalculate per-client limit.
+ */
+static void __cache_size_refresh(void)
+{
+ BUG_ON(!mutex_is_locked(&dm_bufio_clients_lock));
+ BUG_ON(dm_bufio_client_count < 0);
+
+ dm_bufio_cache_size_latch = dm_bufio_cache_size;
+
+ barrier();
+
+ /*
+ * Use default if set to 0 and report the actual cache size used.
+ */
+ if (!dm_bufio_cache_size_latch) {
+ (void)cmpxchg(&dm_bufio_cache_size, 0,
+ dm_bufio_default_cache_size);
+ dm_bufio_cache_size_latch = dm_bufio_default_cache_size;
+ }
+
+ dm_bufio_cache_size_per_client = dm_bufio_cache_size_latch /
+ (dm_bufio_client_count ? : 1);
+}
+
+/*
+ * Allocating buffer data.
+ *
+ * Small buffers are allocated with kmem_cache, to use space optimally.
+ *
+ * For large buffers, we choose between get_free_pages and vmalloc.
+ * Each has advantages and disadvantages.
+ *
+ * __get_free_pages can randomly fail if the memory is fragmented.
+ * __vmalloc won't randomly fail, but vmalloc space is limited (it may be
+ * as low as 128M) so using it for caching is not appropriate.
+ *
+ * If the allocation may fail we use __get_free_pages. Memory fragmentation
+ * won't have a fatal effect here; it just causes flushes of some other
+ * buffers, so more I/O will be performed. Don't use __get_free_pages if it
+ * always fails (i.e. order >= MAX_ORDER).
+ *
+ * If the allocation shouldn't fail we use __vmalloc. This is only for the
+ * initial reserve allocation, so there's no risk of wasting all vmalloc
+ * space.
+ */
+static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
+ enum data_mode *data_mode)
+{
+ if (c->block_size <= DM_BUFIO_BLOCK_SIZE_SLAB_LIMIT) {
+ *data_mode = DATA_MODE_SLAB;
+ return kmem_cache_alloc(DM_BUFIO_CACHE(c), gfp_mask);
+ }
+
+ if (c->block_size <= DM_BUFIO_BLOCK_SIZE_GFP_LIMIT &&
+ gfp_mask & __GFP_NORETRY) {
+ *data_mode = DATA_MODE_GET_FREE_PAGES;
+ return (void *)__get_free_pages(gfp_mask,
+ c->pages_per_block_bits);
+ }
+
+ *data_mode = DATA_MODE_VMALLOC;
+ return __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
+}
+
+/*
+ * Free buffer's data.
+ */
+static void free_buffer_data(struct dm_bufio_client *c,
+ void *data, enum data_mode data_mode)
+{
+ switch (data_mode) {
+ case DATA_MODE_SLAB:
+ kmem_cache_free(DM_BUFIO_CACHE(c), data);
+ break;
+
+ case DATA_MODE_GET_FREE_PAGES:
+ free_pages((unsigned long)data, c->pages_per_block_bits);
+ break;
+
+ case DATA_MODE_VMALLOC:
+ vfree(data);
+ break;
+
+ default:
+ DMCRIT("dm_bufio_free_buffer_data: bad data mode: %d",
+ data_mode);
+ BUG();
+ }
+}
+
+/*
+ * Allocate buffer and its data.
+ */
+static struct dm_buffer *alloc_buffer(struct dm_bufio_client *c, gfp_t gfp_mask)
+{
+ struct dm_buffer *b = kmalloc(sizeof(struct dm_buffer) + c->aux_size,
+ gfp_mask);
+
+ if (!b)
+ return NULL;
+
+ b->c = c;
+
+ b->data = alloc_buffer_data(c, gfp_mask, &b->data_mode);
+ if (!b->data) {
+ kfree(b);
+ return NULL;
+ }
+
+ adjust_total_allocated(b->data_mode, (long)c->block_size);
+
+ return b;
+}
+
+/*
+ * Free buffer and its data.
+ */
+static void free_buffer(struct dm_buffer *b)
+{
+ struct dm_bufio_client *c = b->c;
+
+ adjust_total_allocated(b->data_mode, -(long)c->block_size);
+
+ free_buffer_data(c, b->data, b->data_mode);
+ kfree(b);
+}
+
+/*
+ * Link buffer to the hash list and clean or dirty queue.
+ */
+static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty)
+{
+ struct dm_bufio_client *c = b->c;
+
+ c->n_buffers[dirty]++;
+ b->block = block;
+ b->list_mode = dirty;
+ list_add(&b->lru_list, &c->lru[dirty]);
+ hlist_add_head(&b->hash_list, &c->cache_hash[DM_BUFIO_HASH(block)]);
+ b->last_accessed = jiffies;
+}
+
+/*
+ * Unlink buffer from the hash list and dirty or clean queue.
+ */
+static void __unlink_buffer(struct dm_buffer *b)
+{
+ struct dm_bufio_client *c = b->c;
+
+ BUG_ON(!c->n_buffers[b->list_mode]);
+
+ c->n_buffers[b->list_mode]--;
+ hlist_del(&b->hash_list);
+ list_del(&b->lru_list);
+}
+
+/*
+ * Place the buffer to the head of dirty or clean LRU queue.
+ */
+static void __relink_lru(struct dm_buffer *b, int dirty)
+{
+ struct dm_bufio_client *c = b->c;
+
+ BUG_ON(!c->n_buffers[b->list_mode]);
+
+ c->n_buffers[b->list_mode]--;
+ c->n_buffers[dirty]++;
+ b->list_mode = dirty;
+ list_del(&b->lru_list);
+ list_add(&b->lru_list, &c->lru[dirty]);
+}
+
+/*----------------------------------------------------------------
+ * Submit I/O on the buffer.
+ *
+ * Bio interface is faster but it has some problems:
+ * the vector list is limited (increasing this limit increases
+ * memory-consumption per buffer, so it is not viable);
+ *
+ * the memory must be direct-mapped, not vmalloced;
+ *
+ * the I/O driver can reject requests spuriously if it thinks that
+ * the requests are too big for the device or if they cross a
+ * controller-defined memory boundary.
+ *
+ * If the buffer is small enough (up to DM_BUFIO_INLINE_VECS pages) and
+ * it is not vmalloced, try using the bio interface.
+ *
+ * If the buffer is big, if it is vmalloced or if the underlying device
+ * rejects the bio because it is too large, use dm-io layer to do the I/O.
+ * The dm-io layer splits the I/O into multiple requests, avoiding the above
+ * shortcomings.
+ *--------------------------------------------------------------*/
+
+/*
+ * dm-io completion routine. It just calls b->bio.bi_end_io, pretending
+ * that the request was handled directly with the bio interface.
+ */
+static void dmio_complete(unsigned long error, void *context)
+{
+ struct dm_buffer *b = context;
+
+ b->bio.bi_end_io(&b->bio, error ? -EIO : 0);
+}
+
+static void use_dmio(struct dm_buffer *b, int rw, sector_t block,
+ bio_end_io_t *end_io)
+{
+ int r;
+ struct dm_io_request io_req = {
+ .bi_rw = rw,
+ .notify.fn = dmio_complete,
+ .notify.context = b,
+ .client = b->c->dm_io,
+ };
+ struct dm_io_region region = {
+ .bdev = b->c->bdev,
+ .sector = block << b->c->sectors_per_block_bits,
+ .count = b->c->block_size >> SECTOR_SHIFT,
+ };
+
+ if (b->data_mode != DATA_MODE_VMALLOC) {
+ io_req.mem.type = DM_IO_KMEM;
+ io_req.mem.ptr.addr = b->data;
+ } else {
+ io_req.mem.type = DM_IO_VMA;
+ io_req.mem.ptr.vma = b->data;
+ }
+
+ b->bio.bi_end_io = end_io;
+
+ r = dm_io(&io_req, 1, &region, NULL);
+ if (r)
+ end_io(&b->bio, r);
+}
+
+static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block,
+ bio_end_io_t *end_io)
+{
+ char *ptr;
+ int len;
+
+ bio_init(&b->bio);
+ b->bio.bi_io_vec = b->bio_vec;
+ b->bio.bi_max_vecs = DM_BUFIO_INLINE_VECS;
+ b->bio.bi_sector = block << b->c->sectors_per_block_bits;
+ b->bio.bi_bdev = b->c->bdev;
+ b->bio.bi_end_io = end_io;
+
+ /*
+ * We assume that if len >= PAGE_SIZE ptr is page-aligned.
+ * If len < PAGE_SIZE the buffer doesn't cross page boundary.
+ */
+ ptr = b->data;
+ len = b->c->block_size;
+
+ if (len >= PAGE_SIZE)
+ BUG_ON((unsigned long)ptr & (PAGE_SIZE - 1));
+ else
+ BUG_ON((unsigned long)ptr & (len - 1));
+
+ do {
+ if (!bio_add_page(&b->bio, virt_to_page(ptr),
+ len < PAGE_SIZE ? len : PAGE_SIZE,
+ virt_to_phys(ptr) & (PAGE_SIZE - 1))) {
+ BUG_ON(b->c->block_size <= PAGE_SIZE);
+ use_dmio(b, rw, block, end_io);
+ return;
+ }
+
+ len -= PAGE_SIZE;
+ ptr += PAGE_SIZE;
+ } while (len > 0);
+
+ submit_bio(rw, &b->bio);
+}
+
+static void submit_io(struct dm_buffer *b, int rw, sector_t block,
+ bio_end_io_t *end_io)
+{
+ if (rw == WRITE && b->c->write_callback)
+ b->c->write_callback(b);
+
+ if (b->c->block_size <= DM_BUFIO_INLINE_VECS * PAGE_SIZE &&
+ b->data_mode != DATA_MODE_VMALLOC)
+ use_inline_bio(b, rw, block, end_io);
+ else
+ use_dmio(b, rw, block, end_io);
+}
+
+/*----------------------------------------------------------------
+ * Writing dirty buffers
+ *--------------------------------------------------------------*/
+
+/*
+ * The endio routine for write.
+ *
+ * Set the error, clear B_WRITING bit and wake anyone who was waiting on
+ * it.
+ */
+static void write_endio(struct bio *bio, int error)
+{
+ struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
+
+ b->write_error = error;
+ if (error) {
+ struct dm_bufio_client *c = b->c;
+ (void)cmpxchg(&c->async_write_error, 0, error);
+ }
+
+ BUG_ON(!test_bit(B_WRITING, &b->state));
+
+ smp_mb__before_clear_bit();
+ clear_bit(B_WRITING, &b->state);
+ smp_mb__after_clear_bit();
+
+ wake_up_bit(&b->state, B_WRITING);
+}
+
+/*
+ * This function is called when wait_on_bit is actually waiting.
+ */
+static int do_io_schedule(void *word)
+{
+ io_schedule();
+
+ return 0;
+}
+
+/*
+ * Initiate a write on a dirty buffer, but don't wait for it.
+ *
+ * - If the buffer is not dirty, exit.
+ * - If there is a previous write going on, wait for it to finish (we can't
+ * have two writes on the same buffer simultaneously).
+ * - Submit our write and don't wait on it. We set B_WRITING indicating
+ * that there is a write in progress.
+ */
+static void __write_dirty_buffer(struct dm_buffer *b)
+{
+ if (!test_bit(B_DIRTY, &b->state))
+ return;
+
+ clear_bit(B_DIRTY, &b->state);
+ wait_on_bit_lock(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
+
+ submit_io(b, WRITE, b->block, write_endio);
+}
+
+/*
+ * Wait until any activity on the buffer finishes. Possibly write the
+ * buffer if it is dirty. When this function finishes, there is no I/O
+ * running on the buffer and the buffer is not dirty.
+ */
+static void __make_buffer_clean(struct dm_buffer *b)
+{
+ BUG_ON(b->hold_count);
+
+ if (!b->state) /* fast case */
+ return;
+
+ wait_on_bit(&b->state, B_READING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+ __write_dirty_buffer(b);
+ wait_on_bit(&b->state, B_WRITING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+}
+
+/*
+ * Find some buffer that is not held by anybody, clean it, unlink it and
+ * return it.
+ */
+static struct dm_buffer *__get_unclaimed_buffer(struct dm_bufio_client *c)
+{
+ struct dm_buffer *b;
+
+ list_for_each_entry_reverse(b, &c->lru[LIST_CLEAN], lru_list) {
+ BUG_ON(test_bit(B_WRITING, &b->state));
+ BUG_ON(test_bit(B_DIRTY, &b->state));
+
+ if (!b->hold_count) {
+ __make_buffer_clean(b);
+ __unlink_buffer(b);
+ return b;
+ }
+ dm_bufio_cond_resched();
+ }
+
+ list_for_each_entry_reverse(b, &c->lru[LIST_DIRTY], lru_list) {
+ BUG_ON(test_bit(B_READING, &b->state));
+
+ if (!b->hold_count) {
+ __make_buffer_clean(b);
+ __unlink_buffer(b);
+ return b;
+ }
+ dm_bufio_cond_resched();
+ }
+
+ return NULL;
+}
+
+/*
+ * Wait until some other thread frees some buffer or releases its
+ * hold count on some buffer.
+ *
+ * This function is entered with c->lock held, drops it and regains it
+ * before exiting.
+ */
+static void __wait_for_free_buffer(struct dm_bufio_client *c)
+{
+ DECLARE_WAITQUEUE(wait, current);
+
+ add_wait_queue(&c->free_buffer_wait, &wait);
+ set_task_state(current, TASK_UNINTERRUPTIBLE);
+ dm_bufio_unlock(c);
+
+ io_schedule();
+
+ set_task_state(current, TASK_RUNNING);
+ remove_wait_queue(&c->free_buffer_wait, &wait);
+
+ dm_bufio_lock(c);
+}
+
+/*
+ * Allocate a new buffer. If the allocation is not possible, wait until
+ * some other thread frees a buffer.
+ *
+ * May drop the lock and regain it.
+ */
+static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
+{
+ struct dm_buffer *b;
+
+ /*
+ * dm-bufio is resistant to allocation failures (it just keeps
+ * one buffer reserved in case all the allocations fail).
+ * So set flags to not try too hard:
+ * GFP_NOIO: don't recurse into the I/O layer
+ * __GFP_NORETRY: don't retry and rather return failure
+ * __GFP_NOMEMALLOC: don't use emergency reserves
+ * __GFP_NOWARN: don't print a warning in case of failure
+ *
+ * For debugging, if we set the cache size to 1, no new buffers will
+ * be allocated.
+ */
+ while (1) {
+ if (dm_bufio_cache_size_latch != 1) {
+ b = alloc_buffer(c, GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+ if (b)
+ return b;
+ }
+
+ if (!list_empty(&c->reserved_buffers)) {
+ b = list_entry(c->reserved_buffers.next,
+ struct dm_buffer, lru_list);
+ list_del(&b->lru_list);
+ c->need_reserved_buffers++;
+
+ return b;
+ }
+
+ b = __get_unclaimed_buffer(c);
+ if (b)
+ return b;
+
+ __wait_for_free_buffer(c);
+ }
+}
+
+static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
+{
+ struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
+
+ if (c->alloc_callback)
+ c->alloc_callback(b);
+
+ return b;
+}
+
+/*
+ * Free a buffer and wake other threads waiting for free buffers.
+ */
+static void __free_buffer_wake(struct dm_buffer *b)
+{
+ struct dm_bufio_client *c = b->c;
+
+ if (!c->need_reserved_buffers)
+ free_buffer(b);
+ else {
+ list_add(&b->lru_list, &c->reserved_buffers);
+ c->need_reserved_buffers--;
+ }
+
+ wake_up(&c->free_buffer_wait);
+}
+
+static void __write_dirty_buffers_async(struct dm_bufio_client *c, int no_wait)
+{
+ struct dm_buffer *b, *tmp;
+
+ list_for_each_entry_safe_reverse(b, tmp, &c->lru[LIST_DIRTY], lru_list) {
+ BUG_ON(test_bit(B_READING, &b->state));
+
+ if (!test_bit(B_DIRTY, &b->state) &&
+ !test_bit(B_WRITING, &b->state)) {
+ __relink_lru(b, LIST_CLEAN);
+ continue;
+ }
+
+ if (no_wait && test_bit(B_WRITING, &b->state))
+ return;
+
+ __write_dirty_buffer(b);
+ dm_bufio_cond_resched();
+ }
+}
+
+/*
+ * Get writeback threshold and buffer limit for a given client.
+ */
+static void __get_memory_limit(struct dm_bufio_client *c,
+ unsigned long *threshold_buffers,
+ unsigned long *limit_buffers)
+{
+ unsigned long buffers;
+
+ if (dm_bufio_cache_size != dm_bufio_cache_size_latch) {
+ mutex_lock(&dm_bufio_clients_lock);
+ __cache_size_refresh();
+ mutex_unlock(&dm_bufio_clients_lock);
+ }
+
+ buffers = dm_bufio_cache_size_per_client >>
+ (c->sectors_per_block_bits + SECTOR_SHIFT);
+
+ if (buffers < DM_BUFIO_MIN_BUFFERS)
+ buffers = DM_BUFIO_MIN_BUFFERS;
+
+ *limit_buffers = buffers;
+ *threshold_buffers = buffers * DM_BUFIO_WRITEBACK_PERCENT / 100;
+}
+
+/*
+ * Check if we're over the watermark.
+ * If we are over threshold_buffers, start freeing buffers.
+ * If we're over limit_buffers, block until we get under the limit.
+ */
+static void __check_watermark(struct dm_bufio_client *c)
+{
+ unsigned long threshold_buffers, limit_buffers;
+
+ __get_memory_limit(c, &threshold_buffers, &limit_buffers);
+
+ while (c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY] >
+ limit_buffers) {
+
+ struct dm_buffer *b = __get_unclaimed_buffer(c);
+
+ if (!b)
+ return;
+
+ __free_buffer_wake(b);
+ dm_bufio_cond_resched();
+ }
+
+ if (c->n_buffers[LIST_DIRTY] > threshold_buffers)
+ __write_dirty_buffers_async(c, 1);
+}
+
+/*
+ * Find a buffer in the hash.
+ */
+static struct dm_buffer *__find(struct dm_bufio_client *c, sector_t block)
+{
+ struct dm_buffer *b;
+ struct hlist_node *hn;
+
+ hlist_for_each_entry(b, hn, &c->cache_hash[DM_BUFIO_HASH(block)],
+ hash_list) {
+ dm_bufio_cond_resched();
+ if (b->block == block)
+ return b;
+ }
+
+ return NULL;
+}
+
+/*----------------------------------------------------------------
+ * Getting a buffer
+ *--------------------------------------------------------------*/
+
+enum new_flag {
+ NF_FRESH = 0,
+ NF_READ = 1,
+ NF_GET = 2
+};
+
+static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
+ enum new_flag nf, struct dm_buffer **bp,
+ int *need_submit)
+{
+ struct dm_buffer *b, *new_b = NULL;
+
+ *need_submit = 0;
+
+ b = __find(c, block);
+ if (b) {
+ b->hold_count++;
+ __relink_lru(b, test_bit(B_DIRTY, &b->state) ||
+ test_bit(B_WRITING, &b->state));
+ return b;
+ }
+
+ if (nf == NF_GET)
+ return NULL;
+
+ new_b = __alloc_buffer_wait(c);
+
+ /*
+ * We've had a period where the mutex was unlocked, so need to
+ * recheck the hash table.
+ */
+ b = __find(c, block);
+ if (b) {
+ __free_buffer_wake(new_b);
+ b->hold_count++;
+ __relink_lru(b, test_bit(B_DIRTY, &b->state) ||
+ test_bit(B_WRITING, &b->state));
+ return b;
+ }
+
+ __check_watermark(c);
+
+ b = new_b;
+ b->hold_count = 1;
+ b->read_error = 0;
+ b->write_error = 0;
+ __link_buffer(b, block, LIST_CLEAN);
+
+ if (nf == NF_FRESH) {
+ b->state = 0;
+ return b;
+ }
+
+ b->state = 1 << B_READING;
+ *need_submit = 1;
+
+ return b;
+}
+
+/*
+ * The endio routine for reading: set the error, clear the bit and wake up
+ * anyone waiting on the buffer.
+ */
+static void read_endio(struct bio *bio, int error)
+{
+ struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
+
+ b->read_error = error;
+
+ BUG_ON(!test_bit(B_READING, &b->state));
+
+ smp_mb__before_clear_bit();
+ clear_bit(B_READING, &b->state);
+ smp_mb__after_clear_bit();
+
+ wake_up_bit(&b->state, B_READING);
+}
+
+/*
+ * A common routine for dm_bufio_new and dm_bufio_read. Operation of these
+ * functions is similar except that dm_bufio_new doesn't read the
+ * buffer from the disk (assuming that the caller overwrites all the data
+ * and uses dm_bufio_mark_buffer_dirty to write new data back).
+ */
+static void *new_read(struct dm_bufio_client *c, sector_t block,
+ enum new_flag nf, struct dm_buffer **bp)
+{
+ int need_submit;
+ struct dm_buffer *b;
+
+ dm_bufio_lock(c);
+ b = __bufio_new(c, block, nf, bp, &need_submit);
+ dm_bufio_unlock(c);
+
+ if (!b || IS_ERR(b))
+ return b;
+
+ if (need_submit)
+ submit_io(b, READ, b->block, read_endio);
+
+ wait_on_bit(&b->state, B_READING, do_io_schedule, TASK_UNINTERRUPTIBLE);
+
+ if (b->read_error) {
+ int error = b->read_error;
+
+ dm_bufio_release(b);
+
+ return ERR_PTR(error);
+ }
+
+ *bp = b;
+
+ return b->data;
+}
+
+void *dm_bufio_get(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp)
+{
+ return new_read(c, block, NF_GET, bp);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get);
+
+void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp)
+{
+ BUG_ON(dm_bufio_in_request());
+
+ return new_read(c, block, NF_READ, bp);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_read);
+
+void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp)
+{
+ BUG_ON(dm_bufio_in_request());
+
+ return new_read(c, block, NF_FRESH, bp);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_new);
+
+void dm_bufio_release(struct dm_buffer *b)
+{
+ struct dm_bufio_client *c = b->c;
+
+ dm_bufio_lock(c);
+
+ BUG_ON(test_bit(B_READING, &b->state));
+ BUG_ON(!b->hold_count);
+
+ b->hold_count--;
+ if (!b->hold_count) {
+ wake_up(&c->free_buffer_wait);
+
+ /*
+ * If there were errors on the buffer, and the buffer is not
+ * to be written, free the buffer. There is no point in caching
+ * an invalid buffer.
+ */
+ if ((b->read_error || b->write_error) &&
+ !test_bit(B_WRITING, &b->state) &&
+ !test_bit(B_DIRTY, &b->state)) {
+ __unlink_buffer(b);
+ __free_buffer_wake(b);
+ }
+ }
+
+ dm_bufio_unlock(c);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_release);
+
+void dm_bufio_mark_buffer_dirty(struct dm_buffer *b)
+{
+ struct dm_bufio_client *c = b->c;
+
+ dm_bufio_lock(c);
+
+ if (!test_and_set_bit(B_DIRTY, &b->state))
+ __relink_lru(b, LIST_DIRTY);
+
+ dm_bufio_unlock(c);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_mark_buffer_dirty);
+
+void dm_bufio_write_dirty_buffers_async(struct dm_bufio_client *c)
+{
+ BUG_ON(dm_bufio_in_request());
+
+ dm_bufio_lock(c);
+ __write_dirty_buffers_async(c, 0);
+ dm_bufio_unlock(c);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers_async);
+
+/*
+ * For performance, it is essential that the buffers are written asynchronously
+ * and simultaneously (so that the block layer can merge the writes) and then
+ * waited upon.
+ *
+ * Finally, we flush hardware disk cache.
+ */
+int dm_bufio_write_dirty_buffers(struct dm_bufio_client *c)
+{
+ int a, f;
+ unsigned long buffers_processed = 0;
+ struct dm_buffer *b, *tmp;
+
+ dm_bufio_lock(c);
+ __write_dirty_buffers_async(c, 0);
+
+again:
+ list_for_each_entry_safe_reverse(b, tmp, &c->lru[LIST_DIRTY], lru_list) {
+ int dropped_lock = 0;
+
+ if (buffers_processed < c->n_buffers[LIST_DIRTY])
+ buffers_processed++;
+
+ BUG_ON(test_bit(B_READING, &b->state));
+
+ if (test_bit(B_WRITING, &b->state)) {
+ if (buffers_processed < c->n_buffers[LIST_DIRTY]) {
+ dropped_lock = 1;
+ b->hold_count++;
+ dm_bufio_unlock(c);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule,
+ TASK_UNINTERRUPTIBLE);
+ dm_bufio_lock(c);
+ b->hold_count--;
+ } else
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule,
+ TASK_UNINTERRUPTIBLE);
+ }
+
+ if (!test_bit(B_DIRTY, &b->state) &&
+ !test_bit(B_WRITING, &b->state))
+ __relink_lru(b, LIST_CLEAN);
+
+ dm_bufio_cond_resched();
+
+ /*
+ * If we dropped the lock, the list is no longer consistent,
+ * so we must restart the search.
+ *
+ * In the most common case, the buffer just processed is
+ * relinked to the clean list, so we won't loop scanning the
+ * same buffer again and again.
+ *
+ * This may livelock if there is another thread simultaneously
+ * dirtying buffers, so we count the number of buffers walked
+ * and if it exceeds the total number of buffers, it means that
+ * someone is doing some writes simultaneously with us; in
+ * that case we stop dropping the lock.
+ */
+ if (dropped_lock)
+ goto again;
+ }
+ wake_up(&c->free_buffer_wait);
+ dm_bufio_unlock(c);
+
+ a = xchg(&c->async_write_error, 0);
+ f = dm_bufio_issue_flush(c);
+ if (a)
+ return a;
+
+ return f;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers);
+
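+/*
+ * Usage sketch (illustrative only, not part of this patch): a
+ * read-modify-write of block 0 through the interface above. The
+ * 4096-byte block size and the absence of callbacks are assumptions
+ * made for the example.
+ */
+#if 0
+static int example_update_block(struct block_device *bdev)
+{
+	struct dm_bufio_client *c;
+	struct dm_buffer *b;
+	u8 *data;
+	int r;
+
+	c = dm_bufio_client_create(bdev, 4096, 1, 0, NULL, NULL);
+	if (IS_ERR(c))
+		return PTR_ERR(c);
+
+	data = dm_bufio_read(c, 0, &b);	/* read block 0; may sleep */
+	if (IS_ERR(data)) {
+		r = PTR_ERR(data);
+		goto out;
+	}
+
+	data[0] ^= 0xff;		/* modify in place */
+	dm_bufio_mark_buffer_dirty(b);	/* schedule a delayed write */
+	dm_bufio_release(b);
+
+	r = dm_bufio_write_dirty_buffers(c);	/* flush and wait */
+out:
+	dm_bufio_client_destroy(c);
+	return r;
+}
+#endif
+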
+/*
+ * Use dm-io to send an empty barrier to flush the device.
+ */
+int dm_bufio_issue_flush(struct dm_bufio_client *c)
+{
+ struct dm_io_request io_req = {
+ .bi_rw = REQ_FLUSH,
+ .mem.type = DM_IO_KMEM,
+ .mem.ptr.addr = NULL,
+ .client = c->dm_io,
+ };
+ struct dm_io_region io_reg = {
+ .bdev = c->bdev,
+ .sector = 0,
+ .count = 0,
+ };
+
+ BUG_ON(dm_bufio_in_request());
+
+ return dm_io(&io_req, 1, &io_reg, NULL);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_issue_flush);
+
+/*
+ * We first delete any other buffer that may be at that new location.
+ *
+ * Then, we write the buffer to the original location if it was dirty.
+ *
+ * Then, if we are the only one who is holding the buffer, relink the buffer
+ * in the hash queue for the new location.
+ *
+ * If there was someone else holding the buffer, we write it to the new
+ * location but not relink it, because that other user needs to have the buffer
+ * at the same place.
+ */
+void dm_bufio_release_move(struct dm_buffer *b, sector_t new_block)
+{
+ struct dm_bufio_client *c = b->c;
+ struct dm_buffer *new;
+
+ BUG_ON(dm_bufio_in_request());
+
+ dm_bufio_lock(c);
+
+retry:
+ new = __find(c, new_block);
+ if (new) {
+ if (new->hold_count) {
+ __wait_for_free_buffer(c);
+ goto retry;
+ }
+
+ /*
+ * FIXME: Is there any point waiting for a write that's going
+ * to be overwritten in a bit?
+ */
+ __make_buffer_clean(new);
+ __unlink_buffer(new);
+ __free_buffer_wake(new);
+ }
+
+ BUG_ON(!b->hold_count);
+ BUG_ON(test_bit(B_READING, &b->state));
+
+ __write_dirty_buffer(b);
+ if (b->hold_count == 1) {
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
+ set_bit(B_DIRTY, &b->state);
+ __unlink_buffer(b);
+ __link_buffer(b, new_block, LIST_DIRTY);
+ } else {
+ sector_t old_block;
+ wait_on_bit_lock(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
+ /*
+ * Relink buffer to "new_block" so that write_callback
+ * sees "new_block" as a block number.
+ * After the write, link the buffer back to old_block.
+ * All this must be done in bufio lock, so that block number
+ * change isn't visible to other threads.
+ */
+ old_block = b->block;
+ __unlink_buffer(b);
+ __link_buffer(b, new_block, b->list_mode);
+ submit_io(b, WRITE, new_block, write_endio);
+ wait_on_bit(&b->state, B_WRITING,
+ do_io_schedule, TASK_UNINTERRUPTIBLE);
+ __unlink_buffer(b);
+ __link_buffer(b, old_block, b->list_mode);
+ }
+
+ dm_bufio_unlock(c);
+ dm_bufio_release(b);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_release_move);
+
+unsigned dm_bufio_get_block_size(struct dm_bufio_client *c)
+{
+ return c->block_size;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_block_size);
+
+sector_t dm_bufio_get_device_size(struct dm_bufio_client *c)
+{
+ return i_size_read(c->bdev->bd_inode) >>
+ (SECTOR_SHIFT + c->sectors_per_block_bits);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_device_size);
+
+sector_t dm_bufio_get_block_number(struct dm_buffer *b)
+{
+ return b->block;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_block_number);
+
+void *dm_bufio_get_block_data(struct dm_buffer *b)
+{
+ return b->data;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_block_data);
+
+void *dm_bufio_get_aux_data(struct dm_buffer *b)
+{
+ return b + 1;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_aux_data);
+
+struct dm_bufio_client *dm_bufio_get_client(struct dm_buffer *b)
+{
+ return b->c;
+}
+EXPORT_SYMBOL_GPL(dm_bufio_get_client);
+
+static void drop_buffers(struct dm_bufio_client *c)
+{
+ struct dm_buffer *b;
+ int i;
+
+ BUG_ON(dm_bufio_in_request());
+
+ /*
+ * An optimization so that the buffers are not written one-by-one.
+ */
+ dm_bufio_write_dirty_buffers_async(c);
+
+ dm_bufio_lock(c);
+
+ while ((b = __get_unclaimed_buffer(c)))
+ __free_buffer_wake(b);
+
+ for (i = 0; i < LIST_SIZE; i++)
+ list_for_each_entry(b, &c->lru[i], lru_list)
+ DMERR("leaked buffer %llx, hold count %u, list %d",
+ (unsigned long long)b->block, b->hold_count, i);
+
+ for (i = 0; i < LIST_SIZE; i++)
+ BUG_ON(!list_empty(&c->lru[i]));
+
+ dm_bufio_unlock(c);
+}
+
+/*
+ * Test if the buffer is unused and too old, and commit it.
+ * If __GFP_IO is not set, we must not do any I/O because we hold
+ * dm_bufio_clients_lock and we would risk deadlock if the I/O gets
+ * rerouted to a different bufio client.
+ */
+static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp,
+ unsigned long max_jiffies)
+{
+ if (jiffies - b->last_accessed < max_jiffies)
+ return 1;
+
+ if (!(gfp & __GFP_IO)) {
+ if (test_bit(B_READING, &b->state) ||
+ test_bit(B_WRITING, &b->state) ||
+ test_bit(B_DIRTY, &b->state))
+ return 1;
+ }
+
+ if (b->hold_count)
+ return 1;
+
+ __make_buffer_clean(b);
+ __unlink_buffer(b);
+ __free_buffer_wake(b);
+
+ return 0;
+}
+
+static void __scan(struct dm_bufio_client *c, unsigned long nr_to_scan,
+ struct shrink_control *sc)
+{
+ int l;
+ struct dm_buffer *b, *tmp;
+
+ for (l = 0; l < LIST_SIZE; l++) {
+ list_for_each_entry_safe_reverse(b, tmp, &c->lru[l], lru_list)
+ if (!__cleanup_old_buffer(b, sc->gfp_mask, 0) &&
+ !--nr_to_scan)
+ return;
+ dm_bufio_cond_resched();
+ }
+}
+
+static int shrink(struct shrinker *shrinker, struct shrink_control *sc)
+{
+ struct dm_bufio_client *c =
+ container_of(shrinker, struct dm_bufio_client, shrinker);
+ unsigned long r;
+ unsigned long nr_to_scan = sc->nr_to_scan;
+
+ if (sc->gfp_mask & __GFP_IO)
+ dm_bufio_lock(c);
+ else if (!dm_bufio_trylock(c))
+ return !nr_to_scan ? 0 : -1;
+
+ if (nr_to_scan)
+ __scan(c, nr_to_scan, sc);
+
+ r = c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY];
+ if (r > INT_MAX)
+ r = INT_MAX;
+
+ dm_bufio_unlock(c);
+
+ return r;
+}
+
+/*
+ * Create the buffering interface
+ */
+struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsigned block_size,
+ unsigned reserved_buffers, unsigned aux_size,
+ void (*alloc_callback)(struct dm_buffer *),
+ void (*write_callback)(struct dm_buffer *))
+{
+ int r;
+ struct dm_bufio_client *c;
+ unsigned i;
+
+ BUG_ON(block_size < 1 << SECTOR_SHIFT ||
+ (block_size & (block_size - 1)));
+
+ c = kmalloc(sizeof(*c), GFP_KERNEL);
+ if (!c) {
+ r = -ENOMEM;
+ goto bad_client;
+ }
+ c->cache_hash = vmalloc(sizeof(struct hlist_head) << DM_BUFIO_HASH_BITS);
+ if (!c->cache_hash) {
+ r = -ENOMEM;
+ goto bad_hash;
+ }
+
+ c->bdev = bdev;
+ c->block_size = block_size;
+ c->sectors_per_block_bits = ffs(block_size) - 1 - SECTOR_SHIFT;
+ c->pages_per_block_bits = (ffs(block_size) - 1 >= PAGE_SHIFT) ?
+ ffs(block_size) - 1 - PAGE_SHIFT : 0;
+ c->blocks_per_page_bits = (ffs(block_size) - 1 < PAGE_SHIFT ?
+ PAGE_SHIFT - (ffs(block_size) - 1) : 0);
+
+ c->aux_size = aux_size;
+ c->alloc_callback = alloc_callback;
+ c->write_callback = write_callback;
+
+ for (i = 0; i < LIST_SIZE; i++) {
+ INIT_LIST_HEAD(&c->lru[i]);
+ c->n_buffers[i] = 0;
+ }
+
+ for (i = 0; i < 1 << DM_BUFIO_HASH_BITS; i++)
+ INIT_HLIST_HEAD(&c->cache_hash[i]);
+
+ mutex_init(&c->lock);
+ INIT_LIST_HEAD(&c->reserved_buffers);
+ c->need_reserved_buffers = reserved_buffers;
+
+ init_waitqueue_head(&c->free_buffer_wait);
+ c->async_write_error = 0;
+
+ c->dm_io = dm_io_client_create();
+ if (IS_ERR(c->dm_io)) {
+ r = PTR_ERR(c->dm_io);
+ goto bad_dm_io;
+ }
+
+ mutex_lock(&dm_bufio_clients_lock);
+ if (c->blocks_per_page_bits) {
+ if (!DM_BUFIO_CACHE_NAME(c)) {
+ DM_BUFIO_CACHE_NAME(c) = kasprintf(GFP_KERNEL, "dm_bufio_cache-%u", c->block_size);
+ if (!DM_BUFIO_CACHE_NAME(c)) {
+ r = -ENOMEM;
+ mutex_unlock(&dm_bufio_clients_lock);
+ goto bad_cache;
+ }
+ }
+
+ if (!DM_BUFIO_CACHE(c)) {
+ DM_BUFIO_CACHE(c) = kmem_cache_create(DM_BUFIO_CACHE_NAME(c),
+ c->block_size,
+ c->block_size, 0, NULL);
+ if (!DM_BUFIO_CACHE(c)) {
+ r = -ENOMEM;
+ mutex_unlock(&dm_bufio_clients_lock);
+ goto bad_cache;
+ }
+ }
+ }
+ mutex_unlock(&dm_bufio_clients_lock);
+
+ while (c->need_reserved_buffers) {
+ struct dm_buffer *b = alloc_buffer(c, GFP_KERNEL);
+
+ if (!b) {
+ r = -ENOMEM;
+ goto bad_buffer;
+ }
+ __free_buffer_wake(b);
+ }
+
+ mutex_lock(&dm_bufio_clients_lock);
+ dm_bufio_client_count++;
+ list_add(&c->client_list, &dm_bufio_all_clients);
+ __cache_size_refresh();
+ mutex_unlock(&dm_bufio_clients_lock);
+
+ c->shrinker.shrink = shrink;
+ c->shrinker.seeks = 1;
+ c->shrinker.batch = 0;
+ register_shrinker(&c->shrinker);
+
+ return c;
+
+bad_buffer:
+bad_cache:
+ while (!list_empty(&c->reserved_buffers)) {
+ struct dm_buffer *b = list_entry(c->reserved_buffers.next,
+ struct dm_buffer, lru_list);
+ list_del(&b->lru_list);
+ free_buffer(b);
+ }
+ dm_io_client_destroy(c->dm_io);
+bad_dm_io:
+ vfree(c->cache_hash);
+bad_hash:
+ kfree(c);
+bad_client:
+ return ERR_PTR(r);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_client_create);
+
+/*
+ * Free the buffering interface.
+ * It is required that there are no references on any buffers.
+ */
+void dm_bufio_client_destroy(struct dm_bufio_client *c)
+{
+ unsigned i;
+
+ drop_buffers(c);
+
+ unregister_shrinker(&c->shrinker);
+
+ mutex_lock(&dm_bufio_clients_lock);
+
+ list_del(&c->client_list);
+ dm_bufio_client_count--;
+ __cache_size_refresh();
+
+ mutex_unlock(&dm_bufio_clients_lock);
+
+ for (i = 0; i < 1 << DM_BUFIO_HASH_BITS; i++)
+ BUG_ON(!hlist_empty(&c->cache_hash[i]));
+
+ BUG_ON(c->need_reserved_buffers);
+
+ while (!list_empty(&c->reserved_buffers)) {
+ struct dm_buffer *b = list_entry(c->reserved_buffers.next,
+ struct dm_buffer, lru_list);
+ list_del(&b->lru_list);
+ free_buffer(b);
+ }
+
+ for (i = 0; i < LIST_SIZE; i++)
+ if (c->n_buffers[i])
+ DMERR("leaked buffer count %d: %ld", i, c->n_buffers[i]);
+
+ for (i = 0; i < LIST_SIZE; i++)
+ BUG_ON(c->n_buffers[i]);
+
+ dm_io_client_destroy(c->dm_io);
+ vfree(c->cache_hash);
+ kfree(c);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_client_destroy);
+
+static void cleanup_old_buffers(void)
+{
+ unsigned long max_age = dm_bufio_max_age;
+ struct dm_bufio_client *c;
+
+	/*
+	 * Use a compiler barrier to ensure that dm_bufio_max_age, which
+	 * can be changed at any time via its module parameter, is only
+	 * read once here.
+	 */
+	barrier();
+
+ if (max_age > ULONG_MAX / HZ)
+ max_age = ULONG_MAX / HZ;
+
+ mutex_lock(&dm_bufio_clients_lock);
+ list_for_each_entry(c, &dm_bufio_all_clients, client_list) {
+ if (!dm_bufio_trylock(c))
+ continue;
+
+ while (!list_empty(&c->lru[LIST_CLEAN])) {
+ struct dm_buffer *b;
+ b = list_entry(c->lru[LIST_CLEAN].prev,
+ struct dm_buffer, lru_list);
+ if (__cleanup_old_buffer(b, 0, max_age * HZ))
+ break;
+ dm_bufio_cond_resched();
+ }
+
+ dm_bufio_unlock(c);
+ dm_bufio_cond_resched();
+ }
+ mutex_unlock(&dm_bufio_clients_lock);
+}
+
+static struct workqueue_struct *dm_bufio_wq;
+static struct delayed_work dm_bufio_work;
+
+static void work_fn(struct work_struct *w)
+{
+ cleanup_old_buffers();
+
+ queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
+ DM_BUFIO_WORK_TIMER_SECS * HZ);
+}
+
+/*----------------------------------------------------------------
+ * Module setup
+ *--------------------------------------------------------------*/
+
+/*
+ * This is called only once for the whole dm_bufio module.
+ * It initializes the memory limit.
+ */
+static int __init dm_bufio_init(void)
+{
+ __u64 mem;
+
+ memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
+ memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
+
+ mem = (__u64)((totalram_pages - totalhigh_pages) *
+ DM_BUFIO_MEMORY_PERCENT / 100) << PAGE_SHIFT;
+
+ if (mem > ULONG_MAX)
+ mem = ULONG_MAX;
+
+#ifdef CONFIG_MMU
+ /*
+ * Get the size of vmalloc space the same way as VMALLOC_TOTAL
+ * in fs/proc/internal.h
+ */
+ if (mem > (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100)
+ mem = (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100;
+#endif
+
+ dm_bufio_default_cache_size = mem;
+
+ mutex_lock(&dm_bufio_clients_lock);
+ __cache_size_refresh();
+ mutex_unlock(&dm_bufio_clients_lock);
+
+ dm_bufio_wq = create_singlethread_workqueue("dm_bufio_cache");
+ if (!dm_bufio_wq)
+ return -ENOMEM;
+
+ INIT_DELAYED_WORK(&dm_bufio_work, work_fn);
+ queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
+ DM_BUFIO_WORK_TIMER_SECS * HZ);
+
+ return 0;
+}
+
+/*
+ * This is called once when unloading the dm_bufio module.
+ */
+static void __exit dm_bufio_exit(void)
+{
+ int bug = 0;
+ int i;
+
+ cancel_delayed_work_sync(&dm_bufio_work);
+ destroy_workqueue(dm_bufio_wq);
+
+ for (i = 0; i < ARRAY_SIZE(dm_bufio_caches); i++) {
+ struct kmem_cache *kc = dm_bufio_caches[i];
+
+ if (kc)
+ kmem_cache_destroy(kc);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(dm_bufio_cache_names); i++)
+ kfree(dm_bufio_cache_names[i]);
+
+ if (dm_bufio_client_count) {
+ DMCRIT("%s: dm_bufio_client_count leaked: %d",
+ __func__, dm_bufio_client_count);
+ bug = 1;
+ }
+
+ if (dm_bufio_current_allocated) {
+ DMCRIT("%s: dm_bufio_current_allocated leaked: %lu",
+ __func__, dm_bufio_current_allocated);
+ bug = 1;
+ }
+
+ if (dm_bufio_allocated_get_free_pages) {
+ DMCRIT("%s: dm_bufio_allocated_get_free_pages leaked: %lu",
+ __func__, dm_bufio_allocated_get_free_pages);
+ bug = 1;
+ }
+
+ if (dm_bufio_allocated_vmalloc) {
+ DMCRIT("%s: dm_bufio_vmalloc leaked: %lu",
+ __func__, dm_bufio_allocated_vmalloc);
+ bug = 1;
+ }
+
+ if (bug)
+ BUG();
+}
+
+module_init(dm_bufio_init)
+module_exit(dm_bufio_exit)
+
+module_param_named(max_cache_size_bytes, dm_bufio_cache_size, ulong, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(max_cache_size_bytes, "Size of metadata cache");
+
+module_param_named(max_age_seconds, dm_bufio_max_age, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(max_age_seconds, "Max age of a buffer in seconds");
+
+module_param_named(peak_allocated_bytes, dm_bufio_peak_allocated, ulong, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(peak_allocated_bytes, "Tracks the maximum allocated memory");
+
+module_param_named(allocated_kmem_cache_bytes, dm_bufio_allocated_kmem_cache, ulong, S_IRUGO);
+MODULE_PARM_DESC(allocated_kmem_cache_bytes, "Memory allocated with kmem_cache_alloc");
+
+module_param_named(allocated_get_free_pages_bytes, dm_bufio_allocated_get_free_pages, ulong, S_IRUGO);
+MODULE_PARM_DESC(allocated_get_free_pages_bytes, "Memory allocated with get_free_pages");
+
+module_param_named(allocated_vmalloc_bytes, dm_bufio_allocated_vmalloc, ulong, S_IRUGO);
+MODULE_PARM_DESC(allocated_vmalloc_bytes, "Memory allocated with vmalloc");
+
+module_param_named(current_allocated_bytes, dm_bufio_current_allocated, ulong, S_IRUGO);
+MODULE_PARM_DESC(current_allocated_bytes, "Memory currently used by the cache");
+
+MODULE_AUTHOR("Mikulas Patocka <dm-devel@redhat.com>");
+MODULE_DESCRIPTION(DM_NAME " buffered I/O library");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm-bufio.h b/drivers/md/dm-bufio.h
new file mode 100644
index 000000000000..5c4c3a04e381
--- /dev/null
+++ b/drivers/md/dm-bufio.h
@@ -0,0 +1,112 @@
+/*
+ * Copyright (C) 2009-2011 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_BUFIO_H
+#define DM_BUFIO_H
+
+#include <linux/blkdev.h>
+#include <linux/types.h>
+
+/*----------------------------------------------------------------*/
+
+struct dm_bufio_client;
+struct dm_buffer;
+
+/*
+ * Create a buffered IO cache on a given device
+ */
+struct dm_bufio_client *
+dm_bufio_client_create(struct block_device *bdev, unsigned block_size,
+ unsigned reserved_buffers, unsigned aux_size,
+ void (*alloc_callback)(struct dm_buffer *),
+ void (*write_callback)(struct dm_buffer *));
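+
+/*
+ * An illustrative sketch (not part of this interface): a hypothetical
+ * caller creating a client for 4096-byte blocks, with one reserved
+ * buffer and no per-buffer aux data or callbacks:
+ *
+ *	struct dm_bufio_client *c;
+ *
+ *	c = dm_bufio_client_create(bdev, 4096, 1, 0, NULL, NULL);
+ *	if (IS_ERR(c))
+ *		return PTR_ERR(c);
+ */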
+
+/*
+ * Release a buffered IO cache.
+ */
+void dm_bufio_client_destroy(struct dm_bufio_client *c);
+
+/*
+ * WARNING: to avoid deadlocks, these conditions are observed:
+ *
+ * - At most one thread can hold at most "reserved_buffers" buffers
+ *   simultaneously.
+ * - Each other thread can hold at most one buffer.
+ * - Threads which call only dm_bufio_get can hold an unlimited number
+ *   of buffers.
+ */
+
+/*
+ * Read a given block from disk.  Returns a pointer to the data.  Also
+ * returns, via *bp, a pointer to the dm_buffer that can be used to
+ * release the buffer or to mark it dirty.
+ */
+void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp);
+
+/*
+ * Like dm_bufio_read, but returns the buffer from the cache without
+ * reading it from disk.  If the buffer is not in the cache, returns NULL.
+ */
+void *dm_bufio_get(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp);
+
+/*
+ * Like dm_bufio_read, but don't read anything from the disk. It is
+ * expected that the caller initializes the buffer and marks it dirty.
+ */
+void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
+ struct dm_buffer **bp);
+
+/*
+ * Release a reference obtained with dm_bufio_{read,get,new}. The data
+ * pointer and the dm_buffer pointer are no longer valid after this call.
+ */
+void dm_bufio_release(struct dm_buffer *b);
+
+/*
+ * Mark a buffer dirty. It should be called after the buffer is modified.
+ *
+ * Under memory pressure the buffer may be written out at any point
+ * between dm_bufio_mark_buffer_dirty and dm_bufio_write_dirty_buffers.
+ * So dm_bufio_write_dirty_buffers guarantees that the buffer is on disk,
+ * but the actual write may happen earlier.
+ */
+void dm_bufio_mark_buffer_dirty(struct dm_buffer *b);
+
+/*
+ * Initiate writing of dirty buffers, without waiting for completion.
+ */
+void dm_bufio_write_dirty_buffers_async(struct dm_bufio_client *c);
+
+/*
+ * Write all dirty buffers. Guarantees that all dirty buffers created prior
+ * to this call are on disk when this call exits.
+ */
+int dm_bufio_write_dirty_buffers(struct dm_bufio_client *c);
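+
+/*
+ * An illustrative sketch (not part of this interface) of the usual
+ * read-modify-write cycle built from the calls above (new_data and
+ * new_len are hypothetical):
+ *
+ *	struct dm_buffer *bp;
+ *	void *data = dm_bufio_read(c, block, &bp);
+ *
+ *	if (IS_ERR(data))
+ *		return PTR_ERR(data);
+ *	memcpy(data, new_data, new_len);
+ *	dm_bufio_mark_buffer_dirty(bp);
+ *	dm_bufio_release(bp);
+ *	return dm_bufio_write_dirty_buffers(c);
+ */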
+
+/*
+ * Send an empty write barrier to the device to flush the hardware disk cache.
+ */
+int dm_bufio_issue_flush(struct dm_bufio_client *c);
+
+/*
+ * Like dm_bufio_release but also move the buffer to the new
+ * block. dm_bufio_write_dirty_buffers is needed to commit the new block.
+ */
+void dm_bufio_release_move(struct dm_buffer *b, sector_t new_block);
+
+unsigned dm_bufio_get_block_size(struct dm_bufio_client *c);
+sector_t dm_bufio_get_device_size(struct dm_bufio_client *c);
+sector_t dm_bufio_get_block_number(struct dm_buffer *b);
+void *dm_bufio_get_block_data(struct dm_buffer *b);
+void *dm_bufio_get_aux_data(struct dm_buffer *b);
+struct dm_bufio_client *dm_bufio_get_client(struct dm_buffer *b);
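+
+/*
+ * An illustrative sketch (not part of this interface): the per-buffer
+ * aux data of aux_size bytes, as requested at client creation, travels
+ * with each buffer.  struct my_aux is hypothetical:
+ *
+ *	struct my_aux *aux = dm_bufio_get_aux_data(b);
+ */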
+
+/*----------------------------------------------------------------*/
+
+#endif
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 2e9a3ca37bdd..31c2dc25886d 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1215,6 +1215,7 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
struct hash_cell *hc;
struct dm_table *t;
struct mapped_device *md;
+ struct target_type *immutable_target_type;
md = find_device(param);
if (!md)
@@ -1230,6 +1231,16 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
goto out;
}
+ immutable_target_type = dm_get_immutable_target_type(md);
+ if (immutable_target_type &&
+ (immutable_target_type != dm_table_get_immutable_target_type(t))) {
+ DMWARN("can't replace immutable target type %s",
+ immutable_target_type->name);
+ dm_table_destroy(t);
+ r = -EINVAL;
+ goto out;
+ }
+
/* Protect md->type and md->queue against concurrent table loads. */
dm_lock_md_type(md);
if (dm_get_md_type(md) == DM_TYPE_NONE)
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 32ac70861d66..bed444c93d8d 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -66,6 +66,8 @@ struct dm_kcopyd_client {
struct list_head pages_jobs;
};
+static struct page_list zero_page_list;
+
static void wake(struct dm_kcopyd_client *kc)
{
queue_work(kc->kcopyd_wq, &kc->kcopyd_work);
@@ -254,6 +256,9 @@ int __init dm_kcopyd_init(void)
if (!_job_cache)
return -ENOMEM;
+ zero_page_list.next = &zero_page_list;
+ zero_page_list.page = ZERO_PAGE(0);
+
return 0;
}
@@ -322,7 +327,7 @@ static int run_complete_job(struct kcopyd_job *job)
dm_kcopyd_notify_fn fn = job->fn;
struct dm_kcopyd_client *kc = job->kc;
- if (job->pages)
+ if (job->pages && job->pages != &zero_page_list)
kcopyd_put_pages(kc, job->pages);
/*
* If this is the master job, the sub jobs have already
@@ -484,6 +489,8 @@ static void dispatch_job(struct kcopyd_job *job)
atomic_inc(&kc->nr_jobs);
if (unlikely(!job->source.count))
push(&kc->complete_jobs, job);
+ else if (job->pages == &zero_page_list)
+ push(&kc->io_jobs, job);
else
push(&kc->pages_jobs, job);
wake(kc);
@@ -592,14 +599,20 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
job->flags = flags;
job->read_err = 0;
job->write_err = 0;
- job->rw = READ;
-
- job->source = *from;
job->num_dests = num_dests;
memcpy(&job->dests, dests, sizeof(*dests) * num_dests);
- job->pages = NULL;
+ if (from) {
+ job->source = *from;
+ job->pages = NULL;
+ job->rw = READ;
+ } else {
+ memset(&job->source, 0, sizeof job->source);
+ job->source.count = job->dests[0].count;
+ job->pages = &zero_page_list;
+ job->rw = WRITE;
+ }
job->fn = fn;
job->context = context;
@@ -617,6 +630,14 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
}
EXPORT_SYMBOL(dm_kcopyd_copy);
+int dm_kcopyd_zero(struct dm_kcopyd_client *kc,
+ unsigned num_dests, struct dm_io_region *dests,
+ unsigned flags, dm_kcopyd_notify_fn fn, void *context)
+{
+ return dm_kcopyd_copy(kc, NULL, num_dests, dests, flags, fn, context);
+}
+EXPORT_SYMBOL(dm_kcopyd_zero);
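+
+/*
+ * An illustrative sketch (not part of this patch): zeroing a single
+ * region and being notified on completion.  zero_done and its context
+ * are hypothetical:
+ *
+ *	struct dm_io_region dest = {
+ *		.bdev = dest_bdev,
+ *		.sector = start,
+ *		.count = len,
+ *	};
+ *
+ *	r = dm_kcopyd_zero(kc, 1, &dest, 0, zero_done, context);
+ */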
+
void *dm_kcopyd_prepare_callback(struct dm_kcopyd_client *kc,
dm_kcopyd_notify_fn fn, void *context)
{
diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspace-base.c
index 1021c8986011..8db3862dade5 100644
--- a/drivers/md/dm-log-userspace-base.c
+++ b/drivers/md/dm-log-userspace-base.c
@@ -30,6 +30,7 @@ struct flush_entry {
struct log_c {
struct dm_target *ti;
+ struct dm_dev *log_dev;
uint32_t region_size;
region_t region_count;
uint64_t luid;
@@ -146,7 +147,7 @@ static int build_constructor_string(struct dm_target *ti,
* <UUID> <other args>
* Where 'other args' is the userspace implementation specific log
* arguments. An example might be:
- * <UUID> clustered_disk <arg count> <log dev> <region_size> [[no]sync]
+ * <UUID> clustered-disk <arg count> <log dev> <region_size> [[no]sync]
*
* So, this module will strip off the <UUID> for identification purposes
* when communicating with userspace about a log; but will pass on everything
@@ -161,13 +162,15 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
struct log_c *lc = NULL;
uint64_t rdata;
size_t rdata_size = sizeof(rdata);
+ char *devices_rdata = NULL;
+ size_t devices_rdata_size = DM_NAME_LEN;
if (argc < 3) {
DMWARN("Too few arguments to userspace dirty log");
return -EINVAL;
}
- lc = kmalloc(sizeof(*lc), GFP_KERNEL);
+ lc = kzalloc(sizeof(*lc), GFP_KERNEL);
if (!lc) {
DMWARN("Unable to allocate userspace log context.");
return -ENOMEM;
@@ -195,9 +198,19 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
return str_size;
}
- /* Send table string */
+ devices_rdata = kzalloc(devices_rdata_size, GFP_KERNEL);
+ if (!devices_rdata) {
+ DMERR("Failed to allocate memory for device information");
+ r = -ENOMEM;
+ goto out;
+ }
+
+ /*
+ * Send table string and get back any opened device.
+ */
r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_CTR,
- ctr_str, str_size, NULL, NULL);
+ ctr_str, str_size,
+ devices_rdata, &devices_rdata_size);
if (r < 0) {
if (r == -ESRCH)
@@ -220,7 +233,20 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
lc->region_size = (uint32_t)rdata;
lc->region_count = dm_sector_div_up(ti->len, lc->region_size);
+ if (devices_rdata_size) {
+ if (devices_rdata[devices_rdata_size - 1] != '\0') {
+ DMERR("DM_ULOG_CTR device return string not properly terminated");
+ r = -EINVAL;
+ goto out;
+ }
+ r = dm_get_device(ti, devices_rdata,
+ dm_table_get_mode(ti->table), &lc->log_dev);
+ if (r)
+ DMERR("Failed to register %s with device-mapper",
+ devices_rdata);
+ }
out:
+ kfree(devices_rdata);
if (r) {
kfree(lc);
kfree(ctr_str);
@@ -241,6 +267,9 @@ static void userspace_dtr(struct dm_dirty_log *log)
NULL, 0,
NULL, NULL);
+ if (lc->log_dev)
+ dm_put_device(lc->ti, lc->log_dev);
+
kfree(lc->usr_argv_str);
kfree(lc);
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 37a37266a1e3..11fa96df4b06 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1017,30 +1017,56 @@ static int raid_status(struct dm_target *ti, status_type_t type,
struct raid_set *rs = ti->private;
unsigned raid_param_cnt = 1; /* at least 1 for chunksize */
unsigned sz = 0;
- int i;
+ int i, array_in_sync = 0;
sector_t sync;
switch (type) {
case STATUSTYPE_INFO:
DMEMIT("%s %d ", rs->raid_type->name, rs->md.raid_disks);
- for (i = 0; i < rs->md.raid_disks; i++) {
- if (test_bit(Faulty, &rs->dev[i].rdev.flags))
- DMEMIT("D");
- else if (test_bit(In_sync, &rs->dev[i].rdev.flags))
- DMEMIT("A");
- else
- DMEMIT("a");
- }
-
if (test_bit(MD_RECOVERY_RUNNING, &rs->md.recovery))
sync = rs->md.curr_resync_completed;
else
sync = rs->md.recovery_cp;
- if (sync > rs->md.resync_max_sectors)
+ if (sync >= rs->md.resync_max_sectors) {
+ array_in_sync = 1;
sync = rs->md.resync_max_sectors;
+ } else {
+ /*
+ * The array may be doing an initial sync, or it may
+ * be rebuilding individual components. If all the
+ * devices are In_sync, then it is the array that is
+ * being initialized.
+ */
+ for (i = 0; i < rs->md.raid_disks; i++)
+ if (!test_bit(In_sync, &rs->dev[i].rdev.flags))
+ array_in_sync = 1;
+ }
+ /*
+ * Status characters:
+ * 'D' = Dead/Failed device
+ * 'a' = Alive but not in-sync
+ * 'A' = Alive and in-sync
+ */
+ for (i = 0; i < rs->md.raid_disks; i++) {
+ if (test_bit(Faulty, &rs->dev[i].rdev.flags))
+ DMEMIT("D");
+ else if (!array_in_sync ||
+ !test_bit(In_sync, &rs->dev[i].rdev.flags))
+ DMEMIT("a");
+ else
+ DMEMIT("A");
+ }
+ /*
+ * In-sync ratio:
+ * The in-sync ratio shows the progress of:
+ * - Initializing the array
+ * - Rebuilding a subset of devices of the array
+ * The user can distinguish between the two by referring
+ * to the status characters.
+ */
DMEMIT(" %llu/%llu",
(unsigned long long) sync,
(unsigned long long) rs->md.resync_max_sectors);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index bc04518e9d8b..8e9132130142 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -54,7 +54,9 @@ struct dm_table {
sector_t *highs;
struct dm_target *targets;
+ struct target_type *immutable_target_type;
unsigned integrity_supported:1;
+ unsigned singleton:1;
/*
* Indicates the rw permissions for the new logical
@@ -740,6 +742,12 @@ int dm_table_add_target(struct dm_table *t, const char *type,
char **argv;
struct dm_target *tgt;
+ if (t->singleton) {
+ DMERR("%s: target type %s must appear alone in table",
+ dm_device_name(t->md), t->targets->type->name);
+ return -EINVAL;
+ }
+
if ((r = check_space(t)))
return r;
@@ -758,6 +766,36 @@ int dm_table_add_target(struct dm_table *t, const char *type,
return -EINVAL;
}
+ if (dm_target_needs_singleton(tgt->type)) {
+ if (t->num_targets) {
+ DMERR("%s: target type %s must appear alone in table",
+ dm_device_name(t->md), type);
+ return -EINVAL;
+ }
+ t->singleton = 1;
+ }
+
+ if (dm_target_always_writeable(tgt->type) && !(t->mode & FMODE_WRITE)) {
+ DMERR("%s: target type %s may not be included in read-only tables",
+ dm_device_name(t->md), type);
+ return -EINVAL;
+ }
+
+ if (t->immutable_target_type) {
+ if (t->immutable_target_type != tgt->type) {
+ DMERR("%s: immutable target type %s cannot be mixed with other target types",
+ dm_device_name(t->md), t->immutable_target_type->name);
+ return -EINVAL;
+ }
+ } else if (dm_target_is_immutable(tgt->type)) {
+ if (t->num_targets) {
+ DMERR("%s: immutable target type %s cannot be mixed with other target types",
+ dm_device_name(t->md), tgt->type->name);
+ return -EINVAL;
+ }
+ t->immutable_target_type = tgt->type;
+ }
+
tgt->table = t;
tgt->begin = start;
tgt->len = len;
@@ -915,6 +953,11 @@ unsigned dm_table_get_type(struct dm_table *t)
return t->type;
}
+struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
+{
+ return t->immutable_target_type;
+}
+
bool dm_table_request_based(struct dm_table *t)
{
return dm_table_get_type(t) == DM_TYPE_REQUEST_BASED;
@@ -1299,6 +1342,31 @@ static bool dm_table_discard_zeroes_data(struct dm_table *t)
return 1;
}
+static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev,
+ sector_t start, sector_t len, void *data)
+{
+ struct request_queue *q = bdev_get_queue(dev->bdev);
+
+ return q && blk_queue_nonrot(q);
+}
+
+static bool dm_table_is_nonrot(struct dm_table *t)
+{
+ struct dm_target *ti;
+ unsigned i = 0;
+
+	/* Ensure that all underlying devices are non-rotational. */
+ while (i < dm_table_get_num_targets(t)) {
+ ti = dm_table_get_target(t, i++);
+
+ if (!ti->type->iterate_devices ||
+ !ti->type->iterate_devices(ti, device_is_nonrot, NULL))
+ return 0;
+ }
+
+ return 1;
+}
+
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits)
{
@@ -1324,6 +1392,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
if (!dm_table_discard_zeroes_data(t))
q->limits.discard_zeroes_data = 0;
+ if (dm_table_is_nonrot(t))
+ queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
+ else
+ queue_flag_clear_unlocked(QUEUE_FLAG_NONROT, q);
+
dm_table_set_integrity(t);
/*
diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
new file mode 100644
index 000000000000..59c4f0446ffa
--- /dev/null
+++ b/drivers/md/dm-thin-metadata.c
@@ -0,0 +1,1391 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-thin-metadata.h"
+#include "persistent-data/dm-btree.h"
+#include "persistent-data/dm-space-map.h"
+#include "persistent-data/dm-space-map-disk.h"
+#include "persistent-data/dm-transaction-manager.h"
+
+#include <linux/list.h>
+#include <linux/device-mapper.h>
+#include <linux/workqueue.h>
+
+/*--------------------------------------------------------------------------
+ * As far as the metadata goes, there is:
+ *
+ * - A superblock in block zero, taking up fewer than 512 bytes for
+ * atomic writes.
+ *
+ * - A space map managing the metadata blocks.
+ *
+ * - A space map managing the data blocks.
+ *
+ * - A btree mapping our internal thin dev ids onto struct disk_device_details.
+ *
+ * - A hierarchical btree, with 2 levels which effectively maps (thin
+ * dev id, virtual block) -> block_time. Block time is a 64-bit
+ *   field holding the time in the low 24 bits, and the block in the
+ *   top 40 bits.
+ *
+ * BTrees consist solely of btree_nodes, each of which fills a block.
+ * Some are internal nodes, whose values are __le64s pointing to other
+ * nodes.  Leaf nodes can store values of any reasonable size (i.e. much
+ * smaller than the block size).  A node consists of the header,
+ * followed by an array of keys, followed by an array of values.  We
+ * binary search on the keys, so they're all held together to help the
+ * cpu cache.
+ *
+ * Space maps have 2 btrees:
+ *
+ * - One maps a uint64_t onto a struct index_entry, which points to a
+ *   bitmap block and holds some details, such as how many free entries
+ *   there are.
+ *
+ * - The bitmap blocks have a header (for the checksum); the rest of
+ *   the block is pairs of bits, with the following meaning:
+ *
+ * 0 - ref count is 0
+ * 1 - ref count is 1
+ * 2 - ref count is 2
+ * 3 - ref count is higher than 2
+ *
+ * - If the count is higher than 2 then the ref count is entered in a
+ * second btree that directly maps the block_address to a uint32_t ref
+ * count.
+ *
+ * The space map metadata variant doesn't have a bitmaps btree.  Instead
+ * it has a single block's worth of index_entries.  This avoids the
+ * recursion that would arise from the bitmap btree needing to allocate
+ * space in order to insert.  With a small data block size, such as 64k,
+ * the metadata can support data devices that are hundreds of terabytes
+ * in size.
+ *
+ * The space maps allocate space linearly from front to back. Space that
+ * is freed in a transaction is never recycled within that transaction.
+ * To try and avoid fragmenting _free_ space the allocator always goes
+ * back and fills in gaps.
+ *
+ * All metadata io is in THIN_METADATA_BLOCK_SIZE sized/aligned chunks
+ * from the block manager.
+ *--------------------------------------------------------------------------*/
+
+#define DM_MSG_PREFIX "thin metadata"
+
+#define THIN_SUPERBLOCK_MAGIC 27022010
+#define THIN_SUPERBLOCK_LOCATION 0
+#define THIN_VERSION 1
+#define THIN_METADATA_CACHE_SIZE 64
+#define SECTOR_TO_BLOCK_SHIFT 3
+
+/* This should be plenty */
+#define SPACE_MAP_ROOT_SIZE 128
+
+/*
+ * Little endian on-disk superblock and device details.
+ */
+struct thin_disk_superblock {
+ __le32 csum; /* Checksum of superblock except for this field. */
+ __le32 flags;
+ __le64 blocknr; /* This block number, dm_block_t. */
+
+ __u8 uuid[16];
+ __le64 magic;
+ __le32 version;
+ __le32 time;
+
+ __le64 trans_id;
+
+ /*
+ * Root held by userspace transactions.
+ */
+ __le64 held_root;
+
+ __u8 data_space_map_root[SPACE_MAP_ROOT_SIZE];
+ __u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+ /*
+	 * 2-level btree mapping (dev_id, virtual_block) -> (data block, time)
+ */
+ __le64 data_mapping_root;
+
+ /*
+ * Device detail root mapping dev_id -> device_details
+ */
+ __le64 device_details_root;
+
+ __le32 data_block_size; /* In 512-byte sectors. */
+
+ __le32 metadata_block_size; /* In 512-byte sectors. */
+ __le64 metadata_nr_blocks;
+
+ __le32 compat_flags;
+ __le32 compat_ro_flags;
+ __le32 incompat_flags;
+} __packed;
+
+struct disk_device_details {
+ __le64 mapped_blocks;
+ __le64 transaction_id; /* When created. */
+ __le32 creation_time;
+ __le32 snapshotted_time;
+} __packed;
+
+struct dm_pool_metadata {
+ struct hlist_node hash;
+
+ struct block_device *bdev;
+ struct dm_block_manager *bm;
+ struct dm_space_map *metadata_sm;
+ struct dm_space_map *data_sm;
+ struct dm_transaction_manager *tm;
+ struct dm_transaction_manager *nb_tm;
+
+ /*
+ * Two-level btree.
+ * First level holds thin_dev_t.
+ * Second level holds mappings.
+ */
+ struct dm_btree_info info;
+
+ /*
+ * Non-blocking version of the above.
+ */
+ struct dm_btree_info nb_info;
+
+ /*
+ * Just the top level for deleting whole devices.
+ */
+ struct dm_btree_info tl_info;
+
+ /*
+ * Just the bottom level for creating new devices.
+ */
+ struct dm_btree_info bl_info;
+
+ /*
+ * Describes the device details btree.
+ */
+ struct dm_btree_info details_info;
+
+ struct rw_semaphore root_lock;
+ uint32_t time;
+ int need_commit;
+ dm_block_t root;
+ dm_block_t details_root;
+ struct list_head thin_devices;
+ uint64_t trans_id;
+ unsigned long flags;
+ sector_t data_block_size;
+};
+
+struct dm_thin_device {
+ struct list_head list;
+ struct dm_pool_metadata *pmd;
+ dm_thin_id id;
+
+ int open_count;
+ int changed;
+ uint64_t mapped_blocks;
+ uint64_t transaction_id;
+ uint32_t creation_time;
+ uint32_t snapshotted_time;
+};
+
+/*----------------------------------------------------------------
+ * superblock validator
+ *--------------------------------------------------------------*/
+
+#define SUPERBLOCK_CSUM_XOR 160774
+
+static void sb_prepare_for_write(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct thin_disk_superblock *disk_super = dm_block_data(b);
+
+ disk_super->blocknr = cpu_to_le64(dm_block_location(b));
+ disk_super->csum = cpu_to_le32(dm_bm_checksum(&disk_super->flags,
+ block_size - sizeof(__le32),
+ SUPERBLOCK_CSUM_XOR));
+}
+
+static int sb_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct thin_disk_superblock *disk_super = dm_block_data(b);
+ __le32 csum_le;
+
+ if (dm_block_location(b) != le64_to_cpu(disk_super->blocknr)) {
+ DMERR("sb_check failed: blocknr %llu: "
+ "wanted %llu", le64_to_cpu(disk_super->blocknr),
+ (unsigned long long)dm_block_location(b));
+ return -ENOTBLK;
+ }
+
+ if (le64_to_cpu(disk_super->magic) != THIN_SUPERBLOCK_MAGIC) {
+ DMERR("sb_check failed: magic %llu: "
+ "wanted %llu", le64_to_cpu(disk_super->magic),
+ (unsigned long long)THIN_SUPERBLOCK_MAGIC);
+ return -EILSEQ;
+ }
+
+ csum_le = cpu_to_le32(dm_bm_checksum(&disk_super->flags,
+ block_size - sizeof(__le32),
+ SUPERBLOCK_CSUM_XOR));
+ if (csum_le != disk_super->csum) {
+ DMERR("sb_check failed: csum %u: wanted %u",
+ le32_to_cpu(csum_le), le32_to_cpu(disk_super->csum));
+ return -EILSEQ;
+ }
+
+ return 0;
+}
+
+static struct dm_block_validator sb_validator = {
+ .name = "superblock",
+ .prepare_for_write = sb_prepare_for_write,
+ .check = sb_check
+};
+
+/*----------------------------------------------------------------
+ * Methods for the btree value types
+ *--------------------------------------------------------------*/
+
+static uint64_t pack_block_time(dm_block_t b, uint32_t t)
+{
+ return (b << 24) | t;
+}
+
+static void unpack_block_time(uint64_t v, dm_block_t *b, uint32_t *t)
+{
+ *b = v >> 24;
+ *t = v & ((1 << 24) - 1);
+}
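+
+/*
+ * For example, block 5 at time 3 packs to (5 << 24) | 3 = 0x05000003,
+ * and unpacking that value yields b = 5 and t = 3 again.
+ */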
+
+static void data_block_inc(void *context, void *value_le)
+{
+ struct dm_space_map *sm = context;
+ __le64 v_le;
+ uint64_t b;
+ uint32_t t;
+
+ memcpy(&v_le, value_le, sizeof(v_le));
+ unpack_block_time(le64_to_cpu(v_le), &b, &t);
+ dm_sm_inc_block(sm, b);
+}
+
+static void data_block_dec(void *context, void *value_le)
+{
+ struct dm_space_map *sm = context;
+ __le64 v_le;
+ uint64_t b;
+ uint32_t t;
+
+ memcpy(&v_le, value_le, sizeof(v_le));
+ unpack_block_time(le64_to_cpu(v_le), &b, &t);
+ dm_sm_dec_block(sm, b);
+}
+
+static int data_block_equal(void *context, void *value1_le, void *value2_le)
+{
+ __le64 v1_le, v2_le;
+ uint64_t b1, b2;
+ uint32_t t;
+
+ memcpy(&v1_le, value1_le, sizeof(v1_le));
+ memcpy(&v2_le, value2_le, sizeof(v2_le));
+ unpack_block_time(le64_to_cpu(v1_le), &b1, &t);
+ unpack_block_time(le64_to_cpu(v2_le), &b2, &t);
+
+ return b1 == b2;
+}
+
+static void subtree_inc(void *context, void *value)
+{
+ struct dm_btree_info *info = context;
+ __le64 root_le;
+ uint64_t root;
+
+ memcpy(&root_le, value, sizeof(root_le));
+ root = le64_to_cpu(root_le);
+ dm_tm_inc(info->tm, root);
+}
+
+static void subtree_dec(void *context, void *value)
+{
+ struct dm_btree_info *info = context;
+ __le64 root_le;
+ uint64_t root;
+
+ memcpy(&root_le, value, sizeof(root_le));
+ root = le64_to_cpu(root_le);
+ if (dm_btree_del(info, root))
+		DMERR("btree delete failed");
+}
+
+static int subtree_equal(void *context, void *value1_le, void *value2_le)
+{
+ __le64 v1_le, v2_le;
+ memcpy(&v1_le, value1_le, sizeof(v1_le));
+ memcpy(&v2_le, value2_le, sizeof(v2_le));
+
+ return v1_le == v2_le;
+}
+
+/*----------------------------------------------------------------*/
+
+static int superblock_all_zeroes(struct dm_block_manager *bm, int *result)
+{
+ int r;
+ unsigned i;
+ struct dm_block *b;
+ __le64 *data_le, zero = cpu_to_le64(0);
+ unsigned block_size = dm_bm_block_size(bm) / sizeof(__le64);
+
+ /*
+ * We can't use a validator here - it may be all zeroes.
+ */
+ r = dm_bm_read_lock(bm, THIN_SUPERBLOCK_LOCATION, NULL, &b);
+ if (r)
+ return r;
+
+ data_le = dm_block_data(b);
+ *result = 1;
+ for (i = 0; i < block_size; i++) {
+ if (data_le[i] != zero) {
+ *result = 0;
+ break;
+ }
+ }
+
+ return dm_bm_unlock(b);
+}
+
+static int init_pmd(struct dm_pool_metadata *pmd,
+ struct dm_block_manager *bm,
+ dm_block_t nr_blocks, int create)
+{
+ int r;
+ struct dm_space_map *sm, *data_sm;
+ struct dm_transaction_manager *tm;
+ struct dm_block *sblock;
+
+ if (create) {
+ r = dm_tm_create_with_sm(bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &tm, &sm, &sblock);
+ if (r < 0) {
+ DMERR("tm_create_with_sm failed");
+ return r;
+ }
+
+ data_sm = dm_sm_disk_create(tm, nr_blocks);
+ if (IS_ERR(data_sm)) {
+ DMERR("sm_disk_create failed");
+ r = PTR_ERR(data_sm);
+ goto bad;
+ }
+ } else {
+ struct thin_disk_superblock *disk_super = NULL;
+ size_t space_map_root_offset =
+ offsetof(struct thin_disk_superblock, metadata_space_map_root);
+
+ r = dm_tm_open_with_sm(bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, space_map_root_offset,
+ SPACE_MAP_ROOT_SIZE, &tm, &sm, &sblock);
+ if (r < 0) {
+ DMERR("tm_open_with_sm failed");
+ return r;
+ }
+
+ disk_super = dm_block_data(sblock);
+ data_sm = dm_sm_disk_open(tm, disk_super->data_space_map_root,
+ sizeof(disk_super->data_space_map_root));
+ if (IS_ERR(data_sm)) {
+ DMERR("sm_disk_open failed");
+ r = PTR_ERR(data_sm);
+ goto bad;
+ }
+ }
+
+ r = dm_tm_unlock(tm, sblock);
+ if (r < 0) {
+ DMERR("couldn't unlock superblock");
+ goto bad_data_sm;
+ }
+
+ pmd->bm = bm;
+ pmd->metadata_sm = sm;
+ pmd->data_sm = data_sm;
+ pmd->tm = tm;
+ pmd->nb_tm = dm_tm_create_non_blocking_clone(tm);
+ if (!pmd->nb_tm) {
+ DMERR("could not create clone tm");
+ r = -ENOMEM;
+ goto bad_data_sm;
+ }
+
+ pmd->info.tm = tm;
+ pmd->info.levels = 2;
+ pmd->info.value_type.context = pmd->data_sm;
+ pmd->info.value_type.size = sizeof(__le64);
+ pmd->info.value_type.inc = data_block_inc;
+ pmd->info.value_type.dec = data_block_dec;
+ pmd->info.value_type.equal = data_block_equal;
+
+ memcpy(&pmd->nb_info, &pmd->info, sizeof(pmd->nb_info));
+ pmd->nb_info.tm = pmd->nb_tm;
+
+ pmd->tl_info.tm = tm;
+ pmd->tl_info.levels = 1;
+ pmd->tl_info.value_type.context = &pmd->info;
+ pmd->tl_info.value_type.size = sizeof(__le64);
+ pmd->tl_info.value_type.inc = subtree_inc;
+ pmd->tl_info.value_type.dec = subtree_dec;
+ pmd->tl_info.value_type.equal = subtree_equal;
+
+ pmd->bl_info.tm = tm;
+ pmd->bl_info.levels = 1;
+ pmd->bl_info.value_type.context = pmd->data_sm;
+ pmd->bl_info.value_type.size = sizeof(__le64);
+ pmd->bl_info.value_type.inc = data_block_inc;
+ pmd->bl_info.value_type.dec = data_block_dec;
+ pmd->bl_info.value_type.equal = data_block_equal;
+
+ pmd->details_info.tm = tm;
+ pmd->details_info.levels = 1;
+ pmd->details_info.value_type.context = NULL;
+ pmd->details_info.value_type.size = sizeof(struct disk_device_details);
+ pmd->details_info.value_type.inc = NULL;
+ pmd->details_info.value_type.dec = NULL;
+ pmd->details_info.value_type.equal = NULL;
+
+ pmd->root = 0;
+
+ init_rwsem(&pmd->root_lock);
+ pmd->time = 0;
+ pmd->need_commit = 0;
+ pmd->details_root = 0;
+ pmd->trans_id = 0;
+ pmd->flags = 0;
+ INIT_LIST_HEAD(&pmd->thin_devices);
+
+ return 0;
+
+bad_data_sm:
+ dm_sm_destroy(data_sm);
+bad:
+ dm_tm_destroy(tm);
+ dm_sm_destroy(sm);
+
+ return r;
+}
+
+static int __begin_transaction(struct dm_pool_metadata *pmd)
+{
+ int r;
+ u32 features;
+ struct thin_disk_superblock *disk_super;
+ struct dm_block *sblock;
+
+ /*
+	 * __commit_transaction() resets need_commit.
+ */
+ WARN_ON(pmd->need_commit);
+
+ /*
+ * We re-read the superblock every time. Shouldn't need to do this
+ * really.
+ */
+ r = dm_bm_read_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ return r;
+
+ disk_super = dm_block_data(sblock);
+ pmd->time = le32_to_cpu(disk_super->time);
+ pmd->root = le64_to_cpu(disk_super->data_mapping_root);
+ pmd->details_root = le64_to_cpu(disk_super->device_details_root);
+ pmd->trans_id = le64_to_cpu(disk_super->trans_id);
+ pmd->flags = le32_to_cpu(disk_super->flags);
+ pmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
+
+ features = le32_to_cpu(disk_super->incompat_flags) & ~THIN_FEATURE_INCOMPAT_SUPP;
+ if (features) {
+ DMERR("could not access metadata due to "
+ "unsupported optional features (%lx).",
+ (unsigned long)features);
+ r = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Check for read-only metadata to skip the following RDWR checks.
+ */
+ if (get_disk_ro(pmd->bdev->bd_disk))
+ goto out;
+
+ features = le32_to_cpu(disk_super->compat_ro_flags) & ~THIN_FEATURE_COMPAT_RO_SUPP;
+ if (features) {
+ DMERR("could not access metadata RDWR due to "
+ "unsupported optional features (%lx).",
+ (unsigned long)features);
+ r = -EINVAL;
+ }
+
+out:
+ dm_bm_unlock(sblock);
+ return r;
+}
+
+static int __write_changed_details(struct dm_pool_metadata *pmd)
+{
+ int r;
+ struct dm_thin_device *td, *tmp;
+ struct disk_device_details details;
+ uint64_t key;
+
+ list_for_each_entry_safe(td, tmp, &pmd->thin_devices, list) {
+ if (!td->changed)
+ continue;
+
+ key = td->id;
+
+ details.mapped_blocks = cpu_to_le64(td->mapped_blocks);
+ details.transaction_id = cpu_to_le64(td->transaction_id);
+ details.creation_time = cpu_to_le32(td->creation_time);
+ details.snapshotted_time = cpu_to_le32(td->snapshotted_time);
+ __dm_bless_for_disk(&details);
+
+ r = dm_btree_insert(&pmd->details_info, pmd->details_root,
+ &key, &details, &pmd->details_root);
+ if (r)
+ return r;
+
+ if (td->open_count)
+ td->changed = 0;
+ else {
+ list_del(&td->list);
+ kfree(td);
+ }
+
+ pmd->need_commit = 1;
+ }
+
+ return 0;
+}
+
+static int __commit_transaction(struct dm_pool_metadata *pmd)
+{
+ /*
+ * FIXME: Associated pool should be made read-only on failure.
+ */
+ int r;
+ size_t metadata_len, data_len;
+ struct thin_disk_superblock *disk_super;
+ struct dm_block *sblock;
+
+ /*
+	 * The superblock must fit within a 512-byte sector so that it
+	 * can be written atomically.
+ */
+ BUILD_BUG_ON(sizeof(struct thin_disk_superblock) > 512);
+
+ r = __write_changed_details(pmd);
+ if (r < 0)
+ goto out;
+
+ if (!pmd->need_commit)
+ goto out;
+
+ r = dm_sm_commit(pmd->data_sm);
+ if (r < 0)
+ goto out;
+
+ r = dm_tm_pre_commit(pmd->tm);
+ if (r < 0)
+ goto out;
+
+ r = dm_sm_root_size(pmd->metadata_sm, &metadata_len);
+ if (r < 0)
+ goto out;
+
+	r = dm_sm_root_size(pmd->data_sm, &data_len);
+ if (r < 0)
+ goto out;
+
+ r = dm_bm_write_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ goto out;
+
+ disk_super = dm_block_data(sblock);
+ disk_super->time = cpu_to_le32(pmd->time);
+ disk_super->data_mapping_root = cpu_to_le64(pmd->root);
+ disk_super->device_details_root = cpu_to_le64(pmd->details_root);
+ disk_super->trans_id = cpu_to_le64(pmd->trans_id);
+ disk_super->flags = cpu_to_le32(pmd->flags);
+
+ r = dm_sm_copy_root(pmd->metadata_sm, &disk_super->metadata_space_map_root,
+ metadata_len);
+ if (r < 0)
+ goto out_locked;
+
+ r = dm_sm_copy_root(pmd->data_sm, &disk_super->data_space_map_root,
+ data_len);
+ if (r < 0)
+ goto out_locked;
+
+ r = dm_tm_commit(pmd->tm, sblock);
+ if (!r)
+ pmd->need_commit = 0;
+
+out:
+ return r;
+
+out_locked:
+ dm_bm_unlock(sblock);
+ return r;
+}
+
+struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ sector_t data_block_size)
+{
+ int r;
+ struct thin_disk_superblock *disk_super;
+ struct dm_pool_metadata *pmd;
+ sector_t bdev_size = i_size_read(bdev->bd_inode) >> SECTOR_SHIFT;
+ struct dm_block_manager *bm;
+ int create;
+ struct dm_block *sblock;
+
+ pmd = kmalloc(sizeof(*pmd), GFP_KERNEL);
+ if (!pmd) {
+ DMERR("could not allocate metadata struct");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /*
+	 * Max held locks:
+ * 3 for btree insert +
+ * 2 for btree lookup used within space map
+ */
+ bm = dm_block_manager_create(bdev, THIN_METADATA_BLOCK_SIZE,
+ THIN_METADATA_CACHE_SIZE, 5);
+ if (!bm) {
+ DMERR("could not create block manager");
+ kfree(pmd);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ r = superblock_all_zeroes(bm, &create);
+ if (r) {
+ dm_block_manager_destroy(bm);
+ kfree(pmd);
+ return ERR_PTR(r);
+ }
+
+ r = init_pmd(pmd, bm, 0, create);
+ if (r) {
+ dm_block_manager_destroy(bm);
+ kfree(pmd);
+ return ERR_PTR(r);
+ }
+ pmd->bdev = bdev;
+
+ if (!create) {
+ r = __begin_transaction(pmd);
+ if (r < 0)
+ goto bad;
+ return pmd;
+ }
+
+ /*
+ * Create.
+ */
+ r = dm_bm_write_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ goto bad;
+
+ disk_super = dm_block_data(sblock);
+ disk_super->magic = cpu_to_le64(THIN_SUPERBLOCK_MAGIC);
+ disk_super->version = cpu_to_le32(THIN_VERSION);
+ disk_super->time = 0;
+ disk_super->metadata_block_size = cpu_to_le32(THIN_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);
+ disk_super->metadata_nr_blocks = cpu_to_le64(bdev_size >> SECTOR_TO_BLOCK_SHIFT);
+ disk_super->data_block_size = cpu_to_le32(data_block_size);
+
+ r = dm_bm_unlock(sblock);
+ if (r < 0)
+ goto bad;
+
+ r = dm_btree_empty(&pmd->info, &pmd->root);
+ if (r < 0)
+ goto bad;
+
+ r = dm_btree_empty(&pmd->details_info, &pmd->details_root);
+ if (r < 0) {
+ DMERR("couldn't create devices root");
+ goto bad;
+ }
+
+ pmd->flags = 0;
+ pmd->need_commit = 1;
+ r = dm_pool_commit_metadata(pmd);
+ if (r < 0) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ goto bad;
+ }
+
+ return pmd;
+
+bad:
+ if (dm_pool_metadata_close(pmd) < 0)
+ DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
+ return ERR_PTR(r);
+}
+
+int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
+{
+ int r;
+ unsigned open_devices = 0;
+ struct dm_thin_device *td, *tmp;
+
+ down_read(&pmd->root_lock);
+ list_for_each_entry_safe(td, tmp, &pmd->thin_devices, list) {
+ if (td->open_count)
+ open_devices++;
+ else {
+ list_del(&td->list);
+ kfree(td);
+ }
+ }
+ up_read(&pmd->root_lock);
+
+ if (open_devices) {
+ DMERR("attempt to close pmd when %u device(s) are still open",
+ open_devices);
+ return -EBUSY;
+ }
+
+ r = __commit_transaction(pmd);
+ if (r < 0)
+ DMWARN("%s: __commit_transaction() failed, error = %d",
+ __func__, r);
+
+ dm_tm_destroy(pmd->tm);
+ dm_tm_destroy(pmd->nb_tm);
+ dm_block_manager_destroy(pmd->bm);
+ dm_sm_destroy(pmd->metadata_sm);
+ dm_sm_destroy(pmd->data_sm);
+ kfree(pmd);
+
+ return 0;
+}
+
+static int __open_device(struct dm_pool_metadata *pmd,
+ dm_thin_id dev, int create,
+ struct dm_thin_device **td)
+{
+ int r, changed = 0;
+ struct dm_thin_device *td2;
+ uint64_t key = dev;
+ struct disk_device_details details_le;
+
+ /*
+ * Check the device isn't already open.
+ */
+ list_for_each_entry(td2, &pmd->thin_devices, list)
+ if (td2->id == dev) {
+ td2->open_count++;
+ *td = td2;
+ return 0;
+ }
+
+ /*
+ * Check the device exists.
+ */
+ r = dm_btree_lookup(&pmd->details_info, pmd->details_root,
+ &key, &details_le);
+ if (r) {
+ if (r != -ENODATA || !create)
+ return r;
+
+ changed = 1;
+ details_le.mapped_blocks = 0;
+ details_le.transaction_id = cpu_to_le64(pmd->trans_id);
+ details_le.creation_time = cpu_to_le32(pmd->time);
+ details_le.snapshotted_time = cpu_to_le32(pmd->time);
+ }
+
+ *td = kmalloc(sizeof(**td), GFP_NOIO);
+ if (!*td)
+ return -ENOMEM;
+
+ (*td)->pmd = pmd;
+ (*td)->id = dev;
+ (*td)->open_count = 1;
+ (*td)->changed = changed;
+ (*td)->mapped_blocks = le64_to_cpu(details_le.mapped_blocks);
+ (*td)->transaction_id = le64_to_cpu(details_le.transaction_id);
+ (*td)->creation_time = le32_to_cpu(details_le.creation_time);
+ (*td)->snapshotted_time = le32_to_cpu(details_le.snapshotted_time);
+
+ list_add(&(*td)->list, &pmd->thin_devices);
+
+ return 0;
+}
+
+static void __close_device(struct dm_thin_device *td)
+{
+ --td->open_count;
+}
+
+static int __create_thin(struct dm_pool_metadata *pmd,
+ dm_thin_id dev)
+{
+ int r;
+ dm_block_t dev_root;
+ uint64_t key = dev;
+ struct disk_device_details details_le;
+ struct dm_thin_device *td;
+ __le64 value;
+
+ r = dm_btree_lookup(&pmd->details_info, pmd->details_root,
+ &key, &details_le);
+ if (!r)
+ return -EEXIST;
+
+ /*
+ * Create an empty btree for the mappings.
+ */
+ r = dm_btree_empty(&pmd->bl_info, &dev_root);
+ if (r)
+ return r;
+
+ /*
+ * Insert it into the main mapping tree.
+ */
+ value = cpu_to_le64(dev_root);
+ __dm_bless_for_disk(&value);
+ r = dm_btree_insert(&pmd->tl_info, pmd->root, &key, &value, &pmd->root);
+ if (r) {
+ dm_btree_del(&pmd->bl_info, dev_root);
+ return r;
+ }
+
+	r = __open_device(pmd, dev, 1, &td);
+	if (r) {
+		/* td is not valid when __open_device() fails */
+		dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root);
+		dm_btree_del(&pmd->bl_info, dev_root);
+		return r;
+	}
+ td->changed = 1;
+ __close_device(td);
+
+ return r;
+}
+
+int dm_pool_create_thin(struct dm_pool_metadata *pmd, dm_thin_id dev)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+ r = __create_thin(pmd, dev);
+ up_write(&pmd->root_lock);
+
+ return r;
+}
+
+static int __set_snapshot_details(struct dm_pool_metadata *pmd,
+ struct dm_thin_device *snap,
+ dm_thin_id origin, uint32_t time)
+{
+ int r;
+ struct dm_thin_device *td;
+
+ r = __open_device(pmd, origin, 0, &td);
+ if (r)
+ return r;
+
+ td->changed = 1;
+ td->snapshotted_time = time;
+
+ snap->mapped_blocks = td->mapped_blocks;
+ snap->snapshotted_time = time;
+ __close_device(td);
+
+ return 0;
+}
+
+static int __create_snap(struct dm_pool_metadata *pmd,
+ dm_thin_id dev, dm_thin_id origin)
+{
+ int r;
+ dm_block_t origin_root;
+ uint64_t key = origin, dev_key = dev;
+ struct dm_thin_device *td;
+ struct disk_device_details details_le;
+ __le64 value;
+
+ /* check this device is unused */
+ r = dm_btree_lookup(&pmd->details_info, pmd->details_root,
+ &dev_key, &details_le);
+ if (!r)
+ return -EEXIST;
+
+ /* find the mapping tree for the origin */
+ r = dm_btree_lookup(&pmd->tl_info, pmd->root, &key, &value);
+ if (r)
+ return r;
+ origin_root = le64_to_cpu(value);
+
+ /* clone the origin, an inc will do */
+ dm_tm_inc(pmd->tm, origin_root);
+
+ /* insert into the main mapping tree */
+ value = cpu_to_le64(origin_root);
+ __dm_bless_for_disk(&value);
+ key = dev;
+ r = dm_btree_insert(&pmd->tl_info, pmd->root, &key, &value, &pmd->root);
+ if (r) {
+ dm_tm_dec(pmd->tm, origin_root);
+ return r;
+ }
+
+ pmd->time++;
+
+	r = __open_device(pmd, dev, 1, &td);
+	if (r)
+		goto bad;
+
+	r = __set_snapshot_details(pmd, td, origin, pmd->time);
+	if (r)
+		goto bad_close;
+
+	__close_device(td);
+	return 0;
+
+bad_close:
+	/* td is only valid once __open_device() has succeeded */
+	__close_device(td);
+bad:
+	dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root);
+	dm_btree_remove(&pmd->details_info, pmd->details_root,
+			&key, &pmd->details_root);
+	return r;
+}
+
+int dm_pool_create_snap(struct dm_pool_metadata *pmd,
+ dm_thin_id dev,
+ dm_thin_id origin)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+ r = __create_snap(pmd, dev, origin);
+ up_write(&pmd->root_lock);
+
+ return r;
+}
+
+static int __delete_device(struct dm_pool_metadata *pmd, dm_thin_id dev)
+{
+ int r;
+ uint64_t key = dev;
+ struct dm_thin_device *td;
+
+ /* TODO: failure should mark the transaction invalid */
+ r = __open_device(pmd, dev, 0, &td);
+ if (r)
+ return r;
+
+ if (td->open_count > 1) {
+ __close_device(td);
+ return -EBUSY;
+ }
+
+ list_del(&td->list);
+ kfree(td);
+ r = dm_btree_remove(&pmd->details_info, pmd->details_root,
+ &key, &pmd->details_root);
+ if (r)
+ return r;
+
+ r = dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root);
+ if (r)
+ return r;
+
+ pmd->need_commit = 1;
+
+ return 0;
+}
+
+int dm_pool_delete_thin_device(struct dm_pool_metadata *pmd,
+ dm_thin_id dev)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+ r = __delete_device(pmd, dev);
+ up_write(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_set_metadata_transaction_id(struct dm_pool_metadata *pmd,
+ uint64_t current_id,
+ uint64_t new_id)
+{
+ down_write(&pmd->root_lock);
+ if (pmd->trans_id != current_id) {
+ up_write(&pmd->root_lock);
+ DMERR("mismatched transaction id");
+ return -EINVAL;
+ }
+
+ pmd->trans_id = new_id;
+ pmd->need_commit = 1;
+ up_write(&pmd->root_lock);
+
+ return 0;
+}
+
+int dm_pool_get_metadata_transaction_id(struct dm_pool_metadata *pmd,
+ uint64_t *result)
+{
+ down_read(&pmd->root_lock);
+ *result = pmd->trans_id;
+ up_read(&pmd->root_lock);
+
+ return 0;
+}
+
+static int __get_held_metadata_root(struct dm_pool_metadata *pmd,
+ dm_block_t *result)
+{
+ int r;
+ struct thin_disk_superblock *disk_super;
+ struct dm_block *sblock;
+
+ r = dm_bm_write_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ return r;
+
+ disk_super = dm_block_data(sblock);
+ *result = le64_to_cpu(disk_super->held_root);
+
+ return dm_bm_unlock(sblock);
+}
+
+int dm_pool_get_held_metadata_root(struct dm_pool_metadata *pmd,
+ dm_block_t *result)
+{
+ int r;
+
+ down_read(&pmd->root_lock);
+ r = __get_held_metadata_root(pmd, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_open_thin_device(struct dm_pool_metadata *pmd, dm_thin_id dev,
+ struct dm_thin_device **td)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+ r = __open_device(pmd, dev, 0, td);
+ up_write(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_close_thin_device(struct dm_thin_device *td)
+{
+ down_write(&td->pmd->root_lock);
+ __close_device(td);
+ up_write(&td->pmd->root_lock);
+
+ return 0;
+}
+
+dm_thin_id dm_thin_dev_id(struct dm_thin_device *td)
+{
+ return td->id;
+}
+
+static int __snapshotted_since(struct dm_thin_device *td, uint32_t time)
+{
+ return td->snapshotted_time > time;
+}
+
+int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
+ int can_block, struct dm_thin_lookup_result *result)
+{
+ int r;
+ uint64_t block_time = 0;
+ __le64 value;
+ struct dm_pool_metadata *pmd = td->pmd;
+ dm_block_t keys[2] = { td->id, block };
+
+ if (can_block) {
+ down_read(&pmd->root_lock);
+ r = dm_btree_lookup(&pmd->info, pmd->root, keys, &value);
+ if (!r)
+ block_time = le64_to_cpu(value);
+ up_read(&pmd->root_lock);
+
+ } else if (down_read_trylock(&pmd->root_lock)) {
+ r = dm_btree_lookup(&pmd->nb_info, pmd->root, keys, &value);
+ if (!r)
+ block_time = le64_to_cpu(value);
+ up_read(&pmd->root_lock);
+
+ } else
+ return -EWOULDBLOCK;
+
+ if (!r) {
+ dm_block_t exception_block;
+ uint32_t exception_time;
+ unpack_block_time(block_time, &exception_block,
+ &exception_time);
+ result->block = exception_block;
+ result->shared = __snapshotted_since(td, exception_time);
+ }
+
+ return r;
+}
+
+static int __insert(struct dm_thin_device *td, dm_block_t block,
+ dm_block_t data_block)
+{
+ int r, inserted;
+ __le64 value;
+ struct dm_pool_metadata *pmd = td->pmd;
+ dm_block_t keys[2] = { td->id, block };
+
+ pmd->need_commit = 1;
+ value = cpu_to_le64(pack_block_time(data_block, pmd->time));
+ __dm_bless_for_disk(&value);
+
+ r = dm_btree_insert_notify(&pmd->info, pmd->root, keys, &value,
+ &pmd->root, &inserted);
+ if (r)
+ return r;
+
+ if (inserted) {
+ td->mapped_blocks++;
+ td->changed = 1;
+ }
+
+ return 0;
+}
+
+int dm_thin_insert_block(struct dm_thin_device *td, dm_block_t block,
+ dm_block_t data_block)
+{
+ int r;
+
+ down_write(&td->pmd->root_lock);
+ r = __insert(td, block, data_block);
+ up_write(&td->pmd->root_lock);
+
+ return r;
+}
+
+static int __remove(struct dm_thin_device *td, dm_block_t block)
+{
+ int r;
+ struct dm_pool_metadata *pmd = td->pmd;
+ dm_block_t keys[2] = { td->id, block };
+
+ r = dm_btree_remove(&pmd->info, pmd->root, keys, &pmd->root);
+ if (r)
+ return r;
+
+ pmd->need_commit = 1;
+
+ return 0;
+}
+
+int dm_thin_remove_block(struct dm_thin_device *td, dm_block_t block)
+{
+ int r;
+
+ down_write(&td->pmd->root_lock);
+ r = __remove(td, block);
+ up_write(&td->pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_alloc_data_block(struct dm_pool_metadata *pmd, dm_block_t *result)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+
+ r = dm_sm_new_block(pmd->data_sm, result);
+ pmd->need_commit = 1;
+
+ up_write(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_commit_metadata(struct dm_pool_metadata *pmd)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+
+ r = __commit_transaction(pmd);
+ if (r <= 0)
+ goto out;
+
+ /*
+ * Open the next transaction.
+ */
+ r = __begin_transaction(pmd);
+out:
+ up_write(&pmd->root_lock);
+ return r;
+}
+
+int dm_pool_get_free_block_count(struct dm_pool_metadata *pmd, dm_block_t *result)
+{
+ int r;
+
+ down_read(&pmd->root_lock);
+ r = dm_sm_get_nr_free(pmd->data_sm, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
+ dm_block_t *result)
+{
+ int r;
+
+ down_read(&pmd->root_lock);
+ r = dm_sm_get_nr_free(pmd->metadata_sm, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_get_metadata_dev_size(struct dm_pool_metadata *pmd,
+ dm_block_t *result)
+{
+ int r;
+
+ down_read(&pmd->root_lock);
+ r = dm_sm_get_nr_blocks(pmd->metadata_sm, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_pool_get_data_block_size(struct dm_pool_metadata *pmd, sector_t *result)
+{
+ down_read(&pmd->root_lock);
+ *result = pmd->data_block_size;
+ up_read(&pmd->root_lock);
+
+ return 0;
+}
+
+int dm_pool_get_data_dev_size(struct dm_pool_metadata *pmd, dm_block_t *result)
+{
+ int r;
+
+ down_read(&pmd->root_lock);
+ r = dm_sm_get_nr_blocks(pmd->data_sm, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+int dm_thin_get_mapped_count(struct dm_thin_device *td, dm_block_t *result)
+{
+ struct dm_pool_metadata *pmd = td->pmd;
+
+ down_read(&pmd->root_lock);
+ *result = td->mapped_blocks;
+ up_read(&pmd->root_lock);
+
+ return 0;
+}
+
+static int __highest_block(struct dm_thin_device *td, dm_block_t *result)
+{
+ int r;
+ __le64 value_le;
+ dm_block_t thin_root;
+ struct dm_pool_metadata *pmd = td->pmd;
+
+ r = dm_btree_lookup(&pmd->tl_info, pmd->root, &td->id, &value_le);
+ if (r)
+ return r;
+
+ thin_root = le64_to_cpu(value_le);
+
+ return dm_btree_find_highest_key(&pmd->bl_info, thin_root, result);
+}
+
+int dm_thin_get_highest_mapped_block(struct dm_thin_device *td,
+ dm_block_t *result)
+{
+ int r;
+ struct dm_pool_metadata *pmd = td->pmd;
+
+ down_read(&pmd->root_lock);
+ r = __highest_block(td, result);
+ up_read(&pmd->root_lock);
+
+ return r;
+}
+
+static int __resize_data_dev(struct dm_pool_metadata *pmd, dm_block_t new_count)
+{
+ int r;
+ dm_block_t old_count;
+
+ r = dm_sm_get_nr_blocks(pmd->data_sm, &old_count);
+ if (r)
+ return r;
+
+ if (new_count == old_count)
+ return 0;
+
+ if (new_count < old_count) {
+ DMERR("cannot reduce size of data device");
+ return -EINVAL;
+ }
+
+ r = dm_sm_extend(pmd->data_sm, new_count - old_count);
+ if (!r)
+ pmd->need_commit = 1;
+
+ return r;
+}
+
+int dm_pool_resize_data_dev(struct dm_pool_metadata *pmd, dm_block_t new_count)
+{
+ int r;
+
+ down_write(&pmd->root_lock);
+ r = __resize_data_dev(pmd, new_count);
+ up_write(&pmd->root_lock);
+
+ return r;
+}
diff --git a/drivers/md/dm-thin-metadata.h b/drivers/md/dm-thin-metadata.h
new file mode 100644
index 000000000000..859c16896877
--- /dev/null
+++ b/drivers/md/dm-thin-metadata.h
@@ -0,0 +1,156 @@
+/*
+ * Copyright (C) 2010-2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_THIN_METADATA_H
+#define DM_THIN_METADATA_H
+
+#include "persistent-data/dm-block-manager.h"
+
+#define THIN_METADATA_BLOCK_SIZE 4096
+
+/*----------------------------------------------------------------*/
+
+struct dm_pool_metadata;
+struct dm_thin_device;
+
+/*
+ * Device identifier
+ */
+typedef uint64_t dm_thin_id;
+
+/*
+ * Reopens an existing metadata volume, or creates a new, empty one.
+ */
+struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ sector_t data_block_size);
+
+int dm_pool_metadata_close(struct dm_pool_metadata *pmd);
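+
+/*
+ * An illustrative sketch (not part of this interface): opening the pool
+ * metadata with 128-sector (64KiB) data blocks, then closing it again.
+ * metadata_bdev is hypothetical:
+ *
+ *	struct dm_pool_metadata *pmd;
+ *
+ *	pmd = dm_pool_metadata_open(metadata_bdev, 128);
+ *	if (IS_ERR(pmd))
+ *		return PTR_ERR(pmd);
+ *	...
+ *	r = dm_pool_metadata_close(pmd);
+ */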
+
+/*
+ * Compat feature flags. Any incompat flags beyond the ones
+ * specified below will prevent use of the thin metadata.
+ */
+#define THIN_FEATURE_COMPAT_SUPP 0UL
+#define THIN_FEATURE_COMPAT_RO_SUPP 0UL
+#define THIN_FEATURE_INCOMPAT_SUPP 0UL
+
+/*
+ * Device creation/deletion.
+ */
+int dm_pool_create_thin(struct dm_pool_metadata *pmd, dm_thin_id dev);
+
+/*
+ * An internal snapshot.
+ *
+ * You can only snapshot a quiesced origin, i.e. one that is either
+ * suspended or not instanced at all.
+ */
+int dm_pool_create_snap(struct dm_pool_metadata *pmd, dm_thin_id dev,
+ dm_thin_id origin);
+
+/*
+ * Deletes a virtual device from the metadata.  Deleting a device that
+ * is still open elsewhere fails with -EBUSY, so close it first.
+ */
+int dm_pool_delete_thin_device(struct dm_pool_metadata *pmd,
+ dm_thin_id dev);
+
+/*
+ * Commits _all_ metadata changes: device creation, deletion, mapping
+ * updates.
+ */
+int dm_pool_commit_metadata(struct dm_pool_metadata *pmd);
+
+/*
+ * Set/get userspace transaction id.
+ */
+int dm_pool_set_metadata_transaction_id(struct dm_pool_metadata *pmd,
+ uint64_t current_id,
+ uint64_t new_id);
+
+int dm_pool_get_metadata_transaction_id(struct dm_pool_metadata *pmd,
+ uint64_t *result);
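+
+/*
+ * An illustrative sketch (not part of this interface) of the
+ * compare-and-swap style update this pair of calls allows:
+ *
+ *	uint64_t cur;
+ *
+ *	r = dm_pool_get_metadata_transaction_id(pmd, &cur);
+ *	if (!r)
+ *		r = dm_pool_set_metadata_transaction_id(pmd, cur, cur + 1);
+ *
+ * -EINVAL from the set call means the id changed in the meantime.
+ */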
+
+/*
+ * Hold/get root for userspace transaction.
+ */
+int dm_pool_hold_metadata_root(struct dm_pool_metadata *pmd);
+
+int dm_pool_get_held_metadata_root(struct dm_pool_metadata *pmd,
+ dm_block_t *result);
+
+/*
+ * Actions on a single virtual device.
+ */
+
+/*
+ * Opening a device that is already open simply bumps its reference
+ * count; each open must be paired with a dm_pool_close_thin_device().
+ */
+int dm_pool_open_thin_device(struct dm_pool_metadata *pmd, dm_thin_id dev,
+ struct dm_thin_device **td);
+
+int dm_pool_close_thin_device(struct dm_thin_device *td);
+
+dm_thin_id dm_thin_dev_id(struct dm_thin_device *td);
+
+struct dm_thin_lookup_result {
+ dm_block_t block;
+ int shared;
+};
+
+/*
+ * Returns:
+ *   -EWOULDBLOCK iff @can_block is unset and the lookup would block.
+ * -ENODATA iff that mapping is not present.
+ * 0 success
+ */
+int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
+ int can_block, struct dm_thin_lookup_result *result);
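+
+/*
+ * An illustrative sketch (not part of this interface): a non-blocking
+ * lookup as the fast path, deferring to a worker on contention.  The
+ * defer_bio, provision_block and remap_bio helpers are hypothetical:
+ *
+ *	struct dm_thin_lookup_result lookup;
+ *	int r = dm_thin_find_block(td, block, 0, &lookup);
+ *
+ *	if (r == -EWOULDBLOCK)
+ *		defer_bio(bio);		- worker retries with can_block = 1
+ *	else if (r == -ENODATA)
+ *		provision_block(bio);	- uses dm_pool_alloc_data_block()
+ *	else if (!r)
+ *		remap_bio(bio, lookup.block, lookup.shared);
+ */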
+
+/*
+ * Obtain an unused block.
+ */
+int dm_pool_alloc_data_block(struct dm_pool_metadata *pmd, dm_block_t *result);
+
+/*
+ * Insert or remove block.
+ */
+int dm_thin_insert_block(struct dm_thin_device *td, dm_block_t block,
+ dm_block_t data_block);
+
+int dm_thin_remove_block(struct dm_thin_device *td, dm_block_t block);
+
+/*
+ * Queries.
+ */
+int dm_thin_get_highest_mapped_block(struct dm_thin_device *td,
+ dm_block_t *highest_mapped);
+
+int dm_thin_get_mapped_count(struct dm_thin_device *td, dm_block_t *result);
+
+int dm_pool_get_free_block_count(struct dm_pool_metadata *pmd,
+ dm_block_t *result);
+
+int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
+ dm_block_t *result);
+
+int dm_pool_get_metadata_dev_size(struct dm_pool_metadata *pmd,
+ dm_block_t *result);
+
+int dm_pool_get_data_block_size(struct dm_pool_metadata *pmd, sector_t *result);
+
+int dm_pool_get_data_dev_size(struct dm_pool_metadata *pmd, dm_block_t *result);
+
+/*
+ * Returns -ENOSPC if the new size is too small and already allocated
+ * blocks would be lost.
+ */
+int dm_pool_resize_data_dev(struct dm_pool_metadata *pmd, dm_block_t new_size);
+
+/*----------------------------------------------------------------*/
+
+#endif
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
new file mode 100644
index 000000000000..c3087575fef0
--- /dev/null
+++ b/drivers/md/dm-thin.c
@@ -0,0 +1,2428 @@
+/*
+ * Copyright (C) 2011 Red Hat UK.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-thin-metadata.h"
+
+#include <linux/device-mapper.h>
+#include <linux/dm-io.h>
+#include <linux/dm-kcopyd.h>
+#include <linux/list.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#define DM_MSG_PREFIX "thin"
+
+/*
+ * Tunable constants
+ */
+#define ENDIO_HOOK_POOL_SIZE 10240
+#define DEFERRED_SET_SIZE 64
+#define MAPPING_POOL_SIZE 1024
+#define PRISON_CELLS 1024
+
+/*
+ * The block size of the device holding pool data must be
+ * between 64KB and 1GB.
+ */
+#define DATA_DEV_BLOCK_SIZE_MIN_SECTORS (64 * 1024 >> SECTOR_SHIFT)
+#define DATA_DEV_BLOCK_SIZE_MAX_SECTORS (1024 * 1024 * 1024 >> SECTOR_SHIFT)
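+
+/*
+ * With 512-byte sectors (SECTOR_SHIFT == 9) these limits work out to
+ * 128 and 2097152 sectors respectively.
+ */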
+
+/*
+ * The metadata device is currently limited in size. The limitation is
+ * checked lower down in dm-space-map-metadata, but we also check it here
+ * so we can fail early.
+ *
+ * We have one block of index, which can hold 255 index entries. Each
+ * index entry contains allocation info about 16k metadata blocks.
+ */
+#define METADATA_DEV_MAX_SECTORS (255 * (1 << 14) * (THIN_METADATA_BLOCK_SIZE / (1 << SECTOR_SHIFT)))
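+
+/*
+ * Assuming the 4k metadata block size defined in dm-thin-metadata.h,
+ * this works out to 33423360 sectors, i.e. a little under 16G.
+ */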
+
+/*
+ * Device id is restricted to 24 bits.
+ */
+#define MAX_DEV_ID ((1 << 24) - 1)
+
+/*
+ * How do we handle breaking sharing of data blocks?
+ * =================================================
+ *
+ * We use a standard copy-on-write btree to store the mappings for the
+ * devices (note I'm talking about copy-on-write of the metadata here, not
+ * the data). When you take an internal snapshot you clone the root node
+ * of the origin btree. After this there is no concept of an origin or a
+ * snapshot. They are just two device trees that happen to point to the
+ * same data blocks.
+ *
+ * When we get a write in we decide if it's to a shared data block using
+ * some timestamp magic. If it is, we have to break sharing.
+ *
+ * Let's say we write to a shared block in what was the origin. The
+ * steps are:
+ *
+ * i) plug io further to this physical block. (see bio_prison code).
+ *
+ * ii) quiesce any read io to that shared data block. Obviously
+ * including all devices that share this block. (see deferred_set code)
+ *
+ * iii) copy the data block to a newly allocated block. This step can be
+ * skipped if the io covers the whole block. (schedule_copy).
+ *
+ * iv) insert the new mapping into the origin's btree
+ * (process_prepared_mappings). This act of inserting breaks some
+ * sharing of btree nodes between the two devices. Breaking sharing only
+ * affects the btree of that specific device. Btrees for the other
+ * devices that share the block never change. The btree for the origin
+ * device as it was after the last commit is untouched, i.e. we're using
+ * persistent data structures in the functional programming sense.
+ *
+ * v) unplug io to this physical block, including the io that triggered
+ * the breaking of sharing.
+ *
+ * Steps (ii) and (iii) occur in parallel.
+ *
+ * The metadata _doesn't_ need to be committed before the io continues. We
+ * get away with this because the io is always written to a _new_ block.
+ * If there's a crash, then:
+ *
+ * - The origin mapping will point to the old origin block (the shared
+ * one). This will contain the data as it was before the io that triggered
+ * the breaking of sharing came in.
+ *
+ * - The snap mapping still points to the old block. As it would after
+ * the commit.
+ *
+ * The downside of this scheme is that the timestamp magic isn't perfect:
+ * it will continue to think the data block in the snapshot device is shared
+ * even after the write to the origin has broken sharing. I suspect data
+ * blocks will typically be shared by many different devices, so we're
+ * breaking sharing n + 1 times, rather than n, where n is the number of
+ * devices that reference this data block. At the moment I think the
+ * benefits far, far outweigh the disadvantages.
+ */
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Sometimes we can't deal with a bio straight away, so we put it in prison
+ * where it can't cause any mischief. Bios are put in a cell identified
+ * by a key; multiple bios can be in the same cell. When the cell is
+ * subsequently unlocked the bios become available.
+ */
+struct bio_prison;
+
+struct cell_key {
+ int virtual;
+ dm_thin_id dev;
+ dm_block_t block;
+};
+
+struct cell {
+ struct hlist_node list;
+ struct bio_prison *prison;
+ struct cell_key key;
+ unsigned count;
+ struct bio_list bios;
+};
+
+struct bio_prison {
+ spinlock_t lock;
+ mempool_t *cell_pool;
+
+ unsigned nr_buckets;
+ unsigned hash_mask;
+ struct hlist_head *cells;
+};
+
+static uint32_t calc_nr_buckets(unsigned nr_cells)
+{
+ uint32_t n = 128;
+
+ nr_cells /= 4;
+ nr_cells = min(nr_cells, 8192u);
+
+ while (n < nr_cells)
+ n <<= 1;
+
+ return n;
+}
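+
+/*
+ * For example, the default PRISON_CELLS of 1024 gives 1024 / 4 = 256,
+ * so prison_create() below ends up with 256 hash buckets.
+ */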
+
+/*
+ * @nr_cells should be the number of cells you want in use _concurrently_.
+ * Don't confuse it with the number of distinct keys.
+ */
+static struct bio_prison *prison_create(unsigned nr_cells)
+{
+ unsigned i;
+ uint32_t nr_buckets = calc_nr_buckets(nr_cells);
+ size_t len = sizeof(struct bio_prison) +
+ (sizeof(struct hlist_head) * nr_buckets);
+ struct bio_prison *prison = kmalloc(len, GFP_KERNEL);
+
+ if (!prison)
+ return NULL;
+
+ spin_lock_init(&prison->lock);
+ prison->cell_pool = mempool_create_kmalloc_pool(nr_cells,
+ sizeof(struct cell));
+ if (!prison->cell_pool) {
+ kfree(prison);
+ return NULL;
+ }
+
+ prison->nr_buckets = nr_buckets;
+ prison->hash_mask = nr_buckets - 1;
+ prison->cells = (struct hlist_head *) (prison + 1);
+ for (i = 0; i < nr_buckets; i++)
+ INIT_HLIST_HEAD(prison->cells + i);
+
+ return prison;
+}
+
+static void prison_destroy(struct bio_prison *prison)
+{
+ mempool_destroy(prison->cell_pool);
+ kfree(prison);
+}
+
+static uint32_t hash_key(struct bio_prison *prison, struct cell_key *key)
+{
+ const unsigned long BIG_PRIME = 4294967291UL;
+ uint64_t hash = key->block * BIG_PRIME;
+
+ return (uint32_t) (hash & prison->hash_mask);
+}
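+
+/*
+ * Note that only the block number contributes to the hash; keys that
+ * differ just in their device or virtual/data flag land in the same
+ * bucket and are told apart by keys_equal() below.
+ */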
+
+static int keys_equal(struct cell_key *lhs, struct cell_key *rhs)
+{
+ return (lhs->virtual == rhs->virtual) &&
+ (lhs->dev == rhs->dev) &&
+ (lhs->block == rhs->block);
+}
+
+static struct cell *__search_bucket(struct hlist_head *bucket,
+ struct cell_key *key)
+{
+ struct cell *cell;
+ struct hlist_node *tmp;
+
+ hlist_for_each_entry(cell, tmp, bucket, list)
+ if (keys_equal(&cell->key, key))
+ return cell;
+
+ return NULL;
+}
+
+/*
+ * This may block if a new cell needs allocating. You must ensure that
+ * cells will be unlocked even if the calling thread is blocked.
+ *
+ * Returns the number of entries in the cell prior to the new addition
+ * or < 0 on failure.
+ */
+static int bio_detain(struct bio_prison *prison, struct cell_key *key,
+ struct bio *inmate, struct cell **ref)
+{
+ int r;
+ unsigned long flags;
+ uint32_t hash = hash_key(prison, key);
+ struct cell *uninitialized_var(cell), *cell2 = NULL;
+
+ BUG_ON(hash > prison->nr_buckets);
+
+ spin_lock_irqsave(&prison->lock, flags);
+ cell = __search_bucket(prison->cells + hash, key);
+
+ if (!cell) {
+ /*
+ * Allocate a new cell
+ */
+ spin_unlock_irqrestore(&prison->lock, flags);
+ cell2 = mempool_alloc(prison->cell_pool, GFP_NOIO);
+ spin_lock_irqsave(&prison->lock, flags);
+
+ /*
+ * We've been unlocked, so we have to double check that
+ * nobody else has inserted this cell in the meantime.
+ */
+ cell = __search_bucket(prison->cells + hash, key);
+
+ if (!cell) {
+ cell = cell2;
+ cell2 = NULL;
+
+ cell->prison = prison;
+ memcpy(&cell->key, key, sizeof(cell->key));
+ cell->count = 0;
+ bio_list_init(&cell->bios);
+ hlist_add_head(&cell->list, prison->cells + hash);
+ }
+ }
+
+ r = cell->count++;
+ bio_list_add(&cell->bios, inmate);
+ spin_unlock_irqrestore(&prison->lock, flags);
+
+ if (cell2)
+ mempool_free(cell2, prison->cell_pool);
+
+ *ref = cell;
+
+ return r;
+}
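+
+/*
+ * Callers below use the return value as an ownership test: 0 means the
+ * cell was empty so the caller now owns the block and should continue
+ * processing; non-zero means another bio already holds the cell, the
+ * new bio has been queued inside it, and the caller just backs off
+ * (see process_bio() and process_shared_bio()).
+ */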
+
+/*
+ * @inmates must have been initialised prior to this call
+ */
+static void __cell_release(struct cell *cell, struct bio_list *inmates)
+{
+ struct bio_prison *prison = cell->prison;
+
+ hlist_del(&cell->list);
+
+ if (inmates)
+ bio_list_merge(inmates, &cell->bios);
+
+ mempool_free(cell, prison->cell_pool);
+}
+
+static void cell_release(struct cell *cell, struct bio_list *bios)
+{
+ unsigned long flags;
+ struct bio_prison *prison = cell->prison;
+
+ spin_lock_irqsave(&prison->lock, flags);
+ __cell_release(cell, bios);
+ spin_unlock_irqrestore(&prison->lock, flags);
+}
+
+/*
+ * There are a couple of places where we put a bio into a cell briefly
+ * before taking it out again. In these situations we know that no other
+ * bio may be in the cell. This function releases the cell, and also does
+ * a sanity check.
+ */
+static void cell_release_singleton(struct cell *cell, struct bio *bio)
+{
+ struct bio_prison *prison = cell->prison;
+ struct bio_list bios;
+ struct bio *b;
+ unsigned long flags;
+
+ bio_list_init(&bios);
+
+ spin_lock_irqsave(&prison->lock, flags);
+ __cell_release(cell, &bios);
+ spin_unlock_irqrestore(&prison->lock, flags);
+
+ b = bio_list_pop(&bios);
+ BUG_ON(b != bio);
+ BUG_ON(!bio_list_empty(&bios));
+}
+
+static void cell_error(struct cell *cell)
+{
+ struct bio_prison *prison = cell->prison;
+ struct bio_list bios;
+ struct bio *bio;
+ unsigned long flags;
+
+ bio_list_init(&bios);
+
+ spin_lock_irqsave(&prison->lock, flags);
+ __cell_release(cell, &bios);
+ spin_unlock_irqrestore(&prison->lock, flags);
+
+ while ((bio = bio_list_pop(&bios)))
+ bio_io_error(bio);
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * We use the deferred set to keep track of pending reads to shared blocks.
+ * We do this to ensure the new mapping caused by a write isn't performed
+ * until these prior reads have completed. Otherwise the insertion of the
+ * new mapping could free the old block that the read bios are mapped to.
+ */
+
+struct deferred_set;
+struct deferred_entry {
+ struct deferred_set *ds;
+ unsigned count;
+ struct list_head work_items;
+};
+
+struct deferred_set {
+ spinlock_t lock;
+ unsigned current_entry;
+ unsigned sweeper;
+ struct deferred_entry entries[DEFERRED_SET_SIZE];
+};
+
+static void ds_init(struct deferred_set *ds)
+{
+ int i;
+
+ spin_lock_init(&ds->lock);
+ ds->current_entry = 0;
+ ds->sweeper = 0;
+ for (i = 0; i < DEFERRED_SET_SIZE; i++) {
+ ds->entries[i].ds = ds;
+ ds->entries[i].count = 0;
+ INIT_LIST_HEAD(&ds->entries[i].work_items);
+ }
+}
+
+static struct deferred_entry *ds_inc(struct deferred_set *ds)
+{
+ unsigned long flags;
+ struct deferred_entry *entry;
+
+ spin_lock_irqsave(&ds->lock, flags);
+ entry = ds->entries + ds->current_entry;
+ entry->count++;
+ spin_unlock_irqrestore(&ds->lock, flags);
+
+ return entry;
+}
+
+static unsigned ds_next(unsigned index)
+{
+ return (index + 1) % DEFERRED_SET_SIZE;
+}
+
+static void __sweep(struct deferred_set *ds, struct list_head *head)
+{
+ while ((ds->sweeper != ds->current_entry) &&
+ !ds->entries[ds->sweeper].count) {
+ list_splice_init(&ds->entries[ds->sweeper].work_items, head);
+ ds->sweeper = ds_next(ds->sweeper);
+ }
+
+ if ((ds->sweeper == ds->current_entry) && !ds->entries[ds->sweeper].count)
+ list_splice_init(&ds->entries[ds->sweeper].work_items, head);
+}
+
+static void ds_dec(struct deferred_entry *entry, struct list_head *head)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&entry->ds->lock, flags);
+ BUG_ON(!entry->count);
+ --entry->count;
+ __sweep(entry->ds, head);
+ spin_unlock_irqrestore(&entry->ds->lock, flags);
+}
+
+/*
+ * Returns 1 if the work was deferred, or 0 if there were no pending
+ * items so the job need not be delayed.
+ */
+static int ds_add_work(struct deferred_set *ds, struct list_head *work)
+{
+ int r = 1;
+ unsigned long flags;
+ unsigned next_entry;
+
+ spin_lock_irqsave(&ds->lock, flags);
+ if ((ds->sweeper == ds->current_entry) &&
+ !ds->entries[ds->current_entry].count)
+ r = 0;
+ else {
+ list_add(work, &ds->entries[ds->current_entry].work_items);
+ next_entry = ds_next(ds->current_entry);
+ if (!ds->entries[next_entry].count)
+ ds->current_entry = next_entry;
+ }
+ spin_unlock_irqrestore(&ds->lock, flags);
+
+ return r;
+}
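+
+/*
+ * Putting the above together: ds_inc() registers an in-flight read
+ * against the current entry, ds_add_work() parks work behind any
+ * entries that still have reads outstanding, and ds_dec()/__sweep()
+ * hand the parked work back once those older entries have drained.
+ */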
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Key building.
+ */
+static void build_data_key(struct dm_thin_device *td,
+ dm_block_t b, struct cell_key *key)
+{
+ key->virtual = 0;
+ key->dev = dm_thin_dev_id(td);
+ key->block = b;
+}
+
+static void build_virtual_key(struct dm_thin_device *td, dm_block_t b,
+ struct cell_key *key)
+{
+ key->virtual = 1;
+ key->dev = dm_thin_dev_id(td);
+ key->block = b;
+}
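+
+/*
+ * So a data key locks a physical data block (virtual == 0), whereas a
+ * virtual key locks a block within one thin device's address space
+ * (virtual == 1); the two key spaces can never collide.
+ */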
+
+/*----------------------------------------------------------------*/
+
+/*
+ * A pool device ties together a metadata device and a data device. It
+ * also provides the interface for creating and destroying internal
+ * devices.
+ */
+struct new_mapping;
+struct pool {
+ struct list_head list;
+ struct dm_target *ti; /* Only set if a pool target is bound */
+
+ struct mapped_device *pool_md;
+ struct block_device *md_dev;
+ struct dm_pool_metadata *pmd;
+
+ uint32_t sectors_per_block;
+ unsigned block_shift;
+ dm_block_t offset_mask;
+ dm_block_t low_water_blocks;
+
+ unsigned zero_new_blocks:1;
+ unsigned low_water_triggered:1; /* A dm event has been sent */
+ unsigned no_free_space:1; /* A -ENOSPC warning has been issued */
+
+ struct bio_prison *prison;
+ struct dm_kcopyd_client *copier;
+
+ struct workqueue_struct *wq;
+ struct work_struct worker;
+
+ unsigned ref_count;
+
+ spinlock_t lock;
+ struct bio_list deferred_bios;
+ struct bio_list deferred_flush_bios;
+ struct list_head prepared_mappings;
+
+ struct bio_list retry_on_resume_list;
+
+ struct deferred_set ds; /* FIXME: move to thin_c */
+
+ struct new_mapping *next_mapping;
+ mempool_t *mapping_pool;
+ mempool_t *endio_hook_pool;
+};
+
+/*
+ * Target context for a pool.
+ */
+struct pool_c {
+ struct dm_target *ti;
+ struct pool *pool;
+ struct dm_dev *data_dev;
+ struct dm_dev *metadata_dev;
+ struct dm_target_callbacks callbacks;
+
+ dm_block_t low_water_blocks;
+ unsigned zero_new_blocks:1;
+};
+
+/*
+ * Target context for a thin.
+ */
+struct thin_c {
+ struct dm_dev *pool_dev;
+ dm_thin_id dev_id;
+
+ struct pool *pool;
+ struct dm_thin_device *td;
+};
+
+/*----------------------------------------------------------------*/
+
+/*
+ * A global list of pools that uses a struct mapped_device as a key.
+ */
+static struct dm_thin_pool_table {
+ struct mutex mutex;
+ struct list_head pools;
+} dm_thin_pool_table;
+
+static void pool_table_init(void)
+{
+ mutex_init(&dm_thin_pool_table.mutex);
+ INIT_LIST_HEAD(&dm_thin_pool_table.pools);
+}
+
+static void __pool_table_insert(struct pool *pool)
+{
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+ list_add(&pool->list, &dm_thin_pool_table.pools);
+}
+
+static void __pool_table_remove(struct pool *pool)
+{
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+ list_del(&pool->list);
+}
+
+static struct pool *__pool_table_lookup(struct mapped_device *md)
+{
+ struct pool *pool = NULL, *tmp;
+
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+
+ list_for_each_entry(tmp, &dm_thin_pool_table.pools, list) {
+ if (tmp->pool_md == md) {
+ pool = tmp;
+ break;
+ }
+ }
+
+ return pool;
+}
+
+static struct pool *__pool_table_lookup_metadata_dev(struct block_device *md_dev)
+{
+ struct pool *pool = NULL, *tmp;
+
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+
+ list_for_each_entry(tmp, &dm_thin_pool_table.pools, list) {
+ if (tmp->md_dev == md_dev) {
+ pool = tmp;
+ break;
+ }
+ }
+
+ return pool;
+}
+
+/*----------------------------------------------------------------*/
+
+static void __requeue_bio_list(struct thin_c *tc, struct bio_list *master)
+{
+ struct bio *bio;
+ struct bio_list bios;
+
+ bio_list_init(&bios);
+ bio_list_merge(&bios, master);
+ bio_list_init(master);
+
+ while ((bio = bio_list_pop(&bios))) {
+ if (dm_get_mapinfo(bio)->ptr == tc)
+ bio_endio(bio, DM_ENDIO_REQUEUE);
+ else
+ bio_list_add(master, bio);
+ }
+}
+
+static void requeue_io(struct thin_c *tc)
+{
+ struct pool *pool = tc->pool;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ __requeue_bio_list(tc, &pool->deferred_bios);
+ __requeue_bio_list(tc, &pool->retry_on_resume_list);
+ spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+/*
+ * This section of code contains the logic for processing a thin device's IO.
+ * Much of the code depends on pool object resources (lists, workqueues, etc.),
+ * but most is exclusively called from the thin target rather than the thin-pool
+ * target.
+ */
+
+static dm_block_t get_bio_block(struct thin_c *tc, struct bio *bio)
+{
+ return bio->bi_sector >> tc->pool->block_shift;
+}
+
+static void remap(struct thin_c *tc, struct bio *bio, dm_block_t block)
+{
+ struct pool *pool = tc->pool;
+
+ bio->bi_bdev = tc->pool_dev->bdev;
+ bio->bi_sector = (block << pool->block_shift) +
+ (bio->bi_sector & pool->offset_mask);
+}
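+
+/*
+ * For example, with 64k blocks (sectors_per_block == 128, block_shift
+ * == 7, offset_mask == 127) a bio at virtual sector 1000 lies in block
+ * 7 at offset 104; if that block maps to data block 9 the bio is
+ * remapped to sector (9 << 7) + 104 == 1256 of the data device.
+ */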
+
+static void remap_and_issue(struct thin_c *tc, struct bio *bio,
+ dm_block_t block)
+{
+ struct pool *pool = tc->pool;
+ unsigned long flags;
+
+ remap(tc, bio, block);
+
+ /*
+ * Batch together any FUA/FLUSH bios we find and then issue
+ * a single commit for them in process_deferred_bios().
+ */
+ if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_add(&pool->deferred_flush_bios, bio);
+ spin_unlock_irqrestore(&pool->lock, flags);
+ } else
+ generic_make_request(bio);
+}
+
+/*
+ * wake_worker() is used when new work is queued and when pool_resume is
+ * ready to continue deferred IO processing.
+ */
+static void wake_worker(struct pool *pool)
+{
+ queue_work(pool->wq, &pool->worker);
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Bio endio functions.
+ */
+struct endio_hook {
+ struct thin_c *tc;
+ bio_end_io_t *saved_bi_end_io;
+ struct deferred_entry *entry;
+};
+
+struct new_mapping {
+ struct list_head list;
+
+ int prepared;
+
+ struct thin_c *tc;
+ dm_block_t virt_block;
+ dm_block_t data_block;
+ struct cell *cell;
+ int err;
+
+ /*
+ * If the bio covers the whole area of a block then we can avoid
+ * zeroing or copying. Instead this bio is hooked. The bio will
+ * still be in the cell, so care has to be taken to avoid issuing
+ * the bio twice.
+ */
+ struct bio *bio;
+ bio_end_io_t *saved_bi_end_io;
+};
+
+static void __maybe_add_mapping(struct new_mapping *m)
+{
+ struct pool *pool = m->tc->pool;
+
+ if (list_empty(&m->list) && m->prepared) {
+ list_add(&m->list, &pool->prepared_mappings);
+ wake_worker(pool);
+ }
+}
+
+static void copy_complete(int read_err, unsigned long write_err, void *context)
+{
+ unsigned long flags;
+ struct new_mapping *m = context;
+ struct pool *pool = m->tc->pool;
+
+ m->err = read_err || write_err ? -EIO : 0;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ m->prepared = 1;
+ __maybe_add_mapping(m);
+ spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+static void overwrite_endio(struct bio *bio, int err)
+{
+ unsigned long flags;
+ struct new_mapping *m = dm_get_mapinfo(bio)->ptr;
+ struct pool *pool = m->tc->pool;
+
+ m->err = err;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ m->prepared = 1;
+ __maybe_add_mapping(m);
+ spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+static void shared_read_endio(struct bio *bio, int err)
+{
+ struct list_head mappings;
+ struct new_mapping *m, *tmp;
+ struct endio_hook *h = dm_get_mapinfo(bio)->ptr;
+ unsigned long flags;
+ struct pool *pool = h->tc->pool;
+
+ bio->bi_end_io = h->saved_bi_end_io;
+ bio_endio(bio, err);
+
+ INIT_LIST_HEAD(&mappings);
+ ds_dec(h->entry, &mappings);
+
+ spin_lock_irqsave(&pool->lock, flags);
+ list_for_each_entry_safe(m, tmp, &mappings, list) {
+ list_del(&m->list);
+ INIT_LIST_HEAD(&m->list);
+ __maybe_add_mapping(m);
+ }
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ mempool_free(h, pool->endio_hook_pool);
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Workqueue.
+ */
+
+/*
+ * Prepared mapping jobs.
+ */
+
+/*
+ * This sends the bios in the cell back to the deferred_bios list.
+ */
+static void cell_defer(struct thin_c *tc, struct cell *cell,
+ dm_block_t data_block)
+{
+ struct pool *pool = tc->pool;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ cell_release(cell, &pool->deferred_bios);
+	spin_unlock_irqrestore(&pool->lock, flags);
+
+ wake_worker(pool);
+}
+
+/*
+ * Same as cell_defer above, except it omits one particular detainee,
+ * a write bio that covers the block and has already been processed.
+ */
+static void cell_defer_except(struct thin_c *tc, struct cell *cell,
+ struct bio *exception)
+{
+ struct bio_list bios;
+ struct bio *bio;
+ struct pool *pool = tc->pool;
+ unsigned long flags;
+
+ bio_list_init(&bios);
+ cell_release(cell, &bios);
+
+ spin_lock_irqsave(&pool->lock, flags);
+ while ((bio = bio_list_pop(&bios)))
+ if (bio != exception)
+ bio_list_add(&pool->deferred_bios, bio);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ wake_worker(pool);
+}
+
+static void process_prepared_mapping(struct new_mapping *m)
+{
+ struct thin_c *tc = m->tc;
+ struct bio *bio;
+ int r;
+
+ bio = m->bio;
+ if (bio)
+ bio->bi_end_io = m->saved_bi_end_io;
+
+ if (m->err) {
+ cell_error(m->cell);
+ return;
+ }
+
+ /*
+ * Commit the prepared block into the mapping btree.
+ * Any I/O for this block arriving after this point will get
+ * remapped to it directly.
+ */
+ r = dm_thin_insert_block(tc->td, m->virt_block, m->data_block);
+ if (r) {
+ DMERR("dm_thin_insert_block() failed");
+ cell_error(m->cell);
+ return;
+ }
+
+ /*
+ * Release any bios held while the block was being provisioned.
+ * If we are processing a write bio that completely covers the block,
+ * we already processed it so can ignore it now when processing
+ * the bios in the cell.
+ */
+ if (bio) {
+ cell_defer_except(tc, m->cell, bio);
+ bio_endio(bio, 0);
+ } else
+ cell_defer(tc, m->cell, m->data_block);
+
+ list_del(&m->list);
+ mempool_free(m, tc->pool->mapping_pool);
+}
+
+static void process_prepared_mappings(struct pool *pool)
+{
+ unsigned long flags;
+ struct list_head maps;
+ struct new_mapping *m, *tmp;
+
+ INIT_LIST_HEAD(&maps);
+ spin_lock_irqsave(&pool->lock, flags);
+ list_splice_init(&pool->prepared_mappings, &maps);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ list_for_each_entry_safe(m, tmp, &maps, list)
+ process_prepared_mapping(m);
+}
+
+/*
+ * Deferred bio jobs.
+ */
+static int io_overwrites_block(struct pool *pool, struct bio *bio)
+{
+ return ((bio_data_dir(bio) == WRITE) &&
+ !(bio->bi_sector & pool->offset_mask)) &&
+ (bio->bi_size == (pool->sectors_per_block << SECTOR_SHIFT));
+}
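+
+/*
+ * This relies on the thin target setting ti->split_io to the pool's
+ * block size (see thin_ctr()), so a bio can never straddle two blocks:
+ * a write that starts on a block boundary and is a whole block long
+ * covers exactly that block.
+ */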
+
+static void save_and_set_endio(struct bio *bio, bio_end_io_t **save,
+ bio_end_io_t *fn)
+{
+ *save = bio->bi_end_io;
+ bio->bi_end_io = fn;
+}
+
+static int ensure_next_mapping(struct pool *pool)
+{
+ if (pool->next_mapping)
+ return 0;
+
+ pool->next_mapping = mempool_alloc(pool->mapping_pool, GFP_ATOMIC);
+
+ return pool->next_mapping ? 0 : -ENOMEM;
+}
+
+static struct new_mapping *get_next_mapping(struct pool *pool)
+{
+ struct new_mapping *r = pool->next_mapping;
+
+ BUG_ON(!pool->next_mapping);
+
+ pool->next_mapping = NULL;
+
+ return r;
+}
+
+static void schedule_copy(struct thin_c *tc, dm_block_t virt_block,
+ dm_block_t data_origin, dm_block_t data_dest,
+ struct cell *cell, struct bio *bio)
+{
+ int r;
+ struct pool *pool = tc->pool;
+ struct new_mapping *m = get_next_mapping(pool);
+
+ INIT_LIST_HEAD(&m->list);
+ m->prepared = 0;
+ m->tc = tc;
+ m->virt_block = virt_block;
+ m->data_block = data_dest;
+ m->cell = cell;
+ m->err = 0;
+ m->bio = NULL;
+
+ ds_add_work(&pool->ds, &m->list);
+
+ /*
+ * IO to pool_dev remaps to the pool target's data_dev.
+ *
+ * If the whole block of data is being overwritten, we can issue the
+ * bio immediately. Otherwise we use kcopyd to clone the data first.
+ */
+ if (io_overwrites_block(pool, bio)) {
+ m->bio = bio;
+ save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
+ dm_get_mapinfo(bio)->ptr = m;
+ remap_and_issue(tc, bio, data_dest);
+ } else {
+ struct dm_io_region from, to;
+
+ from.bdev = tc->pool_dev->bdev;
+ from.sector = data_origin * pool->sectors_per_block;
+ from.count = pool->sectors_per_block;
+
+ to.bdev = tc->pool_dev->bdev;
+ to.sector = data_dest * pool->sectors_per_block;
+ to.count = pool->sectors_per_block;
+
+ r = dm_kcopyd_copy(pool->copier, &from, 1, &to,
+ 0, copy_complete, m);
+ if (r < 0) {
+ mempool_free(m, pool->mapping_pool);
+ DMERR("dm_kcopyd_copy() failed");
+ cell_error(cell);
+ }
+ }
+}
+
+static void schedule_zero(struct thin_c *tc, dm_block_t virt_block,
+ dm_block_t data_block, struct cell *cell,
+ struct bio *bio)
+{
+ struct pool *pool = tc->pool;
+ struct new_mapping *m = get_next_mapping(pool);
+
+ INIT_LIST_HEAD(&m->list);
+ m->prepared = 0;
+ m->tc = tc;
+ m->virt_block = virt_block;
+ m->data_block = data_block;
+ m->cell = cell;
+ m->err = 0;
+ m->bio = NULL;
+
+ /*
+ * If the whole block of data is being overwritten or we are not
+ * zeroing pre-existing data, we can issue the bio immediately.
+ * Otherwise we use kcopyd to zero the data first.
+ */
+ if (!pool->zero_new_blocks)
+ process_prepared_mapping(m);
+
+ else if (io_overwrites_block(pool, bio)) {
+ m->bio = bio;
+ save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
+ dm_get_mapinfo(bio)->ptr = m;
+ remap_and_issue(tc, bio, data_block);
+
+ } else {
+ int r;
+ struct dm_io_region to;
+
+ to.bdev = tc->pool_dev->bdev;
+ to.sector = data_block * pool->sectors_per_block;
+ to.count = pool->sectors_per_block;
+
+ r = dm_kcopyd_zero(pool->copier, 1, &to, 0, copy_complete, m);
+ if (r < 0) {
+ mempool_free(m, pool->mapping_pool);
+ DMERR("dm_kcopyd_zero() failed");
+ cell_error(cell);
+ }
+ }
+}
+
+static int alloc_data_block(struct thin_c *tc, dm_block_t *result)
+{
+ int r;
+ dm_block_t free_blocks;
+ unsigned long flags;
+ struct pool *pool = tc->pool;
+
+ r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
+ if (r)
+ return r;
+
+ if (free_blocks <= pool->low_water_blocks && !pool->low_water_triggered) {
+ DMWARN("%s: reached low water mark, sending event.",
+ dm_device_name(pool->pool_md));
+ spin_lock_irqsave(&pool->lock, flags);
+ pool->low_water_triggered = 1;
+ spin_unlock_irqrestore(&pool->lock, flags);
+ dm_table_event(pool->ti->table);
+ }
+
+ if (!free_blocks) {
+ if (pool->no_free_space)
+ return -ENOSPC;
+ else {
+ /*
+ * Try to commit to see if that will free up some
+ * more space.
+ */
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ return r;
+ }
+
+ r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
+ if (r)
+ return r;
+
+ /*
+ * If we still have no space we set a flag to avoid
+ * doing all this checking and return -ENOSPC.
+ */
+ if (!free_blocks) {
+ DMWARN("%s: no free space available.",
+ dm_device_name(pool->pool_md));
+ spin_lock_irqsave(&pool->lock, flags);
+ pool->no_free_space = 1;
+ spin_unlock_irqrestore(&pool->lock, flags);
+ return -ENOSPC;
+ }
+ }
+ }
+
+ r = dm_pool_alloc_data_block(pool->pmd, result);
+ if (r)
+ return r;
+
+ return 0;
+}
+
+/*
+ * If we have run out of space, queue bios until the device is
+ * resumed, presumably after having been reloaded with more space.
+ */
+static void retry_on_resume(struct bio *bio)
+{
+ struct thin_c *tc = dm_get_mapinfo(bio)->ptr;
+ struct pool *pool = tc->pool;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_add(&pool->retry_on_resume_list, bio);
+ spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+static void no_space(struct cell *cell)
+{
+ struct bio *bio;
+ struct bio_list bios;
+
+ bio_list_init(&bios);
+ cell_release(cell, &bios);
+
+ while ((bio = bio_list_pop(&bios)))
+ retry_on_resume(bio);
+}
+
+static void break_sharing(struct thin_c *tc, struct bio *bio, dm_block_t block,
+ struct cell_key *key,
+ struct dm_thin_lookup_result *lookup_result,
+ struct cell *cell)
+{
+ int r;
+ dm_block_t data_block;
+
+ r = alloc_data_block(tc, &data_block);
+ switch (r) {
+ case 0:
+ schedule_copy(tc, block, lookup_result->block,
+ data_block, cell, bio);
+ break;
+
+ case -ENOSPC:
+ no_space(cell);
+ break;
+
+ default:
+ DMERR("%s: alloc_data_block() failed, error = %d", __func__, r);
+ cell_error(cell);
+ break;
+ }
+}
+
+static void process_shared_bio(struct thin_c *tc, struct bio *bio,
+ dm_block_t block,
+ struct dm_thin_lookup_result *lookup_result)
+{
+ struct cell *cell;
+ struct pool *pool = tc->pool;
+ struct cell_key key;
+
+ /*
+ * If cell is already occupied, then sharing is already in the process
+ * of being broken so we have nothing further to do here.
+ */
+ build_data_key(tc->td, lookup_result->block, &key);
+ if (bio_detain(pool->prison, &key, bio, &cell))
+ return;
+
+ if (bio_data_dir(bio) == WRITE)
+ break_sharing(tc, bio, block, &key, lookup_result, cell);
+ else {
+ struct endio_hook *h;
+ h = mempool_alloc(pool->endio_hook_pool, GFP_NOIO);
+
+ h->tc = tc;
+ h->entry = ds_inc(&pool->ds);
+ save_and_set_endio(bio, &h->saved_bi_end_io, shared_read_endio);
+ dm_get_mapinfo(bio)->ptr = h;
+
+ cell_release_singleton(cell, bio);
+ remap_and_issue(tc, bio, lookup_result->block);
+ }
+}
+
+static void provision_block(struct thin_c *tc, struct bio *bio, dm_block_t block,
+ struct cell *cell)
+{
+ int r;
+ dm_block_t data_block;
+
+ /*
+ * Remap empty bios (flushes) immediately, without provisioning.
+ */
+ if (!bio->bi_size) {
+ cell_release_singleton(cell, bio);
+ remap_and_issue(tc, bio, 0);
+ return;
+ }
+
+ /*
+ * Fill read bios with zeroes and complete them immediately.
+ */
+ if (bio_data_dir(bio) == READ) {
+ zero_fill_bio(bio);
+ cell_release_singleton(cell, bio);
+ bio_endio(bio, 0);
+ return;
+ }
+
+ r = alloc_data_block(tc, &data_block);
+ switch (r) {
+ case 0:
+ schedule_zero(tc, block, data_block, cell, bio);
+ break;
+
+ case -ENOSPC:
+ no_space(cell);
+ break;
+
+ default:
+ DMERR("%s: alloc_data_block() failed, error = %d", __func__, r);
+ cell_error(cell);
+ break;
+ }
+}
+
+static void process_bio(struct thin_c *tc, struct bio *bio)
+{
+ int r;
+ dm_block_t block = get_bio_block(tc, bio);
+ struct cell *cell;
+ struct cell_key key;
+ struct dm_thin_lookup_result lookup_result;
+
+ /*
+ * If cell is already occupied, then the block is already
+ * being provisioned so we have nothing further to do here.
+ */
+ build_virtual_key(tc->td, block, &key);
+ if (bio_detain(tc->pool->prison, &key, bio, &cell))
+ return;
+
+ r = dm_thin_find_block(tc->td, block, 1, &lookup_result);
+ switch (r) {
+ case 0:
+ /*
+ * We can release this cell now. This thread is the only
+ * one that puts bios into a cell, and we know there were
+ * no preceding bios.
+ */
+ /*
+ * TODO: this will probably have to change when discard goes
+ * back in.
+ */
+ cell_release_singleton(cell, bio);
+
+ if (lookup_result.shared)
+ process_shared_bio(tc, bio, block, &lookup_result);
+ else
+ remap_and_issue(tc, bio, lookup_result.block);
+ break;
+
+ case -ENODATA:
+ provision_block(tc, bio, block, cell);
+ break;
+
+ default:
+ DMERR("dm_thin_find_block() failed, error = %d", r);
+ bio_io_error(bio);
+ break;
+ }
+}
+
+static void process_deferred_bios(struct pool *pool)
+{
+ unsigned long flags;
+ struct bio *bio;
+ struct bio_list bios;
+ int r;
+
+ bio_list_init(&bios);
+
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_merge(&bios, &pool->deferred_bios);
+ bio_list_init(&pool->deferred_bios);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ while ((bio = bio_list_pop(&bios))) {
+ struct thin_c *tc = dm_get_mapinfo(bio)->ptr;
+ /*
+ * If we've got no free new_mapping structs, and processing
+ * this bio might require one, we pause until there are some
+ * prepared mappings to process.
+ */
+ if (ensure_next_mapping(pool)) {
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_merge(&pool->deferred_bios, &bios);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ break;
+ }
+ process_bio(tc, bio);
+ }
+
+ /*
+ * If there are any deferred flush bios, we must commit
+ * the metadata before issuing them.
+ */
+ bio_list_init(&bios);
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_merge(&bios, &pool->deferred_flush_bios);
+ bio_list_init(&pool->deferred_flush_bios);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ if (bio_list_empty(&bios))
+ return;
+
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ while ((bio = bio_list_pop(&bios)))
+ bio_io_error(bio);
+ return;
+ }
+
+ while ((bio = bio_list_pop(&bios)))
+ generic_make_request(bio);
+}
+
+static void do_worker(struct work_struct *ws)
+{
+ struct pool *pool = container_of(ws, struct pool, worker);
+
+ process_prepared_mappings(pool);
+ process_deferred_bios(pool);
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Mapping functions.
+ */
+
+/*
+ * Called only while mapping a thin bio to hand it over to the workqueue.
+ */
+static void thin_defer_bio(struct thin_c *tc, struct bio *bio)
+{
+ unsigned long flags;
+ struct pool *pool = tc->pool;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ bio_list_add(&pool->deferred_bios, bio);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ wake_worker(pool);
+}
+
+/*
+ * Non-blocking function called from the thin target's map function.
+ */
+static int thin_bio_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ int r;
+ struct thin_c *tc = ti->private;
+ dm_block_t block = get_bio_block(tc, bio);
+ struct dm_thin_device *td = tc->td;
+ struct dm_thin_lookup_result result;
+
+ /*
+ * Save the thin context for easy access from the deferred bio later.
+ */
+ map_context->ptr = tc;
+
+ if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
+ thin_defer_bio(tc, bio);
+ return DM_MAPIO_SUBMITTED;
+ }
+
+ r = dm_thin_find_block(td, block, 0, &result);
+
+ /*
+ * Note that we defer readahead too.
+ */
+ switch (r) {
+ case 0:
+ if (unlikely(result.shared)) {
+ /*
+ * We have a race condition here between the
+ * result.shared value returned by the lookup and
+ * snapshot creation, which may cause new
+ * sharing.
+ *
+ * To avoid this, always quiesce the origin before
+ * taking the snap. You want to do this anyway to
+ * ensure a consistent application view
+ * (i.e. lockfs).
+ *
+ * More distant ancestors are irrelevant. The
+ * shared flag will be set in their case.
+ */
+ thin_defer_bio(tc, bio);
+ r = DM_MAPIO_SUBMITTED;
+ } else {
+ remap(tc, bio, result.block);
+ r = DM_MAPIO_REMAPPED;
+ }
+ break;
+
+ case -ENODATA:
+ /*
+ * In future, the failed dm_thin_find_block above could
+ * provide the hint to load the metadata into cache.
+ */
+ case -EWOULDBLOCK:
+ thin_defer_bio(tc, bio);
+ r = DM_MAPIO_SUBMITTED;
+ break;
+ }
+
+ return r;
+}
+
+static int pool_is_congested(struct dm_target_callbacks *cb, int bdi_bits)
+{
+ int r;
+ unsigned long flags;
+ struct pool_c *pt = container_of(cb, struct pool_c, callbacks);
+
+ spin_lock_irqsave(&pt->pool->lock, flags);
+ r = !bio_list_empty(&pt->pool->retry_on_resume_list);
+ spin_unlock_irqrestore(&pt->pool->lock, flags);
+
+ if (!r) {
+ struct request_queue *q = bdev_get_queue(pt->data_dev->bdev);
+ r = bdi_congested(&q->backing_dev_info, bdi_bits);
+ }
+
+ return r;
+}
+
+static void __requeue_bios(struct pool *pool)
+{
+ bio_list_merge(&pool->deferred_bios, &pool->retry_on_resume_list);
+ bio_list_init(&pool->retry_on_resume_list);
+}
+
+/*----------------------------------------------------------------
+ * Binding of control targets to a pool object
+ *--------------------------------------------------------------*/
+static int bind_control_target(struct pool *pool, struct dm_target *ti)
+{
+ struct pool_c *pt = ti->private;
+
+ pool->ti = ti;
+ pool->low_water_blocks = pt->low_water_blocks;
+ pool->zero_new_blocks = pt->zero_new_blocks;
+
+ return 0;
+}
+
+static void unbind_control_target(struct pool *pool, struct dm_target *ti)
+{
+ if (pool->ti == ti)
+ pool->ti = NULL;
+}
+
+/*----------------------------------------------------------------
+ * Pool creation
+ *--------------------------------------------------------------*/
+static void __pool_destroy(struct pool *pool)
+{
+ __pool_table_remove(pool);
+
+ if (dm_pool_metadata_close(pool->pmd) < 0)
+ DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
+
+ prison_destroy(pool->prison);
+ dm_kcopyd_client_destroy(pool->copier);
+
+ if (pool->wq)
+ destroy_workqueue(pool->wq);
+
+ if (pool->next_mapping)
+ mempool_free(pool->next_mapping, pool->mapping_pool);
+ mempool_destroy(pool->mapping_pool);
+ mempool_destroy(pool->endio_hook_pool);
+ kfree(pool);
+}
+
+static struct pool *pool_create(struct mapped_device *pool_md,
+ struct block_device *metadata_dev,
+ unsigned long block_size, char **error)
+{
+ int r;
+ void *err_p;
+ struct pool *pool;
+ struct dm_pool_metadata *pmd;
+
+ pmd = dm_pool_metadata_open(metadata_dev, block_size);
+ if (IS_ERR(pmd)) {
+ *error = "Error creating metadata object";
+ return (struct pool *)pmd;
+ }
+
+ pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+ if (!pool) {
+ *error = "Error allocating memory for pool";
+ err_p = ERR_PTR(-ENOMEM);
+ goto bad_pool;
+ }
+
+ pool->pmd = pmd;
+ pool->sectors_per_block = block_size;
+ pool->block_shift = ffs(block_size) - 1;
+ pool->offset_mask = block_size - 1;
+ pool->low_water_blocks = 0;
+ pool->zero_new_blocks = 1;
+ pool->prison = prison_create(PRISON_CELLS);
+ if (!pool->prison) {
+ *error = "Error creating pool's bio prison";
+ err_p = ERR_PTR(-ENOMEM);
+ goto bad_prison;
+ }
+
+ pool->copier = dm_kcopyd_client_create();
+ if (IS_ERR(pool->copier)) {
+ r = PTR_ERR(pool->copier);
+ *error = "Error creating pool's kcopyd client";
+ err_p = ERR_PTR(r);
+ goto bad_kcopyd_client;
+ }
+
+ /*
+ * Create a single-threaded workqueue that will service all devices
+ * that use this metadata.
+ */
+ pool->wq = alloc_ordered_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM);
+ if (!pool->wq) {
+ *error = "Error creating pool's workqueue";
+ err_p = ERR_PTR(-ENOMEM);
+ goto bad_wq;
+ }
+
+ INIT_WORK(&pool->worker, do_worker);
+ spin_lock_init(&pool->lock);
+ bio_list_init(&pool->deferred_bios);
+ bio_list_init(&pool->deferred_flush_bios);
+ INIT_LIST_HEAD(&pool->prepared_mappings);
+ pool->low_water_triggered = 0;
+ pool->no_free_space = 0;
+ bio_list_init(&pool->retry_on_resume_list);
+ ds_init(&pool->ds);
+
+ pool->next_mapping = NULL;
+ pool->mapping_pool =
+ mempool_create_kmalloc_pool(MAPPING_POOL_SIZE, sizeof(struct new_mapping));
+ if (!pool->mapping_pool) {
+ *error = "Error creating pool's mapping mempool";
+ err_p = ERR_PTR(-ENOMEM);
+ goto bad_mapping_pool;
+ }
+
+ pool->endio_hook_pool =
+ mempool_create_kmalloc_pool(ENDIO_HOOK_POOL_SIZE, sizeof(struct endio_hook));
+ if (!pool->endio_hook_pool) {
+ *error = "Error creating pool's endio_hook mempool";
+ err_p = ERR_PTR(-ENOMEM);
+ goto bad_endio_hook_pool;
+ }
+ pool->ref_count = 1;
+ pool->pool_md = pool_md;
+ pool->md_dev = metadata_dev;
+ __pool_table_insert(pool);
+
+ return pool;
+
+bad_endio_hook_pool:
+ mempool_destroy(pool->mapping_pool);
+bad_mapping_pool:
+ destroy_workqueue(pool->wq);
+bad_wq:
+ dm_kcopyd_client_destroy(pool->copier);
+bad_kcopyd_client:
+ prison_destroy(pool->prison);
+bad_prison:
+ kfree(pool);
+bad_pool:
+ if (dm_pool_metadata_close(pmd))
+ DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
+
+ return err_p;
+}
+
+static void __pool_inc(struct pool *pool)
+{
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+ pool->ref_count++;
+}
+
+static void __pool_dec(struct pool *pool)
+{
+ BUG_ON(!mutex_is_locked(&dm_thin_pool_table.mutex));
+ BUG_ON(!pool->ref_count);
+ if (!--pool->ref_count)
+ __pool_destroy(pool);
+}
+
+static struct pool *__pool_find(struct mapped_device *pool_md,
+ struct block_device *metadata_dev,
+ unsigned long block_size, char **error)
+{
+ struct pool *pool = __pool_table_lookup_metadata_dev(metadata_dev);
+
+ if (pool) {
+ if (pool->pool_md != pool_md)
+ return ERR_PTR(-EBUSY);
+ __pool_inc(pool);
+
+ } else {
+ pool = __pool_table_lookup(pool_md);
+ if (pool) {
+ if (pool->md_dev != metadata_dev)
+ return ERR_PTR(-EINVAL);
+ __pool_inc(pool);
+
+ } else
+ pool = pool_create(pool_md, metadata_dev, block_size, error);
+ }
+
+ return pool;
+}
+
+/*----------------------------------------------------------------
+ * Pool target methods
+ *--------------------------------------------------------------*/
+static void pool_dtr(struct dm_target *ti)
+{
+ struct pool_c *pt = ti->private;
+
+ mutex_lock(&dm_thin_pool_table.mutex);
+
+ unbind_control_target(pt->pool, ti);
+ __pool_dec(pt->pool);
+ dm_put_device(ti, pt->metadata_dev);
+ dm_put_device(ti, pt->data_dev);
+ kfree(pt);
+
+ mutex_unlock(&dm_thin_pool_table.mutex);
+}
+
+struct pool_features {
+ unsigned zero_new_blocks:1;
+};
+
+static int parse_pool_features(struct dm_arg_set *as, struct pool_features *pf,
+ struct dm_target *ti)
+{
+ int r;
+ unsigned argc;
+ const char *arg_name;
+
+ static struct dm_arg _args[] = {
+ {0, 1, "Invalid number of pool feature arguments"},
+ };
+
+ /*
+ * No feature arguments supplied.
+ */
+ if (!as->argc)
+ return 0;
+
+ r = dm_read_arg_group(_args, as, &argc, &ti->error);
+ if (r)
+ return -EINVAL;
+
+ while (argc && !r) {
+ arg_name = dm_shift_arg(as);
+ argc--;
+
+ if (!strcasecmp(arg_name, "skip_block_zeroing")) {
+ pf->zero_new_blocks = 0;
+ continue;
+ }
+
+ ti->error = "Unrecognised pool feature requested";
+ r = -EINVAL;
+ }
+
+ return r;
+}
+
+/*
+ * thin-pool <metadata dev> <data dev>
+ * <data block size (sectors)>
+ * <low water mark (blocks)>
+ * [<#feature args> [<arg>]*]
+ *
+ * Optional feature arguments are:
+ * skip_block_zeroing: skips the zeroing of newly-provisioned blocks.
+ */
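+/*
+ * An example table line (with hypothetical devices), giving 128-sector
+ * data blocks and a low water mark of 16384 blocks:
+ *
+ *   0 20971520 thin-pool /dev/sdb /dev/sdc 128 16384 1 skip_block_zeroing
+ */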
+static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+ int r;
+ struct pool_c *pt;
+ struct pool *pool;
+ struct pool_features pf;
+ struct dm_arg_set as;
+ struct dm_dev *data_dev;
+ unsigned long block_size;
+ dm_block_t low_water_blocks;
+ struct dm_dev *metadata_dev;
+ sector_t metadata_dev_size;
+
+ /*
+ * FIXME Remove validation from scope of lock.
+ */
+ mutex_lock(&dm_thin_pool_table.mutex);
+
+ if (argc < 4) {
+ ti->error = "Invalid argument count";
+ r = -EINVAL;
+ goto out_unlock;
+ }
+ as.argc = argc;
+ as.argv = argv;
+
+ r = dm_get_device(ti, argv[0], FMODE_READ | FMODE_WRITE, &metadata_dev);
+ if (r) {
+ ti->error = "Error opening metadata block device";
+ goto out_unlock;
+ }
+
+ metadata_dev_size = i_size_read(metadata_dev->bdev->bd_inode) >> SECTOR_SHIFT;
+ if (metadata_dev_size > METADATA_DEV_MAX_SECTORS) {
+ ti->error = "Metadata device is too large";
+ r = -EINVAL;
+ goto out_metadata;
+ }
+
+ r = dm_get_device(ti, argv[1], FMODE_READ | FMODE_WRITE, &data_dev);
+ if (r) {
+ ti->error = "Error getting data device";
+ goto out_metadata;
+ }
+
+ if (kstrtoul(argv[2], 10, &block_size) || !block_size ||
+ block_size < DATA_DEV_BLOCK_SIZE_MIN_SECTORS ||
+ block_size > DATA_DEV_BLOCK_SIZE_MAX_SECTORS ||
+ !is_power_of_2(block_size)) {
+ ti->error = "Invalid block size";
+ r = -EINVAL;
+ goto out;
+ }
+
+ if (kstrtoull(argv[3], 10, (unsigned long long *)&low_water_blocks)) {
+ ti->error = "Invalid low water mark";
+ r = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Set default pool features.
+ */
+ memset(&pf, 0, sizeof(pf));
+ pf.zero_new_blocks = 1;
+
+ dm_consume_args(&as, 4);
+ r = parse_pool_features(&as, &pf, ti);
+ if (r)
+ goto out;
+
+ pt = kzalloc(sizeof(*pt), GFP_KERNEL);
+ if (!pt) {
+ r = -ENOMEM;
+ goto out;
+ }
+
+ pool = __pool_find(dm_table_get_md(ti->table), metadata_dev->bdev,
+ block_size, &ti->error);
+ if (IS_ERR(pool)) {
+ r = PTR_ERR(pool);
+ goto out_free_pt;
+ }
+
+ pt->pool = pool;
+ pt->ti = ti;
+ pt->metadata_dev = metadata_dev;
+ pt->data_dev = data_dev;
+ pt->low_water_blocks = low_water_blocks;
+ pt->zero_new_blocks = pf.zero_new_blocks;
+ ti->num_flush_requests = 1;
+ ti->num_discard_requests = 0;
+ ti->private = pt;
+
+ pt->callbacks.congested_fn = pool_is_congested;
+ dm_table_add_target_callbacks(ti->table, &pt->callbacks);
+
+ mutex_unlock(&dm_thin_pool_table.mutex);
+
+ return 0;
+
+out_free_pt:
+ kfree(pt);
+out:
+ dm_put_device(ti, data_dev);
+out_metadata:
+ dm_put_device(ti, metadata_dev);
+out_unlock:
+ mutex_unlock(&dm_thin_pool_table.mutex);
+
+ return r;
+}
+
+static int pool_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ int r;
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+ unsigned long flags;
+
+ /*
+ * As this is a singleton target, ti->begin is always zero.
+ */
+ spin_lock_irqsave(&pool->lock, flags);
+ bio->bi_bdev = pt->data_dev->bdev;
+ r = DM_MAPIO_REMAPPED;
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ return r;
+}
+
+/*
+ * Retrieves the number of blocks of the data device from
+ * the superblock and compares it to the actual device size,
+ * resizing the data device if it has grown.
+ *
+ * This copes both with a ctr that opens a preallocated data device and
+ * is then followed by a resume, and with the resume method being called
+ * on its own after userspace has grown the data device in reaction to a
+ * table event.
+ */
+static int pool_preresume(struct dm_target *ti)
+{
+ int r;
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+ dm_block_t data_size, sb_data_size;
+
+ /*
+ * Take control of the pool object.
+ */
+ r = bind_control_target(pool, ti);
+ if (r)
+ return r;
+
+ data_size = ti->len >> pool->block_shift;
+ r = dm_pool_get_data_dev_size(pool->pmd, &sb_data_size);
+ if (r) {
+ DMERR("failed to retrieve data device size");
+ return r;
+ }
+
+ if (data_size < sb_data_size) {
+ DMERR("pool target too small, is %llu blocks (expected %llu)",
+ data_size, sb_data_size);
+ return -EINVAL;
+
+ } else if (data_size > sb_data_size) {
+ r = dm_pool_resize_data_dev(pool->pmd, data_size);
+ if (r) {
+ DMERR("failed to resize data device");
+ return r;
+ }
+
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ return r;
+ }
+ }
+
+ return 0;
+}
+
+static void pool_resume(struct dm_target *ti)
+{
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ pool->low_water_triggered = 0;
+ pool->no_free_space = 0;
+ __requeue_bios(pool);
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ wake_worker(pool);
+}
+
+static void pool_postsuspend(struct dm_target *ti)
+{
+ int r;
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+
+ flush_workqueue(pool->wq);
+
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r < 0) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ /* FIXME: invalidate device? error the next FUA or FLUSH bio ?*/
+ }
+}
+
+static int check_arg_count(unsigned argc, unsigned args_required)
+{
+ if (argc != args_required) {
+ DMWARN("Message received with %u arguments instead of %u.",
+ argc, args_required);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int read_dev_id(char *arg, dm_thin_id *dev_id, int warning)
+{
+ if (!kstrtoull(arg, 10, (unsigned long long *)dev_id) &&
+ *dev_id <= MAX_DEV_ID)
+ return 0;
+
+ if (warning)
+ DMWARN("Message received with invalid device id: %s", arg);
+
+ return -EINVAL;
+}
+
+static int process_create_thin_mesg(unsigned argc, char **argv, struct pool *pool)
+{
+ dm_thin_id dev_id;
+ int r;
+
+ r = check_arg_count(argc, 2);
+ if (r)
+ return r;
+
+ r = read_dev_id(argv[1], &dev_id, 1);
+ if (r)
+ return r;
+
+ r = dm_pool_create_thin(pool->pmd, dev_id);
+ if (r) {
+ DMWARN("Creation of new thinly-provisioned device with id %s failed.",
+ argv[1]);
+ return r;
+ }
+
+ return 0;
+}
+
+static int process_create_snap_mesg(unsigned argc, char **argv, struct pool *pool)
+{
+ dm_thin_id dev_id;
+ dm_thin_id origin_dev_id;
+ int r;
+
+ r = check_arg_count(argc, 3);
+ if (r)
+ return r;
+
+ r = read_dev_id(argv[1], &dev_id, 1);
+ if (r)
+ return r;
+
+ r = read_dev_id(argv[2], &origin_dev_id, 1);
+ if (r)
+ return r;
+
+ r = dm_pool_create_snap(pool->pmd, dev_id, origin_dev_id);
+ if (r) {
+ DMWARN("Creation of new snapshot %s of device %s failed.",
+ argv[1], argv[2]);
+ return r;
+ }
+
+ return 0;
+}
+
+static int process_delete_mesg(unsigned argc, char **argv, struct pool *pool)
+{
+ dm_thin_id dev_id;
+ int r;
+
+ r = check_arg_count(argc, 2);
+ if (r)
+ return r;
+
+ r = read_dev_id(argv[1], &dev_id, 1);
+ if (r)
+ return r;
+
+ r = dm_pool_delete_thin_device(pool->pmd, dev_id);
+ if (r)
+ DMWARN("Deletion of thin device %s failed.", argv[1]);
+
+ return r;
+}
+
+static int process_set_transaction_id_mesg(unsigned argc, char **argv, struct pool *pool)
+{
+ dm_thin_id old_id, new_id;
+ int r;
+
+ r = check_arg_count(argc, 3);
+ if (r)
+ return r;
+
+ if (kstrtoull(argv[1], 10, (unsigned long long *)&old_id)) {
+ DMWARN("set_transaction_id message: Unrecognised id %s.", argv[1]);
+ return -EINVAL;
+ }
+
+ if (kstrtoull(argv[2], 10, (unsigned long long *)&new_id)) {
+ DMWARN("set_transaction_id message: Unrecognised new id %s.", argv[2]);
+ return -EINVAL;
+ }
+
+ r = dm_pool_set_metadata_transaction_id(pool->pmd, old_id, new_id);
+ if (r) {
+ DMWARN("Failed to change transaction id from %s to %s.",
+ argv[1], argv[2]);
+ return r;
+ }
+
+ return 0;
+}
+
+/*
+ * Messages supported:
+ * create_thin <dev_id>
+ * create_snap <dev_id> <origin_id>
+ * delete <dev_id>
+ * set_transaction_id <current_trans_id> <new_trans_id>
+ */
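+/*
+ * These are sent with the usual dmsetup message mechanism, e.g. (for a
+ * hypothetical pool device):
+ *
+ *   dmsetup message /dev/mapper/pool 0 "create_thin 0"
+ */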
+static int pool_message(struct dm_target *ti, unsigned argc, char **argv)
+{
+ int r = -EINVAL;
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+
+ if (!strcasecmp(argv[0], "create_thin"))
+ r = process_create_thin_mesg(argc, argv, pool);
+
+ else if (!strcasecmp(argv[0], "create_snap"))
+ r = process_create_snap_mesg(argc, argv, pool);
+
+ else if (!strcasecmp(argv[0], "delete"))
+ r = process_delete_mesg(argc, argv, pool);
+
+ else if (!strcasecmp(argv[0], "set_transaction_id"))
+ r = process_set_transaction_id_mesg(argc, argv, pool);
+
+ else
+ DMWARN("Unrecognised thin pool target message received: %s", argv[0]);
+
+ if (!r) {
+ r = dm_pool_commit_metadata(pool->pmd);
+ if (r)
+ DMERR("%s message: dm_pool_commit_metadata() failed, error = %d",
+ argv[0], r);
+ }
+
+ return r;
+}
+
+/*
+ * Status line is:
+ *    <transaction id> <used metadata blocks>/<total metadata blocks>
+ *    <used data blocks>/<total data blocks> <held metadata root>
+ */
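+/*
+ * For example (with hypothetical numbers):
+ *
+ *   2 141/24576 10240/1048576 -
+ *
+ * where the trailing '-' means no metadata root is currently held.
+ */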
+static int pool_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned maxlen)
+{
+ int r;
+ unsigned sz = 0;
+ uint64_t transaction_id;
+ dm_block_t nr_free_blocks_data;
+ dm_block_t nr_free_blocks_metadata;
+ dm_block_t nr_blocks_data;
+ dm_block_t nr_blocks_metadata;
+ dm_block_t held_root;
+ char buf[BDEVNAME_SIZE];
+ char buf2[BDEVNAME_SIZE];
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ r = dm_pool_get_metadata_transaction_id(pool->pmd,
+ &transaction_id);
+ if (r)
+ return r;
+
+ r = dm_pool_get_free_metadata_block_count(pool->pmd,
+ &nr_free_blocks_metadata);
+ if (r)
+ return r;
+
+ r = dm_pool_get_metadata_dev_size(pool->pmd, &nr_blocks_metadata);
+ if (r)
+ return r;
+
+ r = dm_pool_get_free_block_count(pool->pmd,
+ &nr_free_blocks_data);
+ if (r)
+ return r;
+
+ r = dm_pool_get_data_dev_size(pool->pmd, &nr_blocks_data);
+ if (r)
+ return r;
+
+ r = dm_pool_get_held_metadata_root(pool->pmd, &held_root);
+ if (r)
+ return r;
+
+ DMEMIT("%llu %llu/%llu %llu/%llu ",
+ (unsigned long long)transaction_id,
+ (unsigned long long)(nr_blocks_metadata - nr_free_blocks_metadata),
+ (unsigned long long)nr_blocks_metadata,
+ (unsigned long long)(nr_blocks_data - nr_free_blocks_data),
+ (unsigned long long)nr_blocks_data);
+
+ if (held_root)
+ DMEMIT("%llu", held_root);
+ else
+ DMEMIT("-");
+
+ break;
+
+ case STATUSTYPE_TABLE:
+ DMEMIT("%s %s %lu %llu ",
+ format_dev_t(buf, pt->metadata_dev->bdev->bd_dev),
+ format_dev_t(buf2, pt->data_dev->bdev->bd_dev),
+ (unsigned long)pool->sectors_per_block,
+ (unsigned long long)pt->low_water_blocks);
+
+ DMEMIT("%u ", !pool->zero_new_blocks);
+
+ if (!pool->zero_new_blocks)
+ DMEMIT("skip_block_zeroing ");
+ break;
+ }
+
+ return 0;
+}
+
+static int pool_iterate_devices(struct dm_target *ti,
+ iterate_devices_callout_fn fn, void *data)
+{
+ struct pool_c *pt = ti->private;
+
+ return fn(ti, pt->data_dev, 0, ti->len, data);
+}
+
+static int pool_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+ struct bio_vec *biovec, int max_size)
+{
+ struct pool_c *pt = ti->private;
+ struct request_queue *q = bdev_get_queue(pt->data_dev->bdev);
+
+ if (!q->merge_bvec_fn)
+ return max_size;
+
+ bvm->bi_bdev = pt->data_dev->bdev;
+
+ return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+ struct pool_c *pt = ti->private;
+ struct pool *pool = pt->pool;
+
+ blk_limits_io_min(limits, 0);
+ blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
+}
+
+static struct target_type pool_target = {
+ .name = "thin-pool",
+ .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
+ DM_TARGET_IMMUTABLE,
+ .version = {1, 0, 0},
+ .module = THIS_MODULE,
+ .ctr = pool_ctr,
+ .dtr = pool_dtr,
+ .map = pool_map,
+ .postsuspend = pool_postsuspend,
+ .preresume = pool_preresume,
+ .resume = pool_resume,
+ .message = pool_message,
+ .status = pool_status,
+ .merge = pool_merge,
+ .iterate_devices = pool_iterate_devices,
+ .io_hints = pool_io_hints,
+};
+
+/*----------------------------------------------------------------
+ * Thin target methods
+ *--------------------------------------------------------------*/
+static void thin_dtr(struct dm_target *ti)
+{
+ struct thin_c *tc = ti->private;
+
+ mutex_lock(&dm_thin_pool_table.mutex);
+
+ __pool_dec(tc->pool);
+ dm_pool_close_thin_device(tc->td);
+ dm_put_device(ti, tc->pool_dev);
+ kfree(tc);
+
+ mutex_unlock(&dm_thin_pool_table.mutex);
+}
+
+/*
+ * Thin target parameters:
+ *
+ * <pool_dev> <dev_id>
+ *
+ * pool_dev: the path to the pool (eg, /dev/mapper/my_pool)
+ * dev_id: the internal device identifier
+ */
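+/*
+ * An example table line for a 1G thin device (with a hypothetical pool):
+ *
+ *   0 2097152 thin /dev/mapper/pool 0
+ */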
+static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+ int r;
+ struct thin_c *tc;
+ struct dm_dev *pool_dev;
+ struct mapped_device *pool_md;
+
+ mutex_lock(&dm_thin_pool_table.mutex);
+
+ if (argc != 2) {
+ ti->error = "Invalid argument count";
+ r = -EINVAL;
+ goto out_unlock;
+ }
+
+ tc = ti->private = kzalloc(sizeof(*tc), GFP_KERNEL);
+ if (!tc) {
+ ti->error = "Out of memory";
+ r = -ENOMEM;
+ goto out_unlock;
+ }
+
+ r = dm_get_device(ti, argv[0], dm_table_get_mode(ti->table), &pool_dev);
+ if (r) {
+ ti->error = "Error opening pool device";
+ goto bad_pool_dev;
+ }
+ tc->pool_dev = pool_dev;
+
+	if (read_dev_id(argv[1], &tc->dev_id, 0)) {
+ ti->error = "Invalid device id";
+ r = -EINVAL;
+ goto bad_common;
+ }
+
+ pool_md = dm_get_md(tc->pool_dev->bdev->bd_dev);
+ if (!pool_md) {
+ ti->error = "Couldn't get pool mapped device";
+ r = -EINVAL;
+ goto bad_common;
+ }
+
+ tc->pool = __pool_table_lookup(pool_md);
+ if (!tc->pool) {
+ ti->error = "Couldn't find pool object";
+ r = -EINVAL;
+ goto bad_pool_lookup;
+ }
+ __pool_inc(tc->pool);
+
+ r = dm_pool_open_thin_device(tc->pool->pmd, tc->dev_id, &tc->td);
+ if (r) {
+ ti->error = "Couldn't open thin internal device";
+ goto bad_thin_open;
+ }
+
+ ti->split_io = tc->pool->sectors_per_block;
+ ti->num_flush_requests = 1;
+ ti->num_discard_requests = 0;
+ ti->discards_supported = 0;
+
+ dm_put(pool_md);
+
+ mutex_unlock(&dm_thin_pool_table.mutex);
+
+ return 0;
+
+bad_thin_open:
+ __pool_dec(tc->pool);
+bad_pool_lookup:
+ dm_put(pool_md);
+bad_common:
+ dm_put_device(ti, tc->pool_dev);
+bad_pool_dev:
+ kfree(tc);
+out_unlock:
+ mutex_unlock(&dm_thin_pool_table.mutex);
+
+ return r;
+}
+
+static int thin_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ bio->bi_sector -= ti->begin;
+
+ return thin_bio_map(ti, bio, map_context);
+}
+
+static void thin_postsuspend(struct dm_target *ti)
+{
+ if (dm_noflush_suspending(ti))
+ requeue_io((struct thin_c *)ti->private);
+}
+
+/*
+ * <nr mapped sectors> <highest mapped sector>
+ */
+static int thin_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned maxlen)
+{
+ int r;
+ ssize_t sz = 0;
+ dm_block_t mapped, highest;
+ char buf[BDEVNAME_SIZE];
+ struct thin_c *tc = ti->private;
+
+ if (!tc->td)
+ DMEMIT("-");
+ else {
+ switch (type) {
+ case STATUSTYPE_INFO:
+ r = dm_thin_get_mapped_count(tc->td, &mapped);
+ if (r)
+ return r;
+
+ r = dm_thin_get_highest_mapped_block(tc->td, &highest);
+ if (r < 0)
+ return r;
+
+ DMEMIT("%llu ", mapped * tc->pool->sectors_per_block);
+ if (r)
+ DMEMIT("%llu", ((highest + 1) *
+ tc->pool->sectors_per_block) - 1);
+ else
+ DMEMIT("-");
+ break;
+
+ case STATUSTYPE_TABLE:
+ DMEMIT("%s %lu",
+ format_dev_t(buf, tc->pool_dev->bdev->bd_dev),
+ (unsigned long) tc->dev_id);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static int thin_iterate_devices(struct dm_target *ti,
+ iterate_devices_callout_fn fn, void *data)
+{
+ dm_block_t blocks;
+ struct thin_c *tc = ti->private;
+
+ /*
+ * We can't call dm_pool_get_data_dev_size() since that blocks. So
+ * we follow a more convoluted path through to the pool's target.
+ */
+ if (!tc->pool->ti)
+ return 0; /* nothing is bound */
+
+ blocks = tc->pool->ti->len >> tc->pool->block_shift;
+ if (blocks)
+ return fn(ti, tc->pool_dev, 0, tc->pool->sectors_per_block * blocks, data);
+
+ return 0;
+}
+
+static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+ struct thin_c *tc = ti->private;
+
+ blk_limits_io_min(limits, 0);
+ blk_limits_io_opt(limits, tc->pool->sectors_per_block << SECTOR_SHIFT);
+}
+
+static struct target_type thin_target = {
+ .name = "thin",
+ .version = {1, 0, 0},
+ .module = THIS_MODULE,
+ .ctr = thin_ctr,
+ .dtr = thin_dtr,
+ .map = thin_map,
+ .postsuspend = thin_postsuspend,
+ .status = thin_status,
+ .iterate_devices = thin_iterate_devices,
+ .io_hints = thin_io_hints,
+};
+
+/*----------------------------------------------------------------*/
+
+static int __init dm_thin_init(void)
+{
+ int r;
+
+ pool_table_init();
+
+ r = dm_register_target(&thin_target);
+ if (r)
+ return r;
+
+ r = dm_register_target(&pool_target);
+ if (r)
+ dm_unregister_target(&thin_target);
+
+ return r;
+}
+
+static void dm_thin_exit(void)
+{
+ dm_unregister_target(&thin_target);
+ dm_unregister_target(&pool_target);
+}
+
+module_init(dm_thin_init);
+module_exit(dm_thin_exit);
+
+MODULE_DESCRIPTION(DM_NAME " thin provisioning target");
+MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 52b39f335bb3..6b6616a41baa 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -25,6 +25,16 @@
#define DM_MSG_PREFIX "core"
+#ifdef CONFIG_PRINTK
+/*
+ * ratelimit state to be used in DMXXX_LIMIT().
+ */
+DEFINE_RATELIMIT_STATE(dm_ratelimit_state,
+ DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+EXPORT_SYMBOL(dm_ratelimit_state);
+#endif
+
/*
* Cookies are numeric values sent with CHANGE and REMOVE
* uevents while resuming, removing or renaming the device.
@@ -130,6 +140,8 @@ struct mapped_device {
/* Protect queue and type against concurrent access. */
struct mutex type_lock;
+ struct target_type *immutable_target_type;
+
struct gendisk *disk;
char name[16];
@@ -2086,6 +2098,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
write_lock_irqsave(&md->map_lock, flags);
old_map = md->map;
md->map = t;
+ md->immutable_target_type = dm_table_get_immutable_target_type(t);
+
dm_table_set_restrictions(t, q, limits);
if (merge_is_optional)
set_bit(DMF_MERGE_IS_OPTIONAL, &md->flags);
@@ -2156,6 +2170,11 @@ unsigned dm_get_md_type(struct mapped_device *md)
return md->type;
}
+struct target_type *dm_get_immutable_target_type(struct mapped_device *md)
+{
+ return md->immutable_target_type;
+}
+
/*
* Fully initialize a request-based queue (->elevator, ->request_fn, etc).
*/
@@ -2231,6 +2250,7 @@ struct mapped_device *dm_get_md(dev_t dev)
return md;
}
+EXPORT_SYMBOL_GPL(dm_get_md);
void *dm_get_mdptr(struct mapped_device *md)
{
@@ -2316,7 +2336,6 @@ static int dm_wait_for_completion(struct mapped_device *md, int interruptible)
while (1) {
set_current_state(interruptible);
- smp_mb();
if (!md_in_flight(md))
break;
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 6745dbd278a4..b7dacd59d8d7 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -60,6 +60,7 @@ int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits);
int dm_table_any_busy_target(struct dm_table *t);
unsigned dm_table_get_type(struct dm_table *t);
+struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
bool dm_table_request_based(struct dm_table *t);
bool dm_table_supports_discards(struct dm_table *t);
int dm_table_alloc_md_mempools(struct dm_table *t);
@@ -72,6 +73,7 @@ void dm_lock_md_type(struct mapped_device *md);
void dm_unlock_md_type(struct mapped_device *md);
void dm_set_md_type(struct mapped_device *md, unsigned type);
unsigned dm_get_md_type(struct mapped_device *md);
+struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
int dm_setup_md_queue(struct mapped_device *md);
diff --git a/drivers/md/persistent-data/Kconfig b/drivers/md/persistent-data/Kconfig
new file mode 100644
index 000000000000..ceb359050a59
--- /dev/null
+++ b/drivers/md/persistent-data/Kconfig
@@ -0,0 +1,8 @@
+config DM_PERSISTENT_DATA
+ tristate
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select LIBCRC32C
+ select DM_BUFIO
+ ---help---
+ Library providing immutable on-disk data structure support for
+ device-mapper targets such as the thin provisioning target.
diff --git a/drivers/md/persistent-data/Makefile b/drivers/md/persistent-data/Makefile
new file mode 100644
index 000000000000..cfa95f662230
--- /dev/null
+++ b/drivers/md/persistent-data/Makefile
@@ -0,0 +1,11 @@
+obj-$(CONFIG_DM_PERSISTENT_DATA) += dm-persistent-data.o
+dm-persistent-data-objs := \
+ dm-block-manager.o \
+ dm-space-map-checker.o \
+ dm-space-map-common.o \
+ dm-space-map-disk.o \
+ dm-space-map-metadata.o \
+ dm-transaction-manager.o \
+ dm-btree.o \
+ dm-btree-remove.o \
+ dm-btree-spine.o
diff --git a/drivers/md/persistent-data/dm-block-manager.c b/drivers/md/persistent-data/dm-block-manager.c
new file mode 100644
index 000000000000..0317ecdc6e53
--- /dev/null
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -0,0 +1,620 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+#include "dm-block-manager.h"
+#include "dm-persistent-data-internal.h"
+#include "../dm-bufio.h"
+
+#include <linux/crc32c.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/rwsem.h>
+#include <linux/device-mapper.h>
+#include <linux/stacktrace.h>
+
+#define DM_MSG_PREFIX "block manager"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * This is a read/write semaphore with a couple of differences.
+ *
+ * i) There is a restriction on the number of concurrent read locks that
+ * may be held at once. This is just an implementation detail.
+ *
+ * ii) Recursive locking attempts are detected and return -EINVAL. A stack
+ * trace is also emitted for the previous lock acquisition.
+ *
+ * iii) Priority is given to write locks.
+ */
+#define MAX_HOLDERS 4
+#define MAX_STACK 10
+
+typedef unsigned long stack_entries[MAX_STACK];
+
+struct block_lock {
+ spinlock_t lock;
+ __s32 count;
+ struct list_head waiters;
+ struct task_struct *holders[MAX_HOLDERS];
+
+#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
+ struct stack_trace traces[MAX_HOLDERS];
+ stack_entries entries[MAX_HOLDERS];
+#endif
+};
+
+struct waiter {
+ struct list_head list;
+ struct task_struct *task;
+ int wants_write;
+};
+
+static unsigned __find_holder(struct block_lock *lock,
+ struct task_struct *task)
+{
+ unsigned i;
+
+ for (i = 0; i < MAX_HOLDERS; i++)
+ if (lock->holders[i] == task)
+ break;
+
+ BUG_ON(i == MAX_HOLDERS);
+ return i;
+}
+
+/* call this *after* you increment lock->count */
+static void __add_holder(struct block_lock *lock, struct task_struct *task)
+{
+ unsigned h = __find_holder(lock, NULL);
+#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
+ struct stack_trace *t;
+#endif
+
+ get_task_struct(task);
+ lock->holders[h] = task;
+
+#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
+ t = lock->traces + h;
+ t->nr_entries = 0;
+ t->max_entries = MAX_STACK;
+ t->entries = lock->entries[h];
+ t->skip = 2;
+ save_stack_trace(t);
+#endif
+}
+
+/* call this *before* you decrement lock->count */
+static void __del_holder(struct block_lock *lock, struct task_struct *task)
+{
+ unsigned h = __find_holder(lock, task);
+ lock->holders[h] = NULL;
+ put_task_struct(task);
+}
+
+static int __check_holder(struct block_lock *lock)
+{
+ unsigned i;
+#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
+ static struct stack_trace t;
+ static stack_entries entries;
+#endif
+
+ for (i = 0; i < MAX_HOLDERS; i++) {
+ if (lock->holders[i] == current) {
+ DMERR("recursive lock detected in pool metadata");
+#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
+ DMERR("previously held here:");
+ print_stack_trace(lock->traces + i, 4);
+
+ DMERR("subsequent aquisition attempted here:");
+ t.nr_entries = 0;
+ t.max_entries = MAX_STACK;
+ t.entries = entries;
+ t.skip = 3;
+ save_stack_trace(&t);
+ print_stack_trace(&t, 4);
+#endif
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static void __wait(struct waiter *w)
+{
+ for (;;) {
+ set_task_state(current, TASK_UNINTERRUPTIBLE);
+
+ if (!w->task)
+ break;
+
+ schedule();
+ }
+
+ set_task_state(current, TASK_RUNNING);
+}
+
+static void __wake_waiter(struct waiter *w)
+{
+ struct task_struct *task;
+
+ list_del(&w->list);
+ task = w->task;
+ smp_mb();
+ w->task = NULL;
+ wake_up_process(task);
+}
+
+/*
+ * We either wake a few readers or a single writer.
+ */
+static void __wake_many(struct block_lock *lock)
+{
+ struct waiter *w, *tmp;
+
+ BUG_ON(lock->count < 0);
+ list_for_each_entry_safe(w, tmp, &lock->waiters, list) {
+ if (lock->count >= MAX_HOLDERS)
+ return;
+
+ if (w->wants_write) {
+ if (lock->count > 0)
+ return; /* still read locked */
+
+ lock->count = -1;
+ __add_holder(lock, w->task);
+ __wake_waiter(w);
+ return;
+ }
+
+ lock->count++;
+ __add_holder(lock, w->task);
+ __wake_waiter(w);
+ }
+}
+
+static void bl_init(struct block_lock *lock)
+{
+ int i;
+
+ spin_lock_init(&lock->lock);
+ lock->count = 0;
+ INIT_LIST_HEAD(&lock->waiters);
+ for (i = 0; i < MAX_HOLDERS; i++)
+ lock->holders[i] = NULL;
+}
+
+static int __available_for_read(struct block_lock *lock)
+{
+ return lock->count >= 0 &&
+ lock->count < MAX_HOLDERS &&
+ list_empty(&lock->waiters);
+}
+
+static int bl_down_read(struct block_lock *lock)
+{
+ int r;
+ struct waiter w;
+
+ spin_lock(&lock->lock);
+ r = __check_holder(lock);
+ if (r) {
+ spin_unlock(&lock->lock);
+ return r;
+ }
+
+ if (__available_for_read(lock)) {
+ lock->count++;
+ __add_holder(lock, current);
+ spin_unlock(&lock->lock);
+ return 0;
+ }
+
+ get_task_struct(current);
+
+ w.task = current;
+ w.wants_write = 0;
+ list_add_tail(&w.list, &lock->waiters);
+ spin_unlock(&lock->lock);
+
+ __wait(&w);
+ put_task_struct(current);
+ return 0;
+}
+
+static int bl_down_read_nonblock(struct block_lock *lock)
+{
+ int r;
+
+ spin_lock(&lock->lock);
+ r = __check_holder(lock);
+ if (r)
+ goto out;
+
+ if (__available_for_read(lock)) {
+ lock->count++;
+ __add_holder(lock, current);
+ r = 0;
+ } else
+ r = -EWOULDBLOCK;
+
+out:
+ spin_unlock(&lock->lock);
+ return r;
+}
+
+static void bl_up_read(struct block_lock *lock)
+{
+ spin_lock(&lock->lock);
+ BUG_ON(lock->count <= 0);
+ __del_holder(lock, current);
+ --lock->count;
+ if (!list_empty(&lock->waiters))
+ __wake_many(lock);
+ spin_unlock(&lock->lock);
+}
+
+static int bl_down_write(struct block_lock *lock)
+{
+ int r;
+ struct waiter w;
+
+ spin_lock(&lock->lock);
+ r = __check_holder(lock);
+ if (r) {
+ spin_unlock(&lock->lock);
+ return r;
+ }
+
+ if (lock->count == 0 && list_empty(&lock->waiters)) {
+ lock->count = -1;
+ __add_holder(lock, current);
+ spin_unlock(&lock->lock);
+ return 0;
+ }
+
+ get_task_struct(current);
+ w.task = current;
+ w.wants_write = 1;
+
+ /*
+ * Writers are given priority. We know there's only one mutator in the
+ * system, so we can safely ignore the ordering reversal.
+ */
+ list_add(&w.list, &lock->waiters);
+ spin_unlock(&lock->lock);
+
+ __wait(&w);
+ put_task_struct(current);
+
+ return 0;
+}
+
+static void bl_up_write(struct block_lock *lock)
+{
+ spin_lock(&lock->lock);
+ __del_holder(lock, current);
+ lock->count = 0;
+ if (!list_empty(&lock->waiters))
+ __wake_many(lock);
+ spin_unlock(&lock->lock);
+}
+
+static void report_recursive_bug(dm_block_t b, int r)
+{
+ if (r == -EINVAL)
+ DMERR("recursive acquisition of block %llu requested.",
+ (unsigned long long) b);
+}
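To illustrate how the primitives above compose: every successful bl_down_*() must be paired with the matching bl_up_*(), and a recursive attempt fails fast with -EINVAL instead of deadlocking. A minimal sketch follows; the wrapper function is hypothetical, only the bl_* calls are from this file.

static int with_read_lock(struct block_lock *lock,
			  void (*fn)(void *), void *context)
{
	int r = bl_down_read(lock);	/* may sleep; -EINVAL on recursion */
	if (r)
		return r;

	fn(context);	/* up to MAX_HOLDERS readers may run concurrently */

	bl_up_read(lock);	/* wakes a waiting writer, if any */
	return 0;
}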
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Block manager is currently implemented using dm-bufio. struct
+ * dm_block_manager and struct dm_block map directly onto a couple of
+ * structs in the bufio interface. I want to retain the freedom to move
+ * away from bufio in the future. So these structs are just cast within
+ * this .c file, rather than leaking through to the public interface.
+ */
+static struct dm_buffer *to_buffer(struct dm_block *b)
+{
+ return (struct dm_buffer *) b;
+}
+
+static struct dm_bufio_client *to_bufio(struct dm_block_manager *bm)
+{
+ return (struct dm_bufio_client *) bm;
+}
+
+dm_block_t dm_block_location(struct dm_block *b)
+{
+ return dm_bufio_get_block_number(to_buffer(b));
+}
+EXPORT_SYMBOL_GPL(dm_block_location);
+
+void *dm_block_data(struct dm_block *b)
+{
+ return dm_bufio_get_block_data(to_buffer(b));
+}
+EXPORT_SYMBOL_GPL(dm_block_data);
+
+struct buffer_aux {
+ struct dm_block_validator *validator;
+ struct block_lock lock;
+ int write_locked;
+};
+
+static void dm_block_manager_alloc_callback(struct dm_buffer *buf)
+{
+ struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+ aux->validator = NULL;
+ bl_init(&aux->lock);
+}
+
+static void dm_block_manager_write_callback(struct dm_buffer *buf)
+{
+ struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+ if (aux->validator) {
+ aux->validator->prepare_for_write(aux->validator, (struct dm_block *) buf,
+ dm_bufio_get_block_size(dm_bufio_get_client(buf)));
+ }
+}
+
+/*----------------------------------------------------------------
+ * Public interface
+ *--------------------------------------------------------------*/
+struct dm_block_manager *dm_block_manager_create(struct block_device *bdev,
+ unsigned block_size,
+ unsigned cache_size,
+ unsigned max_held_per_thread)
+{
+ return (struct dm_block_manager *)
+ dm_bufio_client_create(bdev, block_size, max_held_per_thread,
+ sizeof(struct buffer_aux),
+ dm_block_manager_alloc_callback,
+ dm_block_manager_write_callback);
+}
+EXPORT_SYMBOL_GPL(dm_block_manager_create);
+
+void dm_block_manager_destroy(struct dm_block_manager *bm)
+{
+ return dm_bufio_client_destroy(to_bufio(bm));
+}
+EXPORT_SYMBOL_GPL(dm_block_manager_destroy);
+
+unsigned dm_bm_block_size(struct dm_block_manager *bm)
+{
+ return dm_bufio_get_block_size(to_bufio(bm));
+}
+EXPORT_SYMBOL_GPL(dm_bm_block_size);
+
+dm_block_t dm_bm_nr_blocks(struct dm_block_manager *bm)
+{
+ return dm_bufio_get_device_size(to_bufio(bm));
+}
+
+static int dm_bm_validate_buffer(struct dm_block_manager *bm,
+ struct dm_buffer *buf,
+ struct buffer_aux *aux,
+ struct dm_block_validator *v)
+{
+ if (unlikely(!aux->validator)) {
+ int r;
+ if (!v)
+ return 0;
+ r = v->check(v, (struct dm_block *) buf, dm_bufio_get_block_size(to_bufio(bm)));
+ if (unlikely(r))
+ return r;
+ aux->validator = v;
+ } else {
+ if (unlikely(aux->validator != v)) {
+ DMERR("validator mismatch (old=%s vs new=%s) for block %llu",
+ aux->validator->name, v ? v->name : "NULL",
+ (unsigned long long)
+ dm_bufio_get_block_number(buf));
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+int dm_bm_read_lock(struct dm_block_manager *bm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ struct buffer_aux *aux;
+ void *p;
+ int r;
+
+ p = dm_bufio_read(to_bufio(bm), b, (struct dm_buffer **) result);
+ if (unlikely(IS_ERR(p)))
+ return PTR_ERR(p);
+
+ aux = dm_bufio_get_aux_data(to_buffer(*result));
+ r = bl_down_read(&aux->lock);
+ if (unlikely(r)) {
+ dm_bufio_release(to_buffer(*result));
+ report_recursive_bug(b, r);
+ return r;
+ }
+
+ aux->write_locked = 0;
+
+ r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
+ if (unlikely(r)) {
+ bl_up_read(&aux->lock);
+ dm_bufio_release(to_buffer(*result));
+ return r;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dm_bm_read_lock);
+
+int dm_bm_write_lock(struct dm_block_manager *bm,
+ dm_block_t b, struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ struct buffer_aux *aux;
+ void *p;
+ int r;
+
+ p = dm_bufio_read(to_bufio(bm), b, (struct dm_buffer **) result);
+ if (unlikely(IS_ERR(p)))
+ return PTR_ERR(p);
+
+ aux = dm_bufio_get_aux_data(to_buffer(*result));
+ r = bl_down_write(&aux->lock);
+ if (r) {
+ dm_bufio_release(to_buffer(*result));
+ report_recursive_bug(b, r);
+ return r;
+ }
+
+ aux->write_locked = 1;
+
+ r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
+ if (unlikely(r)) {
+ bl_up_write(&aux->lock);
+ dm_bufio_release(to_buffer(*result));
+ return r;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dm_bm_write_lock);
+
+int dm_bm_read_try_lock(struct dm_block_manager *bm,
+ dm_block_t b, struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ struct buffer_aux *aux;
+ void *p;
+ int r;
+
+ p = dm_bufio_get(to_bufio(bm), b, (struct dm_buffer **) result);
+ if (unlikely(IS_ERR(p)))
+ return PTR_ERR(p);
+ if (unlikely(!p))
+ return -EWOULDBLOCK;
+
+ aux = dm_bufio_get_aux_data(to_buffer(*result));
+ r = bl_down_read_nonblock(&aux->lock);
+ if (r < 0) {
+ dm_bufio_release(to_buffer(*result));
+ report_recursive_bug(b, r);
+ return r;
+ }
+ aux->write_locked = 0;
+
+ r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
+ if (unlikely(r)) {
+ bl_up_read(&aux->lock);
+ dm_bufio_release(to_buffer(*result));
+ return r;
+ }
+
+ return 0;
+}
+
+int dm_bm_write_lock_zero(struct dm_block_manager *bm,
+ dm_block_t b, struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ int r;
+ struct buffer_aux *aux;
+ void *p;
+
+ p = dm_bufio_new(to_bufio(bm), b, (struct dm_buffer **) result);
+ if (unlikely(IS_ERR(p)))
+ return PTR_ERR(p);
+
+ memset(p, 0, dm_bm_block_size(bm));
+
+ aux = dm_bufio_get_aux_data(to_buffer(*result));
+ r = bl_down_write(&aux->lock);
+ if (r) {
+ dm_bufio_release(to_buffer(*result));
+ return r;
+ }
+
+ aux->write_locked = 1;
+ aux->validator = v;
+
+ return 0;
+}
+
+int dm_bm_unlock(struct dm_block *b)
+{
+ struct buffer_aux *aux;
+ aux = dm_bufio_get_aux_data(to_buffer(b));
+
+ if (aux->write_locked) {
+ dm_bufio_mark_buffer_dirty(to_buffer(b));
+ bl_up_write(&aux->lock);
+ } else
+ bl_up_read(&aux->lock);
+
+ dm_bufio_release(to_buffer(b));
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dm_bm_unlock);
+
+int dm_bm_unlock_move(struct dm_block *b, dm_block_t n)
+{
+ struct buffer_aux *aux;
+
+ aux = dm_bufio_get_aux_data(to_buffer(b));
+
+ if (aux->write_locked) {
+ dm_bufio_mark_buffer_dirty(to_buffer(b));
+ bl_up_write(&aux->lock);
+ } else
+ bl_up_read(&aux->lock);
+
+ dm_bufio_release_move(to_buffer(b), n);
+ return 0;
+}
+
+int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
+ struct dm_block *superblock)
+{
+ int r;
+
+ r = dm_bufio_write_dirty_buffers(to_bufio(bm));
+ if (unlikely(r))
+ return r;
+ r = dm_bufio_issue_flush(to_bufio(bm));
+ if (unlikely(r))
+ return r;
+
+ dm_bm_unlock(superblock);
+
+ r = dm_bufio_write_dirty_buffers(to_bufio(bm));
+ if (unlikely(r))
+ return r;
+ r = dm_bufio_issue_flush(to_bufio(bm));
+ if (unlikely(r))
+ return r;
+
+ return 0;
+}
+
+u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor)
+{
+ return crc32c(~(u32) 0, data, len) ^ init_xor;
+}
+EXPORT_SYMBOL_GPL(dm_bm_checksum);
+
+/*----------------------------------------------------------------*/
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
+MODULE_DESCRIPTION("Immutable metadata library for dm");
+
+/*----------------------------------------------------------------*/
diff --git a/drivers/md/persistent-data/dm-block-manager.h b/drivers/md/persistent-data/dm-block-manager.h
new file mode 100644
index 000000000000..924833d2dfa6
--- /dev/null
+++ b/drivers/md/persistent-data/dm-block-manager.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _LINUX_DM_BLOCK_MANAGER_H
+#define _LINUX_DM_BLOCK_MANAGER_H
+
+#include <linux/types.h>
+#include <linux/blkdev.h>
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Block number.
+ */
+typedef uint64_t dm_block_t;
+struct dm_block;
+
+dm_block_t dm_block_location(struct dm_block *b);
+void *dm_block_data(struct dm_block *b);
+
+/*----------------------------------------------------------------*/
+
+/*
+ * @cache_size is currently ignored: dm-bufio manages its own cache.
+ *
+ * @max_held_per_thread should be the maximum number of locks, read or
+ * write, that an individual thread holds at any one time.
+ */
+struct dm_block_manager;
+struct dm_block_manager *dm_block_manager_create(
+ struct block_device *bdev, unsigned block_size,
+ unsigned cache_size, unsigned max_held_per_thread);
+void dm_block_manager_destroy(struct dm_block_manager *bm);
+
+unsigned dm_bm_block_size(struct dm_block_manager *bm);
+dm_block_t dm_bm_nr_blocks(struct dm_block_manager *bm);
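A hedged create/destroy sketch. The error convention is assumed to follow dm_bufio_client_create(), which this wraps, so an IS_ERR() check is used; bdev comes from the caller.

	struct dm_block_manager *bm;

	bm = dm_block_manager_create(bdev, 4096 /* block_size */,
				     0 /* cache_size, currently ignored */,
				     1 /* max_held_per_thread */);
	if (IS_ERR(bm))
		return PTR_ERR(bm);

	/* ... dm_bm_read_lock()/dm_bm_write_lock() cycles ... */

	dm_block_manager_destroy(bm);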
+
+/*----------------------------------------------------------------*/
+
+/*
+ * The validator allows the caller to verify newly-read data and modify
+ * the data just before writing, e.g. to calculate checksums. It's
+ * important to be consistent with your use of validators. The only time
+ * you can change validators is if you call dm_bm_write_lock_zero.
+ */
+struct dm_block_validator {
+ const char *name;
+ void (*prepare_for_write)(struct dm_block_validator *v, struct dm_block *b, size_t block_size);
+
+ /*
+ * Return 0 if the checksum is valid or < 0 on error.
+ */
+ int (*check)(struct dm_block_validator *v, struct dm_block *b, size_t block_size);
+};
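A minimal validator sketch, assuming an on-disk layout whose first field is a __le32 checksum over the remainder of the block. The struct, the salt and all example_* names are invented for illustration; compare btree_node_validator in dm-btree-spine.c for a real instance.

#define EXAMPLE_CSUM_XOR 12345		/* hypothetical salt */

struct example_header {
	__le32 csum;			/* covers everything after this field */
	__le64 payload;
};

static void example_prepare_for_write(struct dm_block_validator *v,
				      struct dm_block *b, size_t block_size)
{
	struct example_header *h = dm_block_data(b);

	h->csum = cpu_to_le32(dm_bm_checksum(&h->payload,
					     block_size - sizeof(__le32),
					     EXAMPLE_CSUM_XOR));
}

static int example_check(struct dm_block_validator *v,
			 struct dm_block *b, size_t block_size)
{
	struct example_header *h = dm_block_data(b);
	__le32 csum = cpu_to_le32(dm_bm_checksum(&h->payload,
						 block_size - sizeof(__le32),
						 EXAMPLE_CSUM_XOR));

	return csum == h->csum ? 0 : -EILSEQ;
}

static struct dm_block_validator example_validator = {
	.name = "example",
	.prepare_for_write = example_prepare_for_write,
	.check = example_check,
};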
+
+/*----------------------------------------------------------------*/
+
+/*
+ * You can have multiple concurrent readers or a single writer holding a
+ * block lock.
+ */
+
+/*
+ * dm_bm_lock() locks a block and returns through @result a pointer to
+ * memory that holds a copy of that block. If you have write-locked the
+ * block then any changes you make to memory pointed to by @result will be
+ * written back to the disk sometime after dm_bm_unlock is called.
+ */
+int dm_bm_read_lock(struct dm_block_manager *bm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+int dm_bm_write_lock(struct dm_block_manager *bm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+/*
+ * The *_try_lock variants return -EWOULDBLOCK if the block isn't
+ * available immediately.
+ */
+int dm_bm_read_try_lock(struct dm_block_manager *bm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+/*
+ * Use dm_bm_write_lock_zero() when you know you're going to
+ * overwrite the block completely. It saves a disk read.
+ */
+int dm_bm_write_lock_zero(struct dm_block_manager *bm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+int dm_bm_unlock(struct dm_block *b);
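For illustration, the usual read-side pairing. Everything except the dm_bm_* calls and dm_block_data() is hypothetical; passing NULL skips validation, but a given block must then never be locked with a validator elsewhere.

static int read_one_block(struct dm_block_manager *bm, dm_block_t where)
{
	struct dm_block *b;
	int r;

	r = dm_bm_read_lock(bm, where, NULL, &b);
	if (r)
		return r;

	/* dm_block_data(b) stays valid until the unlock */
	process_block(dm_block_data(b), dm_bm_block_size(bm)); /* stand-in */

	return dm_bm_unlock(b);
}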
+
+/*
+ * An optimisation; we often want to copy a block's contents to a new
+ * block, e.g. as part of the shadowing operation. It's far better for
+ * bufio to do this move behind the scenes than to hold 2 locks and
+ * memcpy the data.
+ */
+int dm_bm_unlock_move(struct dm_block *b, dm_block_t n);
+
+/*
+ * It's a common idiom to have a superblock that should be committed last.
+ *
+ * @superblock should be write-locked on entry. It will be unlocked during
+ * this function. All dirty blocks are guaranteed to be written and flushed
+ * before the superblock.
+ *
+ * This method always blocks.
+ */
+int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
+ struct dm_block *superblock);
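A sketch of that idiom; sb_location and the validator are assumed to come from the caller, and the actual superblock update is elided.

static int commit(struct dm_block_manager *bm, dm_block_t sb_location,
		  struct dm_block_validator *v)
{
	struct dm_block *sb;
	int r;

	r = dm_bm_write_lock(bm, sb_location, v, &sb);
	if (r)
		return r;

	/* ... update the superblock to point at the new state ... */

	/*
	 * Everything else is written and flushed before the superblock,
	 * so the transition is atomic on disk.
	 */
	return dm_bm_flush_and_unlock(bm, sb);
}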
+
+u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor);
+
+/*----------------------------------------------------------------*/
+
+#endif /* _LINUX_DM_BLOCK_MANAGER_H */
diff --git a/drivers/md/persistent-data/dm-btree-internal.h b/drivers/md/persistent-data/dm-btree-internal.h
new file mode 100644
index 000000000000..d279c768f8f1
--- /dev/null
+++ b/drivers/md/persistent-data/dm-btree-internal.h
@@ -0,0 +1,137 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_BTREE_INTERNAL_H
+#define DM_BTREE_INTERNAL_H
+
+#include "dm-btree.h"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * We'll need 2 accessor functions for n->csum and n->blocknr
+ * to support dm-btree-spine.c.
+ */
+
+enum node_flags {
+ INTERNAL_NODE = 1,
+ LEAF_NODE = 1 << 1
+};
+
+/*
+ * Every btree node begins with this structure. Make sure it's a multiple
+ * of 8 bytes in size, otherwise the 64-bit keys will be misaligned.
+ */
+struct node_header {
+ __le32 csum;
+ __le32 flags;
+ __le64 blocknr; /* Block this node is supposed to live in. */
+
+ __le32 nr_entries;
+ __le32 max_entries;
+ __le32 value_size;
+ __le32 padding;
+} __packed;
+
+struct node {
+ struct node_header header;
+ __le64 keys[0];
+} __packed;
+
+void inc_children(struct dm_transaction_manager *tm, struct node *n,
+ struct dm_btree_value_type *vt);
+
+int new_block(struct dm_btree_info *info, struct dm_block **result);
+int unlock_block(struct dm_btree_info *info, struct dm_block *b);
+
+/*
+ * Spines keep track of the rolling locks. There are 2 variants, read-only
+ * and one that uses shadowing. These are separate structs to allow the
+ * type checker to spot misuse, for example accidentally calling read_lock
+ * on a shadow spine.
+ */
+struct ro_spine {
+ struct dm_btree_info *info;
+
+ int count;
+ struct dm_block *nodes[2];
+};
+
+void init_ro_spine(struct ro_spine *s, struct dm_btree_info *info);
+int exit_ro_spine(struct ro_spine *s);
+int ro_step(struct ro_spine *s, dm_block_t new_child);
+struct node *ro_node(struct ro_spine *s);
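To sketch the rolling-lock idea (btree_lookup_raw() in dm-btree.c is the real user): at most two nodes are locked at once while descending. The function below is hypothetical and drops error handling; the caller is assumed to have called init_ro_spine() already and to call exit_ro_spine() afterwards.

static struct node *walk_to_leaf(struct ro_spine *s, dm_block_t root,
				 uint64_t key)
{
	struct node *n;

	for (;;) {
		ro_step(s, root);	/* locks child, drops the grandparent */
		n = ro_node(s);
		if (le32_to_cpu(n->header.flags) & LEAF_NODE)
			break;
		/* internal values are the block numbers of children */
		root = value64(n, lower_bound(n, key));
	}

	return n;
}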
+
+struct shadow_spine {
+ struct dm_btree_info *info;
+
+ int count;
+ struct dm_block *nodes[2];
+
+ dm_block_t root;
+};
+
+void init_shadow_spine(struct shadow_spine *s, struct dm_btree_info *info);
+int exit_shadow_spine(struct shadow_spine *s);
+
+int shadow_step(struct shadow_spine *s, dm_block_t b,
+ struct dm_btree_value_type *vt);
+
+/*
+ * The spine must have at least one entry before calling this.
+ */
+struct dm_block *shadow_current(struct shadow_spine *s);
+
+/*
+ * The spine must have at least two entries before calling this.
+ */
+struct dm_block *shadow_parent(struct shadow_spine *s);
+
+int shadow_has_parent(struct shadow_spine *s);
+
+int shadow_root(struct shadow_spine *s);
+
+/*
+ * Some inlines.
+ */
+static inline __le64 *key_ptr(struct node *n, uint32_t index)
+{
+ return n->keys + index;
+}
+
+static inline void *value_base(struct node *n)
+{
+ return &n->keys[le32_to_cpu(n->header.max_entries)];
+}
+
+/*
+ * FIXME: Now that value size is stored in node we don't need the third parm.
+ */
+static inline void *value_ptr(struct node *n, uint32_t index, size_t value_size)
+{
+ BUG_ON(value_size != le32_to_cpu(n->header.value_size));
+ return value_base(n) + (value_size * index);
+}
+
+/*
+ * Assumes the values are suitably-aligned and converts to core format.
+ */
+static inline uint64_t value64(struct node *n, uint32_t index)
+{
+ __le64 *values_le = value_base(n);
+
+ return le64_to_cpu(values_le[index]);
+}
+
+/*
+ * Searching for a key within a single node.
+ */
+int lower_bound(struct node *n, uint64_t key);
+
+extern struct dm_block_validator btree_node_validator;
+
+#endif /* DM_BTREE_INTERNAL_H */
diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
new file mode 100644
index 000000000000..65fd85ec6514
--- /dev/null
+++ b/drivers/md/persistent-data/dm-btree-remove.c
@@ -0,0 +1,566 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-btree.h"
+#include "dm-btree-internal.h"
+#include "dm-transaction-manager.h"
+
+#include <linux/module.h>
+
+/*
+ * Removing an entry from a btree
+ * ==============================
+ *
+ * A very important constraint for our btree is that no node, except the
+ * root, may have fewer than a certain number of entries.
+ * (MIN_ENTRIES <= nr_entries <= MAX_ENTRIES).
+ *
+ * Ensuring this is complicated by the way we want to only ever hold the
+ * locks on 2 nodes concurrently, and only change nodes in a top to bottom
+ * fashion.
+ *
+ * Each node may have a left or right sibling. When descending the spine,
+ * if a node contains only MIN_ENTRIES then we try to increase this to at
+ * least MIN_ENTRIES + 1. We do this in the following ways:
+ *
+ * [A] No siblings => this can only happen if the node is the root, in which
+ * case we copy the child's contents over the root.
+ *
+ * [B] No left sibling
+ * ==> rebalance(node, right sibling)
+ *
+ * [C] No right sibling
+ * ==> rebalance(left sibling, node)
+ *
+ * [D] Both siblings, total_entries(left, node, right) <= DEL_THRESHOLD
+ * ==> delete node, adding its contents to left and right
+ *
+ * [E] Both siblings, total_entries(left, node, right) > DEL_THRESHOLD
+ * ==> rebalance(left, node, right)
+ *
+ * After these operations it's possible that our original node no
+ * longer contains the desired subtree. For this reason this rebalancing
+ * is performed on the children of the current node. This also avoids
+ * having a special case for the root.
+ *
+ * Once this rebalancing has occurred we can then step into the child
+ * node (for internal nodes), or delete the entry (for leaf nodes).
+ */
+
+/*
+ * Some little utilities for moving node data around.
+ */
+static void node_shift(struct node *n, int shift)
+{
+ uint32_t nr_entries = le32_to_cpu(n->header.nr_entries);
+ uint32_t value_size = le32_to_cpu(n->header.value_size);
+
+ if (shift < 0) {
+ shift = -shift;
+ BUG_ON(shift > nr_entries);
+ BUG_ON((void *) key_ptr(n, shift) >= value_ptr(n, shift, value_size));
+ memmove(key_ptr(n, 0),
+ key_ptr(n, shift),
+ (nr_entries - shift) * sizeof(__le64));
+ memmove(value_ptr(n, 0, value_size),
+ value_ptr(n, shift, value_size),
+ (nr_entries - shift) * value_size);
+ } else {
+ BUG_ON(nr_entries + shift > le32_to_cpu(n->header.max_entries));
+ memmove(key_ptr(n, shift),
+ key_ptr(n, 0),
+ nr_entries * sizeof(__le64));
+ memmove(value_ptr(n, shift, value_size),
+ value_ptr(n, 0, value_size),
+ nr_entries * value_size);
+ }
+}
+
+static void node_copy(struct node *left, struct node *right, int shift)
+{
+ uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
+ uint32_t value_size = le32_to_cpu(left->header.value_size);
+ BUG_ON(value_size != le32_to_cpu(right->header.value_size));
+
+ if (shift < 0) {
+ shift = -shift;
+ BUG_ON(nr_left + shift > le32_to_cpu(left->header.max_entries));
+ memcpy(key_ptr(left, nr_left),
+ key_ptr(right, 0),
+ shift * sizeof(__le64));
+ memcpy(value_ptr(left, nr_left, value_size),
+ value_ptr(right, 0, value_size),
+ shift * value_size);
+ } else {
+ BUG_ON(shift > le32_to_cpu(right->header.max_entries));
+ memcpy(key_ptr(right, 0),
+ key_ptr(left, nr_left - shift),
+ shift * sizeof(__le64));
+ memcpy(value_ptr(right, 0, value_size),
+ value_ptr(left, nr_left - shift, value_size),
+ shift * value_size);
+ }
+}
+
+/*
+ * Delete a specific entry from a leaf node.
+ */
+static void delete_at(struct node *n, unsigned index)
+{
+ unsigned nr_entries = le32_to_cpu(n->header.nr_entries);
+ unsigned nr_to_copy = nr_entries - (index + 1);
+ uint32_t value_size = le32_to_cpu(n->header.value_size);
+ BUG_ON(index >= nr_entries);
+
+ if (nr_to_copy) {
+ memmove(key_ptr(n, index),
+ key_ptr(n, index + 1),
+ nr_to_copy * sizeof(__le64));
+
+ memmove(value_ptr(n, index, value_size),
+ value_ptr(n, index + 1, value_size),
+ nr_to_copy * value_size);
+ }
+
+ n->header.nr_entries = cpu_to_le32(nr_entries - 1);
+}
+
+static unsigned del_threshold(struct node *n)
+{
+ return le32_to_cpu(n->header.max_entries) / 3;
+}
+
+static unsigned merge_threshold(struct node *n)
+{
+ /*
+ * The extra one is because we know we're potentially going to
+ * delete an entry.
+ */
+ return 2 * (le32_to_cpu(n->header.max_entries) / 3) + 1;
+}
+
+struct child {
+ unsigned index;
+ struct dm_block *block;
+ struct node *n;
+};
+
+static struct dm_btree_value_type le64_type = {
+ .context = NULL,
+ .size = sizeof(__le64),
+ .inc = NULL,
+ .dec = NULL,
+ .equal = NULL
+};
+
+static int init_child(struct dm_btree_info *info, struct node *parent,
+ unsigned index, struct child *result)
+{
+ int r, inc;
+ dm_block_t root;
+
+ result->index = index;
+ root = value64(parent, index);
+
+ r = dm_tm_shadow_block(info->tm, root, &btree_node_validator,
+ &result->block, &inc);
+ if (r)
+ return r;
+
+ result->n = dm_block_data(result->block);
+
+ if (inc)
+ inc_children(info->tm, result->n, &le64_type);
+
+ *((__le64 *) value_ptr(parent, index, sizeof(__le64))) =
+ cpu_to_le64(dm_block_location(result->block));
+
+ return 0;
+}
+
+static int exit_child(struct dm_btree_info *info, struct child *c)
+{
+ return dm_tm_unlock(info->tm, c->block);
+}
+
+static void shift(struct node *left, struct node *right, int count)
+{
+ if (!count)
+ return;
+
+ if (count > 0) {
+ node_shift(right, count);
+ node_copy(left, right, count);
+ } else {
+ node_copy(left, right, count);
+ node_shift(right, count);
+ }
+
+ left->header.nr_entries =
+ cpu_to_le32(le32_to_cpu(left->header.nr_entries) - count);
+ BUG_ON(le32_to_cpu(left->header.nr_entries) > le32_to_cpu(left->header.max_entries));
+
+ right->header.nr_entries =
+ cpu_to_le32(le32_to_cpu(right->header.nr_entries) + count);
+ BUG_ON(le32_to_cpu(right->header.nr_entries) > le32_to_cpu(right->header.max_entries));
+}
+
+static void __rebalance2(struct dm_btree_info *info, struct node *parent,
+ struct child *l, struct child *r)
+{
+ struct node *left = l->n;
+ struct node *right = r->n;
+ uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
+ uint32_t nr_right = le32_to_cpu(right->header.nr_entries);
+
+ if (nr_left + nr_right <= merge_threshold(left)) {
+ /*
+ * Merge
+ */
+ node_copy(left, right, -nr_right);
+ left->header.nr_entries = cpu_to_le32(nr_left + nr_right);
+ delete_at(parent, r->index);
+
+ /*
+ * We need to decrement the right block, but not its
+ * children, since they're still referenced by left.
+ */
+ dm_tm_dec(info->tm, dm_block_location(r->block));
+ } else {
+ /*
+ * Rebalance.
+ */
+ unsigned target_left = (nr_left + nr_right) / 2;
+ unsigned shift_ = nr_left - target_left;
+ BUG_ON(le32_to_cpu(left->header.max_entries) <= nr_left - shift_);
+ BUG_ON(le32_to_cpu(right->header.max_entries) <= nr_right + shift_);
+ shift(left, right, nr_left - target_left);
+ *key_ptr(parent, r->index) = right->keys[0];
+ }
+}
+
+static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info,
+ unsigned left_index)
+{
+ int r;
+ struct node *parent;
+ struct child left, right;
+
+ parent = dm_block_data(shadow_current(s));
+
+ r = init_child(info, parent, left_index, &left);
+ if (r)
+ return r;
+
+ r = init_child(info, parent, left_index + 1, &right);
+ if (r) {
+ exit_child(info, &left);
+ return r;
+ }
+
+ __rebalance2(info, parent, &left, &right);
+
+ r = exit_child(info, &left);
+ if (r) {
+ exit_child(info, &right);
+ return r;
+ }
+
+ return exit_child(info, &right);
+}
+
+static void __rebalance3(struct dm_btree_info *info, struct node *parent,
+ struct child *l, struct child *c, struct child *r)
+{
+ struct node *left = l->n;
+ struct node *center = c->n;
+ struct node *right = r->n;
+
+ uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
+ uint32_t nr_center = le32_to_cpu(center->header.nr_entries);
+ uint32_t nr_right = le32_to_cpu(right->header.nr_entries);
+ uint32_t max_entries = le32_to_cpu(left->header.max_entries);
+
+ unsigned target;
+
+ BUG_ON(left->header.max_entries != center->header.max_entries);
+ BUG_ON(center->header.max_entries != right->header.max_entries);
+
+ if (((nr_left + nr_center + nr_right) / 2) < merge_threshold(center)) {
+ /*
+ * Delete center node:
+ *
+ * We dump as many entries from center as possible into
+ * left, then the rest into right, then rebalance2. This
+ * wastes some CPU, but I want something simple for now.
+ */
+ unsigned shift = min(max_entries - nr_left, nr_center);
+
+ BUG_ON(nr_left + shift > max_entries);
+ node_copy(left, center, -shift);
+ left->header.nr_entries = cpu_to_le32(nr_left + shift);
+
+ if (shift != nr_center) {
+ shift = nr_center - shift;
+ BUG_ON((nr_right + shift) >= max_entries);
+ node_shift(right, shift);
+ node_copy(center, right, shift);
+ right->header.nr_entries = cpu_to_le32(nr_right + shift);
+ }
+ *key_ptr(parent, r->index) = right->keys[0];
+
+ delete_at(parent, c->index);
+ r->index--;
+
+ dm_tm_dec(info->tm, dm_block_location(c->block));
+ __rebalance2(info, parent, l, r);
+
+ return;
+ }
+
+ /*
+ * Rebalance
+ */
+ target = (nr_left + nr_center + nr_right) / 3;
+ BUG_ON(target > max_entries);
+
+ /*
+ * Adjust the left node
+ */
+ shift(left, center, nr_left - target);
+
+ /*
+ * Adjust the right node
+ */
+ shift(center, right, target - nr_right);
+ *key_ptr(parent, c->index) = center->keys[0];
+ *key_ptr(parent, r->index) = right->keys[0];
+}
+
+static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info,
+ unsigned left_index)
+{
+ int r;
+ struct node *parent = dm_block_data(shadow_current(s));
+ struct child left, center, right;
+
+ /*
+ * FIXME: fill out an array?
+ */
+ r = init_child(info, parent, left_index, &left);
+ if (r)
+ return r;
+
+ r = init_child(info, parent, left_index + 1, &center);
+ if (r) {
+ exit_child(info, &left);
+ return r;
+ }
+
+ r = init_child(info, parent, left_index + 2, &right);
+ if (r) {
+ exit_child(info, &left);
+ exit_child(info, &center);
+ return r;
+ }
+
+ __rebalance3(info, parent, &left, &center, &right);
+
+ r = exit_child(info, &left);
+ if (r) {
+ exit_child(info, &center);
+ exit_child(info, &right);
+ return r;
+ }
+
+ r = exit_child(info, &center);
+ if (r) {
+ exit_child(info, &right);
+ return r;
+ }
+
+ r = exit_child(info, &right);
+ if (r)
+ return r;
+
+ return 0;
+}
+
+static int get_nr_entries(struct dm_transaction_manager *tm,
+ dm_block_t b, uint32_t *result)
+{
+ int r;
+ struct dm_block *block;
+ struct node *n;
+
+ r = dm_tm_read_lock(tm, b, &btree_node_validator, &block);
+ if (r)
+ return r;
+
+ n = dm_block_data(block);
+ *result = le32_to_cpu(n->header.nr_entries);
+
+ return dm_tm_unlock(tm, block);
+}
+
+static int rebalance_children(struct shadow_spine *s,
+ struct dm_btree_info *info, uint64_t key)
+{
+ int i, r, has_left_sibling, has_right_sibling;
+ uint32_t child_entries;
+ struct node *n;
+
+ n = dm_block_data(shadow_current(s));
+
+ if (le32_to_cpu(n->header.nr_entries) == 1) {
+ struct dm_block *child;
+ dm_block_t b = value64(n, 0);
+
+ r = dm_tm_read_lock(info->tm, b, &btree_node_validator, &child);
+ if (r)
+ return r;
+
+ memcpy(n, dm_block_data(child),
+ dm_bm_block_size(dm_tm_get_bm(info->tm)));
+ r = dm_tm_unlock(info->tm, child);
+ if (r)
+ return r;
+
+ dm_tm_dec(info->tm, dm_block_location(child));
+ return 0;
+ }
+
+ i = lower_bound(n, key);
+ if (i < 0)
+ return -ENODATA;
+
+ r = get_nr_entries(info->tm, value64(n, i), &child_entries);
+ if (r)
+ return r;
+
+ if (child_entries > del_threshold(n))
+ return 0;
+
+ has_left_sibling = i > 0;
+ has_right_sibling = i < (le32_to_cpu(n->header.nr_entries) - 1);
+
+ if (!has_left_sibling)
+ r = rebalance2(s, info, i);
+
+ else if (!has_right_sibling)
+ r = rebalance2(s, info, i - 1);
+
+ else
+ r = rebalance3(s, info, i - 1);
+
+ return r;
+}
+
+static int do_leaf(struct node *n, uint64_t key, unsigned *index)
+{
+ int i = lower_bound(n, key);
+
+ if ((i < 0) ||
+ (i >= le32_to_cpu(n->header.nr_entries)) ||
+ (le64_to_cpu(n->keys[i]) != key))
+ return -ENODATA;
+
+ *index = i;
+
+ return 0;
+}
+
+/*
+ * Prepares for removal from one level of the hierarchy. The caller must
+ * call delete_at() to remove the entry at index.
+ */
+static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
+ struct dm_btree_value_type *vt, dm_block_t root,
+ uint64_t key, unsigned *index)
+{
+ int i = *index, r;
+ struct node *n;
+
+ for (;;) {
+ r = shadow_step(s, root, vt);
+ if (r < 0)
+ break;
+
+ /*
+ * We have to patch up the parent node; it's ugly, but I don't
+ * see a way to do this automatically as part of the spine
+ * op.
+ */
+ if (shadow_has_parent(s)) {
+ __le64 location = cpu_to_le64(dm_block_location(shadow_current(s)));
+ memcpy(value_ptr(dm_block_data(shadow_parent(s)), i, sizeof(__le64)),
+ &location, sizeof(__le64));
+ }
+
+ n = dm_block_data(shadow_current(s));
+
+ if (le32_to_cpu(n->header.flags) & LEAF_NODE)
+ return do_leaf(n, key, index);
+
+ r = rebalance_children(s, info, key);
+ if (r)
+ break;
+
+ n = dm_block_data(shadow_current(s));
+ if (le32_to_cpu(n->header.flags) & LEAF_NODE)
+ return do_leaf(n, key, index);
+
+ i = lower_bound(n, key);
+
+ /*
+ * We know the key is present, or else
+ * rebalance_children would have returned
+ * -ENODATA
+ */
+ root = value64(n, i);
+ }
+
+ return r;
+}
+
+int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, dm_block_t *new_root)
+{
+ unsigned level, last_level = info->levels - 1;
+ int index = 0, r = 0;
+ struct shadow_spine spine;
+ struct node *n;
+
+ init_shadow_spine(&spine, info);
+ for (level = 0; level < info->levels; level++) {
+ r = remove_raw(&spine, info,
+ (level == last_level ?
+ &info->value_type : &le64_type),
+ root, keys[level], (unsigned *)&index);
+ if (r < 0)
+ break;
+
+ n = dm_block_data(shadow_current(&spine));
+ if (level != last_level) {
+ root = value64(n, index);
+ continue;
+ }
+
+ BUG_ON(index < 0 || index >= le32_to_cpu(n->header.nr_entries));
+
+ if (info->value_type.dec)
+ info->value_type.dec(info->value_type.context,
+ value_ptr(n, index, info->value_type.size));
+
+ delete_at(n, index);
+ }
+
+ *new_root = shadow_root(&spine);
+ exit_shadow_spine(&spine);
+
+ return r;
+}
+EXPORT_SYMBOL_GPL(dm_btree_remove);
diff --git a/drivers/md/persistent-data/dm-btree-spine.c b/drivers/md/persistent-data/dm-btree-spine.c
new file mode 100644
index 000000000000..d9a7912ee8ee
--- /dev/null
+++ b/drivers/md/persistent-data/dm-btree-spine.c
@@ -0,0 +1,244 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-btree-internal.h"
+#include "dm-transaction-manager.h"
+
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "btree spine"
+
+/*----------------------------------------------------------------*/
+
+#define BTREE_CSUM_XOR 121107
+
+static int node_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size);
+
+static void node_prepare_for_write(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct node *n = dm_block_data(b);
+ struct node_header *h = &n->header;
+
+ h->blocknr = cpu_to_le64(dm_block_location(b));
+ h->csum = cpu_to_le32(dm_bm_checksum(&h->flags,
+ block_size - sizeof(__le32),
+ BTREE_CSUM_XOR));
+
+ BUG_ON(node_check(v, b, block_size));
+}
+
+static int node_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct node *n = dm_block_data(b);
+ struct node_header *h = &n->header;
+ size_t value_size;
+ __le32 csum_disk;
+ uint32_t flags;
+
+ if (dm_block_location(b) != le64_to_cpu(h->blocknr)) {
+ DMERR("node_check failed blocknr %llu wanted %llu",
+ le64_to_cpu(h->blocknr), dm_block_location(b));
+ return -ENOTBLK;
+ }
+
+ csum_disk = cpu_to_le32(dm_bm_checksum(&h->flags,
+ block_size - sizeof(__le32),
+ BTREE_CSUM_XOR));
+ if (csum_disk != h->csum) {
+ DMERR("node_check failed csum %u wanted %u",
+ le32_to_cpu(csum_disk), le32_to_cpu(h->csum));
+ return -EILSEQ;
+ }
+
+ value_size = le32_to_cpu(h->value_size);
+
+ if (sizeof(struct node_header) +
+ (sizeof(__le64) + value_size) * le32_to_cpu(h->max_entries) > block_size) {
+ DMERR("node_check failed: max_entries too large");
+ return -EILSEQ;
+ }
+
+ if (le32_to_cpu(h->nr_entries) > le32_to_cpu(h->max_entries)) {
+ DMERR("node_check failed, too many entries");
+ return -EILSEQ;
+ }
+
+ /*
+ * The node must be either INTERNAL or LEAF.
+ */
+ flags = le32_to_cpu(h->flags);
+ if (!(flags & INTERNAL_NODE) && !(flags & LEAF_NODE)) {
+ DMERR("node_check failed, node is neither INTERNAL or LEAF");
+ return -EILSEQ;
+ }
+
+ return 0;
+}
+
+struct dm_block_validator btree_node_validator = {
+ .name = "btree_node",
+ .prepare_for_write = node_prepare_for_write,
+ .check = node_check
+};
+
+/*----------------------------------------------------------------*/
+
+static int bn_read_lock(struct dm_btree_info *info, dm_block_t b,
+ struct dm_block **result)
+{
+ return dm_tm_read_lock(info->tm, b, &btree_node_validator, result);
+}
+
+static int bn_shadow(struct dm_btree_info *info, dm_block_t orig,
+ struct dm_btree_value_type *vt,
+ struct dm_block **result)
+{
+ int r, inc;
+
+ r = dm_tm_shadow_block(info->tm, orig, &btree_node_validator,
+ result, &inc);
+ if (!r && inc)
+ inc_children(info->tm, dm_block_data(*result), vt);
+
+ return r;
+}
+
+int new_block(struct dm_btree_info *info, struct dm_block **result)
+{
+ return dm_tm_new_block(info->tm, &btree_node_validator, result);
+}
+
+int unlock_block(struct dm_btree_info *info, struct dm_block *b)
+{
+ return dm_tm_unlock(info->tm, b);
+}
+
+/*----------------------------------------------------------------*/
+
+void init_ro_spine(struct ro_spine *s, struct dm_btree_info *info)
+{
+ s->info = info;
+ s->count = 0;
+ s->nodes[0] = NULL;
+ s->nodes[1] = NULL;
+}
+
+int exit_ro_spine(struct ro_spine *s)
+{
+ int r = 0, i;
+
+ for (i = 0; i < s->count; i++) {
+ int r2 = unlock_block(s->info, s->nodes[i]);
+ if (r2 < 0)
+ r = r2;
+ }
+
+ return r;
+}
+
+int ro_step(struct ro_spine *s, dm_block_t new_child)
+{
+ int r;
+
+ if (s->count == 2) {
+ r = unlock_block(s->info, s->nodes[0]);
+ if (r < 0)
+ return r;
+ s->nodes[0] = s->nodes[1];
+ s->count--;
+ }
+
+ r = bn_read_lock(s->info, new_child, s->nodes + s->count);
+ if (!r)
+ s->count++;
+
+ return r;
+}
+
+struct node *ro_node(struct ro_spine *s)
+{
+ struct dm_block *block;
+
+ BUG_ON(!s->count);
+ block = s->nodes[s->count - 1];
+
+ return dm_block_data(block);
+}
+
+/*----------------------------------------------------------------*/
+
+void init_shadow_spine(struct shadow_spine *s, struct dm_btree_info *info)
+{
+ s->info = info;
+ s->count = 0;
+}
+
+int exit_shadow_spine(struct shadow_spine *s)
+{
+ int r = 0, i;
+
+ for (i = 0; i < s->count; i++) {
+ int r2 = unlock_block(s->info, s->nodes[i]);
+ if (r2 < 0)
+ r = r2;
+ }
+
+ return r;
+}
+
+int shadow_step(struct shadow_spine *s, dm_block_t b,
+ struct dm_btree_value_type *vt)
+{
+ int r;
+
+ if (s->count == 2) {
+ r = unlock_block(s->info, s->nodes[0]);
+ if (r < 0)
+ return r;
+ s->nodes[0] = s->nodes[1];
+ s->count--;
+ }
+
+ r = bn_shadow(s->info, b, vt, s->nodes + s->count);
+ if (!r) {
+ if (!s->count)
+ s->root = dm_block_location(s->nodes[0]);
+
+ s->count++;
+ }
+
+ return r;
+}
+
+struct dm_block *shadow_current(struct shadow_spine *s)
+{
+ BUG_ON(!s->count);
+
+ return s->nodes[s->count - 1];
+}
+
+struct dm_block *shadow_parent(struct shadow_spine *s)
+{
+ BUG_ON(s->count != 2);
+
+ return s->count == 2 ? s->nodes[0] : NULL;
+}
+
+int shadow_has_parent(struct shadow_spine *s)
+{
+ return s->count >= 2;
+}
+
+int shadow_root(struct shadow_spine *s)
+{
+ return s->root;
+}
diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
new file mode 100644
index 000000000000..e0638be53ea4
--- /dev/null
+++ b/drivers/md/persistent-data/dm-btree.c
@@ -0,0 +1,805 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-btree-internal.h"
+#include "dm-space-map.h"
+#include "dm-transaction-manager.h"
+
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "btree"
+
+/*----------------------------------------------------------------
+ * Array manipulation
+ *--------------------------------------------------------------*/
+static void memcpy_disk(void *dest, const void *src, size_t len)
+ __dm_written_to_disk(src)
+{
+ memcpy(dest, src, len);
+ __dm_unbless_for_disk(src);
+}
+
+static void array_insert(void *base, size_t elt_size, unsigned nr_elts,
+ unsigned index, void *elt)
+ __dm_written_to_disk(elt)
+{
+ if (index < nr_elts)
+ memmove(base + (elt_size * (index + 1)),
+ base + (elt_size * index),
+ (nr_elts - index) * elt_size);
+
+ memcpy_disk(base + (elt_size * index), elt, elt_size);
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Makes the assumption that no two keys are the same. Returns the index
+ * of the largest key <= the target (want_hi == 0) or of the smallest key
+ * >= the target (want_hi == 1); an out of range result (-1 or
+ * nr_entries) means no such key exists.
+ */
+static int bsearch(struct node *n, uint64_t key, int want_hi)
+{
+ int lo = -1, hi = le32_to_cpu(n->header.nr_entries);
+
+ while (hi - lo > 1) {
+ int mid = lo + ((hi - lo) / 2);
+ uint64_t mid_key = le64_to_cpu(n->keys[mid]);
+
+ if (mid_key == key)
+ return mid;
+
+ if (mid_key < key)
+ lo = mid;
+ else
+ hi = mid;
+ }
+
+ return want_hi ? hi : lo;
+}
+
+int lower_bound(struct node *n, uint64_t key)
+{
+ return bsearch(n, key, 0);
+}
+
+void inc_children(struct dm_transaction_manager *tm, struct node *n,
+ struct dm_btree_value_type *vt)
+{
+ unsigned i;
+ uint32_t nr_entries = le32_to_cpu(n->header.nr_entries);
+
+ if (le32_to_cpu(n->header.flags) & INTERNAL_NODE)
+ for (i = 0; i < nr_entries; i++)
+ dm_tm_inc(tm, value64(n, i));
+ else if (vt->inc)
+ for (i = 0; i < nr_entries; i++)
+ vt->inc(vt->context,
+ value_ptr(n, i, vt->size));
+}
+
+static int insert_at(size_t value_size, struct node *node, unsigned index,
+ uint64_t key, void *value)
+ __dm_written_to_disk(value)
+{
+ uint32_t nr_entries = le32_to_cpu(node->header.nr_entries);
+ __le64 key_le = cpu_to_le64(key);
+
+ if (index > nr_entries ||
+ index >= le32_to_cpu(node->header.max_entries)) {
+ DMERR("too many entries in btree node for insert");
+ __dm_unbless_for_disk(value);
+ return -ENOMEM;
+ }
+
+ __dm_bless_for_disk(&key_le);
+
+ array_insert(node->keys, sizeof(*node->keys), nr_entries, index, &key_le);
+ array_insert(value_base(node), value_size, nr_entries, index, value);
+ node->header.nr_entries = cpu_to_le32(nr_entries + 1);
+
+ return 0;
+}
+
+/*----------------------------------------------------------------*/
+
+/*
+ * We want 3n entries (for some n). This works more nicely for repeated
+ * insert/remove loops than (2n + 1). e.g. with 4096 byte blocks and
+ * 8 byte values, (4096 - 32) / 16 = 254 entries fit, so we use 252.
+ */
+static uint32_t calc_max_entries(size_t value_size, size_t block_size)
+{
+ uint32_t total, n;
+ size_t elt_size = sizeof(uint64_t) + value_size; /* key + value */
+
+ block_size -= sizeof(struct node_header);
+ total = block_size / elt_size;
+ n = total / 3; /* rounds down */
+
+ return 3 * n;
+}
+
+int dm_btree_empty(struct dm_btree_info *info, dm_block_t *root)
+{
+ int r;
+ struct dm_block *b;
+ struct node *n;
+ size_t block_size;
+ uint32_t max_entries;
+
+ r = new_block(info, &b);
+ if (r < 0)
+ return r;
+
+ block_size = dm_bm_block_size(dm_tm_get_bm(info->tm));
+ max_entries = calc_max_entries(info->value_type.size, block_size);
+
+ n = dm_block_data(b);
+ memset(n, 0, block_size);
+ n->header.flags = cpu_to_le32(LEAF_NODE);
+ n->header.nr_entries = cpu_to_le32(0);
+ n->header.max_entries = cpu_to_le32(max_entries);
+ n->header.value_size = cpu_to_le32(info->value_type.size);
+
+ *root = dm_block_location(b);
+ return unlock_block(info, b);
+}
+EXPORT_SYMBOL_GPL(dm_btree_empty);
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Deletion is naturally recursive, but since kernel stack space is
+ * limited we explicitly manage our own stack on the heap.
+ */
+#define MAX_SPINE_DEPTH 64
+struct frame {
+ struct dm_block *b;
+ struct node *n;
+ unsigned level;
+ unsigned nr_children;
+ unsigned current_child;
+};
+
+struct del_stack {
+ struct dm_transaction_manager *tm;
+ int top;
+ struct frame spine[MAX_SPINE_DEPTH];
+};
+
+static int top_frame(struct del_stack *s, struct frame **f)
+{
+ if (s->top < 0) {
+ DMERR("btree deletion stack empty");
+ return -EINVAL;
+ }
+
+ *f = s->spine + s->top;
+
+ return 0;
+}
+
+static int unprocessed_frames(struct del_stack *s)
+{
+ return s->top >= 0;
+}
+
+static int push_frame(struct del_stack *s, dm_block_t b, unsigned level)
+{
+ int r;
+ uint32_t ref_count;
+
+ if (s->top >= MAX_SPINE_DEPTH - 1) {
+ DMERR("btree deletion stack out of memory");
+ return -ENOMEM;
+ }
+
+ r = dm_tm_ref(s->tm, b, &ref_count);
+ if (r)
+ return r;
+
+ if (ref_count > 1)
+ /*
+ * This is a shared node, so we can just decrement its
+ * reference counter and leave the children.
+ */
+ dm_tm_dec(s->tm, b);
+
+ else {
+ struct frame *f = s->spine + ++s->top;
+
+ r = dm_tm_read_lock(s->tm, b, &btree_node_validator, &f->b);
+ if (r) {
+ s->top--;
+ return r;
+ }
+
+ f->n = dm_block_data(f->b);
+ f->level = level;
+ f->nr_children = le32_to_cpu(f->n->header.nr_entries);
+ f->current_child = 0;
+ }
+
+ return 0;
+}
+
+static void pop_frame(struct del_stack *s)
+{
+ struct frame *f = s->spine + s->top--;
+
+ dm_tm_dec(s->tm, dm_block_location(f->b));
+ dm_tm_unlock(s->tm, f->b);
+}
+
+int dm_btree_del(struct dm_btree_info *info, dm_block_t root)
+{
+ int r;
+ struct del_stack *s;
+
+ s = kmalloc(sizeof(*s), GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ s->tm = info->tm;
+ s->top = -1;
+
+ r = push_frame(s, root, 1);
+ if (r)
+ goto out;
+
+ while (unprocessed_frames(s)) {
+ uint32_t flags;
+ struct frame *f;
+ dm_block_t b;
+
+ r = top_frame(s, &f);
+ if (r)
+ goto out;
+
+ if (f->current_child >= f->nr_children) {
+ pop_frame(s);
+ continue;
+ }
+
+ flags = le32_to_cpu(f->n->header.flags);
+ if (flags & INTERNAL_NODE) {
+ b = value64(f->n, f->current_child);
+ f->current_child++;
+ r = push_frame(s, b, f->level);
+ if (r)
+ goto out;
+
+ } else if (f->level != (info->levels - 1)) {
+ b = value64(f->n, f->current_child);
+ f->current_child++;
+ r = push_frame(s, b, f->level + 1);
+ if (r)
+ goto out;
+
+ } else {
+ if (info->value_type.dec) {
+ unsigned i;
+
+ for (i = 0; i < f->nr_children; i++)
+ info->value_type.dec(info->value_type.context,
+ value_ptr(f->n, i, info->value_type.size));
+ }
+ f->current_child = f->nr_children;
+ }
+ }
+
+out:
+ kfree(s);
+ return r;
+}
+EXPORT_SYMBOL_GPL(dm_btree_del);
+
+/*----------------------------------------------------------------*/
+
+static int btree_lookup_raw(struct ro_spine *s, dm_block_t block, uint64_t key,
+ int (*search_fn)(struct node *, uint64_t),
+ uint64_t *result_key, void *v, size_t value_size)
+{
+ int i, r;
+ uint32_t flags, nr_entries;
+
+ do {
+ r = ro_step(s, block);
+ if (r < 0)
+ return r;
+
+ i = search_fn(ro_node(s), key);
+
+ flags = le32_to_cpu(ro_node(s)->header.flags);
+ nr_entries = le32_to_cpu(ro_node(s)->header.nr_entries);
+ if (i < 0 || i >= nr_entries)
+ return -ENODATA;
+
+ if (flags & INTERNAL_NODE)
+ block = value64(ro_node(s), i);
+
+ } while (!(flags & LEAF_NODE));
+
+ *result_key = le64_to_cpu(ro_node(s)->keys[i]);
+ memcpy(v, value_ptr(ro_node(s), i, value_size), value_size);
+
+ return 0;
+}
+
+int dm_btree_lookup(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value_le)
+{
+ unsigned level, last_level = info->levels - 1;
+ int r = -ENODATA;
+ uint64_t rkey;
+ __le64 internal_value_le;
+ struct ro_spine spine;
+
+ init_ro_spine(&spine, info);
+ for (level = 0; level < info->levels; level++) {
+ size_t size;
+ void *value_p;
+
+ if (level == last_level) {
+ value_p = value_le;
+ size = info->value_type.size;
+
+ } else {
+ value_p = &internal_value_le;
+ size = sizeof(uint64_t);
+ }
+
+ r = btree_lookup_raw(&spine, root, keys[level],
+ lower_bound, &rkey,
+ value_p, size);
+
+ if (!r) {
+ if (rkey != keys[level]) {
+ exit_ro_spine(&spine);
+ return -ENODATA;
+ }
+ } else {
+ exit_ro_spine(&spine);
+ return r;
+ }
+
+ root = le64_to_cpu(internal_value_le);
+ }
+ exit_ro_spine(&spine);
+
+ return r;
+}
+EXPORT_SYMBOL_GPL(dm_btree_lookup);
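Tying the exported entry points together, a hedged smoke-test sketch (not part of the patch): it assumes a transaction manager already exists and uses a one-level tree of __le64 values.

static int btree_smoke_test(struct dm_transaction_manager *tm)
{
	struct dm_btree_info info = {
		.tm = tm,
		.levels = 1,
		.value_type = {
			.context = NULL,
			.size = sizeof(__le64),
			.inc = NULL,
			.dec = NULL,
			.equal = NULL,
		},
	};
	dm_block_t root, new_root;
	uint64_t key = 42;
	__le64 value_le;
	int r;

	r = dm_btree_empty(&info, &root);
	if (r)
		return r;

	/* a freshly created tree contains nothing */
	r = dm_btree_lookup(&info, root, &key, &value_le);
	if (r != -ENODATA)
		return -EINVAL;

	/* removing a missing key reports -ENODATA too */
	r = dm_btree_remove(&info, root, &key, &new_root);
	return r == -ENODATA ? 0 : -EINVAL;
}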
+
+/*
+ * Splits a node by creating a sibling node and shifting half the node's
+ * contents across. Assumes there is a parent node, and that it has room
+ * for another child.
+ *
+ * Before:
+ * +--------+
+ * | Parent |
+ * +--------+
+ * |
+ * v
+ * +----------+
+ * | A ++++++ |
+ * +----------+
+ *
+ *
+ * After:
+ * +--------+
+ * | Parent |
+ * +--------+
+ * | |
+ * v +------+
+ * +---------+ |
+ * | A* +++ | v
+ * +---------+ +-------+
+ * | B +++ |
+ * +-------+
+ *
+ * Where A* is a shadow of A.
+ */
+static int btree_split_sibling(struct shadow_spine *s, dm_block_t root,
+ unsigned parent_index, uint64_t key)
+{
+ int r;
+ size_t size;
+ unsigned nr_left, nr_right;
+ struct dm_block *left, *right, *parent;
+ struct node *ln, *rn, *pn;
+ __le64 location;
+
+ left = shadow_current(s);
+
+ r = new_block(s->info, &right);
+ if (r < 0)
+ return r;
+
+ ln = dm_block_data(left);
+ rn = dm_block_data(right);
+
+ nr_left = le32_to_cpu(ln->header.nr_entries) / 2;
+ nr_right = le32_to_cpu(ln->header.nr_entries) - nr_left;
+
+ ln->header.nr_entries = cpu_to_le32(nr_left);
+
+ rn->header.flags = ln->header.flags;
+ rn->header.nr_entries = cpu_to_le32(nr_right);
+ rn->header.max_entries = ln->header.max_entries;
+ rn->header.value_size = ln->header.value_size;
+ memcpy(rn->keys, ln->keys + nr_left, nr_right * sizeof(rn->keys[0]));
+
+ size = le32_to_cpu(ln->header.flags) & INTERNAL_NODE ?
+ sizeof(uint64_t) : s->info->value_type.size;
+ memcpy(value_ptr(rn, 0, size), value_ptr(ln, nr_left, size),
+ size * nr_right);
+
+ /*
+ * Patch up the parent
+ */
+ parent = shadow_parent(s);
+
+ pn = dm_block_data(parent);
+ location = cpu_to_le64(dm_block_location(left));
+ __dm_bless_for_disk(&location);
+ memcpy_disk(value_ptr(pn, parent_index, sizeof(__le64)),
+ &location, sizeof(__le64));
+
+ location = cpu_to_le64(dm_block_location(right));
+ __dm_bless_for_disk(&location);
+
+ r = insert_at(sizeof(__le64), pn, parent_index + 1,
+ le64_to_cpu(rn->keys[0]), &location);
+ if (r)
+ return r;
+
+ if (key < le64_to_cpu(rn->keys[0])) {
+ unlock_block(s->info, right);
+ s->nodes[1] = left;
+ } else {
+ unlock_block(s->info, left);
+ s->nodes[1] = right;
+ }
+
+ return 0;
+}
+
+/*
+ * Splits a node by creating two new children beneath the given node.
+ *
+ * Before:
+ * +----------+
+ * | A ++++++ |
+ * +----------+
+ *
+ *
+ * After:
+ * +------------+
+ * | A (shadow) |
+ * +------------+
+ * | |
+ * +------+ +----+
+ * | |
+ * v v
+ * +-------+ +-------+
+ * | B +++ | | C +++ |
+ * +-------+ +-------+
+ */
+static int btree_split_beneath(struct shadow_spine *s, uint64_t key)
+{
+ int r;
+ size_t size;
+ unsigned nr_left, nr_right;
+ struct dm_block *left, *right, *new_parent;
+ struct node *pn, *ln, *rn;
+ __le64 val;
+
+ new_parent = shadow_current(s);
+
+ r = new_block(s->info, &left);
+ if (r < 0)
+ return r;
+
+ r = new_block(s->info, &right);
+ if (r < 0) {
+ /* FIXME: put left */
+ return r;
+ }
+
+ pn = dm_block_data(new_parent);
+ ln = dm_block_data(left);
+ rn = dm_block_data(right);
+
+ nr_left = le32_to_cpu(pn->header.nr_entries) / 2;
+ nr_right = le32_to_cpu(pn->header.nr_entries) - nr_left;
+
+ ln->header.flags = pn->header.flags;
+ ln->header.nr_entries = cpu_to_le32(nr_left);
+ ln->header.max_entries = pn->header.max_entries;
+ ln->header.value_size = pn->header.value_size;
+
+ rn->header.flags = pn->header.flags;
+ rn->header.nr_entries = cpu_to_le32(nr_right);
+ rn->header.max_entries = pn->header.max_entries;
+ rn->header.value_size = pn->header.value_size;
+
+ memcpy(ln->keys, pn->keys, nr_left * sizeof(pn->keys[0]));
+ memcpy(rn->keys, pn->keys + nr_left, nr_right * sizeof(pn->keys[0]));
+
+ size = le32_to_cpu(pn->header.flags) & INTERNAL_NODE ?
+ sizeof(__le64) : s->info->value_type.size;
+ memcpy(value_ptr(ln, 0, size), value_ptr(pn, 0, size), nr_left * size);
+ memcpy(value_ptr(rn, 0, size), value_ptr(pn, nr_left, size),
+ nr_right * size);
+
+ /* new_parent should just point to l and r now */
+ pn->header.flags = cpu_to_le32(INTERNAL_NODE);
+ pn->header.nr_entries = cpu_to_le32(2);
+ pn->header.max_entries = cpu_to_le32(
+ calc_max_entries(sizeof(__le64),
+ dm_bm_block_size(
+ dm_tm_get_bm(s->info->tm))));
+ pn->header.value_size = cpu_to_le32(sizeof(__le64));
+
+ val = cpu_to_le64(dm_block_location(left));
+ __dm_bless_for_disk(&val);
+ pn->keys[0] = ln->keys[0];
+ memcpy_disk(value_ptr(pn, 0, sizeof(__le64)), &val, sizeof(__le64));
+
+ val = cpu_to_le64(dm_block_location(right));
+ __dm_bless_for_disk(&val);
+ pn->keys[1] = rn->keys[0];
+ memcpy_disk(value_ptr(pn, 1, sizeof(__le64)), &val, sizeof(__le64));
+
+ /*
+	 * Rejig the spine.  This is ugly, since it knows too
+	 * much about the spine.
+ */
+ if (s->nodes[0] != new_parent) {
+ unlock_block(s->info, s->nodes[0]);
+ s->nodes[0] = new_parent;
+ }
+ if (key < le64_to_cpu(rn->keys[0])) {
+ unlock_block(s->info, right);
+ s->nodes[1] = left;
+ } else {
+ unlock_block(s->info, left);
+ s->nodes[1] = right;
+ }
+ s->count = 2;
+
+ return 0;
+}
+
+static int btree_insert_raw(struct shadow_spine *s, dm_block_t root,
+ struct dm_btree_value_type *vt,
+ uint64_t key, unsigned *index)
+{
+ int r, i = *index, top = 1;
+ struct node *node;
+
+ for (;;) {
+ r = shadow_step(s, root, vt);
+ if (r < 0)
+ return r;
+
+ node = dm_block_data(shadow_current(s));
+
+ /*
+		 * We have to patch up the parent node; it's ugly, but I don't
+ * see a way to do this automatically as part of the spine
+ * op.
+ */
+		if (shadow_has_parent(s) && i >= 0) { /* FIXME: second clause unnecessary */
+ __le64 location = cpu_to_le64(dm_block_location(shadow_current(s)));
+
+ __dm_bless_for_disk(&location);
+ memcpy_disk(value_ptr(dm_block_data(shadow_parent(s)), i, sizeof(uint64_t)),
+ &location, sizeof(__le64));
+ }
+
+ node = dm_block_data(shadow_current(s));
+
+ if (node->header.nr_entries == node->header.max_entries) {
+ if (top)
+ r = btree_split_beneath(s, key);
+ else
+ r = btree_split_sibling(s, root, i, key);
+
+ if (r < 0)
+ return r;
+ }
+
+ node = dm_block_data(shadow_current(s));
+
+ i = lower_bound(node, key);
+
+ if (le32_to_cpu(node->header.flags) & LEAF_NODE)
+ break;
+
+ if (i < 0) {
+ /* change the bounds on the lowest key */
+ node->keys[0] = cpu_to_le64(key);
+ i = 0;
+ }
+
+ root = value64(node, i);
+ top = 0;
+ }
+
+ if (i < 0 || le64_to_cpu(node->keys[i]) != key)
+ i++;
+
+ *index = i;
+ return 0;
+}
+
+static int insert(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value, dm_block_t *new_root,
+ int *inserted)
+ __dm_written_to_disk(value)
+{
+ int r, need_insert;
+ unsigned level, index = -1, last_level = info->levels - 1;
+ dm_block_t block = root;
+ struct shadow_spine spine;
+ struct node *n;
+ struct dm_btree_value_type le64_type;
+
+ le64_type.context = NULL;
+ le64_type.size = sizeof(__le64);
+ le64_type.inc = NULL;
+ le64_type.dec = NULL;
+ le64_type.equal = NULL;
+
+ init_shadow_spine(&spine, info);
+
+ for (level = 0; level < (info->levels - 1); level++) {
+ r = btree_insert_raw(&spine, block, &le64_type, keys[level], &index);
+ if (r < 0)
+ goto bad;
+
+ n = dm_block_data(shadow_current(&spine));
+ need_insert = ((index >= le32_to_cpu(n->header.nr_entries)) ||
+ (le64_to_cpu(n->keys[index]) != keys[level]));
+
+ if (need_insert) {
+ dm_block_t new_tree;
+ __le64 new_le;
+
+ r = dm_btree_empty(info, &new_tree);
+ if (r < 0)
+ goto bad;
+
+ new_le = cpu_to_le64(new_tree);
+ __dm_bless_for_disk(&new_le);
+
+ r = insert_at(sizeof(uint64_t), n, index,
+ keys[level], &new_le);
+ if (r)
+ goto bad;
+ }
+
+ if (level < last_level)
+ block = value64(n, index);
+ }
+
+ r = btree_insert_raw(&spine, block, &info->value_type,
+ keys[level], &index);
+ if (r < 0)
+ goto bad;
+
+ n = dm_block_data(shadow_current(&spine));
+ need_insert = ((index >= le32_to_cpu(n->header.nr_entries)) ||
+ (le64_to_cpu(n->keys[index]) != keys[level]));
+
+ if (need_insert) {
+ if (inserted)
+ *inserted = 1;
+
+ r = insert_at(info->value_type.size, n, index,
+ keys[level], value);
+ if (r)
+ goto bad_unblessed;
+ } else {
+ if (inserted)
+ *inserted = 0;
+
+ if (info->value_type.dec &&
+ (!info->value_type.equal ||
+ !info->value_type.equal(
+ info->value_type.context,
+ value_ptr(n, index, info->value_type.size),
+ value))) {
+ info->value_type.dec(info->value_type.context,
+ value_ptr(n, index, info->value_type.size));
+ }
+ memcpy_disk(value_ptr(n, index, info->value_type.size),
+ value, info->value_type.size);
+ }
+
+ *new_root = shadow_root(&spine);
+ exit_shadow_spine(&spine);
+
+ return 0;
+
+bad:
+ __dm_unbless_for_disk(value);
+bad_unblessed:
+ exit_shadow_spine(&spine);
+ return r;
+}
+
+int dm_btree_insert(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value, dm_block_t *new_root)
+ __dm_written_to_disk(value)
+{
+ return insert(info, root, keys, value, new_root, NULL);
+}
+EXPORT_SYMBOL_GPL(dm_btree_insert);
+
+int dm_btree_insert_notify(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value, dm_block_t *new_root,
+ int *inserted)
+ __dm_written_to_disk(value)
+{
+ return insert(info, root, keys, value, new_root, inserted);
+}
+EXPORT_SYMBOL_GPL(dm_btree_insert_notify);
+
+/*----------------------------------------------------------------*/
+
+static int find_highest_key(struct ro_spine *s, dm_block_t block,
+ uint64_t *result_key, dm_block_t *next_block)
+{
+ int i, r;
+ uint32_t flags;
+
+ do {
+ r = ro_step(s, block);
+ if (r < 0)
+ return r;
+
+ flags = le32_to_cpu(ro_node(s)->header.flags);
+ i = le32_to_cpu(ro_node(s)->header.nr_entries);
+ if (!i)
+ return -ENODATA;
+ else
+ i--;
+
+ *result_key = le64_to_cpu(ro_node(s)->keys[i]);
+ if (next_block || flags & INTERNAL_NODE)
+ block = value64(ro_node(s), i);
+
+ } while (flags & INTERNAL_NODE);
+
+ if (next_block)
+ *next_block = block;
+ return 0;
+}
+
+int dm_btree_find_highest_key(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *result_keys)
+{
+ int r = 0, count = 0, level;
+ struct ro_spine spine;
+
+ init_ro_spine(&spine, info);
+ for (level = 0; level < info->levels; level++) {
+ r = find_highest_key(&spine, root, result_keys + level,
+ level == info->levels - 1 ? NULL : &root);
+ if (r == -ENODATA) {
+ r = 0;
+ break;
+
+ } else if (r)
+ break;
+
+ count++;
+ }
+ exit_ro_spine(&spine);
+
+ return r ? r : count;
+}
+EXPORT_SYMBOL_GPL(dm_btree_find_highest_key);
diff --git a/drivers/md/persistent-data/dm-btree.h b/drivers/md/persistent-data/dm-btree.h
new file mode 100644
index 000000000000..ae02c84410ff
--- /dev/null
+++ b/drivers/md/persistent-data/dm-btree.h
@@ -0,0 +1,145 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+#ifndef _LINUX_DM_BTREE_H
+#define _LINUX_DM_BTREE_H
+
+#include "dm-block-manager.h"
+
+struct dm_transaction_manager;
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Annotations used to check on-disk metadata is handled as little-endian.
+ */
+#ifdef __CHECKER__
+# define __dm_written_to_disk(x) __releases(x)
+# define __dm_reads_from_disk(x) __acquires(x)
+# define __dm_bless_for_disk(x) __acquire(x)
+# define __dm_unbless_for_disk(x) __release(x)
+#else
+# define __dm_written_to_disk(x)
+# define __dm_reads_from_disk(x)
+# define __dm_bless_for_disk(x)
+# define __dm_unbless_for_disk(x)
+#endif
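+
+/*
+ * Example (sketch): the pattern this patch uses whenever a value is
+ * written to disk.  The bless call is a no-op at run time, but lets
+ * sparse check that the endian conversion happened:
+ *
+ *	__le64 v = cpu_to_le64(block);
+ *	__dm_bless_for_disk(&v);
+ *	memcpy_disk(dst, &v, sizeof(v));
+ */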
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Manipulates hierarchical B+ trees with 64-bit keys and arbitrary-sized
+ * values.
+ */
+
+/*
+ * Information about the values stored within the btree.
+ */
+struct dm_btree_value_type {
+ void *context;
+
+ /*
+ * The size in bytes of each value.
+ */
+ uint32_t size;
+
+ /*
+ * Any of these methods can be safely set to NULL if you do not
+ * need the corresponding feature.
+ */
+
+ /*
+ * The btree is making a duplicate of the value, for instance
+ * because previously-shared btree nodes have now diverged.
+ * The @value argument is the new copy that this method may modify.
+ * (Probably it just wants to increment a reference count
+ * somewhere.) This method is _not_ called for insertion of a new
+ * value: it is assumed the ref count is already 1.
+ */
+ void (*inc)(void *context, void *value);
+
+ /*
+ * This value is being deleted. The btree takes care of freeing
+ * the memory pointed to by @value. Often the del function just
+ * needs to decrement a reference count somewhere.
+ */
+ void (*dec)(void *context, void *value);
+
+ /*
+ * A test for equality between two values. When a value is
+ * overwritten with a new one, the old one has the dec method
+	 * called _unless_ the new and old values are deemed equal.
+ */
+ int (*equal)(void *context, void *value1, void *value2);
+};
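+
+/*
+ * Example (sketch): a trivial value type for raw __le64 values that
+ * need no reference counting.  The multi-level insert code in
+ * dm-btree.c builds exactly this for its internal levels:
+ *
+ *	struct dm_btree_value_type le64_type = {
+ *		.context = NULL,
+ *		.size = sizeof(__le64),
+ *		.inc = NULL,
+ *		.dec = NULL,
+ *		.equal = NULL
+ *	};
+ */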
+
+/*
+ * The shape and contents of a btree.
+ */
+struct dm_btree_info {
+ struct dm_transaction_manager *tm;
+
+ /*
+ * Number of nested btrees. (Not the depth of a single tree.)
+ */
+ unsigned levels;
+ struct dm_btree_value_type value_type;
+};
+
+/*
+ * Set up an empty tree. O(1).
+ */
+int dm_btree_empty(struct dm_btree_info *info, dm_block_t *root);
+
+/*
+ * Delete a tree. O(n) - this is the slow one! It can also block, so
+ * please don't call it on an IO path.
+ */
+int dm_btree_del(struct dm_btree_info *info, dm_block_t root);
+
+/*
+ * All the lookup functions return -ENODATA if the key cannot be found.
+ */
+
+/*
+ * Tries to find a key that matches exactly. O(ln(n))
+ */
+int dm_btree_lookup(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value_le);
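+
+/*
+ * Example (sketch, assuming a single-level tree whose values are
+ * __le64 block addresses):
+ *
+ *	uint64_t key = 17;
+ *	__le64 value_le;
+ *
+ *	r = dm_btree_lookup(&info, root, &key, &value_le);
+ *	if (!r)
+ *		block = le64_to_cpu(value_le);
+ */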
+
+/*
+ * Inserts a new value (or overwrites an existing one).  O(ln(n))
+ */
+int dm_btree_insert(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value, dm_block_t *new_root)
+ __dm_written_to_disk(value);
+
+/*
+ * A variant of insert that indicates whether it actually inserted or just
+ * overwrote. Useful if you're keeping track of the number of entries in a
+ * tree.
+ */
+int dm_btree_insert_notify(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, void *value, dm_block_t *new_root,
+ int *inserted)
+ __dm_written_to_disk(value);
+
+/*
+ * Remove a key if present.  This doesn't remove empty subtrees.  Normally
+ * subtrees represent a separate entity, like a snapshot map, so this is
+ * correct behaviour. O(ln(n)).
+ */
+int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *keys, dm_block_t *new_root);
+
+/*
+ * Returns < 0 on failure. Otherwise the number of key entries that have
+ * been filled out. Remember trees can have zero entries, and as such have
+ * no highest key.
+ */
+int dm_btree_find_highest_key(struct dm_btree_info *info, dm_block_t root,
+ uint64_t *result_keys);
+
+#endif /* _LINUX_DM_BTREE_H */
diff --git a/drivers/md/persistent-data/dm-persistent-data-internal.h b/drivers/md/persistent-data/dm-persistent-data-internal.h
new file mode 100644
index 000000000000..c49e26fff36c
--- /dev/null
+++ b/drivers/md/persistent-data/dm-persistent-data-internal.h
@@ -0,0 +1,19 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _DM_PERSISTENT_DATA_INTERNAL_H
+#define _DM_PERSISTENT_DATA_INTERNAL_H
+
+#include "dm-block-manager.h"
+
+static inline unsigned dm_hash_block(dm_block_t b, unsigned hash_mask)
+{
+ const unsigned BIG_PRIME = 4294967291UL;
+
+ return (((unsigned) b) * BIG_PRIME) & hash_mask;
+}
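+
+/*
+ * Example (sketch): hash_mask is expected to be a power of two minus
+ * one, so for a 64-bucket hash table the call below yields an index in
+ * [0, 63]:
+ *
+ *	unsigned bucket = dm_hash_block(b, 63);
+ */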
+
+#endif /* _DM_PERSISTENT_DATA_INTERNAL_H */
diff --git a/drivers/md/persistent-data/dm-space-map-checker.c b/drivers/md/persistent-data/dm-space-map-checker.c
new file mode 100644
index 000000000000..bb44a937fe63
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-checker.c
@@ -0,0 +1,437 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-space-map-checker.h"
+
+#include <linux/device-mapper.h>
+
+#ifdef CONFIG_DM_DEBUG_SPACE_MAPS
+
+#define DM_MSG_PREFIX "space map checker"
+
+/*----------------------------------------------------------------*/
+
+struct count_array {
+ dm_block_t nr;
+ dm_block_t nr_free;
+
+ uint32_t *counts;
+};
+
+static int ca_get_count(struct count_array *ca, dm_block_t b, uint32_t *count)
+{
+ if (b >= ca->nr)
+ return -EINVAL;
+
+ *count = ca->counts[b];
+ return 0;
+}
+
+static int ca_count_more_than_one(struct count_array *ca, dm_block_t b, int *r)
+{
+ if (b >= ca->nr)
+ return -EINVAL;
+
+ *r = ca->counts[b] > 1;
+ return 0;
+}
+
+static int ca_set_count(struct count_array *ca, dm_block_t b, uint32_t count)
+{
+ uint32_t old_count;
+
+ if (b >= ca->nr)
+ return -EINVAL;
+
+ old_count = ca->counts[b];
+
+ if (!count && old_count)
+ ca->nr_free++;
+
+ else if (count && !old_count)
+ ca->nr_free--;
+
+ ca->counts[b] = count;
+ return 0;
+}
+
+static int ca_inc_block(struct count_array *ca, dm_block_t b)
+{
+ if (b >= ca->nr)
+ return -EINVAL;
+
+ ca_set_count(ca, b, ca->counts[b] + 1);
+ return 0;
+}
+
+static int ca_dec_block(struct count_array *ca, dm_block_t b)
+{
+ if (b >= ca->nr)
+ return -EINVAL;
+
+ BUG_ON(ca->counts[b] == 0);
+ ca_set_count(ca, b, ca->counts[b] - 1);
+ return 0;
+}
+
+static int ca_create(struct count_array *ca, struct dm_space_map *sm)
+{
+ int r;
+ dm_block_t nr_blocks;
+
+ r = dm_sm_get_nr_blocks(sm, &nr_blocks);
+ if (r)
+ return r;
+
+ ca->nr = nr_blocks;
+ ca->nr_free = nr_blocks;
+ ca->counts = kzalloc(sizeof(*ca->counts) * nr_blocks, GFP_KERNEL);
+ if (!ca->counts)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int ca_load(struct count_array *ca, struct dm_space_map *sm)
+{
+ int r;
+ uint32_t count;
+ dm_block_t nr_blocks, i;
+
+ r = dm_sm_get_nr_blocks(sm, &nr_blocks);
+ if (r)
+ return r;
+
+ BUG_ON(ca->nr != nr_blocks);
+
+ DMWARN("Loading debug space map from disk. This may take some time");
+ for (i = 0; i < nr_blocks; i++) {
+ r = dm_sm_get_count(sm, i, &count);
+ if (r) {
+ DMERR("load failed");
+ return r;
+ }
+
+ ca_set_count(ca, i, count);
+ }
+ DMWARN("Load complete");
+
+ return 0;
+}
+
+static int ca_extend(struct count_array *ca, dm_block_t extra_blocks)
+{
+ dm_block_t nr_blocks = ca->nr + extra_blocks;
+ uint32_t *counts = kzalloc(sizeof(*counts) * nr_blocks, GFP_KERNEL);
+ if (!counts)
+ return -ENOMEM;
+
+ memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
+ kfree(ca->counts);
+ ca->nr = nr_blocks;
+ ca->nr_free += extra_blocks;
+ ca->counts = counts;
+ return 0;
+}
+
+static int ca_commit(struct count_array *old, struct count_array *new)
+{
+	if (old->nr != new->nr) {
+		int r;
+
+		BUG_ON(old->nr > new->nr);
+
+		/* ca_extend() allocates, so it can fail */
+		r = ca_extend(old, new->nr - old->nr);
+		if (r)
+			return r;
+	}
+
+ BUG_ON(old->nr != new->nr);
+ old->nr_free = new->nr_free;
+ memcpy(old->counts, new->counts, sizeof(*old->counts) * old->nr);
+ return 0;
+}
+
+static void ca_destroy(struct count_array *ca)
+{
+ kfree(ca->counts);
+}
+
+/*----------------------------------------------------------------*/
+
+struct sm_checker {
+ struct dm_space_map sm;
+
+ struct count_array old_counts;
+ struct count_array counts;
+
+ struct dm_space_map *real_sm;
+};
+
+static void sm_checker_destroy(struct dm_space_map *sm)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+
+ dm_sm_destroy(smc->real_sm);
+ ca_destroy(&smc->old_counts);
+ ca_destroy(&smc->counts);
+ kfree(smc);
+}
+
+static int sm_checker_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_get_nr_blocks(smc->real_sm, count);
+ if (!r)
+ BUG_ON(smc->old_counts.nr != *count);
+ return r;
+}
+
+static int sm_checker_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_get_nr_free(smc->real_sm, count);
+ if (!r) {
+ /*
+ * Slow, but we know it's correct.
+ */
+ dm_block_t b, n = 0;
+ for (b = 0; b < smc->old_counts.nr; b++)
+ if (smc->old_counts.counts[b] == 0 &&
+ smc->counts.counts[b] == 0)
+ n++;
+
+ if (n != *count)
+ DMERR("free block counts differ, checker %u, sm-disk:%u",
+ (unsigned) n, (unsigned) *count);
+ }
+ return r;
+}
+
+static int sm_checker_new_block(struct dm_space_map *sm, dm_block_t *b)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_new_block(smc->real_sm, b);
+
+ if (!r) {
+ BUG_ON(*b >= smc->old_counts.nr);
+ BUG_ON(smc->old_counts.counts[*b] != 0);
+ BUG_ON(*b >= smc->counts.nr);
+ BUG_ON(smc->counts.counts[*b] != 0);
+ ca_set_count(&smc->counts, *b, 1);
+ }
+
+ return r;
+}
+
+static int sm_checker_inc_block(struct dm_space_map *sm, dm_block_t b)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_inc_block(smc->real_sm, b);
+ int r2 = ca_inc_block(&smc->counts, b);
+ BUG_ON(r != r2);
+ return r;
+}
+
+static int sm_checker_dec_block(struct dm_space_map *sm, dm_block_t b)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_dec_block(smc->real_sm, b);
+ int r2 = ca_dec_block(&smc->counts, b);
+ BUG_ON(r != r2);
+ return r;
+}
+
+static int sm_checker_get_count(struct dm_space_map *sm, dm_block_t b, uint32_t *result)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ uint32_t result2 = 0;
+ int r = dm_sm_get_count(smc->real_sm, b, result);
+ int r2 = ca_get_count(&smc->counts, b, &result2);
+
+ BUG_ON(r != r2);
+ if (!r)
+ BUG_ON(*result != result2);
+ return r;
+}
+
+static int sm_checker_count_more_than_one(struct dm_space_map *sm, dm_block_t b, int *result)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int result2 = 0;
+ int r = dm_sm_count_is_more_than_one(smc->real_sm, b, result);
+ int r2 = ca_count_more_than_one(&smc->counts, b, &result2);
+
+ BUG_ON(r != r2);
+ if (!r)
+ BUG_ON(!(*result) && result2);
+ return r;
+}
+
+static int sm_checker_set_count(struct dm_space_map *sm, dm_block_t b, uint32_t count)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+	int r = dm_sm_set_count(smc->real_sm, b, count);
+	int r2;
+
+	BUG_ON(b >= smc->counts.nr);
+ r2 = ca_set_count(&smc->counts, b, count);
+ BUG_ON(r != r2);
+
+ return r;
+}
+
+static int sm_checker_commit(struct dm_space_map *sm)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r;
+
+ r = dm_sm_commit(smc->real_sm);
+ if (r)
+ return r;
+
+ r = ca_commit(&smc->old_counts, &smc->counts);
+ if (r)
+ return r;
+
+ return 0;
+}
+
+static int sm_checker_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ int r = dm_sm_extend(smc->real_sm, extra_blocks);
+ if (r)
+ return r;
+
+ return ca_extend(&smc->counts, extra_blocks);
+}
+
+static int sm_checker_root_size(struct dm_space_map *sm, size_t *result)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ return dm_sm_root_size(smc->real_sm, result);
+}
+
+static int sm_checker_copy_root(struct dm_space_map *sm, void *copy_to_here_le, size_t len)
+{
+ struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
+ return dm_sm_copy_root(smc->real_sm, copy_to_here_le, len);
+}
+
+/*----------------------------------------------------------------*/
+
+static struct dm_space_map ops_ = {
+ .destroy = sm_checker_destroy,
+ .get_nr_blocks = sm_checker_get_nr_blocks,
+ .get_nr_free = sm_checker_get_nr_free,
+ .inc_block = sm_checker_inc_block,
+ .dec_block = sm_checker_dec_block,
+ .new_block = sm_checker_new_block,
+ .get_count = sm_checker_get_count,
+ .count_is_more_than_one = sm_checker_count_more_than_one,
+ .set_count = sm_checker_set_count,
+ .commit = sm_checker_commit,
+ .extend = sm_checker_extend,
+ .root_size = sm_checker_root_size,
+ .copy_root = sm_checker_copy_root
+};
+
+struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
+{
+ int r;
+ struct sm_checker *smc;
+
+ if (!sm)
+ return NULL;
+
+ smc = kmalloc(sizeof(*smc), GFP_KERNEL);
+ if (!smc)
+ return NULL;
+
+ memcpy(&smc->sm, &ops_, sizeof(smc->sm));
+ r = ca_create(&smc->old_counts, sm);
+ if (r) {
+ kfree(smc);
+ return NULL;
+ }
+
+ r = ca_create(&smc->counts, sm);
+ if (r) {
+ ca_destroy(&smc->old_counts);
+ kfree(smc);
+ return NULL;
+ }
+
+ smc->real_sm = sm;
+
+ r = ca_load(&smc->counts, sm);
+ if (r) {
+ ca_destroy(&smc->counts);
+ ca_destroy(&smc->old_counts);
+ kfree(smc);
+ return NULL;
+ }
+
+ r = ca_commit(&smc->old_counts, &smc->counts);
+ if (r) {
+ ca_destroy(&smc->counts);
+ ca_destroy(&smc->old_counts);
+ kfree(smc);
+ return NULL;
+ }
+
+ return &smc->sm;
+}
+EXPORT_SYMBOL_GPL(dm_sm_checker_create);
+
+struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
+{
+ int r;
+ struct sm_checker *smc;
+
+ if (!sm)
+ return NULL;
+
+ smc = kmalloc(sizeof(*smc), GFP_KERNEL);
+ if (!smc)
+ return NULL;
+
+ memcpy(&smc->sm, &ops_, sizeof(smc->sm));
+ r = ca_create(&smc->old_counts, sm);
+ if (r) {
+ kfree(smc);
+ return NULL;
+ }
+
+ r = ca_create(&smc->counts, sm);
+ if (r) {
+ ca_destroy(&smc->old_counts);
+ kfree(smc);
+ return NULL;
+ }
+
+ smc->real_sm = sm;
+ return &smc->sm;
+}
+EXPORT_SYMBOL_GPL(dm_sm_checker_create_fresh);
+
+/*----------------------------------------------------------------*/
+
+#else
+
+struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
+{
+ return sm;
+}
+EXPORT_SYMBOL_GPL(dm_sm_checker_create);
+
+struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
+{
+ return sm;
+}
+EXPORT_SYMBOL_GPL(dm_sm_checker_create_fresh);
+
+/*----------------------------------------------------------------*/
+
+#endif
diff --git a/drivers/md/persistent-data/dm-space-map-checker.h b/drivers/md/persistent-data/dm-space-map-checker.h
new file mode 100644
index 000000000000..444dccf6688c
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-checker.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef SNAPSHOTS_SPACE_MAP_CHECKER_H
+#define SNAPSHOTS_SPACE_MAP_CHECKER_H
+
+#include "dm-space-map.h"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * This space map wraps a real on-disk space map, and verifies all of its
+ * operations.  It uses a lot of memory, so only use it if you have a specific
+ * problem that you're debugging.
+ *
+ * Ownership of @sm passes.
+ */
+struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm);
+struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm);
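+
+/*
+ * Example (sketch): this is how dm_sm_disk_open() (dm-space-map-disk.c)
+ * wraps the space map it opens.  With CONFIG_DM_DEBUG_SPACE_MAPS unset
+ * both functions just return @sm unchanged:
+ *
+ *	return dm_sm_checker_create(dm_sm_disk_open_real(tm, root_le, len));
+ */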
+
+/*----------------------------------------------------------------*/
+
+#endif
diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
new file mode 100644
index 000000000000..df2494c06cdc
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-common.c
@@ -0,0 +1,705 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-space-map-common.h"
+#include "dm-transaction-manager.h"
+
+#include <linux/bitops.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "space map common"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Index validator.
+ */
+#define INDEX_CSUM_XOR 160478
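+
+/*
+ * The csum field is the first field of the block, so the checksum
+ * covers everything after it.  Each block type salts dm_bm_checksum()
+ * with a different XOR value (see BITMAP_CSUM_XOR below) so one block
+ * type cannot be mistaken for another.
+ */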
+
+static void index_prepare_for_write(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct disk_metadata_index *mi_le = dm_block_data(b);
+
+ mi_le->blocknr = cpu_to_le64(dm_block_location(b));
+ mi_le->csum = cpu_to_le32(dm_bm_checksum(&mi_le->padding,
+ block_size - sizeof(__le32),
+ INDEX_CSUM_XOR));
+}
+
+static int index_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct disk_metadata_index *mi_le = dm_block_data(b);
+ __le32 csum_disk;
+
+ if (dm_block_location(b) != le64_to_cpu(mi_le->blocknr)) {
+ DMERR("index_check failed blocknr %llu wanted %llu",
+ le64_to_cpu(mi_le->blocknr), dm_block_location(b));
+ return -ENOTBLK;
+ }
+
+ csum_disk = cpu_to_le32(dm_bm_checksum(&mi_le->padding,
+ block_size - sizeof(__le32),
+ INDEX_CSUM_XOR));
+ if (csum_disk != mi_le->csum) {
+ DMERR("index_check failed csum %u wanted %u",
+ le32_to_cpu(csum_disk), le32_to_cpu(mi_le->csum));
+ return -EILSEQ;
+ }
+
+ return 0;
+}
+
+static struct dm_block_validator index_validator = {
+ .name = "index",
+ .prepare_for_write = index_prepare_for_write,
+ .check = index_check
+};
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Bitmap validator
+ */
+#define BITMAP_CSUM_XOR 240779
+
+static void bitmap_prepare_for_write(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct disk_bitmap_header *disk_header = dm_block_data(b);
+
+ disk_header->blocknr = cpu_to_le64(dm_block_location(b));
+ disk_header->csum = cpu_to_le32(dm_bm_checksum(&disk_header->not_used,
+ block_size - sizeof(__le32),
+ BITMAP_CSUM_XOR));
+}
+
+static int bitmap_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct disk_bitmap_header *disk_header = dm_block_data(b);
+ __le32 csum_disk;
+
+ if (dm_block_location(b) != le64_to_cpu(disk_header->blocknr)) {
+ DMERR("bitmap check failed blocknr %llu wanted %llu",
+ le64_to_cpu(disk_header->blocknr), dm_block_location(b));
+ return -ENOTBLK;
+ }
+
+ csum_disk = cpu_to_le32(dm_bm_checksum(&disk_header->not_used,
+ block_size - sizeof(__le32),
+ BITMAP_CSUM_XOR));
+ if (csum_disk != disk_header->csum) {
+ DMERR("bitmap check failed csum %u wanted %u",
+ le32_to_cpu(csum_disk), le32_to_cpu(disk_header->csum));
+ return -EILSEQ;
+ }
+
+ return 0;
+}
+
+static struct dm_block_validator dm_sm_bitmap_validator = {
+ .name = "sm_bitmap",
+ .prepare_for_write = bitmap_prepare_for_write,
+ .check = bitmap_check
+};
+
+/*----------------------------------------------------------------*/
+
+#define ENTRIES_PER_WORD 32
+#define ENTRIES_SHIFT 5
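+
+/*
+ * Each 64-bit bitmap word holds 32 two-bit entries: entry b lives in
+ * word b >> ENTRIES_SHIFT, at bit offset 2 * (b & (ENTRIES_PER_WORD - 1)).
+ */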
+
+static void *dm_bitmap_data(struct dm_block *b)
+{
+ return dm_block_data(b) + sizeof(struct disk_bitmap_header);
+}
+
+#define WORD_MASK_HIGH 0xAAAAAAAAAAAAAAAAULL
+
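+/*
+ * Returns non-zero only if every 2-bit entry in the 64-bit word
+ * containing entry b is non-zero, i.e. the word has no free entries.
+ * An addition/carry trick tests all 32 entries at once.
+ */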
+static unsigned bitmap_word_used(void *addr, unsigned b)
+{
+ __le64 *words_le = addr;
+ __le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
+
+ uint64_t bits = le64_to_cpu(*w_le);
+ uint64_t mask = (bits + WORD_MASK_HIGH + 1) & WORD_MASK_HIGH;
+
+ return !(~bits & mask);
+}
+
+static unsigned sm_lookup_bitmap(void *addr, unsigned b)
+{
+ __le64 *words_le = addr;
+ __le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
+ unsigned hi, lo;
+
+ b = (b & (ENTRIES_PER_WORD - 1)) << 1;
+ hi = !!test_bit_le(b, (void *) w_le);
+ lo = !!test_bit_le(b + 1, (void *) w_le);
+ return (hi << 1) | lo;
+}
+
+static void sm_set_bitmap(void *addr, unsigned b, unsigned val)
+{
+ __le64 *words_le = addr;
+ __le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
+
+ b = (b & (ENTRIES_PER_WORD - 1)) << 1;
+
+ if (val & 2)
+ __set_bit_le(b, (void *) w_le);
+ else
+ __clear_bit_le(b, (void *) w_le);
+
+ if (val & 1)
+ __set_bit_le(b + 1, (void *) w_le);
+ else
+ __clear_bit_le(b + 1, (void *) w_le);
+}
+
+static int sm_find_free(void *addr, unsigned begin, unsigned end,
+ unsigned *result)
+{
+ while (begin < end) {
+ if (!(begin & (ENTRIES_PER_WORD - 1)) &&
+ bitmap_word_used(addr, begin)) {
+ begin += ENTRIES_PER_WORD;
+ continue;
+ }
+
+ if (!sm_lookup_bitmap(addr, begin)) {
+ *result = begin;
+ return 0;
+ }
+
+ begin++;
+ }
+
+ return -ENOSPC;
+}
+
+/*----------------------------------------------------------------*/
+
+static int sm_ll_init(struct ll_disk *ll, struct dm_transaction_manager *tm)
+{
+ ll->tm = tm;
+
+ ll->bitmap_info.tm = tm;
+ ll->bitmap_info.levels = 1;
+
+ /*
+ * Because the new bitmap blocks are created via a shadow
+ * operation, the old entry has already had its reference count
+ * decremented and we don't need the btree to do any bookkeeping.
+ */
+ ll->bitmap_info.value_type.size = sizeof(struct disk_index_entry);
+ ll->bitmap_info.value_type.inc = NULL;
+ ll->bitmap_info.value_type.dec = NULL;
+ ll->bitmap_info.value_type.equal = NULL;
+
+ ll->ref_count_info.tm = tm;
+ ll->ref_count_info.levels = 1;
+ ll->ref_count_info.value_type.size = sizeof(uint32_t);
+ ll->ref_count_info.value_type.inc = NULL;
+ ll->ref_count_info.value_type.dec = NULL;
+ ll->ref_count_info.value_type.equal = NULL;
+
+ ll->block_size = dm_bm_block_size(dm_tm_get_bm(tm));
+
+ if (ll->block_size > (1 << 30)) {
+ DMERR("block size too big to hold bitmaps");
+ return -EINVAL;
+ }
+
+ ll->entries_per_block = (ll->block_size - sizeof(struct disk_bitmap_header)) *
+ ENTRIES_PER_BYTE;
+ ll->nr_blocks = 0;
+ ll->bitmap_root = 0;
+ ll->ref_count_root = 0;
+
+ return 0;
+}
+
+int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks)
+{
+ int r;
+ dm_block_t i, nr_blocks, nr_indexes;
+ unsigned old_blocks, blocks;
+
+ nr_blocks = ll->nr_blocks + extra_blocks;
+ old_blocks = dm_sector_div_up(ll->nr_blocks, ll->entries_per_block);
+ blocks = dm_sector_div_up(nr_blocks, ll->entries_per_block);
+
+ nr_indexes = dm_sector_div_up(nr_blocks, ll->entries_per_block);
+ if (nr_indexes > ll->max_entries(ll)) {
+ DMERR("space map too large");
+ return -EINVAL;
+ }
+
+ for (i = old_blocks; i < blocks; i++) {
+ struct dm_block *b;
+ struct disk_index_entry idx;
+
+ r = dm_tm_new_block(ll->tm, &dm_sm_bitmap_validator, &b);
+ if (r < 0)
+ return r;
+ idx.blocknr = cpu_to_le64(dm_block_location(b));
+
+ r = dm_tm_unlock(ll->tm, b);
+ if (r < 0)
+ return r;
+
+ idx.nr_free = cpu_to_le32(ll->entries_per_block);
+ idx.none_free_before = 0;
+
+ r = ll->save_ie(ll, i, &idx);
+ if (r < 0)
+ return r;
+ }
+
+ ll->nr_blocks = nr_blocks;
+ return 0;
+}
+
+int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result)
+{
+ int r;
+ dm_block_t index = b;
+ struct disk_index_entry ie_disk;
+ struct dm_block *blk;
+
+ b = do_div(index, ll->entries_per_block);
+ r = ll->load_ie(ll, index, &ie_disk);
+ if (r < 0)
+ return r;
+
+ r = dm_tm_read_lock(ll->tm, le64_to_cpu(ie_disk.blocknr),
+ &dm_sm_bitmap_validator, &blk);
+ if (r < 0)
+ return r;
+
+ *result = sm_lookup_bitmap(dm_bitmap_data(blk), b);
+
+ return dm_tm_unlock(ll->tm, blk);
+}
+
+int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result)
+{
+ __le32 le_rc;
+ int r = sm_ll_lookup_bitmap(ll, b, result);
+
+ if (r)
+ return r;
+
+ if (*result != 3)
+ return r;
+
+ r = dm_btree_lookup(&ll->ref_count_info, ll->ref_count_root, &b, &le_rc);
+ if (r < 0)
+ return r;
+
+ *result = le32_to_cpu(le_rc);
+
+ return r;
+}
+
+int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
+ dm_block_t end, dm_block_t *result)
+{
+ int r;
+ struct disk_index_entry ie_disk;
+ dm_block_t i, index_begin = begin;
+ dm_block_t index_end = dm_sector_div_up(end, ll->entries_per_block);
+
+ /*
+ * FIXME: Use shifts
+ */
+ begin = do_div(index_begin, ll->entries_per_block);
+ end = do_div(end, ll->entries_per_block);
+
+ for (i = index_begin; i < index_end; i++, begin = 0) {
+ struct dm_block *blk;
+ unsigned position;
+ uint32_t bit_end;
+
+ r = ll->load_ie(ll, i, &ie_disk);
+ if (r < 0)
+ return r;
+
+ if (le32_to_cpu(ie_disk.nr_free) == 0)
+ continue;
+
+ r = dm_tm_read_lock(ll->tm, le64_to_cpu(ie_disk.blocknr),
+ &dm_sm_bitmap_validator, &blk);
+ if (r < 0)
+ return r;
+
+ bit_end = (i == index_end - 1) ? end : ll->entries_per_block;
+
+ r = sm_find_free(dm_bitmap_data(blk),
+ max_t(unsigned, begin, le32_to_cpu(ie_disk.none_free_before)),
+ bit_end, &position);
+ if (r == -ENOSPC) {
+ /*
+ * This might happen because we started searching
+ * part way through the bitmap.
+ */
+ dm_tm_unlock(ll->tm, blk);
+ continue;
+
+ } else if (r < 0) {
+ dm_tm_unlock(ll->tm, blk);
+ return r;
+ }
+
+ r = dm_tm_unlock(ll->tm, blk);
+ if (r < 0)
+ return r;
+
+ *result = i * ll->entries_per_block + (dm_block_t) position;
+ return 0;
+ }
+
+ return -ENOSPC;
+}
+
+int sm_ll_insert(struct ll_disk *ll, dm_block_t b,
+ uint32_t ref_count, enum allocation_event *ev)
+{
+ int r;
+ uint32_t bit, old;
+ struct dm_block *nb;
+ dm_block_t index = b;
+ struct disk_index_entry ie_disk;
+ void *bm_le;
+ int inc;
+
+ bit = do_div(index, ll->entries_per_block);
+ r = ll->load_ie(ll, index, &ie_disk);
+ if (r < 0)
+ return r;
+
+ r = dm_tm_shadow_block(ll->tm, le64_to_cpu(ie_disk.blocknr),
+ &dm_sm_bitmap_validator, &nb, &inc);
+ if (r < 0) {
+ DMERR("dm_tm_shadow_block() failed");
+ return r;
+ }
+ ie_disk.blocknr = cpu_to_le64(dm_block_location(nb));
+
+ bm_le = dm_bitmap_data(nb);
+ old = sm_lookup_bitmap(bm_le, bit);
+
+ if (ref_count <= 2) {
+ sm_set_bitmap(bm_le, bit, ref_count);
+
+ r = dm_tm_unlock(ll->tm, nb);
+ if (r < 0)
+ return r;
+
+#if 0
+ /* FIXME: dm_btree_remove doesn't handle this yet */
+ if (old > 2) {
+ r = dm_btree_remove(&ll->ref_count_info,
+ ll->ref_count_root,
+ &b, &ll->ref_count_root);
+ if (r)
+ return r;
+ }
+#endif
+
+ } else {
+ __le32 le_rc = cpu_to_le32(ref_count);
+
+ sm_set_bitmap(bm_le, bit, 3);
+ r = dm_tm_unlock(ll->tm, nb);
+ if (r < 0)
+ return r;
+
+ __dm_bless_for_disk(&le_rc);
+ r = dm_btree_insert(&ll->ref_count_info, ll->ref_count_root,
+ &b, &le_rc, &ll->ref_count_root);
+ if (r < 0) {
+ DMERR("ref count insert failed");
+ return r;
+ }
+ }
+
+ if (ref_count && !old) {
+ *ev = SM_ALLOC;
+ ll->nr_allocated++;
+ ie_disk.nr_free = cpu_to_le32(le32_to_cpu(ie_disk.nr_free) - 1);
+ if (le32_to_cpu(ie_disk.none_free_before) == bit)
+ ie_disk.none_free_before = cpu_to_le32(bit + 1);
+
+ } else if (old && !ref_count) {
+ *ev = SM_FREE;
+ ll->nr_allocated--;
+ ie_disk.nr_free = cpu_to_le32(le32_to_cpu(ie_disk.nr_free) + 1);
+ ie_disk.none_free_before = cpu_to_le32(min(le32_to_cpu(ie_disk.none_free_before), bit));
+ }
+
+ return ll->save_ie(ll, index, &ie_disk);
+}
+
+int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
+{
+ int r;
+ uint32_t rc;
+
+ r = sm_ll_lookup(ll, b, &rc);
+ if (r)
+ return r;
+
+ return sm_ll_insert(ll, b, rc + 1, ev);
+}
+
+int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
+{
+ int r;
+ uint32_t rc;
+
+ r = sm_ll_lookup(ll, b, &rc);
+ if (r)
+ return r;
+
+ if (!rc)
+ return -EINVAL;
+
+ return sm_ll_insert(ll, b, rc - 1, ev);
+}
+
+int sm_ll_commit(struct ll_disk *ll)
+{
+ return ll->commit(ll);
+}
+
+/*----------------------------------------------------------------*/
+
+static int metadata_ll_load_ie(struct ll_disk *ll, dm_block_t index,
+ struct disk_index_entry *ie)
+{
+ memcpy(ie, ll->mi_le.index + index, sizeof(*ie));
+ return 0;
+}
+
+static int metadata_ll_save_ie(struct ll_disk *ll, dm_block_t index,
+ struct disk_index_entry *ie)
+{
+ memcpy(ll->mi_le.index + index, ie, sizeof(*ie));
+ return 0;
+}
+
+static int metadata_ll_init_index(struct ll_disk *ll)
+{
+ int r;
+ struct dm_block *b;
+
+ r = dm_tm_new_block(ll->tm, &index_validator, &b);
+ if (r < 0)
+ return r;
+
+ memcpy(dm_block_data(b), &ll->mi_le, sizeof(ll->mi_le));
+ ll->bitmap_root = dm_block_location(b);
+
+ return dm_tm_unlock(ll->tm, b);
+}
+
+static int metadata_ll_open(struct ll_disk *ll)
+{
+ int r;
+ struct dm_block *block;
+
+ r = dm_tm_read_lock(ll->tm, ll->bitmap_root,
+ &index_validator, &block);
+ if (r)
+ return r;
+
+ memcpy(&ll->mi_le, dm_block_data(block), sizeof(ll->mi_le));
+ return dm_tm_unlock(ll->tm, block);
+}
+
+static dm_block_t metadata_ll_max_entries(struct ll_disk *ll)
+{
+ return MAX_METADATA_BITMAPS;
+}
+
+static int metadata_ll_commit(struct ll_disk *ll)
+{
+ int r, inc;
+ struct dm_block *b;
+
+ r = dm_tm_shadow_block(ll->tm, ll->bitmap_root, &index_validator, &b, &inc);
+ if (r)
+ return r;
+
+ memcpy(dm_block_data(b), &ll->mi_le, sizeof(ll->mi_le));
+ ll->bitmap_root = dm_block_location(b);
+
+ return dm_tm_unlock(ll->tm, b);
+}
+
+int sm_ll_new_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm)
+{
+ int r;
+
+ r = sm_ll_init(ll, tm);
+ if (r < 0)
+ return r;
+
+ ll->load_ie = metadata_ll_load_ie;
+ ll->save_ie = metadata_ll_save_ie;
+ ll->init_index = metadata_ll_init_index;
+ ll->open_index = metadata_ll_open;
+ ll->max_entries = metadata_ll_max_entries;
+ ll->commit = metadata_ll_commit;
+
+ ll->nr_blocks = 0;
+ ll->nr_allocated = 0;
+
+ r = ll->init_index(ll);
+ if (r < 0)
+ return r;
+
+ r = dm_btree_empty(&ll->ref_count_info, &ll->ref_count_root);
+ if (r < 0)
+ return r;
+
+ return 0;
+}
+
+int sm_ll_open_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm,
+ void *root_le, size_t len)
+{
+ int r;
+ struct disk_sm_root *smr = root_le;
+
+ if (len < sizeof(struct disk_sm_root)) {
+ DMERR("sm_metadata root too small");
+ return -ENOMEM;
+ }
+
+ r = sm_ll_init(ll, tm);
+ if (r < 0)
+ return r;
+
+ ll->load_ie = metadata_ll_load_ie;
+ ll->save_ie = metadata_ll_save_ie;
+ ll->init_index = metadata_ll_init_index;
+ ll->open_index = metadata_ll_open;
+ ll->max_entries = metadata_ll_max_entries;
+ ll->commit = metadata_ll_commit;
+
+ ll->nr_blocks = le64_to_cpu(smr->nr_blocks);
+ ll->nr_allocated = le64_to_cpu(smr->nr_allocated);
+ ll->bitmap_root = le64_to_cpu(smr->bitmap_root);
+ ll->ref_count_root = le64_to_cpu(smr->ref_count_root);
+
+ return ll->open_index(ll);
+}
+
+/*----------------------------------------------------------------*/
+
+static int disk_ll_load_ie(struct ll_disk *ll, dm_block_t index,
+ struct disk_index_entry *ie)
+{
+ return dm_btree_lookup(&ll->bitmap_info, ll->bitmap_root, &index, ie);
+}
+
+static int disk_ll_save_ie(struct ll_disk *ll, dm_block_t index,
+ struct disk_index_entry *ie)
+{
+ __dm_bless_for_disk(ie);
+ return dm_btree_insert(&ll->bitmap_info, ll->bitmap_root,
+ &index, ie, &ll->bitmap_root);
+}
+
+static int disk_ll_init_index(struct ll_disk *ll)
+{
+ return dm_btree_empty(&ll->bitmap_info, &ll->bitmap_root);
+}
+
+static int disk_ll_open(struct ll_disk *ll)
+{
+ /* nothing to do */
+ return 0;
+}
+
+static dm_block_t disk_ll_max_entries(struct ll_disk *ll)
+{
+ return -1ULL;
+}
+
+static int disk_ll_commit(struct ll_disk *ll)
+{
+ return 0;
+}
+
+int sm_ll_new_disk(struct ll_disk *ll, struct dm_transaction_manager *tm)
+{
+ int r;
+
+ r = sm_ll_init(ll, tm);
+ if (r < 0)
+ return r;
+
+ ll->load_ie = disk_ll_load_ie;
+ ll->save_ie = disk_ll_save_ie;
+ ll->init_index = disk_ll_init_index;
+ ll->open_index = disk_ll_open;
+ ll->max_entries = disk_ll_max_entries;
+ ll->commit = disk_ll_commit;
+
+ ll->nr_blocks = 0;
+ ll->nr_allocated = 0;
+
+ r = ll->init_index(ll);
+ if (r < 0)
+ return r;
+
+ r = dm_btree_empty(&ll->ref_count_info, &ll->ref_count_root);
+ if (r < 0)
+ return r;
+
+ return 0;
+}
+
+int sm_ll_open_disk(struct ll_disk *ll, struct dm_transaction_manager *tm,
+ void *root_le, size_t len)
+{
+ int r;
+ struct disk_sm_root *smr = root_le;
+
+ if (len < sizeof(struct disk_sm_root)) {
+ DMERR("sm_metadata root too small");
+ return -ENOMEM;
+ }
+
+ r = sm_ll_init(ll, tm);
+ if (r < 0)
+ return r;
+
+ ll->load_ie = disk_ll_load_ie;
+ ll->save_ie = disk_ll_save_ie;
+ ll->init_index = disk_ll_init_index;
+ ll->open_index = disk_ll_open;
+ ll->max_entries = disk_ll_max_entries;
+ ll->commit = disk_ll_commit;
+
+ ll->nr_blocks = le64_to_cpu(smr->nr_blocks);
+ ll->nr_allocated = le64_to_cpu(smr->nr_allocated);
+ ll->bitmap_root = le64_to_cpu(smr->bitmap_root);
+ ll->ref_count_root = le64_to_cpu(smr->ref_count_root);
+
+ return ll->open_index(ll);
+}
+
+/*----------------------------------------------------------------*/
diff --git a/drivers/md/persistent-data/dm-space-map-common.h b/drivers/md/persistent-data/dm-space-map-common.h
new file mode 100644
index 000000000000..8f220821a9a9
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-common.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_SPACE_MAP_COMMON_H
+#define DM_SPACE_MAP_COMMON_H
+
+#include "dm-btree.h"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Low level disk format
+ *
+ * Bitmap btree
+ * ------------
+ *
+ * Each value stored in the btree is an index_entry. This points to a
+ * block that is used as a bitmap.  Within the bitmap we hold 2 bits per
+ * entry, which represent UNUSED = 0, REF_COUNT = 1, REF_COUNT = 2 and
+ * REF_COUNT = many (3).
+ *
+ * Refcount btree
+ * --------------
+ *
+ * Any entry that has a ref count higher than 2 gets entered in the ref
+ * count tree.  The leaf values for this tree are 32-bit ref counts.
+ */
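+
+/*
+ * Example (sketch): a bitmap value of 3 is a sentinel.  sm_ll_lookup()
+ * reads the bitmap first and only consults the ref count btree when it
+ * sees 3:
+ *
+ *	r = sm_ll_lookup_bitmap(ll, b, &rc);
+ *	if (!r && rc == 3)
+ *		r = dm_btree_lookup(&ll->ref_count_info,
+ *				    ll->ref_count_root, &b, &le_rc);
+ */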
+
+struct disk_index_entry {
+ __le64 blocknr;
+ __le32 nr_free;
+ __le32 none_free_before;
+} __packed;
+
+
+#define MAX_METADATA_BITMAPS 255
+struct disk_metadata_index {
+ __le32 csum;
+ __le32 padding;
+ __le64 blocknr;
+
+ struct disk_index_entry index[MAX_METADATA_BITMAPS];
+} __packed;
+
+struct ll_disk;
+
+typedef int (*load_ie_fn)(struct ll_disk *ll, dm_block_t index, struct disk_index_entry *result);
+typedef int (*save_ie_fn)(struct ll_disk *ll, dm_block_t index, struct disk_index_entry *ie);
+typedef int (*init_index_fn)(struct ll_disk *ll);
+typedef int (*open_index_fn)(struct ll_disk *ll);
+typedef dm_block_t (*max_index_entries_fn)(struct ll_disk *ll);
+typedef int (*commit_fn)(struct ll_disk *ll);
+
+struct ll_disk {
+ struct dm_transaction_manager *tm;
+ struct dm_btree_info bitmap_info;
+ struct dm_btree_info ref_count_info;
+
+ uint32_t block_size;
+ uint32_t entries_per_block;
+ dm_block_t nr_blocks;
+ dm_block_t nr_allocated;
+
+ /*
+ * bitmap_root may be a btree root or a simple index.
+ */
+ dm_block_t bitmap_root;
+
+ dm_block_t ref_count_root;
+
+ struct disk_metadata_index mi_le;
+ load_ie_fn load_ie;
+ save_ie_fn save_ie;
+ init_index_fn init_index;
+ open_index_fn open_index;
+ max_index_entries_fn max_entries;
+ commit_fn commit;
+};
+
+struct disk_sm_root {
+ __le64 nr_blocks;
+ __le64 nr_allocated;
+ __le64 bitmap_root;
+ __le64 ref_count_root;
+} __packed;
+
+#define ENTRIES_PER_BYTE 4
+
+struct disk_bitmap_header {
+ __le32 csum;
+ __le32 not_used;
+ __le64 blocknr;
+} __packed;
+
+enum allocation_event {
+ SM_NONE,
+ SM_ALLOC,
+ SM_FREE,
+};
+
+/*----------------------------------------------------------------*/
+
+int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks);
+int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result);
+int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result);
+int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
+ dm_block_t end, dm_block_t *result);
+int sm_ll_insert(struct ll_disk *ll, dm_block_t b, uint32_t ref_count, enum allocation_event *ev);
+int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
+int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
+int sm_ll_commit(struct ll_disk *ll);
+
+int sm_ll_new_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm);
+int sm_ll_open_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm,
+ void *root_le, size_t len);
+
+int sm_ll_new_disk(struct ll_disk *ll, struct dm_transaction_manager *tm);
+int sm_ll_open_disk(struct ll_disk *ll, struct dm_transaction_manager *tm,
+ void *root_le, size_t len);
+
+/*----------------------------------------------------------------*/
+
+#endif /* DM_SPACE_MAP_COMMON_H */
diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c
new file mode 100644
index 000000000000..aeff7852cf79
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-disk.c
@@ -0,0 +1,335 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-space-map-checker.h"
+#include "dm-space-map-common.h"
+#include "dm-space-map-disk.h"
+#include "dm-space-map.h"
+#include "dm-transaction-manager.h"
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "space map disk"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Space map interface.
+ */
+struct sm_disk {
+ struct dm_space_map sm;
+
+ struct ll_disk ll;
+ struct ll_disk old_ll;
+
+ dm_block_t begin;
+ dm_block_t nr_allocated_this_transaction;
+};
+
+static void sm_disk_destroy(struct dm_space_map *sm)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ kfree(smd);
+}
+
+static int sm_disk_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ return sm_ll_extend(&smd->ll, extra_blocks);
+}
+
+static int sm_disk_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+ *count = smd->old_ll.nr_blocks;
+
+ return 0;
+}
+
+static int sm_disk_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+ *count = (smd->old_ll.nr_blocks - smd->old_ll.nr_allocated) - smd->nr_allocated_this_transaction;
+
+ return 0;
+}
+
+static int sm_disk_get_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t *result)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+ return sm_ll_lookup(&smd->ll, b, result);
+}
+
+static int sm_disk_count_is_more_than_one(struct dm_space_map *sm, dm_block_t b,
+ int *result)
+{
+ int r;
+ uint32_t count;
+
+ r = sm_disk_get_count(sm, b, &count);
+ if (r)
+ return r;
+
+	*result = count > 1;
+
+	return 0;
+}
+
+static int sm_disk_set_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t count)
+{
+ int r;
+ uint32_t old_count;
+ enum allocation_event ev;
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ r = sm_ll_insert(&smd->ll, b, count, &ev);
+ if (!r) {
+ switch (ev) {
+ case SM_NONE:
+ break;
+
+ case SM_ALLOC:
+ /*
+ * This _must_ be free in the prior transaction
+ * otherwise we've lost atomicity.
+ */
+ smd->nr_allocated_this_transaction++;
+ break;
+
+ case SM_FREE:
+ /*
+ * It's only free if it's also free in the last
+ * transaction.
+ */
+ r = sm_ll_lookup(&smd->old_ll, b, &old_count);
+ if (r)
+ return r;
+
+ if (!old_count)
+ smd->nr_allocated_this_transaction--;
+ break;
+ }
+ }
+
+ return r;
+}
+
+static int sm_disk_inc_block(struct dm_space_map *sm, dm_block_t b)
+{
+ int r;
+ enum allocation_event ev;
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ r = sm_ll_inc(&smd->ll, b, &ev);
+ if (!r && (ev == SM_ALLOC))
+ /*
+ * This _must_ be free in the prior transaction
+ * otherwise we've lost atomicity.
+ */
+ smd->nr_allocated_this_transaction++;
+
+ return r;
+}
+
+static int sm_disk_dec_block(struct dm_space_map *sm, dm_block_t b)
+{
+ int r;
+ uint32_t old_count;
+ enum allocation_event ev;
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ r = sm_ll_dec(&smd->ll, b, &ev);
+ if (!r && (ev == SM_FREE)) {
+ /*
+ * It's only free if it's also free in the last
+ * transaction.
+ */
+ r = sm_ll_lookup(&smd->old_ll, b, &old_count);
+ if (r)
+ return r;
+
+ if (!old_count)
+ smd->nr_allocated_this_transaction--;
+ }
+
+ return r;
+}
+
+static int sm_disk_new_block(struct dm_space_map *sm, dm_block_t *b)
+{
+ int r;
+ enum allocation_event ev;
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ /* FIXME: we should loop round a couple of times */
+ r = sm_ll_find_free_block(&smd->old_ll, smd->begin, smd->old_ll.nr_blocks, b);
+ if (r)
+ return r;
+
+ smd->begin = *b + 1;
+ r = sm_ll_inc(&smd->ll, *b, &ev);
+ if (!r) {
+ BUG_ON(ev != SM_ALLOC);
+ smd->nr_allocated_this_transaction++;
+ }
+
+ return r;
+}
+
+static int sm_disk_commit(struct dm_space_map *sm)
+{
+ int r;
+ dm_block_t nr_free;
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+
+ r = sm_disk_get_nr_free(sm, &nr_free);
+ if (r)
+ return r;
+
+ r = sm_ll_commit(&smd->ll);
+ if (r)
+ return r;
+
+ memcpy(&smd->old_ll, &smd->ll, sizeof(smd->old_ll));
+ smd->begin = 0;
+ smd->nr_allocated_this_transaction = 0;
+
+ r = sm_disk_get_nr_free(sm, &nr_free);
+ if (r)
+ return r;
+
+ return 0;
+}
+
+static int sm_disk_root_size(struct dm_space_map *sm, size_t *result)
+{
+ *result = sizeof(struct disk_sm_root);
+
+ return 0;
+}
+
+static int sm_disk_copy_root(struct dm_space_map *sm, void *where_le, size_t max)
+{
+ struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
+ struct disk_sm_root root_le;
+
+ root_le.nr_blocks = cpu_to_le64(smd->ll.nr_blocks);
+ root_le.nr_allocated = cpu_to_le64(smd->ll.nr_allocated);
+ root_le.bitmap_root = cpu_to_le64(smd->ll.bitmap_root);
+ root_le.ref_count_root = cpu_to_le64(smd->ll.ref_count_root);
+
+ if (max < sizeof(root_le))
+ return -ENOSPC;
+
+ memcpy(where_le, &root_le, sizeof(root_le));
+
+ return 0;
+}
+
+/*----------------------------------------------------------------*/
+
+static struct dm_space_map ops = {
+ .destroy = sm_disk_destroy,
+ .extend = sm_disk_extend,
+ .get_nr_blocks = sm_disk_get_nr_blocks,
+ .get_nr_free = sm_disk_get_nr_free,
+ .get_count = sm_disk_get_count,
+ .count_is_more_than_one = sm_disk_count_is_more_than_one,
+ .set_count = sm_disk_set_count,
+ .inc_block = sm_disk_inc_block,
+ .dec_block = sm_disk_dec_block,
+ .new_block = sm_disk_new_block,
+ .commit = sm_disk_commit,
+ .root_size = sm_disk_root_size,
+ .copy_root = sm_disk_copy_root
+};
+
+static struct dm_space_map *dm_sm_disk_create_real(
+ struct dm_transaction_manager *tm,
+ dm_block_t nr_blocks)
+{
+ int r;
+ struct sm_disk *smd;
+
+ smd = kmalloc(sizeof(*smd), GFP_KERNEL);
+ if (!smd)
+ return ERR_PTR(-ENOMEM);
+
+ smd->begin = 0;
+ smd->nr_allocated_this_transaction = 0;
+ memcpy(&smd->sm, &ops, sizeof(smd->sm));
+
+ r = sm_ll_new_disk(&smd->ll, tm);
+ if (r)
+ goto bad;
+
+ r = sm_ll_extend(&smd->ll, nr_blocks);
+ if (r)
+ goto bad;
+
+ r = sm_disk_commit(&smd->sm);
+ if (r)
+ goto bad;
+
+ return &smd->sm;
+
+bad:
+ kfree(smd);
+ return ERR_PTR(r);
+}
+
+struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
+ dm_block_t nr_blocks)
+{
+ struct dm_space_map *sm = dm_sm_disk_create_real(tm, nr_blocks);
+ return dm_sm_checker_create_fresh(sm);
+}
+EXPORT_SYMBOL_GPL(dm_sm_disk_create);
+
+static struct dm_space_map *dm_sm_disk_open_real(
+ struct dm_transaction_manager *tm,
+ void *root_le, size_t len)
+{
+ int r;
+ struct sm_disk *smd;
+
+ smd = kmalloc(sizeof(*smd), GFP_KERNEL);
+ if (!smd)
+ return ERR_PTR(-ENOMEM);
+
+ smd->begin = 0;
+ smd->nr_allocated_this_transaction = 0;
+ memcpy(&smd->sm, &ops, sizeof(smd->sm));
+
+ r = sm_ll_open_disk(&smd->ll, tm, root_le, len);
+ if (r)
+ goto bad;
+
+ r = sm_disk_commit(&smd->sm);
+ if (r)
+ goto bad;
+
+ return &smd->sm;
+
+bad:
+ kfree(smd);
+ return ERR_PTR(r);
+}
+
+struct dm_space_map *dm_sm_disk_open(struct dm_transaction_manager *tm,
+ void *root_le, size_t len)
+{
+ return dm_sm_checker_create(
+ dm_sm_disk_open_real(tm, root_le, len));
+}
+EXPORT_SYMBOL_GPL(dm_sm_disk_open);
+
+/*----------------------------------------------------------------*/
diff --git a/drivers/md/persistent-data/dm-space-map-disk.h b/drivers/md/persistent-data/dm-space-map-disk.h
new file mode 100644
index 000000000000..447a0a9a2d9f
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-disk.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _LINUX_DM_SPACE_MAP_DISK_H
+#define _LINUX_DM_SPACE_MAP_DISK_H
+
+#include "dm-block-manager.h"
+
+struct dm_space_map;
+struct dm_transaction_manager;
+
+/*
+ * Unfortunately we have to use two-phase construction due to the cycle
+ * between the tm and sm.
+ */
+struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
+ dm_block_t nr_blocks);
+
+struct dm_space_map *dm_sm_disk_open(struct dm_transaction_manager *tm,
+ void *root, size_t len);
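+
+/*
+ * Example (sketch): create a fresh map for a new pool, then reopen it
+ * later from the root saved with dm_sm_copy_root():
+ *
+ *	struct dm_space_map *sm = dm_sm_disk_create(tm, nr_blocks);
+ *	...
+ *	sm = dm_sm_disk_open(tm, root_le, len);
+ */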
+
+#endif /* _LINUX_DM_SPACE_MAP_DISK_H */
diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
new file mode 100644
index 000000000000..e89ae5e7a519
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-metadata.c
@@ -0,0 +1,596 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-space-map.h"
+#include "dm-space-map-common.h"
+#include "dm-space-map-metadata.h"
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "space map metadata"
+
+/*----------------------------------------------------------------*/
+
+/*
+ * Space map interface.
+ *
+ * The low level disk format is written using the standard btree and
+ * transaction manager. This means that performing disk operations may
+ * cause us to recurse into the space map in order to allocate new blocks.
+ * For this reason we have a pool of pre-allocated blocks large enough to
+ * service any metadata_ll_disk operation.
+ */
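+
+/*
+ * Example (sketch): the recursion bookkeeping below is used like this
+ * (see sm_metadata_set_count()):
+ *
+ *	in(smm);
+ *	r = sm_ll_insert(&smm->ll, b, count, &ev);
+ *	r2 = out(smm);
+ *	return combine_errors(r, r2);
+ *
+ * Nested inc/dec requests that arrive while recursion_count is raised
+ * are queued via add_bop() and replayed by out().
+ */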
+
+/*
+ * FIXME: we should calculate this based on the size of the device.
+ * Only the metadata space map needs this functionality.
+ */
+#define MAX_RECURSIVE_ALLOCATIONS 1024
+
+enum block_op_type {
+ BOP_INC,
+ BOP_DEC
+};
+
+struct block_op {
+ enum block_op_type type;
+ dm_block_t block;
+};
+
+struct sm_metadata {
+ struct dm_space_map sm;
+
+ struct ll_disk ll;
+ struct ll_disk old_ll;
+
+ dm_block_t begin;
+
+ unsigned recursion_count;
+ unsigned allocated_this_transaction;
+ unsigned nr_uncommitted;
+ struct block_op uncommitted[MAX_RECURSIVE_ALLOCATIONS];
+};
+
+static int add_bop(struct sm_metadata *smm, enum block_op_type type, dm_block_t b)
+{
+ struct block_op *op;
+
+ if (smm->nr_uncommitted == MAX_RECURSIVE_ALLOCATIONS) {
+ DMERR("too many recursive allocations");
+ return -ENOMEM;
+ }
+
+ op = smm->uncommitted + smm->nr_uncommitted++;
+ op->type = type;
+ op->block = b;
+
+ return 0;
+}
+
+static int commit_bop(struct sm_metadata *smm, struct block_op *op)
+{
+ int r = 0;
+ enum allocation_event ev;
+
+ switch (op->type) {
+ case BOP_INC:
+ r = sm_ll_inc(&smm->ll, op->block, &ev);
+ break;
+
+ case BOP_DEC:
+ r = sm_ll_dec(&smm->ll, op->block, &ev);
+ break;
+ }
+
+ return r;
+}
+
+static void in(struct sm_metadata *smm)
+{
+ smm->recursion_count++;
+}
+
+static int out(struct sm_metadata *smm)
+{
+ int r = 0;
+
+ /*
+ * If we're not recursing then very bad things are happening.
+ */
+ if (!smm->recursion_count) {
+ DMERR("lost track of recursion depth");
+ return -ENOMEM;
+ }
+
+ if (smm->recursion_count == 1 && smm->nr_uncommitted) {
+ while (smm->nr_uncommitted && !r) {
+ smm->nr_uncommitted--;
+ r = commit_bop(smm, smm->uncommitted +
+ smm->nr_uncommitted);
+ if (r)
+ break;
+ }
+ }
+
+ smm->recursion_count--;
+
+ return r;
+}
+
+/*
+ * When using the out() function above, we often want to combine an error
+ * code for the operation run in the recursive context with that from
+ * out().
+ */
+static int combine_errors(int r1, int r2)
+{
+ return r1 ? r1 : r2;
+}
+
+static int recursing(struct sm_metadata *smm)
+{
+ return smm->recursion_count;
+}
+
+static void sm_metadata_destroy(struct dm_space_map *sm)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ kfree(smm);
+}
+
+static int sm_metadata_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+{
+ DMERR("doesn't support extend");
+ return -EINVAL;
+}
+
+static int sm_metadata_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ *count = smm->ll.nr_blocks;
+
+ return 0;
+}
+
+static int sm_metadata_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ *count = smm->old_ll.nr_blocks - smm->old_ll.nr_allocated -
+ smm->allocated_this_transaction;
+
+ return 0;
+}
+
+static int sm_metadata_get_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t *result)
+{
+ int r, i;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+ unsigned adjustment = 0;
+
+ /*
+ * We may have some uncommitted adjustments to add. This list
+ * should always be really short.
+ */
+ for (i = 0; i < smm->nr_uncommitted; i++) {
+ struct block_op *op = smm->uncommitted + i;
+
+ if (op->block != b)
+ continue;
+
+ switch (op->type) {
+ case BOP_INC:
+ adjustment++;
+ break;
+
+ case BOP_DEC:
+ adjustment--;
+ break;
+ }
+ }
+
+ r = sm_ll_lookup(&smm->ll, b, result);
+ if (r)
+ return r;
+
+ *result += adjustment;
+
+ return 0;
+}
+
+static int sm_metadata_count_is_more_than_one(struct dm_space_map *sm,
+ dm_block_t b, int *result)
+{
+ int r, i, adjustment = 0;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+ uint32_t rc;
+
+ /*
+ * We may have some uncommitted adjustments to add. This list
+ * should always be really short.
+ */
+ for (i = 0; i < smm->nr_uncommitted; i++) {
+ struct block_op *op = smm->uncommitted + i;
+
+ if (op->block != b)
+ continue;
+
+ switch (op->type) {
+ case BOP_INC:
+ adjustment++;
+ break;
+
+ case BOP_DEC:
+ adjustment--;
+ break;
+ }
+ }
+
+ if (adjustment > 1) {
+ *result = 1;
+ return 0;
+ }
+
+ r = sm_ll_lookup_bitmap(&smm->ll, b, &rc);
+ if (r)
+ return r;
+
+ if (rc == 3)
+ /*
+ * We err on the side of caution, and always return true.
+ */
+ *result = 1;
+ else
+ *result = rc + adjustment > 1;
+
+ return 0;
+}
+
+static int sm_metadata_set_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t count)
+{
+ int r, r2;
+ enum allocation_event ev;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ if (smm->recursion_count) {
+ DMERR("cannot recurse set_count()");
+ return -EINVAL;
+ }
+
+ in(smm);
+ r = sm_ll_insert(&smm->ll, b, count, &ev);
+ r2 = out(smm);
+
+ return combine_errors(r, r2);
+}
+
+static int sm_metadata_inc_block(struct dm_space_map *sm, dm_block_t b)
+{
+ int r, r2 = 0;
+ enum allocation_event ev;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ if (recursing(smm))
+ r = add_bop(smm, BOP_INC, b);
+ else {
+ in(smm);
+ r = sm_ll_inc(&smm->ll, b, &ev);
+ r2 = out(smm);
+ }
+
+ return combine_errors(r, r2);
+}
+
+static int sm_metadata_dec_block(struct dm_space_map *sm, dm_block_t b)
+{
+ int r, r2 = 0;
+ enum allocation_event ev;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ if (recursing(smm))
+ r = add_bop(smm, BOP_DEC, b);
+ else {
+ in(smm);
+ r = sm_ll_dec(&smm->ll, b, &ev);
+ r2 = out(smm);
+ }
+
+ return combine_errors(r, r2);
+}
+
+static int sm_metadata_new_block_(struct dm_space_map *sm, dm_block_t *b)
+{
+ int r, r2 = 0;
+ enum allocation_event ev;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ r = sm_ll_find_free_block(&smm->old_ll, smm->begin, smm->old_ll.nr_blocks, b);
+ if (r)
+ return r;
+
+ smm->begin = *b + 1;
+
+ if (recursing(smm))
+ r = add_bop(smm, BOP_INC, *b);
+ else {
+ in(smm);
+ r = sm_ll_inc(&smm->ll, *b, &ev);
+ r2 = out(smm);
+ }
+
+ if (!r)
+ smm->allocated_this_transaction++;
+
+ return combine_errors(r, r2);
+}
+
+static int sm_metadata_new_block(struct dm_space_map *sm, dm_block_t *b)
+{
+ int r = sm_metadata_new_block_(sm, b);
+ if (r)
+ DMERR("out of metadata space");
+ return r;
+}
+
+static int sm_metadata_commit(struct dm_space_map *sm)
+{
+ int r;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ r = sm_ll_commit(&smm->ll);
+ if (r)
+ return r;
+
+ memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
+ smm->begin = 0;
+ smm->allocated_this_transaction = 0;
+
+ return 0;
+}
+
+static int sm_metadata_root_size(struct dm_space_map *sm, size_t *result)
+{
+ *result = sizeof(struct disk_sm_root);
+
+ return 0;
+}
+
+static int sm_metadata_copy_root(struct dm_space_map *sm, void *where_le, size_t max)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+ struct disk_sm_root root_le;
+
+ root_le.nr_blocks = cpu_to_le64(smm->ll.nr_blocks);
+ root_le.nr_allocated = cpu_to_le64(smm->ll.nr_allocated);
+ root_le.bitmap_root = cpu_to_le64(smm->ll.bitmap_root);
+ root_le.ref_count_root = cpu_to_le64(smm->ll.ref_count_root);
+
+ if (max < sizeof(root_le))
+ return -ENOSPC;
+
+ memcpy(where_le, &root_le, sizeof(root_le));
+
+ return 0;
+}
+
+static struct dm_space_map ops = {
+ .destroy = sm_metadata_destroy,
+ .extend = sm_metadata_extend,
+ .get_nr_blocks = sm_metadata_get_nr_blocks,
+ .get_nr_free = sm_metadata_get_nr_free,
+ .get_count = sm_metadata_get_count,
+ .count_is_more_than_one = sm_metadata_count_is_more_than_one,
+ .set_count = sm_metadata_set_count,
+ .inc_block = sm_metadata_inc_block,
+ .dec_block = sm_metadata_dec_block,
+ .new_block = sm_metadata_new_block,
+ .commit = sm_metadata_commit,
+ .root_size = sm_metadata_root_size,
+ .copy_root = sm_metadata_copy_root
+};
+
+/*----------------------------------------------------------------*/
+
+/*
+ * When a new space map is created that manages its own space, we use
+ * this tiny bootstrap allocator.
+ */
+static void sm_bootstrap_destroy(struct dm_space_map *sm)
+{
+}
+
+static int sm_bootstrap_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+{
+	DMERR("bootstrap doesn't support extend");
+
+ return -EINVAL;
+}
+
+static int sm_bootstrap_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+{
+	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+	*count = smm->ll.nr_blocks;
+
+	return 0;
+}
+
+static int sm_bootstrap_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ *count = smm->ll.nr_blocks - smm->begin;
+
+ return 0;
+}
+
+static int sm_bootstrap_get_count(struct dm_space_map *sm, dm_block_t b,
+				  uint32_t *result)
+{
+	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+	*result = (b < smm->begin) ? 1 : 0;
+
+	return 0;
+}
+
+static int sm_bootstrap_count_is_more_than_one(struct dm_space_map *sm,
+ dm_block_t b, int *result)
+{
+ *result = 0;
+
+ return 0;
+}
+
+static int sm_bootstrap_set_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t count)
+{
+	DMERR("bootstrap doesn't support set_count");
+
+ return -EINVAL;
+}
+
+static int sm_bootstrap_new_block(struct dm_space_map *sm, dm_block_t *b)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ /*
+ * We know the entire device is unused.
+ */
+ if (smm->begin == smm->ll.nr_blocks)
+ return -ENOSPC;
+
+ *b = smm->begin++;
+
+ return 0;
+}
+
+static int sm_bootstrap_inc_block(struct dm_space_map *sm, dm_block_t b)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ return add_bop(smm, BOP_INC, b);
+}
+
+static int sm_bootstrap_dec_block(struct dm_space_map *sm, dm_block_t b)
+{
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ return add_bop(smm, BOP_DEC, b);
+}
+
+static int sm_bootstrap_commit(struct dm_space_map *sm)
+{
+ return 0;
+}
+
+static int sm_bootstrap_root_size(struct dm_space_map *sm, size_t *result)
+{
+	DMERR("bootstrap doesn't support root_size");
+
+ return -EINVAL;
+}
+
+static int sm_bootstrap_copy_root(struct dm_space_map *sm, void *where,
+ size_t max)
+{
+	DMERR("bootstrap doesn't support copy_root");
+
+ return -EINVAL;
+}
+
+static struct dm_space_map bootstrap_ops = {
+ .destroy = sm_bootstrap_destroy,
+ .extend = sm_bootstrap_extend,
+ .get_nr_blocks = sm_bootstrap_get_nr_blocks,
+ .get_nr_free = sm_bootstrap_get_nr_free,
+ .get_count = sm_bootstrap_get_count,
+ .count_is_more_than_one = sm_bootstrap_count_is_more_than_one,
+ .set_count = sm_bootstrap_set_count,
+ .inc_block = sm_bootstrap_inc_block,
+ .dec_block = sm_bootstrap_dec_block,
+ .new_block = sm_bootstrap_new_block,
+ .commit = sm_bootstrap_commit,
+ .root_size = sm_bootstrap_root_size,
+ .copy_root = sm_bootstrap_copy_root
+};
+
+/*----------------------------------------------------------------*/
+
+struct dm_space_map *dm_sm_metadata_init(void)
+{
+ struct sm_metadata *smm;
+
+ smm = kmalloc(sizeof(*smm), GFP_KERNEL);
+ if (!smm)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(&smm->sm, &ops, sizeof(smm->sm));
+
+ return &smm->sm;
+}
+
+int dm_sm_metadata_create(struct dm_space_map *sm,
+ struct dm_transaction_manager *tm,
+ dm_block_t nr_blocks,
+ dm_block_t superblock)
+{
+ int r;
+ dm_block_t i;
+ enum allocation_event ev;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ smm->begin = superblock + 1;
+ smm->recursion_count = 0;
+ smm->allocated_this_transaction = 0;
+ smm->nr_uncommitted = 0;
+
+ memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));
+
+ r = sm_ll_new_metadata(&smm->ll, tm);
+ if (r)
+ return r;
+
+ r = sm_ll_extend(&smm->ll, nr_blocks);
+ if (r)
+ return r;
+
+ memcpy(&smm->sm, &ops, sizeof(smm->sm));
+
+ /*
+ * Now we need to update the newly created data structures with the
+ * allocated blocks that they were built from.
+ */
+ for (i = superblock; !r && i < smm->begin; i++)
+ r = sm_ll_inc(&smm->ll, i, &ev);
+
+ if (r)
+ return r;
+
+ return sm_metadata_commit(sm);
+}
+
+int dm_sm_metadata_open(struct dm_space_map *sm,
+ struct dm_transaction_manager *tm,
+ void *root_le, size_t len)
+{
+ int r;
+ struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
+
+ r = sm_ll_open_metadata(&smm->ll, tm, root_le, len);
+ if (r)
+ return r;
+
+ smm->begin = 0;
+ smm->recursion_count = 0;
+ smm->allocated_this_transaction = 0;
+ smm->nr_uncommitted = 0;
+
+ memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
+ return 0;
+}
diff --git a/drivers/md/persistent-data/dm-space-map-metadata.h b/drivers/md/persistent-data/dm-space-map-metadata.h
new file mode 100644
index 000000000000..39bba0801cf2
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map-metadata.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_SPACE_MAP_METADATA_H
+#define DM_SPACE_MAP_METADATA_H
+
+#include "dm-transaction-manager.h"
+
+/*
+ * Unfortunately we have to use two-phase construction due to the cycle
+ * between the tm and sm.
+ */
+struct dm_space_map *dm_sm_metadata_init(void);
+
+/*
+ * Create a fresh space map.
+ */
+int dm_sm_metadata_create(struct dm_space_map *sm,
+ struct dm_transaction_manager *tm,
+ dm_block_t nr_blocks,
+ dm_block_t superblock);
+
+/*
+ * Open from a previously-recorded root.
+ */
+int dm_sm_metadata_open(struct dm_space_map *sm,
+ struct dm_transaction_manager *tm,
+ void *root_le, size_t len);
+
+#endif /* DM_SPACE_MAP_METADATA_H */
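A minimal sketch of the two-phase construction described above; it
mirrors what dm_tm_create_internal() in dm-transaction-manager.c does
(dm_tm_create() there is static, so ordinary users go through
dm_tm_create_with_sm() instead; error handling is elided):

	struct dm_space_map *sm;
	struct dm_transaction_manager *tm;
	int r;

	sm = dm_sm_metadata_init();	/* phase 1: a bare sm shell */
	if (IS_ERR(sm))
		return PTR_ERR(sm);

	tm = dm_tm_create(bm, sm);	/* the tm is built around the sm */

	r = dm_sm_metadata_create(sm, tm, dm_bm_nr_blocks(bm),
				  superblock);	/* phase 2: the sm may
						 * now use its own tm */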
diff --git a/drivers/md/persistent-data/dm-space-map.h b/drivers/md/persistent-data/dm-space-map.h
new file mode 100644
index 000000000000..1cbfc6b1638a
--- /dev/null
+++ b/drivers/md/persistent-data/dm-space-map.h
@@ -0,0 +1,134 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _LINUX_DM_SPACE_MAP_H
+#define _LINUX_DM_SPACE_MAP_H
+
+#include "dm-block-manager.h"
+
+/*
+ * struct dm_space_map keeps a record of how many times each block in a device
+ * is referenced. It must be persisted on disk as part of the transaction.
+ */
+struct dm_space_map {
+ void (*destroy)(struct dm_space_map *sm);
+
+ /*
+ * You must commit before allocating the newly added space.
+ */
+ int (*extend)(struct dm_space_map *sm, dm_block_t extra_blocks);
+
+ /*
+ * Extensions do not appear in this count until after commit has
+ * been called.
+ */
+ int (*get_nr_blocks)(struct dm_space_map *sm, dm_block_t *count);
+
+	/*
+	 * Space maps must never allocate a block from the previous
+	 * transaction, in case we need to roll back. This complicates the
+	 * semantics of get_nr_free(): it should return the number of blocks
+	 * that are available for allocation _now_. For instance, you may
+	 * have blocks with a zero reference count that will not be
+	 * available for allocation until after the next commit.
+	 */
+ int (*get_nr_free)(struct dm_space_map *sm, dm_block_t *count);
+
+ int (*get_count)(struct dm_space_map *sm, dm_block_t b, uint32_t *result);
+ int (*count_is_more_than_one)(struct dm_space_map *sm, dm_block_t b,
+ int *result);
+ int (*set_count)(struct dm_space_map *sm, dm_block_t b, uint32_t count);
+
+ int (*commit)(struct dm_space_map *sm);
+
+ int (*inc_block)(struct dm_space_map *sm, dm_block_t b);
+ int (*dec_block)(struct dm_space_map *sm, dm_block_t b);
+
+ /*
+ * new_block will increment the returned block.
+ */
+ int (*new_block)(struct dm_space_map *sm, dm_block_t *b);
+
+ /*
+	 * The root contains all the information needed to reconstruct the
+	 * space map.
+ * Generally this info is small, so squirrel it away in a disk block
+ * along with other info.
+ */
+ int (*root_size)(struct dm_space_map *sm, size_t *result);
+ int (*copy_root)(struct dm_space_map *sm, void *copy_to_here_le, size_t len);
+};
+
+/*----------------------------------------------------------------*/
+
+static inline void dm_sm_destroy(struct dm_space_map *sm)
+{
+ sm->destroy(sm);
+}
+
+static inline int dm_sm_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+{
+ return sm->extend(sm, extra_blocks);
+}
+
+static inline int dm_sm_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
+{
+ return sm->get_nr_blocks(sm, count);
+}
+
+static inline int dm_sm_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
+{
+ return sm->get_nr_free(sm, count);
+}
+
+static inline int dm_sm_get_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t *result)
+{
+ return sm->get_count(sm, b, result);
+}
+
+static inline int dm_sm_count_is_more_than_one(struct dm_space_map *sm,
+ dm_block_t b, int *result)
+{
+ return sm->count_is_more_than_one(sm, b, result);
+}
+
+static inline int dm_sm_set_count(struct dm_space_map *sm, dm_block_t b,
+ uint32_t count)
+{
+ return sm->set_count(sm, b, count);
+}
+
+static inline int dm_sm_commit(struct dm_space_map *sm)
+{
+ return sm->commit(sm);
+}
+
+static inline int dm_sm_inc_block(struct dm_space_map *sm, dm_block_t b)
+{
+ return sm->inc_block(sm, b);
+}
+
+static inline int dm_sm_dec_block(struct dm_space_map *sm, dm_block_t b)
+{
+ return sm->dec_block(sm, b);
+}
+
+static inline int dm_sm_new_block(struct dm_space_map *sm, dm_block_t *b)
+{
+ return sm->new_block(sm, b);
+}
+
+static inline int dm_sm_root_size(struct dm_space_map *sm, size_t *result)
+{
+ return sm->root_size(sm, result);
+}
+
+static inline int dm_sm_copy_root(struct dm_space_map *sm, void *copy_to_here_le, size_t len)
+{
+ return sm->copy_root(sm, copy_to_here_le, len);
+}
+
+#endif /* _LINUX_DM_SPACE_MAP_H */
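A short sketch of the allocation discipline these comments imply, using
the inline wrappers above ("sm" is any space map instance; return codes
are ignored for brevity):

	dm_block_t b, nr_free;

	dm_sm_get_nr_free(sm, &nr_free); /* usable in _this_ transaction */

	dm_sm_new_block(sm, &b);	/* allocate; reference count is 1 */
	dm_sm_inc_block(sm, b);		/* a second user takes a reference */
	dm_sm_dec_block(sm, b);		/* ...and later drops it */

	dm_sm_commit(sm);	/* blocks whose count reached zero this
				 * transaction only become allocatable
				 * after this point */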
diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c
new file mode 100644
index 000000000000..728e89a3f978
--- /dev/null
+++ b/drivers/md/persistent-data/dm-transaction-manager.c
@@ -0,0 +1,400 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+#include "dm-transaction-manager.h"
+#include "dm-space-map.h"
+#include "dm-space-map-checker.h"
+#include "dm-space-map-disk.h"
+#include "dm-space-map-metadata.h"
+#include "dm-persistent-data-internal.h"
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/device-mapper.h>
+
+#define DM_MSG_PREFIX "transaction manager"
+
+/*----------------------------------------------------------------*/
+
+struct shadow_info {
+ struct hlist_node hlist;
+ dm_block_t where;
+};
+
+/*
+ * It would be nice if this scaled with the size of the transaction.
+ */
+#define HASH_SIZE 256
+#define HASH_MASK (HASH_SIZE - 1)
+
+struct dm_transaction_manager {
+ int is_clone;
+ struct dm_transaction_manager *real;
+
+ struct dm_block_manager *bm;
+ struct dm_space_map *sm;
+
+ spinlock_t lock;
+ struct hlist_head buckets[HASH_SIZE];
+};
+
+/*----------------------------------------------------------------*/
+
+static int is_shadow(struct dm_transaction_manager *tm, dm_block_t b)
+{
+ int r = 0;
+ unsigned bucket = dm_hash_block(b, HASH_MASK);
+ struct shadow_info *si;
+ struct hlist_node *n;
+
+ spin_lock(&tm->lock);
+ hlist_for_each_entry(si, n, tm->buckets + bucket, hlist)
+ if (si->where == b) {
+ r = 1;
+ break;
+ }
+ spin_unlock(&tm->lock);
+
+ return r;
+}
+
+/*
+ * This can silently fail if there's no memory. We're ok with this since
+ * creating redundant shadows causes no harm.
+ */
+static void insert_shadow(struct dm_transaction_manager *tm, dm_block_t b)
+{
+ unsigned bucket;
+ struct shadow_info *si;
+
+ si = kmalloc(sizeof(*si), GFP_NOIO);
+ if (si) {
+ si->where = b;
+ bucket = dm_hash_block(b, HASH_MASK);
+ spin_lock(&tm->lock);
+ hlist_add_head(&si->hlist, tm->buckets + bucket);
+ spin_unlock(&tm->lock);
+ }
+}
+
+static void wipe_shadow_table(struct dm_transaction_manager *tm)
+{
+ struct shadow_info *si;
+ struct hlist_node *n, *tmp;
+ struct hlist_head *bucket;
+ int i;
+
+ spin_lock(&tm->lock);
+ for (i = 0; i < HASH_SIZE; i++) {
+ bucket = tm->buckets + i;
+ hlist_for_each_entry_safe(si, n, tmp, bucket, hlist)
+ kfree(si);
+
+ INIT_HLIST_HEAD(bucket);
+ }
+
+ spin_unlock(&tm->lock);
+}
+
+/*----------------------------------------------------------------*/
+
+static struct dm_transaction_manager *dm_tm_create(struct dm_block_manager *bm,
+ struct dm_space_map *sm)
+{
+ int i;
+ struct dm_transaction_manager *tm;
+
+ tm = kmalloc(sizeof(*tm), GFP_KERNEL);
+ if (!tm)
+ return ERR_PTR(-ENOMEM);
+
+ tm->is_clone = 0;
+ tm->real = NULL;
+ tm->bm = bm;
+ tm->sm = sm;
+
+ spin_lock_init(&tm->lock);
+ for (i = 0; i < HASH_SIZE; i++)
+ INIT_HLIST_HEAD(tm->buckets + i);
+
+ return tm;
+}
+
+struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transaction_manager *real)
+{
+ struct dm_transaction_manager *tm;
+
+ tm = kmalloc(sizeof(*tm), GFP_KERNEL);
+ if (tm) {
+ tm->is_clone = 1;
+ tm->real = real;
+ }
+
+ return tm;
+}
+EXPORT_SYMBOL_GPL(dm_tm_create_non_blocking_clone);
+
+void dm_tm_destroy(struct dm_transaction_manager *tm)
+{
+ kfree(tm);
+}
+EXPORT_SYMBOL_GPL(dm_tm_destroy);
+
+int dm_tm_pre_commit(struct dm_transaction_manager *tm)
+{
+ int r;
+
+ if (tm->is_clone)
+ return -EWOULDBLOCK;
+
+ r = dm_sm_commit(tm->sm);
+ if (r < 0)
+ return r;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dm_tm_pre_commit);
+
+int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root)
+{
+ if (tm->is_clone)
+ return -EWOULDBLOCK;
+
+ wipe_shadow_table(tm);
+
+ return dm_bm_flush_and_unlock(tm->bm, root);
+}
+EXPORT_SYMBOL_GPL(dm_tm_commit);
+
+int dm_tm_new_block(struct dm_transaction_manager *tm,
+ struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ int r;
+ dm_block_t new_block;
+
+ if (tm->is_clone)
+ return -EWOULDBLOCK;
+
+ r = dm_sm_new_block(tm->sm, &new_block);
+ if (r < 0)
+ return r;
+
+ r = dm_bm_write_lock_zero(tm->bm, new_block, v, result);
+ if (r < 0) {
+ dm_sm_dec_block(tm->sm, new_block);
+ return r;
+ }
+
+ /*
+ * New blocks count as shadows in that they don't need to be
+ * shadowed again.
+ */
+ insert_shadow(tm, new_block);
+
+ return 0;
+}
+
+static int __shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
+ struct dm_block_validator *v,
+ struct dm_block **result)
+{
+ int r;
+ dm_block_t new;
+ struct dm_block *orig_block;
+
+ r = dm_sm_new_block(tm->sm, &new);
+ if (r < 0)
+ return r;
+
+ r = dm_sm_dec_block(tm->sm, orig);
+ if (r < 0)
+ return r;
+
+ r = dm_bm_read_lock(tm->bm, orig, v, &orig_block);
+ if (r < 0)
+ return r;
+
+ r = dm_bm_unlock_move(orig_block, new);
+ if (r < 0) {
+ dm_bm_unlock(orig_block);
+ return r;
+ }
+
+ return dm_bm_write_lock(tm->bm, new, v, result);
+}
+
+int dm_tm_shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
+ struct dm_block_validator *v, struct dm_block **result,
+ int *inc_children)
+{
+ int r;
+
+ if (tm->is_clone)
+ return -EWOULDBLOCK;
+
+ r = dm_sm_count_is_more_than_one(tm->sm, orig, inc_children);
+ if (r < 0)
+ return r;
+
+ if (is_shadow(tm, orig) && !*inc_children)
+ return dm_bm_write_lock(tm->bm, orig, v, result);
+
+ r = __shadow_block(tm, orig, v, result);
+ if (r < 0)
+ return r;
+ insert_shadow(tm, dm_block_location(*result));
+
+ return r;
+}
+
+int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **blk)
+{
+ if (tm->is_clone)
+ return dm_bm_read_try_lock(tm->real->bm, b, v, blk);
+
+ return dm_bm_read_lock(tm->bm, b, v, blk);
+}
+
+int dm_tm_unlock(struct dm_transaction_manager *tm, struct dm_block *b)
+{
+ return dm_bm_unlock(b);
+}
+EXPORT_SYMBOL_GPL(dm_tm_unlock);
+
+void dm_tm_inc(struct dm_transaction_manager *tm, dm_block_t b)
+{
+ /*
+ * The non-blocking clone doesn't support this.
+ */
+ BUG_ON(tm->is_clone);
+
+ dm_sm_inc_block(tm->sm, b);
+}
+EXPORT_SYMBOL_GPL(dm_tm_inc);
+
+void dm_tm_dec(struct dm_transaction_manager *tm, dm_block_t b)
+{
+ /*
+ * The non-blocking clone doesn't support this.
+ */
+ BUG_ON(tm->is_clone);
+
+ dm_sm_dec_block(tm->sm, b);
+}
+EXPORT_SYMBOL_GPL(dm_tm_dec);
+
+int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b,
+ uint32_t *result)
+{
+ if (tm->is_clone)
+ return -EWOULDBLOCK;
+
+ return dm_sm_get_count(tm->sm, b, result);
+}
+
+struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm)
+{
+ return tm->bm;
+}
+
+/*----------------------------------------------------------------*/
+
+static int dm_tm_create_internal(struct dm_block_manager *bm,
+ dm_block_t sb_location,
+ struct dm_block_validator *sb_validator,
+ size_t root_offset, size_t root_max_len,
+ struct dm_transaction_manager **tm,
+ struct dm_space_map **sm,
+ struct dm_block **sblock,
+ int create)
+{
+ int r;
+ struct dm_space_map *inner;
+
+ inner = dm_sm_metadata_init();
+ if (IS_ERR(inner))
+ return PTR_ERR(inner);
+
+ *tm = dm_tm_create(bm, inner);
+ if (IS_ERR(*tm)) {
+ dm_sm_destroy(inner);
+ return PTR_ERR(*tm);
+ }
+
+ if (create) {
+ r = dm_bm_write_lock_zero(dm_tm_get_bm(*tm), sb_location,
+ sb_validator, sblock);
+ if (r < 0) {
+ DMERR("couldn't lock superblock");
+ goto bad1;
+ }
+
+ r = dm_sm_metadata_create(inner, *tm, dm_bm_nr_blocks(bm),
+ sb_location);
+ if (r) {
+ DMERR("couldn't create metadata space map");
+ goto bad2;
+ }
+
+ *sm = dm_sm_checker_create(inner);
+ if (!*sm)
+ goto bad2;
+
+ } else {
+ r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location,
+ sb_validator, sblock);
+ if (r < 0) {
+ DMERR("couldn't lock superblock");
+ goto bad1;
+ }
+
+ r = dm_sm_metadata_open(inner, *tm,
+ dm_block_data(*sblock) + root_offset,
+ root_max_len);
+ if (r) {
+ DMERR("couldn't open metadata space map");
+ goto bad2;
+ }
+
+ *sm = dm_sm_checker_create(inner);
+ if (!*sm)
+ goto bad2;
+ }
+
+ return 0;
+
+bad2:
+ dm_tm_unlock(*tm, *sblock);
+bad1:
+ dm_tm_destroy(*tm);
+ dm_sm_destroy(inner);
+ return r;
+}
+
+int dm_tm_create_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
+ struct dm_block_validator *sb_validator,
+ struct dm_transaction_manager **tm,
+ struct dm_space_map **sm, struct dm_block **sblock)
+{
+ return dm_tm_create_internal(bm, sb_location, sb_validator,
+ 0, 0, tm, sm, sblock, 1);
+}
+EXPORT_SYMBOL_GPL(dm_tm_create_with_sm);
+
+int dm_tm_open_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
+ struct dm_block_validator *sb_validator,
+ size_t root_offset, size_t root_max_len,
+ struct dm_transaction_manager **tm,
+ struct dm_space_map **sm, struct dm_block **sblock)
+{
+ return dm_tm_create_internal(bm, sb_location, sb_validator, root_offset,
+ root_max_len, tm, sm, sblock, 0);
+}
+EXPORT_SYMBOL_GPL(dm_tm_open_with_sm);
+
+/*----------------------------------------------------------------*/
diff --git a/drivers/md/persistent-data/dm-transaction-manager.h b/drivers/md/persistent-data/dm-transaction-manager.h
new file mode 100644
index 000000000000..6da784871db4
--- /dev/null
+++ b/drivers/md/persistent-data/dm-transaction-manager.h
@@ -0,0 +1,130 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _LINUX_DM_TRANSACTION_MANAGER_H
+#define _LINUX_DM_TRANSACTION_MANAGER_H
+
+#include "dm-block-manager.h"
+
+struct dm_transaction_manager;
+struct dm_space_map;
+
+/*----------------------------------------------------------------*/
+
+/*
+ * This manages the scope of a transaction. It also enforces immutability
+ * of the on-disk data structures by limiting access to writeable blocks.
+ *
+ * Clients should not fiddle with the block manager directly.
+ */
+
+void dm_tm_destroy(struct dm_transaction_manager *tm);
+
+/*
+ * The non-blocking version of a transaction manager is intended for use in
+ * fast path code that needs to do lookups, e.g. a dm mapping function.
+ * You create the non-blocking variant from a normal tm. The interface is
+ * the same, except that most functions will just return -EWOULDBLOCK.
+ * Methods that return void yet may block, viz. dm_tm_inc() and dm_tm_dec(),
+ * must not be called on a clone. Call dm_tm_destroy() as you would with a
+ * normal tm when you've finished with it. You may not destroy the original
+ * before its clones.
+ */
+struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transaction_manager *real);
+
+/*
+ * We use a 2-phase commit here.
+ *
+ * i) In the first phase the block manager is told to start flushing, and
+ * the changes to the space map are written to disk. You should interrogate
+ * your particular space map to get detail of its root node etc. to be
+ * included in your superblock.
+ *
+ * ii) @root will be committed last. You shouldn't use more than the
+ * first 512 bytes of @root if you wish the transaction to survive a power
+ * failure. You *must* hold a write lock on @root for both stages (i)
+ * and (ii). The commit will drop the write lock. (A usage sketch follows
+ * this header.)
+ */
+int dm_tm_pre_commit(struct dm_transaction_manager *tm);
+int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root);
+
+/*
+ * These methods are the only way to get hold of a writeable block.
+ */
+
+/*
+ * dm_tm_new_block() is pretty self-explanatory. The new block is zeroed
+ * before it is handed out (via dm_bm_write_lock_zero()), so callers that
+ * don't write to the whole of @data can't leak stale data to disk, and
+ * the block is returned with the write lock held.
+ */
+int dm_tm_new_block(struct dm_transaction_manager *tm,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+/*
+ * dm_tm_shadow_block() allocates a new block and copies the data from @orig
+ * to it. It then decrements the reference count on the original block. Use
+ * this to update the contents of a block in a data structure; don't
+ * confuse this with a clone - you shouldn't access @orig after this
+ * operation. Because the tm knows the scope of the transaction it can
+ * optimise requests for a shadow of a shadow into a no-op. Don't forget
+ * to unlock when you've finished with the shadow.
+ *
+ * The @inc_children flag is used to tell the caller whether it needs to
+ * adjust reference counts for children. (Data in the block may refer to
+ * other blocks.)
+ *
+ * Shadowing implicitly drops a reference on @orig so you must not have
+ * it locked when you call this.
+ */
+int dm_tm_shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
+ struct dm_block_validator *v,
+ struct dm_block **result, int *inc_children);
+
+/*
+ * Read access. You can lock any block you want. If there's a write lock
+ * on it outstanding then it'll block.
+ */
+int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b,
+ struct dm_block_validator *v,
+ struct dm_block **result);
+
+int dm_tm_unlock(struct dm_transaction_manager *tm, struct dm_block *b);
+
+/*
+ * Functions for altering the reference count of a block directly.
+ */
+void dm_tm_inc(struct dm_transaction_manager *tm, dm_block_t b);
+
+void dm_tm_dec(struct dm_transaction_manager *tm, dm_block_t b);
+
+int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b,
+ uint32_t *result);
+
+struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm);
+
+/*
+ * A little utility that ties the knot by producing a transaction manager
+ * that has a space map managed by the transaction manager...
+ *
+ * Returns a tm that has an open transaction to write the new space map.
+ * The caller should store the new sm root and commit.
+ */
+int dm_tm_create_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
+ struct dm_block_validator *sb_validator,
+ struct dm_transaction_manager **tm,
+ struct dm_space_map **sm, struct dm_block **sblock);
+
+int dm_tm_open_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
+ struct dm_block_validator *sb_validator,
+ size_t root_offset, size_t root_max_len,
+ struct dm_transaction_manager **tm,
+ struct dm_space_map **sm, struct dm_block **sblock);
+
+#endif /* _LINUX_DM_TRANSACTION_MANAGER_H */
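Tying the above together, a hedged sketch of a metadata device's
lifecycle (SUPERBLOCK, ROOT_OFFSET, ROOT_MAX and the "my_"-prefixed
names are hypothetical; error handling is elided):

	struct dm_transaction_manager *tm;
	struct dm_space_map *sm;
	struct dm_block *sb, *shadow;
	int r, inc;

	/* fresh device: builds the tm + metadata sm, sb is write-locked */
	r = dm_tm_create_with_sm(bm, SUPERBLOCK, &my_sb_validator,
				 &tm, &sm, &sb);

	/* update a block by shadowing it rather than writing in place */
	r = dm_tm_shadow_block(tm, my_node, &my_node_validator,
			       &shadow, &inc);
	if (inc)
		my_inc_children(tm, shadow);	/* hypothetical helper: the
						 * block was shared, so take
						 * refs on its children */
	/* ... modify dm_block_data(shadow) ... */
	dm_tm_unlock(tm, shadow);

	/* two-phase commit: flush the sm, record its root, then commit */
	r = dm_tm_pre_commit(tm);
	r = dm_sm_copy_root(sm, dm_block_data(sb) + ROOT_OFFSET, ROOT_MAX);
	r = dm_tm_commit(tm, sb);	/* drops the write lock on sb */

	/* a later activation reopens everything from the stored root */
	r = dm_tm_open_with_sm(bm, SUPERBLOCK, &my_sb_validator,
			       ROOT_OFFSET, ROOT_MAX, &tm, &sm, &sb);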
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 99e3e50b5c57..98f34b886f95 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -10,6 +10,7 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
+#include <linux/ratelimit.h>
struct dm_dev;
struct dm_target;
@@ -127,10 +128,6 @@ void dm_put_device(struct dm_target *ti, struct dm_dev *d);
* Information about a target type
*/
-/*
- * Target features
- */
-
struct target_type {
uint64_t features;
const char *name;
@@ -159,6 +156,30 @@ struct target_type {
struct list_head list;
};
+/*
+ * Target features
+ */
+
+/*
+ * Any table that contains an instance of this target must have only one.
+ */
+#define DM_TARGET_SINGLETON 0x00000001
+#define dm_target_needs_singleton(type) ((type)->features & DM_TARGET_SINGLETON)
+
+/*
+ * Indicates that a target does not support read-only devices.
+ */
+#define DM_TARGET_ALWAYS_WRITEABLE 0x00000002
+#define dm_target_always_writeable(type) \
+ ((type)->features & DM_TARGET_ALWAYS_WRITEABLE)
+
+/*
+ * Any device that contains a table with an instance of this target may never
+ * have tables containing any different target type.
+ */
+#define DM_TARGET_IMMUTABLE 0x00000004
+#define dm_target_is_immutable(type) ((type)->features & DM_TARGET_IMMUTABLE)
+
struct dm_target {
struct dm_table *table;
struct target_type *type;
@@ -375,6 +396,14 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
*---------------------------------------------------------------*/
#define DM_NAME "device-mapper"
+#ifdef CONFIG_PRINTK
+extern struct ratelimit_state dm_ratelimit_state;
+
+#define dm_ratelimit() __ratelimit(&dm_ratelimit_state)
+#else
+#define dm_ratelimit() 0
+#endif
+
#define DMCRIT(f, arg...) \
printk(KERN_CRIT DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
@@ -382,7 +411,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMERR_LIMIT(f, arg...) \
do { \
- if (printk_ratelimit()) \
+ if (dm_ratelimit()) \
printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " \
f "\n", ## arg); \
} while (0)
@@ -391,7 +420,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMWARN_LIMIT(f, arg...) \
do { \
- if (printk_ratelimit()) \
+ if (dm_ratelimit()) \
printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " \
f "\n", ## arg); \
} while (0)
@@ -400,7 +429,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMINFO_LIMIT(f, arg...) \
do { \
- if (printk_ratelimit()) \
+ if (dm_ratelimit()) \
printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f \
"\n", ## arg); \
} while (0)
@@ -410,7 +439,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX " DEBUG: " f "\n", ## arg)
# define DMDEBUG_LIMIT(f, arg...) \
do { \
- if (printk_ratelimit()) \
+ if (dm_ratelimit()) \
printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX ": " f \
"\n", ## arg); \
} while (0)
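For illustration only (a hypothetical target, not part of this patch),
a target type opts into the new feature flags through its features
field, and dm core then enforces them at table load via the
dm_target_needs_singleton()/dm_target_is_immutable() tests above:

	static struct target_type my_target = {
		.name	  = "my-target",
		.version  = {1, 0, 0},
		.features = DM_TARGET_SINGLETON | DM_TARGET_IMMUTABLE,
		.module   = THIS_MODULE,
		.ctr	  = my_ctr,
		.dtr	  = my_dtr,
		.map	  = my_map,
	};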
diff --git a/include/linux/dm-ioctl.h b/include/linux/dm-ioctl.h
index 0cb8eff76bd6..75fd5573516e 100644
--- a/include/linux/dm-ioctl.h
+++ b/include/linux/dm-ioctl.h
@@ -267,9 +267,9 @@ enum {
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
-#define DM_VERSION_MINOR 21
+#define DM_VERSION_MINOR 22
#define DM_VERSION_PATCHLEVEL 0
-#define DM_VERSION_EXTRA "-ioctl (2011-07-06)"
+#define DM_VERSION_EXTRA "-ioctl (2011-10-19)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
diff --git a/include/linux/dm-kcopyd.h b/include/linux/dm-kcopyd.h
index 5e54458e920f..47d9d376e4e7 100644
--- a/include/linux/dm-kcopyd.h
+++ b/include/linux/dm-kcopyd.h
@@ -57,5 +57,9 @@ void *dm_kcopyd_prepare_callback(struct dm_kcopyd_client *kc,
dm_kcopyd_notify_fn fn, void *context);
void dm_kcopyd_do_callback(void *job, int read_err, unsigned long write_err);
+int dm_kcopyd_zero(struct dm_kcopyd_client *kc,
+ unsigned num_dests, struct dm_io_region *dests,
+ unsigned flags, dm_kcopyd_notify_fn fn, void *context);
+
#endif /* __KERNEL__ */
#endif /* _LINUX_DM_KCOPYD_H */
diff --git a/include/linux/dm-log-userspace.h b/include/linux/dm-log-userspace.h
index eeace7d3ff15..0678c2adc421 100644
--- a/include/linux/dm-log-userspace.h
+++ b/include/linux/dm-log-userspace.h
@@ -52,15 +52,20 @@
* Payload-to-userspace:
* A single string containing all the argv arguments separated by ' 's
* Payload-to-kernel:
- * None. ('data_size' in the dm_ulog_request struct should be 0.)
+ * A NUL-terminated string that is the name of the device that is used
+ * as the backing store for the log data. 'dm_get_device' will be called
+ * on this device. ('dm_put_device' will be called on this device
+ * automatically after calling DM_ULOG_DTR.) If there is no device needed
+ * for log data, 'data_size' in the dm_ulog_request struct should be 0.
*
* The UUID contained in the dm_ulog_request structure is the reference that
* will be used by all request types to a specific log. The constructor must
- * record this assotiation with instance created.
+ * record this association with the instance created.
*
* When the request has been processed, user-space must return the
- * dm_ulog_request to the kernel - setting the 'error' field and
- * 'data_size' appropriately.
+ * dm_ulog_request to the kernel - setting the 'error' field, filling the
+ * data field with the log device if necessary, and setting 'data_size'
+ * appropriately.
*/
#define DM_ULOG_CTR 1
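To illustrate the version 2 contract (a hypothetical userspace
fragment; connector transport and argv parsing are elided), the
constructor hands the log device back in the request payload:

	static void my_handle_ctr(struct dm_ulog_request *rq)
	{
		const char *dev = "/dev/my_log_dev";	/* hypothetical */

		/* ... parse the argv string in rq->data, set up the log ... */

		strcpy(rq->data, dev);		/* NUL-terminated name */
		rq->data_size = strlen(dev) + 1;
		rq->error = 0;	/* the kernel will call dm_get_device() */
	}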
@@ -377,8 +382,11 @@
* dm_ulog_request or a change in the way requests are
* issued/handled. Changes are outlined here:
* version 1: Initial implementation
+ * version 2: DM_ULOG_CTR allowed to return a string containing a
+ * device name that is to be registered with DM via
+ * 'dm_get_device'.
*/
-#define DM_ULOG_REQUEST_VERSION 1
+#define DM_ULOG_REQUEST_VERSION 2
struct dm_ulog_request {
/*