<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/linux.git/net/ceph/crypto.c, branch linux-4.13.y</title>
<subtitle>Linux kernel stable tree (mirror)</subtitle>
<id>https://git.radix-linux.su/kernel/linux.git/atom?h=linux-4.13.y</id>
<link rel='self' href='https://git.radix-linux.su/kernel/linux.git/atom?h=linux-4.13.y'/>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/'/>
<updated>2017-03-02T07:42:33+00:00</updated>
<entry>
<title>sched/headers: Prepare to move the memalloc_noio_*() APIs to &lt;linux/sched/mm.h&gt;</title>
<updated>2017-03-02T07:42:33+00:00</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2017-02-02T19:43:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=5b3cc15aff243cb518cbeed8b1a220cbfd023d9c'/>
<id>urn:sha1:5b3cc15aff243cb518cbeed8b1a220cbfd023d9c</id>
<content type='text'>
Update the .c files that depend on these APIs.

Acked-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>libceph: include linux/sched.h into crypto.c directly</title>
<updated>2017-02-20T11:16:06+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2017-01-16T13:35:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=7fea24c6d4a553c59937ae4fef95c730a88125cb'/>
<id>urn:sha1:7fea24c6d4a553c59937ae4fef95c730a88125cb</id>
<content type='text'>
Currently crypto.c gets linux/sched.h indirectly through linux/slab.h
from linux/kasan.h.  Include it directly for memalloc_noio_*() inlines.

Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
</content>
</entry>
<entry>
<title>libceph: make sure ceph_aes_crypt() IV is aligned</title>
<updated>2017-01-18T16:58:45+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2017-01-16T18:16:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=124f930b8cbc4ac11236e6eb1c5f008318864588'/>
<id>urn:sha1:124f930b8cbc4ac11236e6eb1c5f008318864588</id>
<content type='text'>
... otherwise the crypto stack will align it for us with a GFP_ATOMIC
allocation and a memcpy() -- see skcipher_walk_first().
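The alignment requirement can be satisfied at declaration time; a minimal
sketch in kernel style (illustrative only -- the exact alignment attribute
used by the real commit may differ, and aes_iv stands for libceph's fixed
AES IV):

```c
/* Sketch: declaring the on-stack IV with an explicit alignment
 * attribute satisfies the cipher's alignmask up front, so
 * skcipher_walk_first() does not have to fall back to a GFP_ATOMIC
 * allocation plus memcpy() to produce an aligned copy.
 */
char iv[AES_BLOCK_SIZE] __aligned(AES_BLOCK_SIZE);

memcpy(iv, aes_iv, AES_BLOCK_SIZE);  /* aes_iv: fixed libceph IV (assumed name) */
```

Note that the IV itself may live on the stack even with CONFIG_VMAP_STACK,
since it is passed by pointer rather than through a scatterlist; only its
alignment matters here.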

Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
</content>
</entry>
<entry>
<title>libceph: stop allocating a new cipher on every crypto request</title>
<updated>2016-12-12T22:09:20+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2016-12-02T15:35:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=7af3ea189a9a13f090de51c97f676215dabc1205'/>
<id>urn:sha1:7af3ea189a9a13f090de51c97f676215dabc1205</id>
<content type='text'>
This is useless and, more importantly, not allowed on the writeback
path, because crypto_alloc_skcipher() allocates memory with GFP_KERNEL,
which can recurse back into the filesystem:

    kworker/9:3     D ffff92303f318180     0 20732      2 0x00000080
    Workqueue: ceph-msgr ceph_con_workfn [libceph]
     ffff923035dd4480 ffff923038f8a0c0 0000000000000001 000000009eb27318
     ffff92269eb28000 ffff92269eb27338 ffff923036b145ac ffff923035dd4480
     00000000ffffffff ffff923036b145b0 ffffffff951eb4e1 ffff923036b145a8
    Call Trace:
     [&lt;ffffffff951eb4e1&gt;] ? schedule+0x31/0x80
     [&lt;ffffffff951eb77a&gt;] ? schedule_preempt_disabled+0xa/0x10
     [&lt;ffffffff951ed1f4&gt;] ? __mutex_lock_slowpath+0xb4/0x130
     [&lt;ffffffff951ed28b&gt;] ? mutex_lock+0x1b/0x30
     [&lt;ffffffffc0a974b3&gt;] ? xfs_reclaim_inodes_ag+0x233/0x2d0 [xfs]
     [&lt;ffffffff94d92ba5&gt;] ? move_active_pages_to_lru+0x125/0x270
     [&lt;ffffffff94f2b985&gt;] ? radix_tree_gang_lookup_tag+0xc5/0x1c0
     [&lt;ffffffff94dad0f3&gt;] ? __list_lru_walk_one.isra.3+0x33/0x120
     [&lt;ffffffffc0a98331&gt;] ? xfs_reclaim_inodes_nr+0x31/0x40 [xfs]
     [&lt;ffffffff94e05bfe&gt;] ? super_cache_scan+0x17e/0x190
     [&lt;ffffffff94d919f3&gt;] ? shrink_slab.part.38+0x1e3/0x3d0
     [&lt;ffffffff94d9616a&gt;] ? shrink_node+0x10a/0x320
     [&lt;ffffffff94d96474&gt;] ? do_try_to_free_pages+0xf4/0x350
     [&lt;ffffffff94d967ba&gt;] ? try_to_free_pages+0xea/0x1b0
     [&lt;ffffffff94d863bd&gt;] ? __alloc_pages_nodemask+0x61d/0xe60
     [&lt;ffffffff94ddf42d&gt;] ? cache_grow_begin+0x9d/0x560
     [&lt;ffffffff94ddfb88&gt;] ? fallback_alloc+0x148/0x1c0
     [&lt;ffffffff94ed84e7&gt;] ? __crypto_alloc_tfm+0x37/0x130
     [&lt;ffffffff94de09db&gt;] ? __kmalloc+0x1eb/0x580
     [&lt;ffffffffc09fe2db&gt;] ? crush_choose_firstn+0x3eb/0x470 [libceph]
     [&lt;ffffffff94ed84e7&gt;] ? __crypto_alloc_tfm+0x37/0x130
     [&lt;ffffffff94ed9c19&gt;] ? crypto_spawn_tfm+0x39/0x60
     [&lt;ffffffffc08b30a3&gt;] ? crypto_cbc_init_tfm+0x23/0x40 [cbc]
     [&lt;ffffffff94ed857c&gt;] ? __crypto_alloc_tfm+0xcc/0x130
     [&lt;ffffffff94edcc23&gt;] ? crypto_skcipher_init_tfm+0x113/0x180
     [&lt;ffffffff94ed7cc3&gt;] ? crypto_create_tfm+0x43/0xb0
     [&lt;ffffffff94ed83b0&gt;] ? crypto_larval_lookup+0x150/0x150
     [&lt;ffffffff94ed7da2&gt;] ? crypto_alloc_tfm+0x72/0x120
     [&lt;ffffffffc0a01dd7&gt;] ? ceph_aes_encrypt2+0x67/0x400 [libceph]
     [&lt;ffffffffc09fd264&gt;] ? ceph_pg_to_up_acting_osds+0x84/0x5b0 [libceph]
     [&lt;ffffffff950d40a0&gt;] ? release_sock+0x40/0x90
     [&lt;ffffffff95139f94&gt;] ? tcp_recvmsg+0x4b4/0xae0
     [&lt;ffffffffc0a02714&gt;] ? ceph_encrypt2+0x54/0xc0 [libceph]
     [&lt;ffffffffc0a02b4d&gt;] ? ceph_x_encrypt+0x5d/0x90 [libceph]
     [&lt;ffffffffc0a02bdf&gt;] ? calcu_signature+0x5f/0x90 [libceph]
     [&lt;ffffffffc0a02ef5&gt;] ? ceph_x_sign_message+0x35/0x50 [libceph]
     [&lt;ffffffffc09e948c&gt;] ? prepare_write_message_footer+0x5c/0xa0 [libceph]
     [&lt;ffffffffc09ecd18&gt;] ? ceph_con_workfn+0x2258/0x2dd0 [libceph]
     [&lt;ffffffffc09e9903&gt;] ? queue_con_delay+0x33/0xd0 [libceph]
     [&lt;ffffffffc09f68ed&gt;] ? __submit_request+0x20d/0x2f0 [libceph]
     [&lt;ffffffffc09f6ef8&gt;] ? ceph_osdc_start_request+0x28/0x30 [libceph]
     [&lt;ffffffffc0b52603&gt;] ? rbd_queue_workfn+0x2f3/0x350 [rbd]
     [&lt;ffffffff94c94ec0&gt;] ? process_one_work+0x160/0x410
     [&lt;ffffffff94c951bd&gt;] ? worker_thread+0x4d/0x480
     [&lt;ffffffff94c95170&gt;] ? process_one_work+0x410/0x410
     [&lt;ffffffff94c9af8d&gt;] ? kthread+0xcd/0xf0
     [&lt;ffffffff951efb2f&gt;] ? ret_from_fork+0x1f/0x40
     [&lt;ffffffff94c9aec0&gt;] ? kthread_create_on_node+0x190/0x190

Allocating the cipher along with the key fixes the issue -- as long as
the key doesn't change, a single cipher context can be used concurrently
in multiple requests.

We still can't take that GFP_KERNEL allocation though.  Both
ceph_crypto_key_clone() and ceph_crypto_key_decode() are called from
GFP_NOFS context, so resort to memalloc_noio_{save,restore}() here.
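The save/restore pattern referenced above can be sketched in kernel style
(illustrative only -- names and error handling are simplified from the
actual libceph change):

```c
/* Sketch, not the verbatim libceph code.  memalloc_noio_save() marks
 * the current task so that every allocation in the scope -- including
 * the GFP_KERNEL one inside crypto_alloc_skcipher() -- is implicitly
 * degraded to GFP_NOIO and cannot recurse into filesystem writeback.
 */
static int set_secret(struct ceph_crypto_key *key)
{
	unsigned int noio_flag;
	int ret;

	noio_flag = memalloc_noio_save();
	key->tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
	memalloc_noio_restore(noio_flag);
	if (IS_ERR(key->tfm)) {
		ret = PTR_ERR(key->tfm);
		key->tfm = NULL;
		return ret;
	}
	return crypto_skcipher_setkey(key->tfm, key->key, key->len);
}
```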

Reported-by: Lucas Stach &lt;l.stach@pengutronix.de&gt;
Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
Reviewed-by: Sage Weil &lt;sage@redhat.com&gt;
</content>
</entry>
<entry>
<title>libceph: uninline ceph_crypto_key_destroy()</title>
<updated>2016-12-12T22:09:20+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2016-12-02T15:35:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=6db2304aabb070261ad34923bfd83c43dfb000e3'/>
<id>urn:sha1:6db2304aabb070261ad34923bfd83c43dfb000e3</id>
<content type='text'>
Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
Reviewed-by: Sage Weil &lt;sage@redhat.com&gt;
</content>
</entry>
<entry>
<title>libceph: remove now unused ceph_*{en,de}crypt*() functions</title>
<updated>2016-12-12T22:09:20+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2016-12-02T15:35:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=2b1e1a7cd0a615d57455567a549f9965023321b5'/>
<id>urn:sha1:2b1e1a7cd0a615d57455567a549f9965023321b5</id>
<content type='text'>
Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
Reviewed-by: Sage Weil &lt;sage@redhat.com&gt;
</content>
</entry>
<entry>
<title>libceph: introduce ceph_crypt() for in-place en/decryption</title>
<updated>2016-12-12T22:09:19+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2016-12-02T15:35:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=a45f795c65b479b4ba107b6ccde29b896d51ee98'/>
<id>urn:sha1:a45f795c65b479b4ba107b6ccde29b896d51ee98</id>
<content type='text'>
Starting with 4.9, kernel stacks may be vmalloced and therefore not
guaranteed to be physically contiguous; the new CONFIG_VMAP_STACK
option is enabled by default on x86.  This makes it invalid to use
on-stack buffers with the crypto scatterlist API, as sg_set_buf()
expects a logical address and won't work with vmalloced addresses.

There isn't a different (e.g. kvec-based) crypto API we could switch
net/ceph/crypto.c to, and the current scatterlist.h API isn't getting
updated to accommodate this use case.  Allocating a new header and
padding for each operation is a non-starter, so do the en/decryption
in-place on a single pre-assembled (header + data + padding) heap
buffer.  This is explicitly supported by the crypto API:

    "... the caller may provide the same scatter/gather list for the
     plaintext and cipher text. After the completion of the cipher
     operation, the plaintext data is replaced with the ciphertext data
     in case of an encryption and vice versa for a decryption."
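A minimal sketch of the in-place call, assuming a pre-assembled heap
buffer and an already-allocated CBC skcipher handle (simplified from the
real ceph_aes_crypt(); the helper name is hypothetical):

```c
/* Sketch: the same scatterlist is passed as both source and
 * destination, so the ciphertext overwrites the plaintext in the
 * single (header + data + padding) buffer.  buf must be a heap
 * address -- an on-stack buffer may be vmalloced under
 * CONFIG_VMAP_STACK and is invalid for sg_set_buf().
 */
static int aes_crypt_inplace(struct crypto_skcipher *tfm, bool encrypt,
			     void *buf, int len)
{
	SKCIPHER_REQUEST_ON_STACK(req, tfm);
	struct scatterlist sg;
	char iv[AES_BLOCK_SIZE] __aligned(8) = { 0 };
	int ret;

	sg_init_one(&sg, buf, len);
	skcipher_request_set_tfm(req, tfm);
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);  /* src == dst */

	ret = encrypt ? crypto_skcipher_encrypt(req)
		      : crypto_skcipher_decrypt(req);
	skcipher_request_zero(req);
	return ret;
}
```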

Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
Reviewed-by: Sage Weil &lt;sage@redhat.com&gt;
</content>
</entry>
<entry>
<title>libceph: Remove unnecessary ivsize variables</title>
<updated>2016-01-27T12:36:25+00:00</updated>
<author>
<name>Ilya Dryomov</name>
<email>idryomov@gmail.com</email>
</author>
<published>2016-01-26T10:54:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=e9f6452e0e55ceea004f5bc0802fb14778d16c42'/>
<id>urn:sha1:e9f6452e0e55ceea004f5bc0802fb14778d16c42</id>
<content type='text'>
This patch removes the unnecessary ivsize variables, as they always
have the value of AES_BLOCK_SIZE.

Signed-off-by: Ilya Dryomov &lt;idryomov@gmail.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>libceph: Use skcipher</title>
<updated>2016-01-27T12:36:05+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2016-01-24T13:18:40+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=e59dd982d355a40f08a666ce3ee3feea2af86959'/>
<id>urn:sha1:e59dd982d355a40f08a666ce3ee3feea2af86959</id>
<content type='text'>
This patch replaces uses of blkcipher with skcipher.

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>KEYS: Merge the type-specific data with the payload data</title>
<updated>2015-10-21T14:18:36+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2015-10-21T13:04:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.radix-linux.su/kernel/linux.git/commit/?id=146aa8b1453bd8f1ff2304ffb71b4ee0eb9acdcc'/>
<id>urn:sha1:146aa8b1453bd8f1ff2304ffb71b4ee0eb9acdcc</id>
<content type='text'>
Merge the type-specific data with the payload data into one four-word chunk
as it seems pointless to keep them separate.

Use user_key_payload() for accessing the payloads of overloaded
user-defined keys.
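A sketch of the accessor in use (kernel style, not compilable standalone;
dst and its bounds checking are omitted for brevity):

```c
/* Sketch: with the merge, key->payload is a four-word union, and
 * overloaded user-defined keys are read through user_key_payload()
 * instead of poking at the raw payload pointers directly.
 */
const struct user_key_payload *ukp = user_key_payload(key);

if (ukp)
	memcpy(dst, ukp->data, ukp->datalen);
```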

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
cc: linux-cifs@vger.kernel.org
cc: ecryptfs@vger.kernel.org
cc: linux-ext4@vger.kernel.org
cc: linux-f2fs-devel@lists.sourceforge.net
cc: linux-nfs@vger.kernel.org
cc: ceph-devel@vger.kernel.org
cc: linux-ima-devel@lists.sourceforge.net
</content>
</entry>
</feed>
