|
clk_prepare_enable() may fail, so we should check its return value
and propagate it in the case of error.
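For illustration, the pattern is roughly the following (a minimal
sketch with placeholder names, not the exact driver code):

  #include <linux/clk.h>
  #include <linux/device.h>

  /* Sketch only: enable the clock in probe and propagate any failure
   * instead of ignoring it.
   */
  static int example_enable_clk(struct device *dev, struct clk *clk)
  {
          int ret;

          ret = clk_prepare_enable(clk);
          if (ret) {
                  dev_err(dev, "failed to enable clock: %d\n", ret);
                  return ret;
          }
          return 0;
  }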
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The picoXcell hardware crypto accelerator driver was using an
older version of the clk framework, and not (un)preparing the
clock before enabling/disabling it. This change uses the handy
clk_prepare_enable function to interact with the current clk
framework correctly.
Signed-off-by: Michael van der Westhuizen <michael@smart-africa.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The nx driver reads two crucial parameters from the firmware for
each crypto algorithm: the maximum SG list length and the byte limit.
Unfortunately those two parameters may be bogus, or worse they
may be absent altogether. When this happens the algorithms will
still register successfully but will fail when used or tested.
This patch adds checks to report any firmware entries which are
found to be bogus, and avoid registering algorithms which have
bogus parameters. A warning is also printed when an algorithm
is not registered because of this as there may have been no firmware
entries for it at all.
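For illustration only, the check amounts to something like the sketch
below; the parameter names are hypothetical, not the driver's actual
fields:

  /* Sketch: refuse to register an algorithm whose firmware-provided
   * limits are missing or zero. Names are hypothetical.
   */
  static bool nx_props_sane(struct device *dev, const char *alg,
                            u32 max_sg_len, u32 max_byte_len)
  {
          if (!max_sg_len || !max_byte_len) {
                  dev_warn(dev,
                           "%s: bogus firmware limits (sg_len=%u, byte_len=%u), not registering\n",
                           alg, max_sg_len, max_byte_len);
                  return false;
          }
          return true;
  }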
Reported-by: Ondrej Moriš <omoris@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add the Kirkwood and Dove SoC descriptions, and control the allhwsupport
module parameter to avoid probing the CESA IP when the old CESA driver is
enabled (unless it is explicitly requested to do so).
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add the Orion SoC description, and select this implementation by default
to support non-DT probing: Orion is the only platform where non-DT
boards declare the CESA block.
Control the allhwsupport module parameter to avoid probing the CESA IP when
the old CESA driver is enabled (unless it is explicitly requested to do
so).
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The old and new marvell CESA drivers both support Orion and Kirkwood SoCs.
Add a module parameter to choose whether these SoCs should be attached to
the new or the old driver.
The default policy is to keep attaching those IPs to the old driver if it
is enabled, until we decide the new CESA driver is stable/secure enough.
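Roughly, the knob could look like the sketch below (the exact default
and permissions in the driver may differ):

  /* Sketch of the module parameter described above. */
  static int allhwsupport = !IS_ENABLED(CONFIG_CRYPTO_DEV_MV_CESA);
  module_param(allhwsupport, int, 0444);
  MODULE_PARM_DESC(allhwsupport,
                   "Attach Orion/Kirkwood engines to this driver even when the old mv_cesa driver is enabled");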
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add the CESA IP description for all the missing Armada SoCs (XP, 375 and 38x).
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for SHA256 operations.
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for MD5 operations.
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for Triple-DES operations.
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for DES operations.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The CESA IP supports CPU offload through a dedicated DMA engine (TDMA)
which can control the crypto block.
When you use this mode, all the required data (operation metadata and
payload data) are transferred using DMA, and the results are retrieved
through DMA when possible (hash results are not retrieved through DMA yet),
thus reducing the involvement of the CPU and providing better
performance in most cases (for small requests, the cost of DMA
preparation might exceed the performance gain).
Note that some CESA IPs do not embed this dedicated DMA engine, hence
this feature is enabled on a per-platform basis.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The existing mv_cesa driver supports some features of the CESA IP but is
quite limited, and reworking it to support new features (like involving the
TDMA engine to offload the CPU) is almost impossible.
This driver has been rewritten from scratch to take those new features into
account.
This commit introduces the base infrastructure allowing us to add
support for DMA optimization.
It also includes support for one hash (SHA1) and one cipher (AES)
algorithm, and enables those features on the Armada 370 SoC.
Other algorithms and platforms will be added later on.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
We are about to add a new driver to support new features like using the
TDMA engine to offload the CPU.
Orion, Dove and Kirkwood platforms are already using the mv_cesa driver,
but Orion SoCs do not embed the TDMA engine, which means we will have to
differentiate them if we want to get TDMA support on Dove and Kirkwood.
On the other hand, the migration from the old driver to the new one
is not something all people are willing to do without first auditing
the new driver.
Hence we have to support the new compatible in the mv_cesa driver so that
new platforms with updated DTs can still attach their crypto engine device
to this driver.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The mv_cesa driver currently expects the SRAM memory region to be passed
as a platform device resource.
This approach has two drawbacks:
- the DT representation is wrong
- the only component that can access the SRAM is the crypto engine
The last point is particularly annoying in some cases: for example,
on Armada 370, a small region of the crypto SRAM is used to implement
cpuidle, which means you would not be able to enable both cpuidle and
the CESA driver.
To address that problem, we explicitly define the SRAM device in the DT
and then reference the sram node from the crypto engine node.
Also note that the old way of retrieving the SRAM memory region is still
supported, or in other words, backward compatibility is preserved.
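As a sketch of the lookup side (the property name used here is an
assumption for the example, not quoted from the patch):

  #include <linux/of.h>

  /* Sketch only: resolve the sram phandle from the crypto engine
   * node; the "marvell,crypto-srams" property name is assumed.
   */
  static struct device_node *cesa_get_sram_node(struct device_node *np)
  {
          return of_parse_phandle(np, "marvell,crypto-srams", 0);
  }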
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Merge the mvebu/drivers branch of the arm-soc tree which contains
just a single patch bfa1ce5f38938cc9e6c7f2d1011f88eba2b9e2b2 ("bus:
mvebu-mbus: add mv_mbus_dram_info_nooverlap()") that happens to be
a prerequisite of the new marvell/cesa crypto driver.
|
|
Add support to the nx-842-pseries.c driver for running in little endian
mode.
The pSeries platform NX 842 driver currently only works as big endian.
This adds cpu_to_be*() and be*_to_cpu() in the appropriate places to
work in LE mode also.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The new aead_edesc_alloc left out the bit indicating the last
entry on the source SG list. This patch fixes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
I incorrectly removed DESC_MAX_USED_BYTES when enlarging the size
of the shared descriptor buffers, thus making it four times larger
than what is necessary. This patch restores the division by four
calculation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch fixes a number of problems in crypto driver Kconfig
entries:
1. Select BLKCIPHER instead of BLKCIPHER2. The latter is internal
and should not be used outside of the crypto API itself.
2. Do not select ALGAPI unless you use a legacy type like
CRYPTO_ALG_TYPE_CIPHER.
3. Select the algorithm type that you are implementing, e.g., AEAD.
4. Do not select generic C code such as CBC/ECB unless you use them
as a fallback.
5. Remove default n since that is the default default.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The patch
crypto: caam - Add definition of rd/wr_reg64 for little endian platform
added support for little endian platforms to the CAAM driver. Namely a
write and read function for 64 bit registers.
The only user of these functions is the Job Ring driver
(drivers/crypto/caam/jr.c). It uses them to set the DMA addresses for
the input/output rings.
However, at least in the default configuration, the least significant
32 bits always live at the base+0x0004 address, independent of the
endianness of the bytes themselves. That means the addresses do not
change with the system endianness.
Since DMA addresses are only 32 bits wide on non-64-bit systems,
writing the upper 32 bits of the value to the register for the least
significant bits results in the DMA address being set to 0.
Fix this by always writing the registers in the same way.
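A sketch of the idea, where wr_reg32() stands in for the driver's
existing 32-bit register write helper:

  /* Sketch only: always put the most significant half at offset 0 and
   * the least significant half at offset 4, independent of CPU
   * endianness.
   */
  static inline void wr_reg64(u64 __iomem *reg, u64 data)
  {
          u32 __iomem *half = (u32 __iomem *)reg;

          wr_reg32(half, data >> 32);            /* upper 32 bits */
          wr_reg32(half + 1, data & 0xffffffff); /* lower 32 bits */
  }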
Suggested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch converts the caam GCM implementations to the new AEAD
interface. This is compile-tested only.
Note that all IV generation for GCM algorithms has been removed.
The reason is that the current generation uses purely random IVs,
which is not appropriate for counter-based algorithms where we first
and foremost require uniqueness.
Of course there is no reason why you couldn't implement seqiv or
seqniv within caam since all they do is xor the sequence number
with a salt, but since I can't test this on actual hardware I'll
leave it alone for now.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently dma_map_sg_chained does not handle errors from the
underlying dma_map_sg calls. This patch adds rollback in case
of an error by simply calling dma_unmap_sg_chained for the ones
that we've already mapped.
All current callers ignore the return value so this should have
no impact on them.
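A sketch of the rollback (the signature follows the text above, not
necessarily the exact driver code):

  /* Sketch only: map a chained SG list one entry at a time and unwind
   * the entries already mapped if any dma_map_sg() call fails.
   */
  static int dma_map_sg_chained(struct device *dev, struct scatterlist *sg,
                                int nents, enum dma_data_direction dir,
                                bool chained)
  {
          if (unlikely(chained)) {
                  struct scatterlist *first = sg;
                  int i;

                  for (i = 0; i < nents; i++) {
                          if (!dma_map_sg(dev, sg, 1, dir)) {
                                  /* roll back what was already mapped */
                                  dma_unmap_sg_chained(dev, first, i,
                                                       dir, chained);
                                  return 0;
                          }
                          sg = sg_next(sg);
                  }
                  return nents;
          }
          return dma_map_sg(dev, sg, nents, dir);
  }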
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch converts the nx GCM implementations to the new AEAD
interface. This is compile-tested only.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix a "Trying to vfree() nonexistent vm area" error when unloading the CAAM
controller module by providing the correct pointer value to iounmap().
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The CAAM driver uses two data buffers to store data for a hashing operation,
with one buffer defined as active. This change forces switching of the
active buffer when executing a hashing operation to avoid a later DMA unmap
using the length of the opposite buffer.
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch reindents the vmx code-base to the kernel coding style.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The top-level CRYPTO_DEV_VMX option already depends on PPC64 so
there is no need to depend on it again at CRYPTO_DEV_VMX_ENCRYPT.
This patch also removes a redundant "default n".
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Replace the NX842_MEM_COMPRESS define with a function that returns the
specific platform driver's required working memory size.
The common nx-842.c driver refuses to load if there is no platform
driver present, so instead of defining an approximate working memory
size that is the maximum of both platform drivers' size requirements,
the platform driver can directly provide its specific size
requirement, i.e. sizeof(struct nx842_workmem), which the 842-nx
crypto compression driver will use.
This saves memory both by reducing the required size of each driver
to the specific sizeof() amount and by using the loaded platform
driver's specific requirement instead of the maximum of both.
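In outline (a sketch based on the text above, not the exact code):

  /* Sketch only: the loaded platform driver reports its specific
   * working-memory requirement, which the 842-nx crypto driver uses
   * instead of a shared worst-case NX842_MEM_COMPRESS amount.
   */
  static size_t nx842_pseries_workmem_size(void)
  {
          return sizeof(struct nx842_workmem);
  }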
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the contents of the include/linux/nx842.h header file into the
drivers/crypto/nx/nx-842.h header file. Remove the nx842.h header
file and its entry in the MAINTAINERS file.
The include/linux/nx842.h header originally was there because the
crypto/842.c driver needed it to communicate with the nx-842 hw
driver. However, that crypto compression driver was moved into
the drivers/crypto/nx/ directory, and now can directly include the
nx-842.h header. Nothing else needs the public include/linux/nx842.h
header file, as all use of the nx-842 hardware driver will be through
the "842-nx" crypto compression driver, since the direct nx-842 api is
very limited in the buffer alignments and sizes that it will accept,
while the crypto compression interface handles those limitations and
allows buffers of any alignment and size.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently the driver assumes that the SG list contains exactly
the number of bytes required. This assumption is incorrect.
Up until now this has been harmless. However with the new AEAD
interface this now breaks as the AD SG list contains more bytes
than just the AD.
This patch fixes this by always clamping the AD SG list by the
specified AD length.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch makes use of the new sg_nents_for_len helper to replace
the custom sg_count function.
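For reference, the helper is essentially a drop-in for the open-coded
count (a minimal sketch, assuming the list covers at least nbytes):

  /* Sketch: count only the entries needed to cover nbytes, then hand
   * that count to dma_map_sg().
   */
  int nents = sg_nents_for_len(sg, nbytes);

  if (nents < 0)
          return nents;   /* list is shorter than nbytes */

  nents = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
  if (!nents)
          return -ENOMEM;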
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver uses SZ_64K so it should include linux/sizes.h rather
than relying on others to pull it in for it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch removes the kernel blocking API as it has been completely
replaced by the callback API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The get_blocking_random_bytes API is broken because the wait can
be arbitrarily long (potentially forever) so there is no safe way
of calling it from within the kernel.
This patch replaces it with a callback API instead. The callback
is invoked potentially from interrupt context so the user needs
to schedule their own work thread if necessary.
In addition to being added, callbacks can also be removed, as
otherwise this would open up a way for user-space to allocate
unbounded kernel memory (by opening algif_rng descriptors and then
closing them).
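For illustration, usage on the caller side looks roughly like the
sketch below; the interface names reflect my reading of the callback
API and should be treated as assumptions:

  #include <linux/random.h>

  /* Sketch only: register a callback that fires once the pool is
   * ready; it may run from interrupt context, so keep it minimal.
   * Pair add_random_ready_callback(&my_rdy) at init with
   * del_random_ready_callback(&my_rdy) at teardown.
   */
  static bool my_pool_ready;

  static void my_rng_ready(struct random_ready_callback *rdy)
  {
          my_pool_ready = true;
  }

  static struct random_ready_callback my_rdy = {
          .owner = THIS_MODULE,
          .func  = my_rng_ready,
  };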
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently caam assumes that the SG list contains exactly the number
of bytes required. This assumption is incorrect.
Up until now this has been harmless. However with the new AEAD
interface this now breaks as the AD SG list contains more bytes
than just the AD.
This patch fixes this by always clamping the AD SG list by the
specified AD length.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch fixes an issue when building an internal AD representation.
We need to check assoclen and not only blindly loop over assoc sgl.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The device doesn't support the default value and will change it to
256, which will cause performance degradation for bigger packets.
Add an explicit write to set it to 1024.
Reported-by: Tianliang Wang <tianliang.wang@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Reduce the nx-842 pSeries driver minimum buffer size from 128 to 8.
Also replace the single use of IO_BUFFER_ALIGN macro with the standard
and correct DDE_BUFFER_ALIGN.
The hw sometimes rejects buffers that contain padding past the end of the
8-byte aligned section where it sees the "end" marker. With the minimum
buffer size set too high, some highly compressed buffers were being padded
and the hw was incorrectly rejecting them; this sets the minimum correctly
so there will be no incorrect padding.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Scatter gather lists can be created with more available entries than are
actually used (e.g. using sg_init_table() to reserve a specific number
of sg entries, but in actuality using something less than that based on
the data length). The caller sometimes fails to mark the last entry
with sg_mark_end(). In these cases, sg_nents() will return the original
size of the sg list as opposed to the actual number of sg entries that
contain valid data.
On arm64, if the sg_nents() value is used in a call to dma_map_sg() in
this situation, then it causes a BUG_ON in lib/swiotlb.c because an
"empty" sg list entry results in dma_capable() returning false and
swiotlb trying to create a bounce buffer of size 0. This occurred in
the userspace crypto interface before being fixed by
0f477b655a52 ("crypto: algif - Mark sgl end at the end of data")
Protect against this by using the new sg_nents_for_len() function which
returns only the number of sg entries required to meet the desired
length and supplying that value to dma_map_sg().
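A small illustration of the mismatch (standalone sketch, not driver
code):

  /* Sketch: a table reserved for 4 entries but only 2 filled, without
   * sg_mark_end(). sg_nents() still reports 4, while
   * sg_nents_for_len() counts only the entries covering the data.
   */
  struct scatterlist sgl[4];
  u8 buf0[4096], buf1[512];
  int n_all, n_len;

  sg_init_table(sgl, 4);
  sg_set_buf(&sgl[0], buf0, sizeof(buf0));
  sg_set_buf(&sgl[1], buf1, sizeof(buf1));
  /* no sg_mark_end(&sgl[1]) here */

  n_all = sg_nents(sgl);                /* 4: includes empty entries */
  n_len = sg_nents_for_len(sgl, 4608);  /* 2: enough for the data    */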
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Change the nx-842 common driver to wait for loading of both platform
drivers, and fail loading if the platform driver pointer is not set.
Add an independent platform driver pointer, that the platform drivers
set if they find they are able to load (i.e. if they find their platform
devicetree node(s)).
The problem is that, currently, the main nx-842 driver will stay loaded even
if there is no platform driver and thus no possible way it can do any
compression or decompression. This allows the crypto 842-nx driver
to load even if it won't actually work. For crypto compression users
(e.g. zswap) that expect an available crypto compression driver to
actually work, this is bad. This patch fixes that, so the 842-nx crypto
compression driver won't load if it doesn't have the driver and hardware
available to perform the compression.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This commit introduces a variant of the mv_mbus_dram_info() function
called mv_mbus_dram_info_nooverlap(). Both functions are used by
Marvell drivers supporting devices doing DMA, and provide them with a
description of the DRAM ranges that they need to configure their DRAM
windows.
The ranges provided by the mv_mbus_dram_info() function may overlap
with the I/O windows if there is a lot (>= 4 GB) of RAM
installed. This is not a problem for most of the DMA masters, except
for the upcoming new CESA crypto driver because it does DMA to the
SRAM, which is mapped through an I/O window. For this unit, we need to
have DRAM ranges that do not overlap with the I/O windows.
A first implementation done in commit 1737cac69369 ("bus: mvebu-mbus:
make sure SDRAM CS for DMA don't overlap the MBus bridge window"),
changed the information returned by mv_mbus_dram_info() to match this
requirement. However, it broke the requirement of the other DMA
masters that the DRAM ranges should have power-of-two sizes.
To solve this situation, this commit introduces a new
mv_mbus_dram_info_nooverlap() function, which returns the same
information as mv_mbus_dram_info(), but guaranteed to not overlap with
the I/O windows.
In the end, it gives us two variants of the mv_mbus_dram_info*()
functions:
- The normal one, mv_mbus_dram_info(), which has been around for many
years. This function returns the raw DRAM ranges, which are
guaranteed to use power of two sizes, but will overlap with I/O
windows. This function will therefore be used by all DMA masters
(SATA, XOR, Ethernet, etc.) except the CESA crypto driver.
- The new 'nooverlap' variant, mv_mbus_dram_info_nooverlap(). This
function returns DRAM ranges after they have been "tweaked" to make
sure they don't overlap with I/O windows. By doing this tweaking,
we remove the power of two size guarantee. This variant will be
used by the new CESA crypto driver.
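Usage-wise, a DMA master that must avoid the I/O windows switches to
the new variant, along these lines (a sketch; programming the actual
window registers is driver-specific and omitted):

  #include <linux/mbus.h>

  /* Sketch only: fetch the non-overlapping DRAM description and walk
   * its chip-select windows.
   */
  static void cesa_setup_dram_windows(void)
  {
          const struct mbus_dram_target_info *dram =
                  mv_mbus_dram_info_nooverlap();
          int i;

          for (i = 0; i < dram->num_cs; i++) {
                  const struct mbus_dram_window *cs = &dram->cs[i];

                  /* program one window using cs->base and cs->size */
          }
  }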
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
|
|
This reverts commit 1737cac69369 ("bus: mvebu-mbus: make sure SDRAM CS
for DMA don't overlap the MBus bridge window"), because it breaks DMA
on platforms having more than 2 GB of RAM.
This commit changed the information reported to DMA masters device
drivers through the mv_mbus_dram_info() function so that the returned
DRAM ranges do not overlap with I/O windows.
This was necessary as a preparation to support the new CESA Crypto
Engine driver, which will use DMA for cryptographic operations. But
since it does DMA with the SRAM which is mapped as an I/O window,
having DRAM ranges overlapping with I/O windows was problematic.
To solve this, the above mentioned commit changed the mvebu-mbus to
adjust the DRAM ranges so that they don't overlap with the I/O
windows. However, by doing this, we re-adjust the DRAM ranges in a way
that makes them have a size that is no longer a power of two. While
this is perfectly fine for the Crypto Engine, which supports DRAM
ranges with a granularity of 64 KB, it breaks basically all other DMA
masters, which expect power of two sizes for the DRAM ranges.
Due to this, if the installed system memory is 4 GB, in two
chip-selects of 2 GB, the second DRAM range will be reduced from 2 GB
to a little bit less than 2 GB to not overlap with the I/O windows, in
a way that results in a DRAM range that doesn't have a power of two
size. This means that whenever you do a DMA transfer with an address
located in the [ 2 GB ; 4 GB ] area, it will freeze the system. Any
serious DMA activity like simply running:
for i in $(seq 1 64) ; do dd if=/dev/urandom of=file$i bs=1M count=16 ; done
in an ext3 partition mounted over a SATA drive will freeze the system.
Since the new CESA crypto driver that uses DMA has not been merged
yet, the easiest fix is to simply revert this commit. A follow-up
commit will introduce a different solution for the CESA crypto driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes: 1737cac69369 ("bus: mvebu-mbus: make sure SDRAM CS for DMA don't overlap the MBus bridge window")
Cc: <stable@vger.kernel.org> # v4.0+
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
|
|
Commit a0b5cd4ac2d6 ("bus: mvebu-mbus: use automatic I/O
synchronization barriers") enabled the usage of automatic I/O
synchronization barriers by enabling bit WIN_CTRL_SYNCBARRIER in the
control registers of MBus windows. However, on non-I/O-coherent
platforms (orion5x, kirkwood and dove) the WIN_CTRL_SYNCBARRIER bit in
the window control register is either reserved (all windows except 6
and 7) or enables read-only protection (windows 6 and 7), so the bit
must not be set on those platforms.
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: <stable@vger.kernel.org> # v4.0+
Fixes: a0b5cd4ac2d6 ("bus: mvebu-mbus: use automatic I/O synchronization barriers")
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
|
|
Merge the crypto tree for 4.1 to pull in the changeset that disables
algif_aead.
|
|
Remove the length field from the ccp_sg_workarea since it is unused.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The underlying device support will set the device dma_mask pointer
if DMA is set up properly for the device. Remove the check for and
assignment of dma_mask when it is null. Instead, just error out if
the dma_set_mask_and_coherent function fails because dma_mask is null.
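The resulting pattern is roughly (a sketch; the 48-bit mask width is
an assumption, not taken from the driver):

  /* Sketch only: let dma_set_mask_and_coherent() fail when the device
   * has no dma_mask, instead of assigning dma_mask by hand first.
   */
  ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
  if (ret) {
          dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n", ret);
          return ret;
  }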
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The added API provides a synchronous function call,
get_blocking_random_bytes, where the caller is blocked until the
nonblocking_pool is initialized.
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If more than one application invokes getrandom(2) before the pool
is ready, then all bar one will be stuck forever because we use
wake_up_interruptible which wakes up a single task.
This patch replaces it with wake_up_all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The mv_cesa driver does not request the CESA registers memory region.
Since we're about to add a new CESA driver, we need to make sure only
one of these drivers probes the CESA device, and requesting the
registers memory region is a good way to achieve that.
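Mapping the registers with devm_ioremap_resource() gives exactly that,
roughly as sketched below (the "regs" resource name is an assumption):

  /* Sketch only: devm_ioremap_resource() requests the memory region
   * before mapping it, so a second driver probing the same device
   * gets -EBUSY.
   */
  res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
  cp->reg = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(cp->reg))
          return PTR_ERR(cp->reg);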
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|