|
Users of the locks API commonly call either posix_lock_file_wait() or
flock_lock_file_wait() depending upon the lock type. Add a new function
locks_lock_inode_wait() which will check and call the correct function for
the type of lock passed in.
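A minimal sketch of the dispatch being described, using the *_lock_inode_wait()
helpers named elsewhere in this log (the exact in-tree code may differ):
int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
        /* pick the right helper based on the type of lock passed in */
        switch (fl->fl_flags & (FL_POSIX | FL_FLOCK)) {
        case FL_POSIX:
                return posix_lock_inode_wait(inode, fl);
        case FL_FLOCK:
                return flock_lock_inode_wait(inode, fl);
        }
        WARN_ON_ONCE(1);        /* callers only pass POSIX or flock locks */
        return -EINVAL;
}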
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
locks_get_lock_context() uses cmpxchg() to install i_flctx.
cmpxchg() is a release operation, which is correct. But it uses
a plain load to load i_flctx. This is incorrect. Subsequent loads
from i_flctx can be hoisted above the load of the i_flctx pointer
itself and observe uninitialized garbage there. This in turn can
lead to corruption of ctx->flc_lock and other members.
Documentation/memory-barriers.txt explicitly requires a barrier
in such a context:
"A load-load control dependency requires a full read memory barrier".
Use smp_load_acquire() in locks_get_lock_context() and in a bunch
of other functions that can run concurrently with
locks_get_lock_context().
The data race was found with KernelThreadSanitizer (KTSAN).
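A sketch of the acquire/release pairing being described (the slab cache name
flctx_cache and the exact initialization are illustrative):
struct file_lock_context *locks_get_lock_context(struct inode *inode)
{
        struct file_lock_context *ctx;

        /* Pairs with the cmpxchg() below: seeing a non-NULL pointer
         * guarantees we also see the fully initialized context. */
        ctx = smp_load_acquire(&inode->i_flctx);
        if (likely(ctx))
                return ctx;

        ctx = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
        if (!ctx)
                return NULL;

        spin_lock_init(&ctx->flc_lock);
        INIT_LIST_HEAD(&ctx->flc_flock);
        INIT_LIST_HEAD(&ctx->flc_posix);
        INIT_LIST_HEAD(&ctx->flc_lease);

        /* cmpxchg() orders the initialization above before the pointer
         * becomes visible to other CPUs. */
        if (cmpxchg(&inode->i_flctx, NULL, ctx)) {
                /* lost the race; free ours and use the winner's context */
                kmem_cache_free(flctx_cache, ctx);
                ctx = smp_load_acquire(&inode->i_flctx);
        }
        return ctx;
}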
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
Fix kernel-doc warnings in fs/locks.c:
Warning(..//fs/locks.c:1577): No description found for parameter 'flags'
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
They just call file_inode and then the corresponding *_lock_inode_wait
function. Just make them static inlines instead.
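Roughly, the wrappers become (a sketch; exact header placement may differ):
static inline int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
{
        return posix_lock_inode_wait(file_inode(filp), fl);
}

static inline int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
{
        return flock_lock_inode_wait(file_inode(filp), fl);
}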
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
Allow callers to pass in an inode instead of a filp.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
Reviewed-by: "J. Bruce Fields" <bfields@fieldses.org>
Tested-by: "J. Bruce Fields" <bfields@fieldses.org>
|
|
...and rename it to better describe how it works.
In order to fix a use-after-free in NFS, we need to be able to remove
locks from an inode after the filp associated with them may have already
been freed. flock_lock_file only dereferences the filp to get to the
inode, and all of its callers already pass in a lock request with
fl_file set properly, so there's no need to pass the filp in
separately. Have the function take an inode pointer and push the
file_inode() call out to the callers.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
Reviewed-by: "J. Bruce Fields" <bfields@fieldses.org>
Tested-by: "J. Bruce Fields" <bfields@fieldses.org>
|
|
Let's show locks which are associated with a file descriptor in
its fdinfo file.
Currently we don't have a reliable way to determine who holds a lock. We
can find some information in /proc/locks, but the PID reported there can
be wrong. For example, a process takes a lock, then forks a child and
dies. In this case /proc/locks contains the parent's pid, which can be
reused by another process.
$ cat /proc/locks
...
6: FLOCK ADVISORY WRITE 324 00:13:13431 0 EOF
...
$ ps -C rpcbind
PID TTY TIME CMD
332 ? 00:00:00 rpcbind
$ cat /proc/332/fdinfo/4
pos: 0
flags: 0100000
mnt_id: 22
lock: 1: FLOCK ADVISORY WRITE 324 00:13:13431 0 EOF
$ ls -l /proc/332/fd/4
lr-x------ 1 root root 64 Mar 5 14:43 /proc/332/fd/4 -> /run/rpcbind.lock
$ ls -l /proc/324/fd/
total 0
lrwx------ 1 root root 64 Feb 27 14:50 0 -> /dev/pts/0
lrwx------ 1 root root 64 Feb 27 14:50 1 -> /dev/pts/0
lrwx------ 1 root root 64 Feb 27 14:49 2 -> /dev/pts/0
You can see that the process with pid 324 doesn't hold the lock.
This information is required for proper dumping and restoring file
locks.
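A sketch of how the "lock:" lines above can be produced: walk the inode's
lock lists and print only the locks taken through this particular filp
(the helper name and the formatting call are illustrative; member names
follow the other commits in this log):
static void show_fd_locks(struct seq_file *f, struct file *filp)
{
        struct inode *inode = file_inode(filp);
        struct file_lock_context *ctx = inode->i_flctx;
        struct file_lock *fl;
        int id = 0;

        if (!ctx)
                return;

        spin_lock(&ctx->flc_lock);
        list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
                if (fl->fl_file != filp)
                        continue;       /* lock belongs to another fd */
                seq_printf(f, "lock:\t%d: ", ++id);
                print_lock_status(f, fl);       /* illustrative formatter */
        }
        /* ...same walk over ctx->flc_posix for POSIX locks... */
        spin_unlock(&ctx->flc_lock);
}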
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Layton <jlayton@poochiereds.net>
Acked-by: "J. Bruce Fields" <bfields@fieldses.org>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
During the v3.20/v4.0 cycle, I originally had the code manage the
inode->i_flctx pointer using a compare-and-swap operation instead of the
i_lock.
Sasha Levin, however, hit a problem while testing with trinity that made
me believe that wasn't safe. At the time, changing the code to use the
i_lock to protect the i_flctx pointer seemed to fix the issue, but I now
think that was just coincidence.
The issue was likely the same race that Kirill Shutemov hit while
testing the pre-rc1 v4.0 kernel and that Linus spotted. Due to the way
that the spinlock was dropped in the middle of flock_lock_file, you
could end up with multiple flock locks for the same struct file on the
inode.
Reinstate the use of a CAS operation to assign this pointer since it's
likely to be more efficient and gets the i_lock completely out of the
file locking business.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
As Bruce points out, there's no compelling reason to change /proc/locks
output at this point. If we did want to do this, then we'd almost
certainly want to introduce a new file to display this info (maybe via
debugfs?).
Let's remove the dead WE_CAN_BREAK_LSLK_NOW ifdef here and just plan to
stay with the legacy format.
Reported-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
The current prototypes for these operations are somewhat awkward as they
deal with fl_owners but take struct file_lock arguments. In the future,
we'll want to be able to take references without necessarily dealing
with a struct file_lock.
Change them to take fl_owner_t arguments instead and have the callers
deal with assigning the values to the file_lock structs.
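Roughly, the ops change to deal in fl_owner_t directly, with the caller
doing the assignment (the copy helper below is illustrative):
/* new prototypes in lock_manager_operations */
fl_owner_t (*lm_get_owner)(fl_owner_t owner);
void (*lm_put_owner)(fl_owner_t owner);

/* caller side (illustrative): the caller assigns the value itself */
static void copy_lock_owner(struct file_lock *new, struct file_lock *fl)
{
        new->fl_owner = fl->fl_owner;
        if (fl->fl_lmops && fl->fl_lmops->lm_get_owner)
                new->fl_owner = fl->fl_lmops->lm_get_owner(fl->fl_owner);
}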
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
In the event that we get an F_UNLCK request on an inode that has no lock
context, there is no reason to allocate one. Change
locks_get_lock_context to take a "type" argument and avoid allocating a
new context if it's F_UNLCK.
Then, fix the callers to return appropriately if that function returns
NULL.
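A sketch of the caller-side check that goes with this, in a lock-setting
path (return values illustrative):
        ctx = locks_get_lock_context(inode, request->fl_type);
        if (!ctx) {
                /* NULL now means either "F_UNLCK with nothing to unlock"
                 * or an allocation failure for a real lock request. */
                if (request->fl_type != F_UNLCK)
                        return -ENOMEM;
                return 0;
        }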
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
Annotate the insert, remove and iterate functions that require the
blocked_lock_lock to be held.
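For example (a sketch; the hash table, key helper and list member are
illustrative details of the global blocked-lock hash):
static void locks_insert_global_blocked(struct file_lock *waiter)
{
        /* document the rule and let lockdep catch violations at runtime */
        lockdep_assert_held(&blocked_lock_lock);

        hash_add(blocked_hash, &waiter->fl_link, posix_owner_key(waiter));
}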
Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
We know that the locks being passed into this function are of the
correct type, now that they live on their own lists.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
Since the following change
commit bd61e0a9c852de2d705b6f1bb2cc54c5774db570
Author: Jeff Layton <jlayton@primarydata.com>
Date: Fri Jan 16 15:05:55 2015 -0500
locks: convert posix locks to file_lock_context
all POSIX locks are kept on their own list, so the test is
redundant.
Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Jeff Layton <jlayton@primarydata.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
locks_delete_lock_ctx() is called inside the loop, so we
should use list_for_each_entry_safe.
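The pattern being fixed, roughly (the list and condition are illustrative):
        struct file_lock *fl, *tmp;

        /* _safe keeps a lookahead cursor so the current entry can be
         * deleted (and later disposed of) without derailing the loop. */
        list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
                if (should_remove(fl))          /* illustrative condition */
                        locks_delete_lock_ctx(fl, &dispose);
        }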
Fixes: 8634b51f6ca2 (locks: convert lease handling to file_lock_context)
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
It's possible that "fl" won't point at a valid lock at this point, so
use "victim" instead which is either a valid lock or NULL.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
Commit 8634b51f6ca2 (locks: convert lease handling to file_lock_context)
introduced a regression in the handling of lease upgrade/downgrades.
In the event that we already have a lease on a file and are going to
either upgrade or downgrade it, we skip doing any list insertion or
deletion and simply re-call lm_setup on the existing lease.
As of commit 8634b51f6ca2 however, we end up calling lm_setup on the
lease that was passed in, instead of on the existing lease. This causes
us to leak the fasync_struct that was allocated in the event that there
was not already an existing one (as it always appeared that there
wasn't one).
Fixes: 8634b51f6ca2 (locks: convert lease handling to file_lock_context)
Reported-and-Tested-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
In the case where we're splitting a lock in two, the current code
inserts the new "left" lock in the incorrect spot. It's inserted just
before "right" when it should instead be inserted just before the
new lock.
When we add a new lock, set "fl" to that value so that we can
add "left" before it.
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
As Linus pointed out:
Say we have an existing flock, and now do a new one that conflicts. I
see what looks like three separate bugs.
- We go through the first loop, find a lock of another type, and
delete it in preparation for replacing it
- we *drop* the lock context spinlock.
- BUG #1? So now there is no lock at all, and somebody can come in
and see that unlocked state. Is that really valid?
- another thread comes in while the first thread dropped the lock
context lock, and wants to add its own lock. It doesn't see the
deleted or pending locks, so it just adds it
- the first thread gets the context spinlock again, and adds the lock
that replaced the original
- BUG #2? So now there are *two* locks on the thing, and the next
time you do an unlock (or when you close the file), it will only
remove/replace the first one.
...remove the "drop the spinlock" code in the middle of this function as
it has always been suspicious. This should eliminate the potential race
that can leave two locks for the same struct file on the list.
He also pointed out another thing as a bug -- namely that
flock_lock_file removes the lock from the list unconditionally when
doing a lock upgrade, without knowing whether it'll be able to set the
new lock. Bruce pointed out that this is expected behavior and may help
prevent certain deadlock situations.
We may want to revisit that at some point, but it's probably best that
we do so in the context of a different patchset.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
We don't want to remove all leases just because one filp was closed.
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
This reverts commit 9bd0f45b7037fcfa8b575c7e27d0431d6e6dc3bb.
Linus rightly pointed out that I failed to initialize the counters
when adding them, so they don't work as expected. Just revert this
patch for now.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
|
|
This (ab-)uses the file locking code to allow filesystems to recall
outstanding pNFS layouts on a file. This new lease type is similar but
not quite the same as FL_DELEG. An FL_LAYOUT lease can always be
granted, and a per-filesystem lock (the XFS iolock for the initial
implementation) ensures that no FL_LAYOUT leases are granted when we
would need to recall them.
Also included are changes that allow multiple outstanding read
leases of different types on the same file as long as they have a
different owner. This wasn't a problem until now as nfsd never set
FL_LEASE leases, and no one else used FL_DELEG leases, but given that
nfsd will also issue FL_LAYOUT leases we will have to handle it now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Just like for other lock types we should allow different owners to have
a read lease on a file. Currently this can't happen, but with the addition
of pNFS layout leases we'll need this feature.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
We have each of the locks_remove_* variants doing this individually.
Have the caller do it instead, and have locks_remove_flock and
locks_remove_lease just assume that it's a valid pointer.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
This makes things a bit more efficient in the cifs and ceph lock
pushing code.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
Now that we use standard list_heads for tracking leases, we can have
lm_change take a pointer to the lease to be modified instead of a
double pointer.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
We can now add a dedicated spinlock without expanding struct inode.
Change to using that to protect the various i_flctx lists.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
The current scheme of using the i_flock list is really difficult to
manage. There is also a legitimate desire for a per-inode spinlock to
manage these lists that isn't the i_lock.
Start conversion to a new scheme to eventually replace the old i_flock
list with a new "file_lock_context" object.
We start by adding a new i_flctx to struct inode. For now, it lives in
parallel with the i_flock list, but will eventually replace it. The idea is
to allocate a structure to sit in that pointer and act as a locus for
all things file locking.
We allocate a file_lock_context for an inode when the first lock is
added to it, and it's only freed when the inode is freed. We use the
i_lock to protect the assignment, but afterward it should mostly be
accessed locklessly.
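The shape this series converges on, roughly (the dedicated flc_lock
spinlock arrives in a later patch, shown earlier in this log; until then
the lists are protected by the i_lock):
struct file_lock_context {
        spinlock_t              flc_lock;
        struct list_head        flc_flock;      /* flock locks */
        struct list_head        flc_posix;      /* POSIX (fcntl) locks */
        struct list_head        flc_lease;      /* leases */
};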
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
...instead of open-coding it and removing flock locks directly. This
helps consolidate the flock lock removal logic into a single spot.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
...that we can use to queue file_locks to per-ctx list_heads. Go ahead
and convert locks_delete_lock and locks_dispose_list to use it instead
of the fl_block list.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
commit 0efaa7e82f02fe69c05ad28e905f31fc86e6f08e
locks: generic_delete_lease doesn't need a file_lock at all
moves the call to fl->fl_lmops->lm_change() to a place in the
code where fl might be a non-lease lock.
When that happens, fl_lmops is NULL and an Oops ensues.
So add an extra test to restore correct functioning.
Reported-by: Linda Walsh <suse@tlinx.org>
Link: https://bugzilla.suse.com/show_bug.cgi?id=912569
Cc: stable@vger.kernel.org (v3.18)
Fixes: 0efaa7e82f02fe69c05ad28e905f31fc86e6f08e
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
Eliminate the need for a return pointer.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
Like flock locks, leases are owned by the file description. Now that the
i_have_this_lease check in __break_lease is gone, we don't actually use
the fl_owner for leases for anything. So, it's now safe to set this more
appropriately to the same value as the fl_file.
While we're at it, fix up the comments over the fl_owner_t definition
since they're rather out of date.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
Christoph suggests:
"Add a return value to lm_break so that the lock manager can tell the
core code "you can delete this lease right now". That gets rid of
the games with the timeout which require all kinds of race avoidance
code in the users."
Do that here and have the nfsd lease break routine use it when it detects
that there was a race between setting up the lease and it being broken.
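Roughly (a sketch; the removal helper in the break path is illustrative):
        /* in struct lock_manager_operations: */
        bool (*lm_break)(struct file_lock *fl);

        /* in the lease-break path: a "true" return means the lease is
         * already dead and can be removed right away, no timeout games */
        if (fl->fl_lmops->lm_break(fl))
                locks_delete_lock_ctx(fl, &dispose);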
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
Eliminate an unneeded "flock" variable. We can use "fl" as a loop cursor
everywhere. Add an any_leases_conflict helper function as well to
consolidate a bit of code.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
I think that the intent of this code was to ensure that a process won't
deadlock if it has one fd open with a lease on it and then breaks that
lease by opening another fd. In that case it'll treat the __break_lease
call as if it were non-blocking.
This seems wrong -- the process could (for instance) be multithreaded
and managing different fds via different threads. I also don't see any
mention of this limitation in the (somewhat sketchy) documentation.
Remove the check and the non-blocking behavior when i_have_this_lease
is true.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|
|
There was only one place where we still could free a file_lock while
holding the i_lock -- lease_modify. Add a new list_head argument to the
lm_change operation, pass in a private list when calling it, and fix
those callers to dispose of the list once the lock has been dropped.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
Now that we have a saner internal API for managing leases, we no longer
need to mandate that the inode->i_lock be held over most of the lease
code. Push it down into generic_add_lease and generic_delete_lease.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
...and move the fasync setup into it for fcntl lease calls. At the same
time, change the semantics of how the file_lock double-pointer is
handled. Up until now, on a successful lease return you got a pointer to
the lock on the list. This is bad, since that pointer can no longer be
relied on as valid once the inode->i_lock has been released.
Change the code to instead just zero out the pointer if the lease we
passed in ended up being used. Then the callers can just check to see
if it's NULL after the call and free it if it isn't.
The priv argument has the same semantics. The lm_setup function can
zero the pointer out to signal to the caller that it should not be
freed after the function returns.
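A sketch of the resulting caller pattern for fcntl(F_SETLEASE), assuming
fasync_alloc()/fasync_free() and an illustrative lease allocation helper:
        struct file_lock *fl;
        struct fasync_struct *new;
        int error;

        fl = lease_alloc(filp, arg);            /* illustrative allocator */
        if (IS_ERR(fl))
                return PTR_ERR(fl);

        new = fasync_alloc();
        if (!new) {
                locks_free_lock(fl);
                return -ENOMEM;
        }

        error = vfs_setlease(filp, arg, &fl, (void **)&new);
        if (fl)
                locks_free_lock(fl);    /* not consumed: still ours to free */
        if (new)
                fasync_free(new);       /* lm_setup didn't take it */
        return error;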
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
In later patches, we're going to add a new lock_manager_operation to
finish setting up the lease while still holding the i_lock. To do
this, we'll need to pass a little bit of info in the fcntl setlease
case (primarily an fasync structure). Plumb the extra pointer into
there in advance of that.
We declare this pointer as a void ** to make it clear that this is
private info, and that the caller isn't required to set this unless
the lm_setup specifically requires it.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
Some of the latter paragraphs seem ambiguous and just plain wrong.
In particular the break_lease comment makes no sense. We call
break_lease (and break_deleg) from all sorts of vfs-layer functions,
so there is clearly such a method.
Also get rid of some of the other comments about what's needed for
a full implementation.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
Ensure that it's OK to pass in a NULL file_lock double pointer on
a F_UNLCK request and convert the vfs_setlease F_UNLCK callers to
do just that.
Finally, turn the BUG_ON in generic_setlease into a WARN_ON_ONCE
with an error return. That's a problem we can handle without
crashing the box if it occurs.
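Sketch of the two sides of this change (what the assertion checks and the
error value are illustrative):
        /* unlock callers no longer need to cook up a file_lock at all: */
        vfs_setlease(filp, F_UNLCK, NULL, NULL);

        /* in generic_setlease(): a broken caller used to hit a BUG_ON() */
        if (WARN_ON_ONCE(!(*flp)->fl_lmops->lm_break))
                return -ENOLCK;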
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
lease_get_mtime is called without the i_lock held, so there's no
guarantee about the stability of the list. Between the time we assign
"flock" and the time we dereference it to check whether it's a lease
and whether it's for write, the lease could be freed.
Ensure that that doesn't occur by taking the i_lock before trying
to check the lease.
Cc: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
security_file_set_fowner always returns 0, so make it f_setown and
__f_setown void return functions and fix up the error handling in the
callers.
Cc: linux-security-module@vger.kernel.org
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
There are no callers of these functions.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
|