path: root/drivers/md/bcache/alloc.c
Commit history, newest first. Each entry shows the subject, author, date, and diffstat (files changed, lines removed/added).
* bcache: Better alloc tracepoints (Kent Overstreet, 2014-03-18; 1 file, -4/+11)

  Change the invalidate tracepoint to indicate how much data we're invalidating, and change the alloc tracepoints to indicate what offset they're for.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Improve bucket_prio() calculation (Kent Overstreet, 2014-01-08; 1 file, -2/+15)

  When deciding what order to reuse buckets in, we take into account both the bucket's priority (which indicates LRU order) and the amount of live data in that bucket. The way they were scaled together wasn't as correct as it could be; this patch improves and documents it.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
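  As a rough illustration of the idea only (the function name and the exact formula here are assumptions, not the patch's actual code), combining LRU age with the amount of live data might look like:

    /*
     * Hypothetical sketch: buckets are reused in order of a combined
     * key. prio decays over time, so an old (least recently used)
     * bucket has a low prio; a bucket with few live sectors is cheap
     * to reuse. Multiplying the two makes an old, nearly empty bucket
     * sort first.
     */
    static inline unsigned bucket_reuse_key(unsigned prio, unsigned min_prio,
                                            unsigned live_sectors)
    {
            /* smaller key == better candidate for reuse */
            return (prio - min_prio) * (live_sectors + 1);
    }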
* bcache: Rework allocator reserves (Kent Overstreet, 2014-01-08; 1 file, -28/+44)

  We need a reserve for allocating buckets for new btree nodes - and now that we've got multiple btrees, it really needs to be per btree. This reworks the reserves so we've got separate freelists for each reserve instead of watermarks, which seems to make things a bit cleaner, and it adds some code so that btree_split() can make sure the reserve is available before it starts.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
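  The shape of the change, as a hedged sketch (the names mirror the commit message, not necessarily the actual declarations): one freelist per reserve replaces a single freelist with watermarks, so a caller like btree_split() can check that its reserve is non-empty before committing to the split.

    #include <linux/types.h>

    /* Hypothetical sketch of per-reserve freelists */
    enum alloc_reserve {
            RESERVE_BTREE,          /* buckets held back for new btree nodes */
            RESERVE_MOVINGGC,       /* for the moving garbage collector */
            RESERVE_NONE,           /* ordinary data allocations */
            RESERVE_NR,
    };

    struct bucket_freelist {
            size_t size, used;
            long *buckets;          /* free bucket numbers, refilled by
                                     * the allocator thread */
    };

    /* one freelist per reserve instead of one list with watermarks */
    struct bucket_freelist free_lists[RESERVE_NR];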
* bcache: bugfix - moving_gc now moves only correct buckets (Nicholas Swenson, 2013-12-16; 1 file, -0/+2)

  Removed gc_move_threshold because picking buckets only by threshold could lead to moving extra buckets (i.e. buckets at the threshold that aren't supposed to be moved due to space considerations). This is replaced by a GC_MOVE bit in the gc_mark bitmask; now only marked buckets get moved.

  Signed-off-by: Nicholas Swenson <nks@daterainc.com>
  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
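  A minimal sketch of the mechanism (the bit position and helpers are made up; bcache packs this state differently): gc sets a per-bucket flag during its pass, and moving gc copies only flagged buckets.

    #include <linux/types.h>

    #define GC_MOVE (1U << 15)      /* assumed flag bit within gc_mark */

    /* gc marks exactly the buckets it wants copied... */
    static inline void set_gc_move(unsigned *gc_mark)
    {
            *gc_mark |= GC_MOVE;
    }

    /* ...and moving gc skips everything else, with no threshold test */
    static inline bool should_move_bucket(unsigned gc_mark)
    {
            return gc_mark & GC_MOVE;
    }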
* bcache: Don't bother with bucket refcount for btree node allocations (Kent Overstreet, 2013-11-10; 1 file, -2/+2)

  The bucket refcount (dropped with bkey_put()) is only needed to prevent the newly allocated bucket from being garbage collected until we've added a pointer to it somewhere. But for btree node allocations, the fact that we have btree nodes locked is enough to guard against races with garbage collection. Eventually the per-bucket refcount is going to be replaced with something specific to bch_alloc_sectors().

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Debug code improvements (Kent Overstreet, 2013-11-10; 1 file, -3/+2)

  A couple of changes:

  * Consolidate bch_check_keys() and bch_check_key_order(), and move the checks that only check_key_order() could do to bch_btree_iter_next().

  * Get rid of CONFIG_BCACHE_EDEBUG - now, all that code is compiled in when CONFIG_BCACHE_DEBUG is enabled, and there's a sysfs file to flip on the EDEBUG checks at runtime.

  * Dropped an old, not terribly useful check in rw_unlock(), and refactored/improved some of the other debug code.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
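  A sketch of the runtime-toggle pattern described above (the flag and macro names are assumptions): the expensive checks are compiled in under CONFIG_BCACHE_DEBUG but only executed when a sysfs file has set the flag.

    #include <linux/bug.h>

    #ifdef CONFIG_BCACHE_DEBUG
    /* assumed name; set/cleared through a sysfs attribute */
    extern bool expensive_debug_checks;

    #define EBUG_ON(cond)                                           \
    do {                                                            \
            if (expensive_debug_checks)                             \
                    BUG_ON(cond);                                   \
    } while (0)
    #else
    #define EBUG_ON(cond)           do {} while (0)
    #endif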
* bcache: Move sector allocator to alloc.c (Kent Overstreet, 2013-11-10; 1 file, -0/+180)

  Just reorganizing things a bit.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Convert gc to a kthread (Kent Overstreet, 2013-11-10; 1 file, -3/+3)

  We needed a dedicated rescuer workqueue for gc anyways... and gc was conceptually a dedicated thread, just one that wasn't running all the time. Switch it to a dedicated thread to make the code a bit more straightforward.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
* bcache: Convert bucket_wait to wait_queue_head_t (Kent Overstreet, 2013-11-10; 1 file, -35/+47)

  At one point we did do fancy asynchronous waiting stuff with bucket_wait, but that's all gone (and bucket_wait is used a lot less than it used to be). So use the standard primitives.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
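  A minimal sketch of the standard primitives this switches to (try_get_free_bucket() is a made-up stand-in for the real freelist pop):

    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(bucket_wait);

    extern long try_get_free_bucket(void); /* hypothetical; < 0 if empty */

    static long get_free_bucket(void)
    {
            long b;

            /* sleep until the allocator has refilled the freelist */
            wait_event(bucket_wait, (b = try_get_free_bucket()) >= 0);
            return b;
    }

    static void buckets_freed(void)
    {
            /* allocator side: wake anyone blocked in get_free_bucket() */
            wake_up(&bucket_wait);
    }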
* bcache: Use blkdev_issue_discard() (Kent Overstreet, 2013-11-10; 1 file, -103/+11)

  The old asynchronous discard code was really a relic from when all the allocation code was asynchronous - now that allocation runs out of a dedicated thread there's no point in keeping around all that complicated machinery.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
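  The replacement boils down to a single synchronous call. A sketch under the API of that era (the wrapper and its arguments are illustrative, and later kernels dropped the flags parameter):

    #include <linux/blkdev.h>

    /* discard one bucket's worth of sectors; sleeps until the discard
     * completes, which is fine from the dedicated allocator thread */
    static void discard_bucket(struct block_device *bdev, sector_t start,
                               sector_t nr_sectors)
    {
            blkdev_issue_discard(bdev, start, nr_sectors, GFP_KERNEL, 0);
    }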
* bcache: Allocation kthread fixes (Kent Overstreet, 2013-07-12; 1 file, -10/+8)

  The alloc kthread should've been using try_to_freeze() - and also there was the potential for the alloc kthread to get woken up after it had shut down, which would have been bad.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
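  A sketch of the resulting loop shape (illustrative only, not the patch's code): re-check kthread_should_stop() after setting the task state, so a wakeup racing with shutdown can't be missed, and call try_to_freeze() so the thread cooperates with system suspend.

    #include <linux/kthread.h>
    #include <linux/freezer.h>
    #include <linux/sched.h>

    static int allocator_thread_sketch(void *arg)
    {
            while (!kthread_should_stop()) {
                    try_to_freeze();

                    /* ... invalidate buckets, refill freelists ... */

                    set_current_state(TASK_INTERRUPTIBLE);
                    if (kthread_should_stop())
                            break;
                    schedule();
            }

            __set_current_state(TASK_RUNNING);
            return 0;
    }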
* bcache: Fix/revamp tracepoints (Kent Overstreet, 2013-06-26; 1 file, -7/+3)

  Reworked the tracepoints to be more sensible, and fixed a null pointer deref in one of them.

  Converted some of the pr_debug()s to tracepoints - this is partly a performance optimization; it used to be that, with DEBUG and CONFIG_DYNAMIC_DEBUG disabled, pr_debug() was an empty macro, but at some point it was changed to an empty inline function. Some of the pr_debug() statements had rather expensive function calls as part of the arguments, so this code was getting run unnecessarily even on non-debug kernels - in some fast paths, too.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>
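  The performance point, illustrated (expensive_summary() is a made-up helper): an empty inline function must still evaluate its arguments, so the formatting call below runs even on non-debug kernels, whereas the tracepoint versions take only raw pointers and defer any expensive formatting until the event is actually enabled.

    #include <linux/printk.h>

    struct bkey;
    extern const char *expensive_summary(struct bkey *k); /* hypothetical */

    static void insert_debug(struct bkey *k)
    {
            /* argument evaluated unconditionally once pr_debug() is an
             * empty inline function rather than an empty macro */
            pr_debug("inserting %s\n", expensive_summary(k));
    }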
* bcache: Convert allocator thread to kthread (Kent Overstreet, 2013-06-26; 1 file, -12/+22)

  Using a workqueue when we just want a single thread is a bit silly.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>
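  A minimal sketch of the creation side (names assumed): a persistent thread started with kthread_run() replaces a self-requeueing work item.

    #include <linux/kthread.h>
    #include <linux/err.h>

    extern int allocator_thread_fn(void *arg);  /* the thread's main loop */

    static int start_allocator(void *ca)
    {
            struct task_struct *t;

            t = kthread_run(allocator_thread_fn, ca, "bcache_allocator");
            if (IS_ERR(t))
                    return PTR_ERR(t);

            /* caller stashes t so shutdown can kthread_stop() it */
            return 0;
    }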
* bcache: Allocator cleanup/fixes (Kent Overstreet, 2013-04-30; 1 file, -23/+49)

  The main fix is that bch_allocator_thread() wasn't waiting on garbage collection to finish (if invalidate_buckets had set ca->invalidate_needs_gc); we need that to make sure the allocator doesn't spin and potentially block gc from finishing.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>
* bcache: Style/checkpatch fixes (Kent Overstreet, 2013-03-25; 1 file, -16/+6)

  Took out some nested functions, and fixed some more checkpatch complaints.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>
  Cc: linux-bcache@vger.kernel.org
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
* bcache: A block layer cache (Kent Overstreet, 2013-03-23; 1 file, -0/+583)

  Does writethrough and writeback caching, handles unclean shutdown, and has a bunch of other nifty features motivated by real world usage. See the wiki at http://bcache.evilpiepirate.org for more.

  Signed-off-by: Kent Overstreet <koverstreet@google.com>