Commit message  Author  Age  Files  Lines
* Merge branch 'cfq-2.6.33' into for-2.6.33  Jens Axboe  2009-11-03  1  -52/+321
|\
| * cfq-iosched: fix style issue in cfq_get_avg_queues()  Jens Axboe  2009-10-28  1  -2/+2
    Line breaks and bad brace placement.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: fairness for sync no-idle queues  Corrado Zoccolo  2009-10-28  1  -32/+168
    Currently no-idle queues in cfq are not serviced fairly: even if they can
    only dispatch a small number of requests at a time, they have to compete
    with idling queues to be serviced, experiencing large latencies. We should
    notice, instead, that no-idle queues are the ones that would benefit most
    from having low latency; in fact they are any of:
    * processes with large think times (e.g. interactive ones like file
      managers)
    * seeky processes (e.g. programs faulting in their code at startup)
    * queues marked as no-idle from upper levels, to improve the latency of
      those requests.

    This patch improves the fairness and latency for those queues by:
    * separating sync idle, sync no-idle and async queues into separate
      service_trees, for each priority
    * servicing all no-idle queues together
    * idling when the last no-idle queue has been serviced, to anticipate
      more no-idle work
    * computing the timeslices allotted to the idle and no-idle service_trees
      proportionally to the number of processes in each set.

    Servicing all no-idle queues together should give a performance boost on
    NCQ-capable drives, without compromising fairness.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
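    A small standalone C model of the grouping described above. This is a
    sketch only, not the kernel code: the enum values and helper name mirror
    the commit's terminology but are illustrative.

      #include <stdio.h>

      /* Sketch: one service tree per (priority class, workload type), so all
       * sync no-idle queues of a class land on the same tree and are serviced
       * together under a single idle window. */
      enum wl_class { RT_WL, BE_WL, IDLE_WL, NUM_WL_CLASSES };
      enum wl_type  { ASYNC_WL, SYNC_NOIDLE_WL, SYNC_WL, NUM_WL_TYPES };

      struct service_tree { int nr_queues; };          /* stand-in for an rb-tree */

      static struct service_tree trees[NUM_WL_CLASSES][NUM_WL_TYPES];

      static struct service_tree *tree_for(enum wl_class c, int is_sync, int may_idle)
      {
          enum wl_type t = !is_sync ? ASYNC_WL
                         : may_idle ? SYNC_WL
                                    : SYNC_NOIDLE_WL;
          return &trees[c][t];
      }

      int main(void)
      {
          tree_for(BE_WL, 1, 0)->nr_queues++;   /* a seeky reader: sync, no-idle   */
          tree_for(BE_WL, 1, 0)->nr_queues++;   /* another one lands on same tree  */
          tree_for(BE_WL, 1, 1)->nr_queues++;   /* a sequential reader: sync, idle */
          printf("BE sync no-idle queues: %d\n", trees[BE_WL][SYNC_NOIDLE_WL].nr_queues);
          printf("BE sync idle queues   : %d\n", trees[BE_WL][SYNC_WL].nr_queues);
          return 0;
      }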
| * cfq-iosched: enable idling for last queue on priority class  Corrado Zoccolo  2009-10-28  1  -3/+31
    cfq can disable idling for queues in various circumstances. When workloads
    of different priorities are competing, if the higher priority queue has
    idling disabled, lower priority queues may steal its disk share. For
    example, in a scenario with an RT process performing seeky reads vs a BE
    process performing sequential reads, on NCQ-enabled hardware with
    low_latency unset, the RT process will dispatch only the few pending
    requests every full slice of service for the BE process.

    The patch solves this issue by always performing idle on the last queue
    at a given priority class > idle. If the same process, or one that can
    preempt it (so at the same priority or higher), submits a new request
    within the idle window, the lower priority queue won't dispatch, saving
    the disk bandwidth for higher priority ones.

    Note: this doesn't touch the non-rotational + NCQ case (no hardware to
    test whether this is a benefit in that case).

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: reimplement priorities using different service trees  Corrado Zoccolo  2009-10-28  1  -34/+82
    We use different service trees for different priority classes. This allows
    a simplification in the service tree insertion code, which no longer has
    to consider priority while walking the tree.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: preparation to handle multiple service trees  Corrado Zoccolo  2009-10-28  1  -11/+19
    We embed a pointer to the service tree in each queue, to handle multiple
    service trees easily. Service trees are enriched with a counter.
    cfq_add_rq_rb is invoked after putting the rq in the fifo, to ensure that
    all fields in rq are properly initialized.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: adapt slice to number of processes doing I/O  Corrado Zoccolo  2009-10-28  1  -2/+51
    When the number of processes performing I/O concurrently increases, a
    fixed time slice per process will cause large latencies.

    This patch, if low_latency mode is enabled, will scale the time slice
    assigned to each process according to a 300ms target latency.

    In order to keep fairness among processes:
    * The number of active processes is computed using a special form of
      running average that quickly follows sudden increases (to keep latency
      low) and decreases slowly (to have fairness in spite of rapid decreases
      of this value).

    To safeguard sequential bandwidth, we impose a minimum time slice
    (computed using 2*cfq_slice_idle as base, adjusted according to priority
    and async-ness).

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
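    A rough standalone model of the slice scaling described above. The
    constants and the helper name are made up for illustration; the real
    tunables and the running-average logic live in cfq-iosched.c.

      #include <stdio.h>

      #define TARGET_LATENCY_MS 300
      #define SLICE_IDLE_MS       8
      #define BASE_SLICE_MS     100

      /* Shrink the per-process slice so all busy queues can be cycled within the
       * target latency, but never below a floor that protects sequential bandwidth. */
      static unsigned int scaled_slice(unsigned int base_ms, unsigned int busy_queues)
      {
          unsigned int low_slice = 2 * SLICE_IDLE_MS;
          if (busy_queues == 0)
              return base_ms;
          unsigned int expected_latency = base_ms * busy_queues;
          if (expected_latency <= TARGET_LATENCY_MS)
              return base_ms;                       /* few queues: no scaling needed */
          unsigned int slice = base_ms * TARGET_LATENCY_MS / expected_latency;
          return slice < low_slice ? low_slice : slice;
      }

      int main(void)
      {
          for (unsigned int n = 1; n <= 16; n *= 2)
              printf("%2u busy queues -> %3u ms slice\n", n, scaled_slice(BASE_SLICE_MS, n));
          return 0;
      }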
* | Do not __always_inline bvec_kmap_irq() and bvec_kunmap_irq()  Alberto Bertogli  2009-11-02  1  -6/+2
    So remove both the comment and the inline requirement, going back to the
    inline hint.

    Signed-off-by: Alberto Bertogli <albertito@blitiri.com.ar>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | cfq-iosched: simplify prio-unboost code  Corrado Zoccolo  2009-11-02  1  -5/+3
    Eliminate redundant checks.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | blkdev: flush disk cache on ->fsync  Christoph Hellwig  2009-10-29  1  -1/+11
    Currently there is no barrier support in the block device code. That means
    we cannot guarantee any sort of data integrity when using the block device
    node with disk write caches enabled. Using the raw block device node is a
    typical use case for virtualization (and I assume databases, too).

    This patch changes block_fsync to issue a cache flush and thus make fsync
    on block device nodes actually useful.

    Note that in mainline we would also need to add such code to the
    ->aio_write method for O_SYNC handling, but assuming that Jan's patch
    series for the O_SYNC rewrite goes in, it will also call into ->fsync for
    2.6.32.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
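    The shape such an ->fsync method can take, sketched from the description
    above rather than copied from the patch. blkdev_issue_flush() is the
    block-layer helper for sending a cache flush; the two-argument form shown
    matches the 2.6.32-era API, and the ->fsync prototype is the one file
    operations used at that time.

      /* Sketch only: make fsync() on a block device node flush the disk's
       * volatile write cache instead of returning without any barrier. */
      static int block_fsync(struct file *filp, struct dentry *dentry, int datasync)
      {
              struct block_device *bdev = I_BDEV(filp->f_mapping->host);

              /* Ask the device to commit its write cache to stable media. */
              return blkdev_issue_flush(bdev, NULL);
      }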
* | block: move bdi/address_space unplug functions to backing-dev.h  Jens Axboe  2009-10-29  3  -13/+14
    There's nothing block related about them, the backing device is used by
    things like NFS etc as well. This gets rid of the need to protect such
    calls by CONFIG_BLOCK.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | drbd: fix in_flight rw indexing  Jens Axboe  2009-10-28  1  -2/+2
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | aio: implement request batching  Jeff Moyer  2009-10-28  2  -6/+63
    Some workloads issue batches of small I/O, and the performance is poor due
    to the call to blk_run_address_space for every single iocb. Nathan Roberts
    pointed this out, and suggested that by deferring this call until all I/Os
    in the iocb array are submitted to the block layer, we can realize some
    impressive performance gains (up to 30% for sequential 4k reads in batches
    of 16).

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
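    A toy standalone model of the batching idea: instead of kicking the
    request queue once per iocb, queue the whole batch and kick once at the
    end. All names here are illustrative, not the kernel's.

      #include <stdio.h>

      static int unplug_calls;

      static void queue_io(int iocb)  { (void)iocb; /* pretend to queue a request */ }
      static void run_queue(void)     { unplug_calls++; /* stand-in for blk_run_address_space */ }

      static void submit_old(int nr)  { for (int i = 0; i < nr; i++) { queue_io(i); run_queue(); } }
      static void submit_new(int nr)  { for (int i = 0; i < nr; i++) queue_io(i); run_queue(); }

      int main(void)
      {
          unplug_calls = 0; submit_old(16);
          printf("per-iocb unplug: %d queue kicks\n", unplug_calls);
          unplug_calls = 0; submit_new(16);
          printf("batched unplug : %d queue kick\n", unplug_calls);
          return 0;
      }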
* | block: get rid of the WRITE_ODIRECT flag  Jeff Moyer  2009-10-28  2  -3/+1
|/
    The WRITE_ODIRECT flag is only used in one place, and that code path
    happens to also call blk_run_address_space. The introduction of this flag,
    then, could result in the device being unplugged twice for every I/O.
    Further, with the batching changes in the next patch, we don't want an
    O_DIRECT write to imply a queue unplug.

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq-iosched: improve hw_tag detection  Shaohua Li  2009-10-27  1  -0/+12
    If the active queue doesn't have enough requests and the idle window is
    open, cfq will not dispatch sufficient requests to the hardware. In that
    situation the current code zeroes hw_tag, but the low queue depth is
    caused by cfq not dispatching enough requests, not by a hardware queue
    that doesn't work. Don't zero hw_tag in that case.

    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq: break apart merged cfqqs if they stop cooperating  Jeff Moyer  2009-10-26  1  -3/+76
    cfq_queues are merged if they are issuing requests within the mean seek
    distance of one another. This patch detects when the cooperation stops and
    breaks the queues back up.

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq: change the meaning of the cfqq_coop flag  Jeff Moyer  2009-10-26  1  -14/+6
    The flag used to indicate that a cfqq was allowed to jump ahead in the
    scheduling order due to submitting a request close to the queue that just
    executed. Since closely cooperating queues are now merged, the flag holds
    little meaning. Change it to indicate that multiple queues were merged.
    This will later be used to allow the breaking up of merged queues when
    they are no longer cooperating.

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq: merge cooperating cfq_queues  Jeff Moyer  2009-10-26  1  -2/+87
    When cooperating cfq_queues are detected currently, they are allowed to
    skip ahead in the scheduling order. It is much more efficient to
    automatically share the cfq_queue data structure between cooperating
    processes. Performance of the read-test2 benchmark (which is written to
    emulate the dump(8) utility) went from 12MB/s to 90MB/s on my SATA disk.
    NFS servers with multiple nfsd threads also saw performance increases.

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq: calculate the seek_mean per cfq_queue not per cfq_io_context  Jeff Moyer  2009-10-26  2  -40/+33
    Async cfq_queues are already shared between processes within the same
    priority, and forthcoming patches will change the mapping of cic to sync
    cfq_queue from 1:1 to 1:N. So, calculate the seekiness of a process based
    on the cfq_queue instead of the cfq_io_context.

    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* Merge branch 'for-linus' into for-2.6.33  Jens Axboe  2009-10-13  15  -201/+228
|\
| * cciss: Add cciss_allow_hpsa module parameter  Stephen M. Cameron  2009-10-13  1  -40/+34
    Add cciss_allow_hpsa module parameter. This parameter causes the cciss
    driver to ignore any Smart Array devices known to be supported by the
    hpsa driver.

    Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cciss: Fix multiple calls to pci_release_regions  Stephen M. Cameron  2009-10-13  1  -2/+3
    Fix multiple calls to pci_release_regions. If cciss_pci_init fails, it
    already does any necessary call to pci_release_regions, so this does not
    need to be done again in cciss_init_one in that case.

    Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * blk-settings: fix function parameter kernel-doc notation  Randy Dunlap  2009-10-12  1  -1/+1
    Fix kernel-doc notation in blk-settings.c::blk_queue_max_discard_sectors().

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * writeback: kill space in debugfs item name  Wu Fengguang  2009-10-09  1  -1/+1
    The space is not script friendly, kill it.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * writeback: account IO throttling wait as iowait  Wu Fengguang  2009-10-09  2  -4/+2
    It makes sense to do IOWAIT when someone is blocked due to IO throttling,
    as suggested by Kame and Peter.

    There is an old comment for not doing IOWAIT on throttle, however it has
    been mismatching the code for a long time.

    If we stop accounting IOWAIT for 2.6.32, it could be an undesirable
    behavior change. So restore the io_schedule.

    CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * elv_iosched_store(): fix strstrip() misuse  KOSAKI Motohiro  2009-10-09  1  -3/+1
    elv_iosched_store() ignores the return value of strstrip(), which leads to
    slightly inconsistent behavior. This patch fixes it.

    <before>
    ====================================
    # cd /sys/block/{blockdev}/queue

    case1:
    # echo "anticipatory" > scheduler
    # cat scheduler
    noop [anticipatory] deadline cfq

    case2:
    # echo "anticipatory " > scheduler
    # cat scheduler
    noop [anticipatory] deadline cfq

    case3:
    # echo " anticipatory" > scheduler
    bash: echo: write error: Invalid argument

    <after>
    ====================================
    # cd /sys/block/{blockdev}/queue

    case1:
    # echo "anticipatory" > scheduler
    # cat scheduler
    noop [anticipatory] deadline cfq

    case2:
    # echo "anticipatory " > scheduler
    # cat scheduler
    noop [anticipatory] deadline cfq

    case3:
    # echo " anticipatory" > scheduler
    noop [anticipatory] deadline cfq

    Cc: Li Zefan <lizf@cn.fujitsu.com>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
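    The misuse is easy to reproduce in a userspace model. The helper below
    mimics the kernel's strstrip()/strim() semantics (trim trailing whitespace
    in place, return a pointer past any leading whitespace), so dropping the
    return value, as the buggy code did, keeps the leading spaces.

      #include <stdio.h>
      #include <string.h>
      #include <ctype.h>

      /* Userspace stand-in for the kernel's strstrip(). */
      static char *strstrip_demo(char *s)
      {
          size_t len = strlen(s);
          while (len && isspace((unsigned char)s[len - 1]))
              s[--len] = '\0';
          while (isspace((unsigned char)*s))
              s++;
          return s;
      }

      int main(void)
      {
          char buf1[] = " anticipatory";
          char buf2[] = " anticipatory";

          strstrip_demo(buf1);                  /* buggy: return value dropped        */
          char *name = strstrip_demo(buf2);     /* fixed: use the returned pointer    */

          printf("buggy : '%s'\n", buf1);       /* still carries the leading space    */
          printf("fixed : '%s'\n", name);       /* 'anticipatory'                     */
          return 0;
      }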
| * cfq-iosched: avoid probable slice overrun when idling  Corrado Zoccolo  2009-10-08  1  -0/+9
    If the average think time is larger than the remaining time slice for any
    given queue, don't allow it to idle. A successful idle also means that we
    need to dispatch and complete a request, so if we don't even have time
    left for the idle process, we would overrun the slice in any case.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
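    The check amounts to a one-line comparison; a toy version with made-up
    names, not the kernel's:

      #include <stdbool.h>
      #include <stdio.h>

      /* Arm the idle timer only if the queue's average think time still fits in
       * what is left of its slice; otherwise idling guarantees a slice overrun. */
      static bool worth_idling(unsigned long think_time_ms, unsigned long slice_left_ms)
      {
          return think_time_ms <= slice_left_ms;
      }

      int main(void)
      {
          printf("think  4ms, 20ms left -> idle? %d\n", worth_idling(4, 20));
          printf("think 30ms, 20ms left -> idle? %d\n", worth_idling(30, 20));
          return 0;
      }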
| * cfq-iosched: apply bool value where we return 0/1  Jens Axboe  2009-10-07  1  -37/+31
    Saves 16 bytes of text, woohoo. But the more important point is that it
    makes the code more readable when returning bool for 0/1 cases.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: fix think time allowed for seekers  Corrado Zoccolo  2009-10-07  1  -1/+4
    CFQ enables idle only for processes that think less than the allowed idle
    time. Since idle time is lower for seeky queues, we should use the correct
    value in the comparison.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: fix the slice residual sign  Jens Axboe  2009-10-06  1  -1/+7
    We should subtract the slice residual from the rb tree key, since a
    negative residual count indicates that the cfqq overran its slice the last
    time. Hence we want to add the overrun time, to position it a bit further
    away in the service tree.

    Reported-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
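    A toy illustration with made-up numbers: the service-tree key is a point
    in time, and the residual is what was left of the slice (negative if the
    queue overran), so subtracting it pushes an overrunning queue further out
    and pulls an underrunning queue in.

      #include <stdio.h>

      int main(void)
      {
          long base = 1000;
          long residual_underrun =  30;   /* stopped with 30 ms of slice unused */
          long residual_overrun  = -20;   /* overran its slice by 20 ms         */

          printf("underran -> key %ld (serviced earlier)\n", base - residual_underrun);
          printf("overran  -> key %ld (serviced later)\n",   base - residual_overrun);
          return 0;
      }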
| * cfq-iosched: abstract out the 'may this cfqq dispatch' logic  Jens Axboe  2009-10-06  1  -54/+67
    Makes the whole thing easier to read, cfq_dispatch_requests() was a bit
    messy before.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * block: use proper BLK_RW_ASYNC in blk_queue_start_tag()  Jens Axboe  2009-10-06  1  -1/+1
    Makes it easier to read than the 0.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * block: Seperate read and write statistics of in_flight requests v2  Nikanth Karthikesan  2009-10-06  6  -20/+43
    Commit a9327cac440be4d8333bba975cbbf76045096275 added separate read and
    write statistics of in_flight requests, and exported the number of read
    and write requests in progress separately through sysfs.

    But Corrado Zoccolo <czoccolo@gmail.com> reported getting strange output
    from "iostat -kx 2". Global values for service time and utilization were
    garbage. For interval values, utilization was always 100%, and service
    time was higher than normal. So this was reverted by commit
    0f78ab9899e9d6acb09d5465def618704255963b.

    The problem was in part_round_stats_single(); I missed the following:

              if (now == part->stamp)
                      return;

      -       if (part->in_flight) {
      +       if (part_in_flight(part)) {
                      __part_stat_add(cpu, part, time_in_queue,
                                      part_in_flight(part) * (now - part->stamp));
                      __part_stat_add(cpu, part, io_ticks, (now - part->stamp));

    With this chunk included, the reported regression gets fixed.

    Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * block: get rid of kblock_schedule_delayed_work()  Jens Axboe  2009-10-05  3  -25/+11
    It was briefly introduced to allow CFQ to do delayed scheduling, but we
    ended up removing that feature again. So let's kill the function and
    export, and just switch CFQ back to the normal work schedule, since it is
    now passing in a '0' delay from all call sites.

    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| * cfq-iosched: fix possible problem with jiffies wraparound  Corrado Zoccolo  2009-10-05  1  -3/+6
    The RR service tree is indexed by a key that is relative to current
    jiffies. This can cause problems on jiffies wraparound. The patch fixes it
    using a time_before comparison, and changes the add_front path to use a
    relative number, too.

    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
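    Why a plain comparison breaks on wraparound, and why the signed-difference
    trick behind the kernel's time_before()/time_after() macros does not
    (userspace model; the macro bodies mirror the jiffies helpers):

      #include <stdio.h>

      typedef unsigned long jiffies_t;

      #define time_after(a, b)   ((long)((b) - (a)) < 0)
      #define time_before(a, b)  time_after(b, a)

      int main(void)
      {
          jiffies_t now = (jiffies_t)-10;   /* just before wraparound             */
          jiffies_t key = now + 100;        /* 100 ticks in the future, wrapped   */

          printf("plain '<'   says key is in the future: %d\n", now < key);             /* 0: wrong */
          printf("time_before says key is in the future: %d\n", time_before(now, key)); /* 1: right */
          return 0;
      }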
| * cfq-iosched: fix issue with rq-rq merging and fifo list ordering  Jens Axboe  2009-10-05  1  -8/+7
    cfq uses rq->start_time as the fifo indicator, but that field may get
    modified prior to cfq doing its fifo list adjustment when a request gets
    merged with another request. This can cause the fifo list to become
    unordered.

    Reported-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | drbd: needs __ratelimit()  Randy Dunlap  2009-10-07  1  -0/+1
    drbd_int.h uses __ratelimit(), so it needs to #include ratelimit.h:

      drivers/block/drbd/drbd_int.h:1765: error: implicit declaration of function '__ratelimit'

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: drbd-dev@lists.linbit.com
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | drbd: Work on permission enforcement  Philipp Reisner  2009-10-06  2  -1/+7
    Now we have the capabilities of the sending process available, use them to
    enforce CAP_SYS_ADMIN.

    Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | drbd: fixup for reverted dual in_flight patch  Jens Axboe  2009-10-05  1  -2/+2
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* | Merge branch 'master' into for-2.6.33  Jens Axboe  2009-10-05  561  -15164/+18935
|\ \
| |/
| * Linux 2.6.32-rc3  (tag: v2.6.32-rc3)  Linus Torvalds  2009-10-04  1  -1/+1
    I'm skipping -rc2 because the -rc1 Makefile mistakenly said -rc2, so in
    order to avoid confusion, I'm jumping from -rc1 to -rc3. That way, when
    'uname' (or an oops report) says 2.6.32-rc2, there's no confusion about
    whether people perhaps meant -rc1 or -rc2.
| * headers: remove sched.h from poll.h  Alexey Dobriyan  2009-10-04  26  -1/+27
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * Merge branch 'acpi-pad' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6  Linus Torvalds  2009-10-04  4  -0/+535
| |\
    * 'acpi-pad' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6:
        acpi_pad: build only on X86
        ACPI: create Processor Aggregator Device driver

    Fixup trivial conflicts in MAINTAINERS file.
| | * acpi_pad: build only on X86  Len Brown  2009-09-27  1  -0/+1
    X86_FEATURE_MWAIT doesn't exist on ia64...

    Signed-off-by: Len Brown <len.brown@intel.com>
| | * ACPI: create Processor Aggregator Device driver  Shaohua Li  2009-07-31  4  -0/+535
    ACPI 4.0 created the logical "processor aggregator device" as a mechanism
    for platforms to ask the OS to force otherwise busy processors to enter
    (power saving) idle. The intent is to lower power consumption to ride out
    transient electrical and thermal emergencies, rather than powering off the
    server.

    On platforms that can save more power/performance via P-states, the
    platform will first exhaust P-states before forcing idle. However, the
    relative benefit of P-states vs. idle states is platform dependent, and
    thus this driver need not know or care about it.

    This driver does not use the kernel's CPU hot-plug mechanism because after
    the transient emergency is over, the system must be returned to its normal
    state, and hotplug would permanently break both cpusets and binding.

    So to force idle, the driver creates a power saving thread. The scheduler
    will migrate the thread to the preferred CPU. The thread has max priority
    and SCHED_RR policy, so it can occupy one CPU. To save power, the thread
    will invoke the deep C-state entry instructions. To avoid starvation, the
    thread will sleep 5% of the time every second (the current RT scheduler
    has a threshold to avoid starvation, but if other CPUs are idle, the CPU
    can borrow CPU time from others, which makes the mechanism not work here).

    Vaidyanathan Srinivasan has proposed scheduler enhancements to allow
    injecting idle time into the system. This driver doesn't depend on those
    enhancements, but could cut over to them when they are available.

    Peter Z. does not favor upstreaming this driver until those scheduler
    enhancements are in place. However, we favor upstreaming this driver now
    because it is useful now, and can be enhanced over time.

    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    NACKed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    Signed-off-by: Len Brown <len.brown@intel.com>
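    A userspace sketch of the 95%/5% duty cycle described above, not the
    driver itself: a max-priority SCHED_RR task that occupies the CPU most of
    each second and yields briefly to avoid starving everything else. The real
    driver runs in the kernel and enters deep C-states via monitor/mwait; the
    busy loop here is only a stand-in.

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          struct sched_param sp = { .sched_priority = sched_get_priority_max(SCHED_RR) };
          if (sched_setscheduler(0, SCHED_RR, &sp) != 0)
              perror("sched_setscheduler (needs root)");

          for (int cycle = 0; cycle < 3; cycle++) {
              struct timespec start, now, pause = { 0, 50 * 1000 * 1000 };  /* 50 ms = 5% */
              clock_gettime(CLOCK_MONOTONIC, &start);
              do {                      /* "forced idle" stand-in: occupy the CPU for 950 ms */
                  clock_gettime(CLOCK_MONOTONIC, &now);
              } while ((now.tv_sec - start.tv_sec) * 1000000000L +
                       (now.tv_nsec - start.tv_nsec) < 950 * 1000 * 1000L);
              nanosleep(&pause, NULL);  /* yield 5% of each second to avoid starvation */
              printf("cycle %d done\n", cycle);
          }
          return 0;
      }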
| * | Merge branch 'sfi-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-sfi-2.6  Linus Torvalds  2009-10-04  1  -4/+13
| |\ \
    * 'sfi-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-sfi-2.6:
        SFI: remove __init from sfi_verify_table
        SFI: fix section mismatch warnings in sfi_core.c
| | * | SFI: remove __init from sfi_verify_table  Arjan van de Ven  2009-10-03  1  -1/+1
    sfi_verify_table() is called at runtime, and thus cannot be __init.

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Len Brown <len.brown@intel.com>
| | * | SFI: fix section mismatch warnings in sfi_core.c  Rakib Mullick  2009-10-03  1  -3/+12
    The functions sfi_map_memory/sfi_unmap_memory use early_ioremap/
    early_iounmap respectively, which are __init functions, and
    sfi_check_table refers to the __init function sfi_verify_table. Since the
    references are valid, use __ref to get rid of the warnings.

    We were warned by the following warnings:

      LD      vmlinux.o
      MODPOST vmlinux.o
      WARNING: vmlinux.o(.text+0xb6ba3a): Section mismatch in reference from
      the function sfi_map_memory() to the function .init.text:early_ioremap()
      The function sfi_map_memory() references the function __init
      early_ioremap().  This is often because sfi_map_memory lacks a __init
      annotation or the annotation of early_ioremap is wrong.

      WARNING: vmlinux.o(.text+0xb6bab6): Section mismatch in reference from
      the function sfi_unmap_memory() to the function .init.text:early_iounmap()
      The function sfi_unmap_memory() references the function __init
      early_iounmap().  This is often because sfi_unmap_memory lacks a __init
      annotation or the annotation of early_iounmap is wrong.

      WARNING: vmlinux.o(.text+0xb6be30): Section mismatch in reference from
      the function sfi_check_table() to the function .init.text:sfi_verify_table()
      The function sfi_check_table() references the function __init
      sfi_verify_table().  This is often because sfi_check_table lacks a __init
      annotation or the annotation of sfi_verify_table is wrong.

    Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Len Brown <len.brown@intel.com>
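    The annotation pattern in a nutshell; the function names below are made
    up, only __init and __ref are the real macros from <linux/init.h>:

      #include <linux/init.h>

      /* setup_table() lives in init text and is discarded after boot. */
      static int __init setup_table(int id)
      {
              return id * 2;
      }

      /* check_table() stays resident but only calls the __init helper during
       * early boot; __ref tells modpost the cross-section reference is
       * intentional, so no section-mismatch warning is emitted. */
      static int __ref check_table(int id)
      {
              return setup_table(id);
      }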
| * | | Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6  Linus Torvalds  2009-10-04  6  -103/+112
| |\ \ \
    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6:
        ACPI: EC: Don't parse DSDT for EC early init on Compal
        ACPI: EC: Rewrite DMI checks
        ACPI: dock: fix "sibiling" typo
        ACPI: kill overly verbose "throttling states" log messages
        ACPI: Fix bound checks for copy_from_user in the acpi /proc code
        ACPI: fix bus scanning memory leaks
        ACPI: EC: Restart command even if no interrupts from EC
        sony-laptop: Don't unregister the SPIC driver if it wasn't registered
        sony-laptop: remove _INI call at init time
        sony-laptop: SPIC unset IRQF_SHARED, set IRQF_DISABLED
        sony-laptop: remove device_ctrl and the SPIC mini drivers
| | * \ \ Merge branch 'misc' into release  Len Brown  2009-10-03  3  -15/+10
| | |\ \ \