| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
With ordering requirements dropped, barrier and ordered are misnomers.
Now all the block layer does is sequence FLUSH and FUA. Rename them to
flush.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
|
|
|
|
|
|
|
|
|
|
| |
Without ordering requirements, barrier and ordering are misnomers.
Rename block/blk-barrier.c to block/blk-flush.c. Rename of symbols
will follow.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Filesystems will take all the responsibility for ordering requests
around commit writes and will only indicate how the commit writes
themselves should be handled by the block layer. This patch drops
barrier ordering by queue draining from the block layer. The
ordering-by-draining implementation was somewhat invasive to request
handling. Notable changes follow.
* Each queue had a one-bit color which was flipped on each barrier
issue and used to track whether a given request was issued before the
current barrier. The REQ_ORDERED_COLOR flag and the coloring
implementation in __elv_add_request() are removed.
* Requests which shouldn't be processed yet for draining were stalled
by returning -EAGAIN from blk_do_ordered() based on a comparison of
blk_ordered_req_seq() and blk_ordered_cur_seq(). This logic is
removed.
* The draining-completion logic in elv_completed_request() is removed.
* All barrier sequence requests used to be queued to the request queue
and then trickled to the lower layer according to progress, so request
order had to be maintained during requeue. This is replaced by
queueing the next request in the barrier sequence only after the
current one completes (done from blk_ordered_complete_seq()), which
removes the need for multiple proxy requests in struct request_queue
and the request sorting logic in the ELEVATOR_INSERT_REQUEUE path of
elv_insert().
* As barriers no longer have ordering constraints, there's no need to
dump the whole elevator onto the dispatch queue on each barrier.
Insert barriers at the front instead.
* If other barrier requests come to the front of the dispatch queue
while one is already in progress, they are stored in
q->pending_barriers and restored to the dispatch queue one by one
after each barrier completion from blk_ordered_complete_seq().
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
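A minimal sketch of the dispatch behaviour described above; the helper
names are illustrative and the queue fields (q->ordseq,
q->pending_barriers) follow the block layer of that era rather than the
exact patch hunks:

    #include <linux/blkdev.h>
    #include <linux/elevator.h>

    /* On issue: a barrier no longer forces the elevator to be dumped;
     * it simply goes to the front of the dispatch queue.  If another
     * barrier sequence is in flight, park it on q->pending_barriers. */
    static void queue_barrier(struct request_queue *q, struct request *rq)
    {
            if (q->ordseq)
                    list_add_tail(&rq->queuelist, &q->pending_barriers);
            else
                    elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
    }

    /* When a whole barrier sequence completes, release the next parked
     * barrier, one per completion, as blk_ordered_complete_seq() does. */
    static void barrier_sequence_done(struct request_queue *q)
    {
            if (!list_empty(&q->pending_barriers)) {
                    struct request *next =
                            list_first_entry(&q->pending_barriers,
                                             struct request, queuelist);

                    list_del_init(&next->queuelist);
                    elv_insert(q, next, ELEVATOR_INSERT_FRONT);
            }
    }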
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Make the following cleanups in preparation for the barrier/flush
update.
* The blk_do_ordered() declaration is moved from include/linux/blkdev.h
to block/blk.h.
* blk_do_ordered() now returns a pointer to struct request, with %NULL
meaning "try the next request" and ERR_PTR(-EAGAIN) "try again
later". The third case will be dropped with further changes.
* In the initialization of the proxy barrier request, the data
direction is already set by init_request_from_bio(). Drop the
unnecessary explicit REQ_WRITE setting and move init_request_from_bio()
above the REQ_FUA flag setting.
* add_request() is collapsed into __make_request().
These changes don't make any functional difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
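A hedged sketch of the new calling convention; the dispatcher below is
illustrative (not the actual hunk) and only assumes the struct request *
return described above:

    #include <linux/blkdev.h>
    #include <linux/err.h>
    #include "blk.h"        /* blk_do_ordered() now declared here */

    static struct request *pick_request(struct request_queue *q)
    {
            struct request *rq;

            while ((rq = __elv_next_request(q)) != NULL) {
                    rq = blk_do_ordered(q, rq);
                    if (IS_ERR(rq))   /* ERR_PTR(-EAGAIN): try again later */
                            return NULL;
                    if (rq)           /* ready: dispatch this request */
                            return rq;
                    /* NULL: consumed by the barrier machinery; fall
                     * through and try the next queued request */
            }
            return NULL;
    }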
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
REQ_HARDBARRIER is deprecated. Remove spurious uses in the following
users. Please note that all uses other than osdblk's were already
spurious even before the deprecation.
* osdblk: osdblk_rq_fn() won't receive any request with
REQ_HARDBARRIER set. Remove the test for it.
* pktcdvd: use of REQ_HARDBARRIER in pkt_generic_packet() doesn't mean
anything. Removed.
* aic7xxx_old: Setting MSG_ORDERED_Q_TAG on REQ_HARDBARRIER is
spurious. Removed.
* sas_scsi_host: Setting TASK_ATTR_ORDERED on REQ_HARDBARRIER is
spurious. Removed.
* scsi_tcq: The ordered tag path wasn't being used anyway. Removed.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Boaz Harrosh <bharrosh@panasas.com>
Cc: James Bottomley <James.Bottomley@suse.de>
Cc: Peter Osterlund <petero2@telia.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
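The removals all have the same shape; a hedged illustration (not one of
the exact hunks) of the kind of dead test being dropped from a driver's
request handling, where the helper name is hypothetical:

    /* before: a branch that can no longer be reached, since the block
     * layer now fails REQ_HARDBARRIER requests with -EOPNOTSUPP */
    if (rq->cmd_flags & REQ_HARDBARRIER)
            special_ordered_handling(rq);   /* hypothetical helper */

    /* after: the test is simply deleted; requests take the normal path */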
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Barriers are deemed too heavy and will soon be replaced by FLUSH/FUA
requests. Deprecate barriers: all REQ_HARDBARRIERs are failed with
-EOPNOTSUPP and blk_queue_ordered() is replaced with the simpler
blk_queue_flush().
blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA. If a
device has a write cache and can flush it, it should set REQ_FLUSH. If
the device can handle FUA writes, it should also set REQ_FUA.
All blk_queue_ordered() users are converted.
* ORDERED_DRAIN is mapped to 0 which is the default value.
* ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
* ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Boaz Harrosh <bharrosh@panasas.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
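A hedged before/after sketch of the conversion for a driver whose device
has a volatile write cache and supports FUA writes; the setup function
is illustrative and the old mode constant follows the pre-patch
blkdev.h naming:

    #include <linux/blkdev.h>

    static void setup_queue_flush(struct request_queue *q)
    {
            /* before (removed by this series):
             *      blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH_FUA);
             * after: describe only the device's flush capabilities */
            blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
    }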
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Nobody is making meaningful use of ORDERED_BY_TAG now, and queue
draining for barrier requests will be removed soon, which will render
the advantage of tag ordering moot. Kill ORDERED_BY_TAG. The
following users are affected.
* brd: converted to ORDERED_DRAIN.
* virtio_blk: ORDERED_TAG path was already marked deprecated. Removed.
* xen-blkfront: ORDERED_TAG case dropped.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
|
|
|
|
|
|
|
|
|
|
| |
loop implements FLUSH using fsync but was incorrectly setting its
ordered mode to DRAIN. Change it to DRAIN_FLUSH. In practice, this
doesn't change anything as loop doesn't make use of the block layer
ordered implementation.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
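A hedged sketch pairing the mode now advertised with the flush it is
backed by; the helper names are illustrative, though the fields follow
drivers/block/loop.c of that era:

    #include <linux/blkdev.h>
    #include <linux/fs.h>

    /* in loop's queue setup: commit writes need a preceding flush */
    static void lo_declare_flush(struct loop_device *lo)
    {
            blk_queue_ordered(lo->lo_queue, QUEUE_ORDERED_DRAIN_FLUSH);
    }

    /* which loop implements by fsyncing the backing file */
    static int lo_do_flush(struct loop_device *lo)
    {
            return vfs_fsync(lo->lo_backing_file, 0);
    }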
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Unplugging from a request function doesn't really help much (we're
already in the request_fn), and the block layer will soon be updated to
mix the barrier sequence with other commands, so there's no need to
treat queue flushing any differently.
ide was the only user of blk_queue_flushing(). Remove it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
|
| |
|
|\
| |
| |
| |
| |
| |
| | |
* 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: PIT: free irq source id in handling error path
KVM: destroy workqueue on kvm_create_pit() failures
KVM: fix poison overwritten caused by using wrong xstate size
|
| |
| |
| |
| |
| |
| |
| | |
Free the irq source id if creating the PIT workqueue fails.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
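A hedged sketch of the corrected error path; it follows the shape of
kvm_create_pit() in arch/x86/kvm/i8254.c of that era, but the wrapper
name is illustrative and this is not the exact hunk:

    struct kvm_pit *kvm_create_pit_sketch(struct kvm *kvm)
    {
            struct kvm_pit *pit = kzalloc(sizeof(*pit), GFP_KERNEL);

            if (!pit)
                    return NULL;

            pit->irq_source_id = kvm_request_irq_source_id(kvm);
            if (pit->irq_source_id < 0) {
                    kfree(pit);
                    return NULL;
            }

            pit->wq = create_singlethread_workqueue("kvm-pit-wq");
            if (!pit->wq) {
                    /* the fix: give back the irq source id taken above */
                    kvm_free_irq_source_id(kvm, pit->irq_source_id);
                    kfree(pit);
                    return NULL;
            }

            /* ... remainder of PIT setup ... */
            return pit;
    }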
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The kernel needs to destroy the workqueue if kvm_create_pit() fails;
otherwise, once the PIT is freed, the workqueue is leaked.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
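A hedged sketch of the shared failure label late in kvm_create_pit():
once the workqueue has been created, every later error must also tear
it down (names approximate, not the exact hunk):

    fail:
            kvm_free_irq_source_id(kvm, pit->irq_source_id);
            destroy_workqueue(pit->wq);     /* previously leaked */
            kfree(pit);
            return NULL;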
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
fpu.state is allocated from task_xstate_cachep, whose object size is
xstate_size. xstate_size is set from the cpuid instruction and is often
smaller than sizeof(struct xsave_struct). KVM was using sizeof(struct
xsave_struct) to fill in/out fpu.state.xsave; since only xstate_size
bytes are allocated for fpu.state, the kernel writes past the end of
the allocation and triggers poison/redzone/padding overwrite warnings.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Reviewed-by: Sheng Yang <sheng@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Sheng Yang <sheng@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
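A hedged sketch of the fix, using approximate field names from
arch/x86/kvm/x86.c of that era (the wrapper name is illustrative): copy
only xstate_size bytes in and out of fpu.state->xsave, since that is
all task_xstate_cachep allocates:

    static void get_xsave_sketch(struct kvm_vcpu *vcpu,
                                 struct kvm_xsave *guest_xsave)
    {
            memcpy(guest_xsave->region,
                   &vcpu->arch.guest_fpu.state->xsave,
                   xstate_size);   /* not sizeof(struct xsave_struct) */
    }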
|
|\ \
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel: (58 commits)
drm/i915,intel_agp: Add support for Sandybridge D0
drm/i915: fix render pipe control notify on sandybridge
agp/intel: set 40-bit dma mask on Sandybridge
drm/i915: Remove the conflicting BUG_ON()
drm/i915/suspend: s/IS_IRONLAKE/HAS_PCH_SPLIT/
drm/i915/suspend: Flush register writes before busy-waiting.
i915: disable DAC on Ironlake also when doing CRT load detection.
drm/i915: wait for actual vblank, not just 20ms
drm/i915: make sure eDP PLL is enabled at the right time
drm/i915: fix VGA plane disable for Ironlake+
drm/i915: eDP mode set sequence corrections
drm/i915: add panel reset workaround
drm/i915: Enable RC6 on Ironlake.
drm/i915/sdvo: Only set is_lvds if we have a valid fixed mode.
drm/i915: Set up a render context on Ironlake
drm/i915 invalidate indirect state pointers at end of ring exec
drm/i915: Wake-up wait_request() from elapsed hang-check (v2)
drm/i915: Apply i830 errata for cursor alignment
drm/i915: Only update i845/i865 CURBASE when disabled (v2)
drm/i915: FBC is updated within set_base() so remove second call in mode_set()
...
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This one was missed in the last pipe control fix for Sandybridge; it
actually unmasks the notify interrupt bit in the render engine IMR.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We now attempt to free "active" objects following a GPU hang, as either
the GPU will be reset or the hang is permanent. In either case, the GPU
writes will not be flushed to main memory and it should be safe to
return that memory back to the system.
The BUG_ON(active) is thus overkill and can erroneously fire after an
EIO.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
For the shared paths on the next generation chipsets.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Like on Sandybridge, disabling the DAC here when doing CRT load
detection avoids hanging forever waiting on the hardware.
Test procedure on an HP 2740p:
boot with no VGA plugged in, start X,
plug in a VGA monitor (1280x1024),
chvt 3,
the machine hangs waiting forever.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Waiting for a hard coded 20ms isn't always enough to make sure a vblank
period has actually occurred, so add code to make sure we really have
passed through a vblank period (or that the pipe is off when disabling).
This prevents problems with mode setting and link training, and seems to
fix a bug like https://bugs.freedesktop.org/show_bug.cgi?id=29278, but
on an HP 8440p instead. Hopefully also fixes
https://bugs.freedesktop.org/show_bug.cgi?id=29141.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We need to make sure the eDP PLL is enabled before the pipes or planes,
so do it as part of the DP prepare mode set function.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We need to use I/O port instructions to access VGA registers on
Ironlake+, and it doesn't hurt on other platforms, so switch the VGA
plane disable function over to using them. Move it to init time as well
while we're at it, no need to repeatedly disable the VGA plane with
every mode set and DPMS event.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We should disable the panel first when shutting down an eDP link. And
when turning one on, the panel needs to be enabled before link training
or eDP I/O won't be enabled.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Ironlake requires that we clear the reset panel bit during power
sequences and restore it afterwards. Unconditionally add code to do
that since it should be harmless on SNB+.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
RC6 allows the GPU to enter a lower power state when the GPU is idle.
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
[anholt: Fixed the !renderctx error path to actually not enable RC6.]
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If we have failed to ascertain the fixed mode for the LVDS panel, then
trust the pixel clock ranges reported for the connection when
determining valid modes. This makes intel_sdvo_mode_valid() consistent
with intel_lvds_mode_valid(), which is also a no-op if there is no
fixed mode defined. (Since the mode is validated by both SDVO and LVDS,
why are we checking against an LVDS fixed mode in SDVO...)
By only defining is_lvds to be true when we actually have an LVDS
output with a fixed mode, we avoid various potential NULL dereferences
where the assumption is made that all LVDS outputs have a fixed mode.
References:
Bug 29449 - [Q35] failure to read EDID/vbios for LVDS, no mode => no output
https://bugs.freedesktop.org/show_bug.cgi?id=29449
The primary failure in this bug is not finding the EDID and determining
the correct fixed panel mode. However, this patch should fix the
secondary issue of not enabling any of the standard modes for the panel
either.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The RC6 power state requires a logical render context to be in place
for saving the render context.
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This is required by the spec, and without it some 3D programs will
hang after resuming from RC6 if we enable it.
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If our watchdog fires and we see that the GPU is idle, but that we
are still waiting on an interrupt, forcibly wake-up the waiter.
i915_do_wait_request() should not be racy, yet there are persistent
reports that 945GM hangs whilst the GPU is idle. This implies that the
hardware is not quite as coherent as the documentation claims - a write
followed by a flush is supposed to be coherent in main memory before the
flush is retired and the irq is emitted. This seems to be a sensible and
elegant guard to force the wait to timeout.
v2: Daniel Vetter pointed out that a warning would be useful to explain
why the machine appeared to stall.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
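A hedged sketch of the guard, as it would sit in the hang-check timer
callback; the ring field names are approximate, not the exact hunk:

    /* GPU idle but a waiter is still asleep on the ring: assume the
     * completion interrupt was missed, warn, and wake the waiter so it
     * can re-check the current seqno. */
    if (list_empty(&dev_priv->render_ring.request_list) &&
        waitqueue_active(&dev_priv->render_ring.irq_queue)) {
            DRM_ERROR("Hangcheck elapsed... GPU idle, missed IRQ?\n");
            wake_up_all(&dev_priv->render_ring.irq_queue);
            return;
    }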
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
i830 requires 32bpp cursors to be aligned to 16KB, so we have to expose
the alignment parameter to i915_gem_attach_phys_object().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The i845 and i865 have a peculiarity in that CURBASE is not the trigger
for the vsync update of the cursor registers; instead, modification of
that register is prohibited whilst the cursor is enabled. Reorder the
write sequence for CURPOS, CURCNTR and CURBASE on i845/i865 to match.
v2: Remove the checks for i845/i865 from within i9xx_cursor_update()
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The FBC setup depends on a few details of the framebuffer and so must
be updated within set_base(); remove the redundant call from
mode_set().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Add a new macro, wait_for, to simplify the act of waiting on a register
to change state. wait_for() takes three arguments: the condition to
inspect on every loop, the maximum amount of time to wait, and whether
to yield the CPU for a length of time after each check.
v2: Upgrade failure messages to DRM_ERROR on the suggestion of
Eric Anholt. We do not expect to hit these conditions as they reflect
programming errors, so if we do we want to be notified.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
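A hedged sketch of a wait_for()-style helper with the semantics
described above (the real macro lives in the i915 driver headers; this
version is illustrative):

    #include <linux/jiffies.h>
    #include <linux/delay.h>
    #include <linux/errno.h>

    /* Re-test COND until it holds or MS milliseconds elapse; W selects
     * whether (and for how long, in ms) to sleep between checks instead
     * of busy-waiting.  Evaluates to 0 on success, -ETIMEDOUT on
     * timeout. */
    #define wait_for(COND, MS, W) ({                                  \
            unsigned long timeout__ = jiffies + msecs_to_jiffies(MS); \
            int ret__ = 0;                                            \
            while (!(COND)) {                                         \
                    if (time_after(jiffies, timeout__)) {             \
                            ret__ = -ETIMEDOUT;                       \
                            break;                                    \
                    }                                                 \
                    if (W)                                            \
                            msleep(W);                                \
            }                                                         \
            ret__;                                                    \
    })

A caller would then write something like
wait_for(I915_READ(reg) & MASK, 50, 0) and treat a non-zero result as a
programming error, per the v2 note above.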
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The cleanup path for an early abort failed to nullify the gem_buffer.
This likely has no real consequence, since a failure here should mean
aborting the module load.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Previously, we only remembered to update the watermarks for i9xx, and
incorrectly assumed that the crtc->enabled flag was valid at that point
in the dpms cycle.
Note that on my x201s this makes an SR bug on pipe 1 much easier to hit.
(Since before this patch when disabling pipe 0, we either didn't update
the watermarks at all, or when we did we still thought we had two pipes
enabled and so disabled SR.)
References:
Bug 28969 - [Arrandale] Screen flickers, suspect Self-Refresh
https://bugs.freedesktop.org/show_bug.cgi?id=28969
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Within i915_opregion.c there are two blocks of semantically identical
ASLE response codes defined. Only one of those blocks matches the ACPI
IGD OpRegion Specification 0.1; use those values.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Matthew Garrett <mjg59@srcf.ucam.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
shmfs doesn't actually implement i_ops->truncate(), so we were not
immediately releasing the backing pages when shrinking the gfx cache
under OOM. Instead use a combination of truncate_inode_pages() and
i_ops->truncate_range(), as is done by shmem_delete_inode().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
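A hedged sketch of the release path described above, mirroring what
shmem_delete_inode() does; the GEM object layout is approximate for the
i915 code of that era and the function name is illustrative:

    static void i915_gem_truncate_sketch(struct drm_gem_object *obj)
    {
            struct inode *inode = obj->filp->f_path.dentry->d_inode;

            /* drop the backing pages right away ... */
            truncate_inode_pages(inode->i_mapping, 0);
            /* ... and punch the whole range, as shmem's own delete
             * path does */
            if (inode->i_op->truncate_range)
                    inode->i_op->truncate_range(inode, 0, (loff_t)-1);
    }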
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Writing to the DSPBASE register triggers the double-buffered update to
all the control registers, so always write it last in the update
sequence.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
v2: Hook in DP paths to keep FULLSCREEN panel fitting on eDP.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Directly read the GTT mapping for the contents of the batch buffers
rather than relying on possibly stale CPU caches. Also for completeness
scan the flushing/inactive lists for the current buffers - we are
collecting error state after all.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In order to reduce the penalty of fallbacks under memory pressure and to
avoid a potential immediate ping-pong of evicting a mmaped buffer, we
move the object to the tail of the inactive list when a page is freshly
faulted or the object is moved into the CPU domain.
We choose not to protect the CPU objects from casual eviction,
preferring to keep the GPU active for as long as possible.
v2: Daniel Vetter found a bug where I forgot that pinned objects are
kept off the inactive list.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
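A hedged sketch of the bookkeeping, as it would sit at the end of the
fault and set-to-CPU-domain paths; the list and field names are
approximate:

    /* Treat the object as most recently used so it is the last
     * candidate for eviction, but leave pinned objects alone: they are
     * kept off the inactive list entirely. */
    if (obj_priv->pin_count == 0)
            list_move_tail(&obj_priv->list, &dev_priv->mm.inactive_list);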
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Based in large part upon Daniel Vetter's implementation and adapted
for handling multiple rings in a single pass.
This should lead to better GTT usage and fixes the page-fault-of-doom
that was triggered. The fairness is provided by scanning through the
GTT space, amalgamating space in rendering order. As soon as we have a
contiguous
space in the GTT large enough for the new object (and its alignment),
evict any object which lies within that space. This should keep more
objects resident in the GTT.
Doing throughput testing on a PineView machine with cairo-perf-trace
indicates that there is very little difference with the new LRU scan,
perhaps a small improvement... Except oddly for the poppler trace.
Reference:
Bug 15911 - Intermittent X crash (freeze)
https://bugzilla.kernel.org/show_bug.cgi?id=15911
Bug 20152 - cannot view JPG in firefox when running UXA
https://bugs.freedesktop.org/show_bug.cgi?id=20152
Bug 24369 - Hang when scrolling firefox page with window in front
https://bugs.freedesktop.org/show_bug.cgi?id=24369
Bug 28478 - Intermittent graphics lockups due to overflow/loop
https://bugs.freedesktop.org/show_bug.cgi?id=28478
v2: Attempt to clarify the logic and order of eviction through the use
of comments and macros.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The eviction code is the gnarly underbelly of memory management, and is
clearer if kept separated from the normal domain management in GEM.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This will be used by the eviction logic to maintain fairness between the
rings.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
|