path: root/drivers/gpu/drm/i915/i915_gem_request.c
Commit log for this file (newest first; each entry gives author, date, and diffstat):
* drm/i915: Make global seqno known in i915_gem_request_execute tracepoint
  (Tvrtko Ursulin, 2018-02-28; 1 file, -2/+2)

  Commit fe49789fab97 ("drm/i915: Deconstruct execute fence") re-arranged the code and moved the i915_gem_request_execute tracepoint to before the global seqno is assigned to the request. We need to move the tracepoint a bit later so this information is once again available.

  Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Fixes: fe49789fab97 ("drm/i915: Deconstruct execute fence")
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Jani Nikula <jani.nikula@linux.intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
  Cc: intel-gfx@lists.freedesktop.org
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: https://patchwork.freedesktop.org/patch/msgid/20180220104742.565-1-tvrtko.ursulin@linux.intel.com
  (cherry picked from commit 158863fb50968c0ae85e87a401221425c941b9f0)
  Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
* drm/i915: Always run hangcheck while the GPU is busy
  (Chris Wilson, 2018-02-01; 1 file, -0/+2)

  Previously, we relied on only running the hangcheck while somebody was waiting on the GPU, in order to minimise the amount of time hangcheck had to run. (If nobody was watching the GPU, nobody would notice if the GPU wasn't responding -- eventually somebody would care and so kick hangcheck into action.) However, this falls apart from around commit 4680816be336 ("drm/i915: Wait first for submission, before waiting for request completion"), as not all waiters declare themselves to hangcheck and so we could switch off hangcheck and miss GPU hangs even when waiting under the struct_mutex.

  If we enable hangcheck from the first request submission, and let it run until the GPU is idle again, we forgo all the complexity involved with only enabling around waiters. We just have to remember to be careful that we do not declare a GPU hang when idly waiting for the next request to become ready, as we will run hangcheck continuously even when the engines are stalled waiting for external events. This should be true already, as we should only be tracking requests submitted to hardware for execution as an indicator that the engine is busy.

  Fixes: 4680816be336 ("drm/i915: Wait first for submission, before waiting for request completion")
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104840
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20180129144104.3921-1-chris@chris-wilson.co.uk
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  (cherry picked from commit 889230489b6b138ba97ba2f13fc9644a3d16d0d2)
  Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
* drm/i915: Assert that the request is on the execution queue before being removed
  (Chris Wilson, 2017-12-22; 1 file, -0/+2)

  We should only attempt to remove requests from the execution queue that are on the execution queue. These are the requests that have been assigned a global_seqno, so we can assert that we only attempt to remove requests with a nonzero global_seqno. Afterwards we assert that we remove them in order, i.e. the global_seqno matches the engine's seqno, but that leaves a small loophole for an unattached request on an unused engine. We can then make the same assertion on queuing the request to the execution engine: it must have a zero global_seqno, or else we are queuing the same request twice.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171222141959.3006-1-chris@chris-wilson.co.uk
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
* drm/i915: Ratelimit request allocation under oom
  (Chris Wilson, 2017-12-13; 1 file, -4/+15)

  If we fail to allocate a request, we can reap the outstanding requests and push them to the request's slab's freelist before trying again. This forces us to ratelimit malicious clients that tie up all of the system resources in requests, instead of causing a system-wide oom.

  Testcase: igt/gem_shrink/execbuf1
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171212180652.22061-3-chris@chris-wilson.co.uk
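The reap-then-retry idea in the commit above is a generic allocation pattern. A minimal userspace C sketch (not the driver's code; the freelist is a hypothetical stand-in for the request slab's cache):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy freelist standing in for the request slab's internal cache. */
static void *freelist[16];
static int nfree;

static void retire_completed_requests(void)
{
	/* Reap: return cached-but-unused objects to the allocator. */
	while (nfree > 0)
		free(freelist[--nfree]);
}

static void *request_alloc(size_t sz)
{
	void *req = malloc(sz);

	if (!req) {
		/* Out of memory: reap, then retry once before failing. */
		retire_completed_requests();
		req = malloc(sz);
	}
	return req;	/* may still be NULL; the caller must cope */
}

int main(void)
{
	void *rq = request_alloc(64);

	printf("allocation %s\n", rq ? "succeeded" : "failed");
	free(rq);
	return 0;
}
```

The point of the design is that a client hoarding requests pays the reaping cost itself on its next allocation, rather than pushing the whole system into oom.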
* drm/i915: Allow fence allocations to fail
  (Chris Wilson, 2017-12-13; 1 file, -3/+3)

  If a fence allocation fails in a blocking context, we will sleep on the fence as a last resort. We can therefore allow ourselves to fail and sleep on the fence instead of triggering a system-wide oom. This allows us to throttle malicious clients that are consuming lots of system resources by capping the amount of memory used by fences.

  Testcase: igt/gem_shrink/execbufX
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171212180652.22061-2-chris@chris-wilson.co.uk
* drm/i915: Restore GT performance in headless mode with DMC loaded
  (Tvrtko Ursulin, 2017-12-08; 1 file, -0/+14)

  It seems that the DMC likes to transition between the DC states a lot when there are no connected displays (no active power domains) during command submission. This activity on DC states has a negative impact on the performance of the chip, with huge latencies observed in the interrupt handlers and elsewhere. Simple tests like igt/gem_latency -n 0 are slowed down by a factor of eight.

  Work around it by introducing a new power domain named POWER_DOMAIN_GT_IRQ, associated with the "DC off" power well, which is held for the duration of command submission activity.

  CNL has the same problem, which will be addressed as a follow-up. Doing that requires a fix for a DC6 context corruption problem in the CNL DMC firmware which is yet to be released.

  v2: Add commit text as comment in i915_gem_mark_busy. (Chris Wilson)
      Protect macro body with braces. (Jani Nikula)
  v3: Add dedicated power domain for clarity. (Chris, Imre)
      Commit message and comment text updates.
      Apply to all big-core GEN9 parts apart from Skylake, which is pending DMC firmware release.
  v4: Power domain should be inner to device runtime pm. (Chris)
      Simplify NEEDS_CSR_GT_PERF_WA macro. (Chris)
      Handle async DMC loading by moving the GT_IRQ power domain logic into intel_runtime_pm. (Daniel, Chris)
      Include small core GEN9 as well. (Imre)
  v5: Special handling for async DMC load is not needed since on failure the power domain reference is kept permanently taken. (Imre)
  v6: Drop the NEEDS_CSR_GT_PERF_WA macro since all firmwares have now been deployed. (Imre, Chris)

  Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100572
  Testcase: igt/gem_exec_nop/headless
  Cc: Imre Deak <imre.deak@intel.com>
  Acked-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
  Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v5)
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  [Imre: Add note about applying the WA on CNL as a follow-up]
  Signed-off-by: Imre Deak <imre.deak@intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171205132854.26380-1-tvrtko.ursulin@linux.intel.com
* drm/i915: Unwind incomplete legacy context switches
  (Chris Wilson, 2017-11-23; 1 file, -12/+6)

  The legacy context switch for ringbuffer submission is multistaged, where each of those stages may fail. However, we were updating global state after some stages, and so we had to force the incomplete request to be submitted because we could not unwind. Save the global state before performing the switches, and so enable us to unwind back to the previous global state should any phase fail. We then must cancel the request instead of submitting it should the construction fail.

  v2: s/saved_ctx/from_ctx/; s/ctx/to_ctx/ etc.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171123152631.31385-1-chris@chris-wilson.co.uk
* drm/i915/pmu: Suspend sampling when GPU is idle
  (Tvrtko Ursulin, 2017-11-22; 1 file, -0/+1)

  If only a subset of events is enabled, we can afford to suspend the sampling timer when the GPU is idle and so save some cycles and power.

  v2: Rebase and limit timer even more.
  v3: Rebase.
  v4: Rebase.
  v5: Skip action if perf PMU failed to register.
  v6: Checkpatch cleanup.
  v7: Add a common helper to start the timer if needed. (Chris Wilson)
      Add comment explaining bitwise logic in pmu_needs_timer.
  v8: Fix some comment styles. (Chris Wilson)
  v9: Rebase.
  v10: Move function declarations to i915_pmu.h.
  v11: Rename functions to i915_pmu_gt_(un)parked. (Chris Wilson)

  Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171121181852.16128-3-tvrtko.ursulin@linux.intel.com
* drm/i915: Automatic i915_switch_context for legacy
  (Chris Wilson, 2017-11-20; 1 file, -0/+4)

  During request construction, after pinning the context we know whether or not we have to emit a context switch. So move this common operation from every caller into i915_gem_request_alloc() itself.

  v2: Always submit the request if we emitted some commands during request construction, as typically it also involves changes in global state.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171120102002.22254-2-chris@chris-wilson.co.uk
* drm/i915: Pull the unconditional GPU cache invalidation into request construction
  (Chris Wilson, 2017-11-20; 1 file, -5/+19)

  As the request will, in the following patch, implicitly invoke a context-switch on construction, we should precede that with a GPU TLB invalidation. Also, even before using GGTT, we always want to invalidate the TLBs for any updates (as well as the ppgtt invalidates that are unconditionally applied by execbuf). Since we almost always require the TLB invalidate, do it unconditionally on request allocation and so we can remove it from all other paths.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171120102002.22254-1-chris@chris-wilson.co.uk
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
* drm/i915: Add a hook for making the engines idle (parking) and unparking
  (Chris Wilson, 2017-10-25; 1 file, -0/+2)

  In the next patch, we will want to install a callback when the engines (GT as a whole) become idle and similarly when they first become busy. To enable that callback, first rename intel_engines_mark_idle() to intel_engines_park() and provide the companion intel_engines_unpark().

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171025143943.7661-2-chris@chris-wilson.co.uk
* drm/i915: Use rcu instead of stop_machine in set_wedged
  (Daniel Vetter, 2017-10-11; 1 file, -0/+9)

  stop_machine is not really a locking primitive we should use, except when the hw folks tell us the hw is broken and that's the only way to work around it. This patch tries to address the locking abuse of stop_machine() from

    commit 20e4933c478a1ca694b38fa4ac44d99e659941f5
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Tue Nov 22 14:41:21 2016 +0000

        drm/i915: Stop the machine as we install the wedged submit_request handler

  Chris said part of the reason for going with stop_machine() was that it has no overhead on the fast-path. But these callbacks use irqsave spinlocks and do a bunch of MMIO, and rcu_read_lock is _real_ fast.

  To stay as close as possible to the stop_machine semantics, we first update all the submit function pointers to the nop handler, then call synchronize_rcu() to make sure no new requests can be submitted. This should give us exactly the huge barrier we want.

  I pondered whether we should annotate engine->submit_request as __rcu and use rcu_assign_pointer and rcu_dereference on it. But the reason behind those is to make sure the compiler/cpu barriers are there for when you have an actual data structure you point at, to make sure all the writes are seen correctly on the read side. But we just have a function pointer, and .text isn't changed, so no need for these barriers and hence no need for annotations.

  Unfortunately there's a complication with the call to intel_engine_init_global_seqno:

  - Without stop_machine we must hold the corresponding spinlock.
  - Without stop_machine we must ensure that all requests are marked as having failed with dma_fence_set_error() before we call it.

  That means we need to split the nop request submission into two phases, both synchronized with rcu:

  1. Only stop submitting the requests to hw and mark them as failed.
  2. After all pending requests in the scheduler/ring are suitably marked up as failed and we can force-complete them all, also force-complete by calling intel_engine_init_global_seqno().

  This should fix the following lockdep splat:

    ======================================================
    WARNING: possible circular locking dependency detected
    4.14.0-rc3-CI-CI_DRM_3179+ #1 Tainted: G U
    ------------------------------------------------------
    kworker/3:4/562 is trying to acquire lock:
     (cpu_hotplug_lock.rw_sem){++++}, at: stop_machine+0x1c/0x40
    but task is already holding lock:
     (&dev->struct_mutex){+.+.}, at: i915_reset_device+0x1e8/0x260 [i915]
    which lock already depends on the new lock.

    The existing dependency chain (in reverse order) is:
     -> #6 (&dev->struct_mutex){+.+.}           via i915_gem_fault / page fault
     -> #5 (&mm->mmap_sem){++++}                via filldir / _copy_to_user
     -> #4 (&sb->s_type->i_mutex_key#5){++++}   via devtmpfs handle_create
     -> #3 ((complete)&req.done){+.+.}          via devtmpfs_create_node / device_create
     -> #2 (cpuhp_state-up){+.+.}               via cpuhp_invoke_callback
     -> #1 (cpuhp_state_mutex){+.+.}            via __cpuhp_setup_state
     -> #0 (cpu_hotplug_lock.rw_sem){++++}      via stop_machine, from
           i915_gem_set_wedged <- i915_reset <- i915_handle_error <- i915_hangcheck_elapsed

    Possible unsafe locking scenario:

          CPU0                          CPU1
          ----                          ----
     lock(&dev->struct_mutex);
                                   lock(&mm->mmap_sem);
                                   lock(&dev->struct_mutex);
     lock(cpu_hotplug_lock.rw_sem);

    *** DEADLOCK ***

    [full lock chains, held-lock list, and stack backtrace trimmed]

  v2: Have one global synchronize_rcu() barrier across all engines, and improve commit message.
  v3: We need to protect the seqno update with the timeline spinlock (in set_wedged) to avoid racing with other updates of the seqno, like we already do in nop_submit_request (Chris).
  v4: Use two-phase sequence to plug the race Chris spotted where we can complete requests before they're marked up with -EIO.
  v5: Review from Chris:
      - Simplify nop_submit_request.
      - Add comment to rcu_read_lock section.
      - Align comments with the new style.
  v6: Remove unused variable to appease CI.

  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102886
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103096
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Mika Kuoppala <mika.kuoppala@intel.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Marta Lofstedt <marta.lofstedt@intel.com>
  Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171011091019.1425-1-daniel.vetter@ffwll.ch
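The swap-pointer-then-synchronize scheme above can be demonstrated outside the kernel with userspace RCU. A minimal sketch, assuming liburcu is available (link with -lurcu); the request/submit names are hypothetical stand-ins for the i915 structures:

```c
#define _LGPL_SOURCE
#include <urcu.h>		/* userspace RCU; link with -lurcu */
#include <stdio.h>

struct request { int error; };

static void submit_hw(struct request *rq)  { (void)rq; /* queue to hw */ }
static void submit_nop(struct request *rq) { rq->error = -5; /* -EIO */ }

static void (*submit_request)(struct request *) = submit_hw;

static void submit(struct request *rq)
{
	void (*fn)(struct request *);

	rcu_read_lock();
	/* A reader sees either the old or the new handler, never junk. */
	fn = rcu_dereference(submit_request);
	fn(rq);
	rcu_read_unlock();
}

static void set_wedged(void)
{
	/* Phase 1: stop submitting to hw; new submissions are failed. */
	rcu_assign_pointer(submit_request, submit_nop);
	synchronize_rcu();	/* all in-flight submit() calls have exited */
	/* Phase 2: now safe to force-complete whatever is still queued. */
}

int main(void)
{
	struct request rq = { 0 };

	rcu_register_thread();
	submit(&rq);
	set_wedged();
	submit(&rq);
	printf("error after wedging: %d\n", rq.error);
	rcu_unregister_thread();
	return 0;
}
```

After synchronize_rcu() returns, no thread can still be executing the old handler, which is the "huge barrier" the commit message relies on before touching the global seqno.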
* drm/i915: Name structure in dev_priv that contains RPS/RC6 state as "gt_pm"
  (Sagar Arun Kamble, 2017-10-11; 1 file, -1/+1)

  Prepare the substructure rps for RPS-related state. autoenable_work is used for RC6 too, hence it is defined outside the rps structure. In doing so, a good many functions are refactored to use intel_rps *rps to access rps-related members. Hence renamed intel_rps_client pointer variables to rps_client in various functions.

  v2: Rebase.
  v3: s/pm/gt_pm (Chris)
      Refactored access to rps structure by declaring struct intel_rps * in many functions.

  Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
  Cc: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Imre Deak <imre.deak@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com> #1
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: https://patchwork.freedesktop.org/patch/msgid/1507360055-19948-9-git-send-email-sagar.a.kamble@intel.com
  Acked-by: Imre Deak <imre.deak@intel.com>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171010213010.7415-8-chris@chris-wilson.co.uk
* drm/i915: Introduce a preempt context
  (Chris Wilson, 2017-10-04; 1 file, -0/+7)

  Add another perma-pinned context for use for preemption at any time. We cannot just reuse the existing kernel context, as first and foremost we need to ensure that we can preempt the kernel context itself, so we require a distinct context id. Similar to the kernel context, we may want to interrupt execution and switch to the preempt context at any time, and so it needs to be permanently pinned and available. To compensate for yet another permanent allocation, we shrink the existing context and the new context by reducing their ringbuffers to the minimum.

  v2: Assert that we never allocate a request from the preemption context.
  v3: Limit perma-pin to engines that may preempt.
  v4: Onion cleanup for early driver death.
  v5: Onion ordering in main driver cleanup as well.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20171003203453.15692-4-chris@chris-wilson.co.uk
* drm/i915: Give the invalid priority a magic name
  (Chris Wilson, 2017-09-29; 1 file, -1/+1)

  We use INT_MIN to denote the priority of a request that has not been submitted to the scheduler; we treat INT_MIN as an invalid priority and initialise the request to it. Give the value a name so it stands out.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170928193910.17988-3-chris@chris-wilson.co.uk
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Make i915_spin_request() static
  (Chris Wilson, 2017-09-22; 1 file, -6/+21)

  No users now outside of i915_wait_request(), so we can make it private to i915_gem_request.c, and assume the caller knows the seqno. In the process, also remove i915_gem_request_started(), as that was only ever used by i915_spin_request().

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michal Winiarski <michal.winiarski@intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170922120333.25535-1-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Confirm request->global_seqno after spin completion
  (Chris Wilson, 2017-09-22; 1 file, -4/+1)

  After we see our target seqno has been completed by the hw, we need to confirm that it still matches the request (as it may have been preempted before the spin completes). If the request no longer matches the target seqno, we need to restart the wait to reacquire that seqno.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Michal Winiarski <michal.winiarski@intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170921210903.18337-1-chris@chris-wilson.co.uk
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
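The "re-check after the spin" pattern described above, as a minimal userspace C sketch with all names hypothetical (C11 atomics stand in for the hardware breadcrumb):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct request { _Atomic unsigned int global_seqno; };

static _Atomic unsigned int hws_seqno;	/* breadcrumb the "hw" writes */

static bool seqno_passed(unsigned int target)
{
	/* Wrap-safe comparison: has hws_seqno reached target? */
	return (int)(atomic_load(&hws_seqno) - target) >= 0;
}

/* Returns true if the request completed, false if the wait must restart. */
static bool spin_for_request(struct request *rq)
{
	unsigned int seqno = atomic_load(&rq->global_seqno);

	if (!seqno)
		return false;		/* not yet submitted to hw */

	while (!seqno_passed(seqno))
		;			/* busy-wait; bounded in real code */

	/*
	 * The request may have been preempted while we were spinning and
	 * its seqno revoked or reassigned: confirm it still owns the
	 * seqno we waited on, otherwise the caller must restart the wait.
	 */
	return atomic_load(&rq->global_seqno) == seqno;
}
```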
* drm/i915: Mark the GT as busy before idling the previous request
  (Chris Wilson, 2017-08-18; 1 file, -43/+45)

  In a synchronous setup, we may retire the last request before we complete allocating the next request. As the last request is retired, we queue a timer to mark the device as idle, and promptly have to cancel that timer once we complete allocating the request and need to keep the device awake. If we rearrange the mark_busy() to occur before we retire the previous request, we can skip this ping-pong.

  v2: Joonas pointed out that unreserve_seqno() was now doing more than seqno handling and should be renamed to reflect its wider purpose. That also highlighted the new asymmetry with reserve_seqno(), so fix that up and rename both to [un]reserve_engine().

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170817144719.10968-1-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Squelch reset messages during selftests
  (Chris Wilson, 2017-07-27; 1 file, -1/+1)

  During our selftests, we try resetting the GPU tens of thousands of times, flooding the dmesg with our reset spam and drowning out any potential warnings. Add an option to i915_reset()/i915_reset_engine() to specify a quiet reset for selftesting.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-19-chris@chris-wilson.co.uk
  Reviewed-by: Michel Thierry <michel.thierry@intel.com>
  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
* drm/i915: Make i915_gem_context_mark_guilty() safe for unlocked updates
  (Chris Wilson, 2017-07-27; 1 file, -2/+1)

  Since we may call i915_gem_context_mark_guilty() concurrently when resetting different engines in parallel, we need to make sure that our updates are safe for the unlocked access.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michel Thierry <michel.thierry@intel.com>
  Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-12-chris@chris-wilson.co.uk
  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
* drm/i915: Move idle checks before intel_engine_init_global_seqno()
  (Chris Wilson, 2017-07-27; 1 file, -0/+4)

  intel_engine_init_global_seqno() may be called from an uncontrolled set-wedged path where we have given up waiting for broken hw and declare it defunct. Along that path, any sanity checks that the hw is idle before we adjust its state will expectedly fail, so we simply cannot check there. Instead of asserting inside init_global_seqno, we move the checks to the normal caller, reset_all_global_seqno(), as it handles runtime seqno wraparound.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
  Link: https://patchwork.freedesktop.org/patch/msgid/20170721123238.16428-8-chris@chris-wilson.co.uk
  Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
* drm/i915: Avoid keeping waitboost active for signaling threads
  (Chris Wilson, 2017-06-28; 1 file, -1/+6)

  Once a client has requested a waitboost, we keep that waitboost active until all clients are no longer waiting. This is because we don't distinguish which waiter deserves the boost. However, with the advent of fence signaling, the signaler threads appear as waiters to the RPS interrupt handler. So instead of using a single boolean to track when to keep the waitboost active, use a counter of all outstanding waitboosted requests.

  At this point, I have removed all vestiges of the rate limiting on clients. Whilst this means that compositors should remain more fluid, it also means that boosts are more prevalent. See commit b29c19b64528 ("drm/i915: Boost RPS frequency for CPU stalls") for a longer discussion on the pros and cons of both approaches.

  A drawback of this implementation is that it requires constant request submission to keep the waitboost trimmed (as it is now cancelled when the request is completed). This will be fine for a busy system, but near idle the boosts may be kept for longer than desired (effectively tens of vblanks worst case) and there is a reliance on rc6 instead.

  v2: Remove defunct rps.client_lock

  Reported-by: Michał Winiarski <michal.winiarski@intel.com>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170628123548.9236-1-chris@chris-wilson.co.uk
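The boolean-to-counter switch described above is a common refactoring; a minimal sketch in C11 atomics, with all names hypothetical:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int rps_boosts;	/* one per outstanding waitboosted request */

static void request_boost(void)   { atomic_fetch_add(&rps_boosts, 1); }
static void request_unboost(void) { atomic_fetch_sub(&rps_boosts, 1); }

/* The interrupt handler keeps the boost while any request holds one. */
static bool keep_waitboost(void)  { return atomic_load(&rps_boosts) > 0; }
```

A counter, unlike a flag, naturally handles multiple concurrent waiters: the boost drops only when every boosted request has completed.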
* drm/i915: Remove the spin-request during execbuf await_request
  (Chris Wilson, 2017-06-08; 1 file, -12/+6)

  Originally we would enable and disable the breadcrumb interrupt immediately on demand. This was slow enough to have a large impact (>30%) on tasks that hopped between engines. However, by using a shadow to keep the irq alive for an extra interrupt (see commit 67b807a89230 ("drm/i915: Delay disabling the user interrupt for breadcrumbs")) and by recently reducing the cost in adding ourselves to the signal tree, we no longer need to spin-request during await_request to avoid delays in throughput tests.

  Without the earlier patches to stop the wakeup when signaling if the irq was already active, we saw no improvement in execbuf overhead (and corresponding contention in other clients) despite the removal of the spinner in a simple test like glxgears. This means there will be scenarios where now we spend longer enabling the interrupt than we would have spent spinning, but these are not likely to have as noticeable an impact as the high frequency test cases (where there should not be any regression).

  Ulterior motive: generalising the engine->sync_to to handle different types of semaphores and non-semaphores.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Oscar Mateo <oscar.mateo@intel.com>
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170608111405.16466-4-chris@chris-wilson.co.uk
* drm/i915: Check signaled state after enabling signaling
  (Chris Wilson, 2017-06-08; 1 file, -1/+1)

  Setting up the irq to signal the request completion takes a finite amount of time, during which it is possible that the request already completed. Check afterwards, just in case, so that we can respond immediately.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170608111405.16466-1-chris@chris-wilson.co.uk
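This is the classic arm-then-recheck race closure. A minimal sketch with hypothetical names (atomics stand in for the irq machinery):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool completed;	/* set by the "interrupt" */
static atomic_bool irq_armed;

/* Returns true if we should wait for the irq, false if already done. */
static bool enable_signaling(void)
{
	atomic_store(&irq_armed, true);	/* arming takes a finite time */

	/*
	 * The request may have completed before or while we were arming:
	 * check afterwards so we respond immediately instead of waiting
	 * for an interrupt that will never come.
	 */
	return !atomic_load(&completed);
}
```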
* drm/i915: Remove misleading comment in request_alloc
  (Michał Winiarski, 2017-05-23; 1 file, -3/+0)

  Passing a NULL ctx to request_alloc would lead to a null-ptr-deref.

  v2: Let's not replace the comment with a BUG_ON.

  Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170523102400.9614-1-michal.winiarski@intel.com
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
* drm/i915: Split execlist priority queue into rbtree + linked list
  (Chris Wilson, 2017-05-17; 1 file, -2/+2)

  All the requests at the same priority are executed in FIFO order. They do not need to be stored in the rbtree themselves, as they are a simple list within a level. If we move the requests at one priority into a list, we can then reduce the rbtree to the set of priorities. This should keep the height of the rbtree small, as the number of active priorities can not exceed the number of active requests and should be typically only a few.

  Currently, we have ~2k possible different priority levels, and that may increase to allow even more fine-grained selection. Allocating those in advance seems a waste (and may be impossible), so we opt for allocating upon first use, and freeing after its requests are depleted. To avoid the possibility of an allocation failure causing us to lose a request, we preallocate the default priority (0) and bump any request to that priority if we fail to allocate it the appropriate plist. Having a request (that is ready to run, so not leading to corruption) execute out-of-order is better than leaking the request (and its dependency tree) entirely.

  There should be a benefit to reducing execlists_dequeue() to principally using a simple list (and reducing the frequency of both rbtree iteration and balancing on erase), but for typical workloads, request coalescing should be small enough that we don't notice any change. The main gain is from improving PI calls to schedule, and the explicit list within a level should make request unwinding simpler (we just need to insert at the head of the list rather than the tail and not have to make the rbtree search more complicated).

  v2: Avoid use-after-free when deleting a depleted priolist.
  v3: Michał found the solution to handling the allocation failure gracefully. If we disable all priority scheduling following the allocation failure, those requests will be executed in fifo and we will ensure that this request and its dependencies are in strict fifo (even when it doesn't realise it is only a single list). Normal scheduling is restored once we know the device is idle, until the next failure!

  Suggested-by: Michał Wajdeczko <michal.wajdeczko@intel.com>
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170517121007.27224-8-chris@chris-wilson.co.uk
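The "one FIFO list per priority level" structure above can be sketched in plain C. This toy version uses a sorted linked list of levels in place of the driver's rbtree; all names are hypothetical:

```c
#include <stdlib.h>

struct request { int prio; struct request *next; };

struct priolist {
	int prio;
	struct request *head, **tail;	/* FIFO within one level */
	struct priolist *next;		/* levels, highest priority first */
};

static struct priolist *levels;

static int queue_request(struct request *rq)
{
	struct priolist **p = &levels, *pl;

	/* Find the level, keeping the level list sorted by priority. */
	while (*p && (*p)->prio > rq->prio)
		p = &(*p)->next;

	if (!*p || (*p)->prio != rq->prio) {
		/* First use of this priority: allocate the level lazily. */
		pl = malloc(sizeof(*pl));
		if (!pl)
			return -1;	/* i915 falls back to priority 0 here */
		pl->prio = rq->prio;
		pl->head = NULL;
		pl->tail = &pl->head;
		pl->next = *p;
		*p = pl;
	}

	pl = *p;
	rq->next = NULL;
	*pl->tail = rq;			/* append: strict FIFO within a level */
	pl->tail = &rq->next;
	return 0;
}
```

Dequeue then only walks the (short) level list from the front, and unwinding a request is an insert at the head of its level's FIFO, exactly as the commit message argues.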
* drm/i915: Use engine->context_pin() to report the intel_ring
  (Chris Wilson, 2017-05-04; 1 file, -3/+6)

  Since unifying ringbuffer/execlist submission to use engine->context_pin, we ensure that the intel_ring is available before we start constructing the request. We can therefore move the assignment of request->ring to the central i915_gem_request_alloc() and not require it in every engine->request_alloc() callback. Another small step towards simplification (of the core, but at a cost of handling error pointers in less important callers of engine->context_pin).

  v2: Rearrange a few branches to reduce impact of PTR_ERR() on gcc's code generation.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Oscar Mateo <oscar.mateo@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170504093308.4137-1-chris@chris-wilson.co.uk
* drm/i915: Switch the global i915.semaphores check to a local predicate
  (Chris Wilson, 2017-05-03; 1 file, -1/+3)

  Rather than use a global modparam, we can just check to see if the engine has semaphores configured upon it.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-7-chris@chris-wilson.co.uk
* drm/i915: Do not record a successful syncpoint for a dma-await
  (Chris Wilson, 2017-05-03; 1 file, -18/+19)

  As we may unwind the requests, even though the request we are awaiting has a global_seqno, that seqno may be revoked during the await, and so we cannot reliably use it as a barrier for all future awaits on the same timeline.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-6-chris@chris-wilson.co.uk
* drm/i915: Rename intel_timeline.sync_seqno[] to .global_sync[]
  (Chris Wilson, 2017-05-03; 1 file, -4/+4)

  With the addition of the inter-context intel_timeline.sync map, having a very similar sync_seqno[] is confusing. Aid the reader by denoting that this is a pre-allocated array for storing semaphore sync points wrt the global seqno.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-5-chris@chris-wilson.co.uk
* drm/i915: Squash repeated awaits on the same fence
  (Chris Wilson, 2017-05-03; 1 file, -0/+9)

  Track the latest fence waited upon on each context, and only add a new asynchronous wait if the new fence is more recent than the recorded fence for that context. This requires us to filter out unordered timelines, which are noted by DMA_FENCE_NO_CONTEXT. However, in the absence of a universal identifier, we have to use our own i915->mm.unordered_timeline token.

  v2: Throw around the debug crutches.
  v3: Inline the likely case of the pre-allocation cache being full.
  v4: Drop the pre-allocation support; we can lose the most recent fence in case of allocation failure -- it just means we may emit more awaits than strictly necessary but will not break.
  v5: Trim allocation size for leaf nodes, they only need an array of u32 not pointers.
  v6: Create mock_timeline to tidy selftest writing.
  v7: s/intel_timeline_sync_get/intel_timeline_sync_is_later/ (Tvrtko)
  v8: Prune the stale sync points when we idle.
  v9: Include a small benchmark in the kselftests.
  v10: Separate the idr implementation into its own compartment. (Tvrtko)
  v11: Refactor igt_sync kselftests to avoid deep nesting. (Tvrtko)
  v12: __sync_leaf_idx() to assert that p->height is 0 when checking leaves.
  v13: kselftests to investigate struct i915_syncmap itself. (Tvrtko)
  v14: Foray into ascii art graphs.
  v15: Take into account that the random lookup/insert does 2 prng calls, not 1, when benchmarking, and use for_each_set_bit(). (Tvrtko)
  v16: Improved ascii art.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-4-chris@chris-wilson.co.uk
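The per-context "latest seqno awaited" filter above is the essence of the i915_syncmap; a deliberately tiny sketch follows. The real driver uses a compressed radix tree keyed by the full 64-bit context; this toy fixed-size map conflates colliding contexts and exists only to show the skip logic (all names hypothetical):

```c
#include <stdbool.h>

#define MAX_CTX 64			/* toy fixed-size map */
static unsigned int last_awaited[MAX_CTX];

static bool need_await(unsigned long long ctx, unsigned int seqno)
{
	unsigned int *slot = &last_awaited[ctx % MAX_CTX];

	if (ctx == 0)			/* unordered timeline: always wait */
		return true;

	/* Wrap-safe: already ordered after an equal-or-later fence? */
	if ((int)(seqno - *slot) <= 0)
		return false;		/* squash the redundant await */

	*slot = seqno;			/* record the newest fence awaited */
	return true;
}
```

Squashing matters because execbuf can legitimately be asked to await dozens of fences from the same timeline; only the most recent one imposes any new ordering.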
* drm/i915: Lift timeline ordering to await_dma_fence
  (Chris Wilson, 2017-05-03; 1 file, -3/+9)

  Currently we filter out repeated use of the same timeline in the low level i915_gem_request_await_request(), after having added the dependency on the old request. However, we can lift this to i915_gem_request_await_dma_fence() (before the dependency is added) using the observation that requests along the same timeline are explicitly ordered via i915_add_request (along with the dependencies).

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-3-chris@chris-wilson.co.uk
* drm/i915: Unwrap top level fence-array
  (Chris Wilson, 2017-05-03; 1 file, -23/+18)

  By first unwrapping an incoming fence-array into its child fences, we can simplify the internal branching, and so avoid triggering a potential bug in the next patch when not squashing the child fences on the same timeline. It will also have the advantage of keeping the (top-level) fence arrays out of any fence/timeline caching, since these are unordered timelines but with a random context id.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-2-chris@chris-wilson.co.uk
* drm/i915: Skip waking the signaler when enabling before request submission
  (Chris Wilson, 2017-04-26; 1 file, -2/+2)

  If we are enabling the breadcrumbs signaling prior to submitting the request, we know that we cannot have missed the interrupt and can therefore skip immediately waking the signaler to check. This removes a significant chunk of the __i915_gem_request_submit() overhead for inter-engine synchronisation, for example in gem_exec_whisper.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170426080659.28771-1-chris@chris-wilson.co.uk
  Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
* drm/i915: Differentiate between sw write location into ring and last hw read
  (Chris Wilson, 2017-04-25; 1 file, -4/+12)

  We need to keep track of the last location we ask the hw to read up to (RING_TAIL) separately from our last write location into the ring, so that in the event of a GPU reset we do not tell the HW to proceed into a partially written request (which can happen if that request is waiting for an external signal before being executed).

  v2: Refactor intel_ring_reset() (Mika)

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100144
  Testcase: igt/gem_exec_fence/await-hang
  Fixes: 821ed7df6e2a ("drm/i915: Update reset path to fix incomplete requests")
  Fixes: d55ac5bf97c6 ("drm/i915: Defer transfer onto execution timeline to actual hw submission")
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Cc: Mika Kuoppala <mika.kuoppala@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170425130049.26147-1-chris@chris-wilson.co.uk
  Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
* drm/i915: Confirm the request is still active before adding it to the await
  (Chris Wilson, 2017-04-22; 1 file, -0/+3)

  Although we do check the completion-status of the request before actually adding a wait on it (either to its submit fence or its completion dma-fence), we currently do not check before adding it to the dependency lists. In fact, without checking for a completed request, we may try to use the signaler after it has been retired and its dependency tree freed:

    BUG: KASAN: use-after-free in __list_add_valid+0x1d/0xd0 at addr ffff880348c9e6a0
    Read of size 8 by task gem_exec_fence/530
    Call Trace:
     __list_add_valid+0x1d/0xd0
     __i915_priotree_add_dependency+0x67/0x130 [i915]
     i915_gem_request_await_request+0xa8/0x370 [i915]
     i915_gem_request_await_dma_fence+0x129/0x140 [i915]
     i915_gem_do_execbuffer.isra.37+0xb0a/0x26b0 [i915]
     i915_gem_execbuffer2+0xdb/0x2a0 [i915]
     drm_ioctl+0x359/0x640 [drm]
     do_vfs_ioctl+0x12e/0x910
     SyS_ioctl+0x41/0x70
    Object at ffff880348c9e680, in cache i915_dependency size: 64
    Allocated by kmem_cache_alloc via i915_gem_request_await_request [i915]
    Freed by kmem_cache_free via i915_gem_request_retire [i915]
    [full KASAN report, register dump, and memory state trimmed]

  Note that to hit the use-after-free requires us to be passed back a request via a fence-array, that is, from explicit fencing accumulated into a sync-file fence-array.

  Fixes: 52e542090701 ("drm/i915/scheduler: Record all dependencies upon request construction")
  Testcase: igt/gem_exec_fence/expired-history
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170422081537.6468-1-chris@chris-wilson.co.uk
* drm/i915: Copy user requested buffers into the error state
  (Chris Wilson, 2017-04-15; 1 file, -0/+16)

  Introduce a new execobject.flag (EXEC_OBJECT_CAPTURE) that userspace may use to indicate that it wants the contents of this buffer preserved in the error state (/sys/class/drm/cardN/error) following a GPU hang involving this batch.

  Use this at your discretion: the contents of the error state, although compressed, are allocated with GFP_ATOMIC (i.e. limited) and kept for all eternity (until the error state is destroyed).

  Based on an earlier patch by Ben Widawsky <ben@bwidawsk.net>

  Testcase: igt/gem_exec_capture
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Ben Widawsky <ben@bwidawsk.net>
  Cc: Matt Turner <mattst88@gmail.com>
  Acked-by: Ben Widawsky <ben@bwidawsk.net>
  Acked-by: Matt Turner <mattst88@gmail.com>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170415093902.22581-1-chris@chris-wilson.co.uk
* drm/i915: Advance ring->head fully when idle
  (Chris Wilson, 2017-04-07; 1 file, -10/+22)

  When we retire the last request on the ring, before we ever access that ring again we know it will be completely idle and so we can advance the ring->head fully to the end (i.e. ring->tail) and not just to the start of the breadcrumb. This allows us to skip re-emitting the breadcrumb after resetting the GPU if the ring was entirely idle. This prevents us from overwriting a seqno wraparound by re-executing a stale breadcrumb, i.e.

    submit_request(1)
    intel_engine_init_global_seqno(0)
    i915_reset()

  would then leave 1 in the HWS, but the next request to execute would also be with seqno 1. The sanity checks upon submission detect this as a timewarp and explode. By setting the ring as empty, upon reset the HWS is left as 0, leaving it consistent with the timeline.

  v2: Fix check for deleting last element of list. We know that this request is always the first element of the ring, so only if next points back to the start will this be the only request in flight.
  v3: Remove opencoding of list_is_last().
  v4: Move the block to its own function for some clarity.

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100144
  Testcase: igt/gem_exec_whisper/hang-*
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Mika Kuoppala <mika.kuoppala@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170406170028.26871-1-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
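A minimal sketch of the head-advance rule above, with hypothetical names:

```c
#include <stdbool.h>

struct ring { unsigned int head, tail; };

static void advance_ring(struct ring *ring, unsigned int breadcrumb,
			 bool last_request_in_flight)
{
	if (last_request_in_flight) {
		/*
		 * Ring is now completely idle: drain it fully so that
		 * nothing stale (e.g. a pre-wraparound breadcrumb) can
		 * be re-executed after a GPU reset.
		 */
		ring->head = ring->tail;
	} else {
		/* Only consume up to the retired request's breadcrumb. */
		ring->head = breadcrumb;
	}
}
```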
* drm/i915: Assert the engine is idle before overwriting the HWS
  (Chris Wilson, 2017-04-07; 1 file, -1/+1)

  When we update the global seqno (on the engine timeline), we modify HW state (both registers and mapped pages). As we do this, we should be sure that the HW is idle and we are not causing a conflict. The caller is supposed to wait_for_idle before calling us to update the seqno, so let's assert they have done so and the engine is indeed idle.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170405153055.28123-1-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Combine reset_all_global_seqno() loops into one
  (Chris Wilson, 2017-03-31; 1 file, -9/+5)

  We can merge the pair of loops over the engines and their timelines into a single loop, making it easier to read and more consistent with the commentary.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-6-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Remove redundant wait for each engine to idle from seqno wrap
  (Chris Wilson, 2017-03-31; 1 file, -3/+0)

  Having added the wait upon each engine to idle into the central i915_gem_wait_for_idle(), we can remove the now redundant wait from reset_all_global_seqno(). This has the advantage of removing the late detection of an error (an engine still busy) which left the seqno reset only partially complete (though it should be safe enough!).

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-5-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Move retire-requests into i915_gem_wait_for_idle()
  (Chris Wilson, 2017-03-31; 1 file, -3/+0)

  As we now distinguish everywhere that can call i915_gem_retire_requests() following a successful wait_for_idle, we can remove the duplication by moving that call into i915_gem_wait_for_idle() itself.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170330145041.9005-3-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
* drm/i915: Use a dummy timeline name for a signaled fence
  (Chris Wilson, 2017-03-30; 1 file, -0/+11)

  Michał Winiarski pointed out that the debugging infrastructure (such as trace_dma_fence_release) likes to pretty-print the timeline name, long after we have freed the timeline. Our timelines currently live as part of the GTT (due to the strict ordering we currently use through each), which belongs to the context. We aim to free the context and release its hardware resources as soon as we are able to (i.e. when the last fence/request using it has been signaled and retired). As .get_timeline_name is purely a debug feature, rather than extending the lifetime of the context, or splitting it into many different release phases just to keep the name around, replace the timeline name with a constant after the fence has been signaled. This avoids the potential use-after-free.

  Reported-by: Krzysztof Olinski <krzysztof.e.olinski@intel.com>
  Fixes: 80b204bce8f2 ("drm/i915: Enable multiple timelines")
  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Cc: Michał Winiarski <michal.winiarski@intel.com>
  Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Cc: <stable@vger.kernel.org> # v4.10+
  Link: http://patchwork.freedesktop.org/patch/msgid/20170330111614.29757-1-chris@chris-wilson.co.uk
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
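The fix boils down to returning a constant once the fence is signaled, instead of dereferencing a timeline that may already have been freed. A toy dma-fence-style sketch, with hypothetical names:

```c
#include <stdbool.h>

struct timeline { const char *name; };

struct fence {
	bool signaled;
	struct timeline *timeline;	/* may be freed after signaling */
};

static const char *get_timeline_name(const struct fence *f)
{
	if (f->signaled)
		return "signaled";	/* constant: no use-after-free */
	return f->timeline->name;	/* timeline still live while unsignaled */
}
```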
* drm/i915: Remove intel_ring.last_retired_head
  (Chris Wilson, 2017-03-21; 1 file, -1/+1)

  Storing the position of the breadcrumb of the last retired request as a separate last_retired_head is superfluous, as we always copy that into head prior to recalculation of the intel_ring.space.

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170321102552.24357-1-chris@chris-wilson.co.uk
* drm/i915: Split I915_RESET_IN_PROGRESS into two flags
  (Chris Wilson, 2017-03-16; 1 file, -1/+1)

  I915_RESET_IN_PROGRESS is being used both for signaling the requirement to i915_mutex_lock_interruptible() to avoid taking the struct_mutex and to instruct a waiter (already holding the struct_mutex) to perform the reset. To allow for a little more coordination, split these two meanings into a couple of distinct flags. I915_RESET_BACKOFF tells i915_mutex_lock_interruptible() not to acquire the mutex, and I915_RESET_HANDOFF tells the waiter to call i915_reset().

  Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
  Acked-by: Michel Thierry <michel.thierry@intel.com>
  Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
  Link: http://patchwork.freedesktop.org/patch/msgid/20170316171305.12972-1-chris@chris-wilson.co.uk
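A sketch of the split as plain bitflags; the flag names come from the commit, but the values and helpers here are illustrative, not the driver's code:

```c
#include <errno.h>
#include <stdbool.h>

#define I915_RESET_BACKOFF  (1UL << 0)	/* tell lockers to back off */
#define I915_RESET_HANDOFF  (1UL << 1)	/* tell the waiter to run the reset */

static unsigned long error_flags;

static int mutex_lock_interruptible_sketch(void)
{
	if (error_flags & I915_RESET_BACKOFF)
		return -EAGAIN;	/* don't take struct_mutex during reset */
	/* ... acquire the mutex ... */
	return 0;
}

static bool should_run_reset(void)
{
	/* Holder of the mutex performs the reset when handed off. */
	return error_flags & I915_RESET_HANDOFF;
}
```

Using two bits lets "a reset is pending, stay away" and "you, specifically, must perform the reset" change state independently.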
* Merge tag 'drm-intel-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-intel into drm-next
  (Dave Airlie, 2017-03-08; 1 file, -243/+256)

  4 weeks worth of stuff since I was traveling & lazy:

  - lspcon improvements (Imre)
  - proper atomic state for cdclk handling (Ville)
  - gpu reset improvements (Chris)
  - lots and lots of polish around fences, requests, waiting and everything related all over (both gem and modeset code), from Chris
  - atomic by default on gen5+ minus byt/bsw (Maarten did the patch to flip the default, really this is a massive joint team effort)
  - moar power domains, now 64bit (Ander)
  - big pile of in-kernel unit tests for various gem subsystems (Chris), including simple mock objects for i915 device and the ggtt manager
  - i915_gpu_info in debugfs, for taking a snapshot of the current gpu state. Same thing as i915_error_state, but useful if the kernel didn't notice something is stuck. From Chris.
  - bxt dsi fixes (Umar Shankar)
  - bxt w/a updates (Jani)
  - no more struct_mutex for gem object unreference (Chris)
  - some execlist refactoring (Tvrtko)
  - color manager support for glk (Ander)
  - improve the power-well sync code to better take over from the firmware (Imre)
  - gem tracepoint polish (Tvrtko)
  - lots of glk fixes all around (Ander)
  - ctx switch improvements (Chris)
  - glk dsi support & fixes (Deepak M)
  - dsi fixes for vlv and cleanups, lots of them (Hans de Goede)
  - switch to i915.ko types in lots of our internal modeset code (Ander)
  - byt/bsw atomic wm update code, yay (Ville)

  * tag 'drm-intel-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-intel: (432 commits)
    drm/i915: Update DRIVER_DATE to 20170306
    drm/i915: Don't use enums for hardware engine id
    drm/i915: Split breadcrumbs spinlock into two
    drm/i915: Refactor wakeup of the next breadcrumb waiter
    drm/i915: Take reference for signaling the request from hardirq
    drm/i915: Add FIFO underrun tracepoints
    drm/i915: Add cxsr toggle tracepoint
    drm/i915: Add VLV/CHV watermark/FIFO programming tracepoints
    drm/i915: Add plane update/disable tracepoints
    drm/i915: Kill level 0 wm hack for VLV/CHV
    drm/i915: Workaround VLV/CHV sprite1->sprite0 enable underrun
    drm/i915: Sanitize VLV/CHV watermarks properly
    drm/i915: Only use update_wm_{pre,post} for pre-ilk platforms
    drm/i915: Nuke crtc->wm.cxsr_allowed
    drm/i915: Compute proper intermediate wms for vlv/cvh
    drm/i915: Skip useless watermark/FIFO related work on VLV/CHV when not needed
    drm/i915: Compute vlv/chv wms the atomic way
    drm/i915: Compute VLV/CHV FIFO sizes based on the PM2 watermarks
    drm/i915: Plop vlv/chv fifo sizes into crtc state
    drm/i915: Plop vlv wm state into crtc_state
    ...
  * drm/i915: Ensure the engine is idle before manually changing HWS
    (Chris Wilson, 2017-03-03, via the merge above; 1 file, -0/+3)

    During reset_all_global_seqno() on seqno rollover, we have to update the HWS. This causes all in-flight requests to be completed, so first we wait. However, we were only waiting for the requests themselves to be completed and clearing out the waiter rbtrees -- what I had missed was the extra reference in execlists->port[]. Since commit fe9ae7a3bfdb ("drm/i915/execlists: Detect an out-of-order context switch") we can detect when the request is retired before the context switch interrupt is completed. The impact should be negligible outside of debugging.

    Testcase: igt/gem_exec_whisper
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Mika Kuoppala <mika.kuoppala@intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170303121947.20482-1-chris@chris-wilson.co.uk
    Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
  * drm/i915: Drop spinlocks around adding to the client request list
    (Chris Wilson, 2017-03-02, via the merge above; 1 file, -28/+6)

    We add to the tail of the client request list, and the only other user is the throttle ioctl, which iterates forwards over the list. It only needs protection against deletion of a request as it reads it; it simply won't see a new request added to the end of the list, or it would be too early and rejected. We can further reduce the number of spinlocks required when throttling by removing stale requests from the client_list as we throttle.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170302122525.19675-1-chris@chris-wilson.co.uk
  * drm/i915: s/assert_spin_locked/lockdep_assert_held/
    (Chris Wilson, 2017-03-02, via the merge above; 1 file, -2/+2)

    assert_spin_locked() becomes an unconditionally compiled BUG_ON(), adding debug code right into the heart of critical routines like interrupt handlers.

         text    data   bss      dec     hex
      1296480   19944  2272  1318696  141f28  before (lockdep disabled)
      1295984   19944  2272  1318200  141d38  after
      1336261   21139  3208  1360608  14c2e0  before (lockdep enabled)
      1339920   21139  3208  1364267  14d12b  after

    Small saving for release; hopefully more instructive in debug.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
    Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170302132801.599-1-chris@chris-wilson.co.uk
    Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
  * drm/i915: Assert that fence->lock is held in an irq-safe manner
    (Chris Wilson, 2017-03-02, via the merge above; 1 file, -1/+4)

    Every time we take the fence->lock (aka request->lock), we must do so with irqs disabled, since it may be used from within a hardirq context. As sometimes we are taking the lock in a nested manner, assert that the caller did disable the irqs for us.

    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
    Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
    Link: http://patchwork.freedesktop.org/patch/msgid/20170302115130.28434-1-chris@chris-wilson.co.uk
    Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>