| Commit message | Author | Age | Files | Lines |

The buffers are created associated with the context, so they should be
destroyed before the context is. This is enforced by the iHD driver.
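As a sketch of the required teardown order: the `destroy_buffer()`/`destroy_context()` helpers below are hypothetical stand-ins that only record call order, in place of the real `vaDestroyBuffer()`/`vaDestroyContext()`.

```c
#include <assert.h>

/* Hypothetical stand-ins for vaDestroyBuffer()/vaDestroyContext()
 * that just record the order in which they are called. */
enum { EV_BUFFER = 1, EV_CONTEXT = 2 };
int events[8];
int n_events;

void destroy_buffer(int buf_id)  { (void)buf_id;  events[n_events++] = EV_BUFFER; }
void destroy_context(int ctx_id) { (void)ctx_id; events[n_events++] = EV_CONTEXT; }

/* Free every parameter buffer belonging to the context first, then
 * the context itself; the reverse order is rejected by iHD. */
void teardown(const int *buf_ids, int nb_buffers, int ctx_id)
{
    for (int i = 0; i < nb_buffers; i++)
        destroy_buffer(buf_ids[i]);
    destroy_context(ctx_id);
}
```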
|
This removes the arbitrary limit on the allowed number of slices and
parameter buffers.
From ffmpeg commit e4a6eb70f471eda36592078e8fa1bad87fc9df73.
Signed-off-by: Mark Thompson <sw@jkqxz.net>
|
This is an ABI change in libva2: previously the Intel driver had this
behaviour and it was implemented as a driver quirk, but now it is part
of the specification so all drivers must do it.
|
Use AVCodecContext.compression_level rather than a private option,
replacing the H.264-specific quality option (which stays only for
compatibility).
This now works with the H.265 encoder in the i965 driver, as well as
the existing cases with the H.264 encoder.
|
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
to IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)
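A hypothetical sketch of that selection logic (names and structure are illustrative, not the actual encoder code), taking "infinity" as `INT_MAX`: with `p_per_i` at `INT_MAX` the standalone-I branch can never fire, so only the GOP size drives the IDR cadence.

```c
#include <assert.h>
#include <limits.h>

enum FrameType { FRAME_IDR, FRAME_I, FRAME_P };

/* Illustrative frame-type selection: encode_order counts frames from
 * the start of the stream.  With p_per_i == INT_MAX the standalone-I
 * case is unreachable, giving an IDR frame every gop_size frames and
 * P frames everywhere else. */
enum FrameType pick_frame_type(int encode_order, int gop_size, int p_per_i)
{
    int pos = encode_order % gop_size;   /* position inside the GOP */
    if (pos == 0)
        return FRAME_IDR;
    if (p_per_i != INT_MAX && pos % (p_per_i + 1) == 0)
        return FRAME_I;                  /* only if explicitly enabled */
    return FRAME_P;
}
```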
|
Previously this was leaking, though it actually hit an assert making
sure that the buffer had already been cleared when freeing the picture.
|
Only do this when building for a recent VAAPI version - initial
driver implementations were confused about the interpretation of the
framerate field, but hopefully this will be consistent everywhere
once 0.40.0 is released.
|
This includes a backward-compatibility hack to choose CBR anyway on
old drivers which have no VBR support, so that existing programs will
continue to work even though their options now map to VBR.
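The fallback could be sketched as follows; the names are hypothetical, and the real code would discover VBR support by querying the driver's rate-control attributes (e.g. via `vaGetConfigAttributes()`).

```c
#include <assert.h>

enum RCMode { RC_CQP, RC_CBR, RC_VBR };

/* Illustrative rate-control selection: honour the requested mode,
 * but if the driver reports no VBR support, fall back to CBR so
 * that old setups keep working. */
enum RCMode choose_rc_mode(enum RCMode requested, int driver_has_vbr)
{
    if (requested == RC_VBR && !driver_has_vbr)
        return RC_CBR;   /* backward-compatibility fallback */
    return requested;
}
```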
|
This change makes the configured GOP size be respected exactly -
previously the value could be exceeded slightly due to flaws in the
frame type selection logic.
|
Only works if packed headers are supported, where we can know the
output before generating the first frame.
|
This was always too late; several fields related to it have been incorrectly
zero since the encoder was added.
|
While outwardly bizarre, this change makes the behaviour consistent
with other VAAPI encoders which sync to the encode /input/ picture in
order to wait for /output/ from the encoder. It is not harmful on
i965 (because synchronisation already happens in vaRenderPicture(),
so it has no effect there), and it allows the encoder to work on
mesa/gallium which assumes this behaviour.
|
This improves behaviour with drivers which do not support packed
headers, such as AMD VCE on mesa/gallium.
|
This allows better checking of capabilities and will make it easier
to add more functionality later.
It also commonises some duplicated code around rate control setup
and adds more comments explaining the internals.
|
No longer leaks memory when used with a driver with the "render does
not destroy param buffers" quirk (i.e. Intel i965).
|
Previously we would allocate a new one for every frame. This instead
maintains an AVBufferPool of them to use as needed.
Also makes the maximum size of an output buffer adapt to the frame
size - the fixed upper bound was a bit too easy to hit when encoding
large pictures at high quality.
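The adaptive bound might be sketched like this; the 3/2-bytes-per-pixel factor (as for 8-bit 4:2:0 input) and the fixed header margin are illustrative assumptions, not the constants used in the actual encoder.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical worst-case coded-buffer size: scale with the raw
 * frame area plus a margin for headers, rather than using one
 * fixed cap for all resolutions. */
size_t coded_buffer_size(int width, int height)
{
    return (size_t)width * height * 3 / 2 + 16384;
}
```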
|
Just a typo. Add a comment to make it clearer what it's doing.
|
This prevents attempts to use unsupported modes, such as low-power
H.264 mode on non-Skylake targets. Also fixes a crash on invalid
configuration, when trying to destroy an invalid VA config/context.
|
Signed-off-by: Diego Biurrun <diego@biurrun.de>
|
Signed-off-by: Anton Khirnov <anton@khirnov.net>
|
Signed-off-by: Anton Khirnov <anton@khirnov.net>
|
Signed-off-by: Anton Khirnov <anton@khirnov.net>