dnn networks
This filter accepts all DNN networks which do image processing.
Currently, frames in rgb24 and bgr24 formats are supported; other
formats such as gray and YUV will be supported next. The DNN network
can accept data in float32 or uint8 format, and it can change the
frame size.
The following Python script halves the value of the first channel of
each pixel. It demonstrates how to set up and execute a DNN model
with Python + TensorFlow, and it also generates the .pb file which
will be used by ffmpeg:
import tensorflow as tf
import numpy as np
import imageio

# Read the input image and normalize it to [0, 1].
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32) / 255.0
in_data = in_img[np.newaxis, :]

# 1x1 convolution that halves the first channel and passes the other
# two channels through unchanged.
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1, 1, 3, 3).astype(np.float32)
filter = tf.Variable(filter_data)

# TensorFlow 1.x graph: the tensor names 'dnn_in' and 'dnn_out' are
# the ones referenced later on the ffmpeg command line.
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})

# Freeze the variables into constants and write the .pb file for ffmpeg.
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)

# Save the reference result for comparison with ffmpeg's output.
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try the following commands:
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
This can be used to add region of interest side data to video frames.
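For illustration, here is a minimal C sketch using the public
libavutil API of what such side data looks like when attached by
hand; the frame and the chosen region are hypothetical, and the
addroi filter is the supported way to do this from a filter graph.
#include "libavutil/frame.h"
#include "libavutil/error.h"

/* Sketch: mark the top-left quadrant of the frame as a region of
 * interest with a negative quality offset (spend more bits there). */
static int attach_roi(AVFrame *frame)
{
    AVFrameSideData *sd = av_frame_new_side_data(
        frame, AV_FRAME_DATA_REGIONS_OF_INTEREST,
        sizeof(AVRegionOfInterest));
    if (!sd)
        return AVERROR(ENOMEM);

    AVRegionOfInterest *roi = (AVRegionOfInterest *)sd->data;
    roi->self_size = sizeof(*roi);
    roi->top       = 0;
    roi->bottom    = frame->height / 2;
    roi->left      = 0;
    roi->right     = frame->width / 2;
    roi->qoffset   = (AVRational){ -1, 2 }; /* lower quantiser, higher quality */

    return 0;
}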
It is expected that there will be more files to support native mode,
so put all the DNN code under libavfilter/dnn.
The main change of this patch is to move the file locations, see below:
modified: libavfilter/Makefile
new file: libavfilter/dnn/Makefile
renamed: libavfilter/dnn_backend_native.c -> libavfilter/dnn/dnn_backend_native.c
renamed: libavfilter/dnn_backend_native.h -> libavfilter/dnn/dnn_backend_native.h
renamed: libavfilter/dnn_backend_tf.c -> libavfilter/dnn/dnn_backend_tf.c
renamed: libavfilter/dnn_backend_tf.h -> libavfilter/dnn/dnn_backend_tf.h
renamed: libavfilter/dnn_interface.c -> libavfilter/dnn/dnn_interface.c
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove rain from the input image/video by applying derain methods
based on convolutional neural networks. Training scripts as well as
scripts for model generation are provided in the repository at
https://github.com/XueweiMeng/derain_filter.git.
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
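A typical invocation, following the style of the examples above (the
model file name here is a placeholder for one produced by the scripts
in that repository):
./ffmpeg -i rainy.mp4 -vf derain=model=derain.model -y out.mp4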
Fixes #7671.
This is a direct port of the CPU filter.
Signed-off-by: Jarek Samic <cldfire3@gmail.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
Why did this not break compilation?
Swap width and height when doing clock/cclock rotation, and add
reversal/hflip/vflip options. Example:
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128
-hwaccel_output_format vaapi -i input.264 -vf "transpose_vaapi=clock_flip"
-c:v h264_vaapi output.h264
Signed-off-by: Zachary Zhou <zachary.zhou@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
We have a pattern of wrapping CUDA calls to print errors and
normalise return values that is used in a couple of places. To
avoid duplication and increase consistency, let's put the wrapper
implementation in a shared place and use it everywhere; a sketch
of the pattern follows the list below.
Affects:
* avcodec/cuviddec
* avcodec/nvdec
* avcodec/nvenc
* avfilter/vf_scale_cuda
* avfilter/vf_scale_npp
* avfilter/vf_thumbnail_cuda
* avfilter/vf_transpose_npp
* avfilter/vf_yadif_cuda
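A minimal sketch of the shared wrapper pattern; the names check_cu
and CHECK_CU are illustrative, not necessarily the exact identifiers
used in the tree, and the macro assumes a logging context named
avctx in scope at the call site.
#include <cuda.h>
#include "libavutil/error.h"
#include "libavutil/log.h"

/* Map a CUresult to a normalised FFmpeg error code, logging on failure. */
static int check_cu(void *log_ctx, CUresult err, const char *call)
{
    if (err == CUDA_SUCCESS)
        return 0;
    av_log(log_ctx, AV_LOG_ERROR, "%s failed -> CUDA error %d\n", call, (int)err);
    return AVERROR_EXTERNAL;
}

/* Stringify the call so the log shows exactly what failed. */
#define CHECK_CU(x) check_cu(avctx, (x), #x)

/* At a call site:
 *     ret = CHECK_CU(cu->cuCtxPushCurrent(s->cuda_ctx));
 *     if (ret < 0)
 *         return ret;
 */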
frame
Also add SIMD which works on lines, because it is faster than
calculating it on 8x8 blocks using pixelutils.
Signed-off-by: Marton Balint <cus@passwd.hu>
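For reference, a scalar sketch of a whole-line SAD of the kind the
new SIMD replaces; this is illustrative, not the actual implementation.
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences over one line of width w. Working on
 * whole lines keeps the inner loop contiguous and easy to vectorize,
 * compared with iterating over 8x8 blocks. */
static uint64_t sad_line(const uint8_t *a, const uint8_t *b, int w)
{
    uint64_t sum = 0;
    for (int i = 0; i < w; i++)
        sum += abs(a[i] - b[i]);
    return sum;
}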
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.
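With frames kept in CUDA memory via the hwaccel, an invocation along
these lines exercises the filter (file names are placeholders):
./ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -vf yadif_cuda -c:v h264_nvenc -y output.mp4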
I'm writing a cuda implementation of yadif, and while this
obviously has a very different implementation of the actual
filtering, all the frame management is unchanged. To avoid
duplicating that logic, let's make it shareable.
From the perspective of the existing filter, the only real change
is introducing a function pointer for the filter() function so it
can be specified for the specific filter.
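A rough sketch of the shape of that refactor; the struct and field
names below are illustrative, inferred from the description above
rather than copied from the tree.
#include "libavfilter/avfilter.h"

/* Shared deinterlacer state: the common frame-management code drives
 * the prev/cur/next window and calls out through filter(), which each
 * variant (C/SIMD or CUDA) supplies with its own implementation. */
typedef struct YADIFContext {
    AVFrame *prev, *cur, *next;   /* sliding window of input frames */
    int mode, parity, deint;      /* user-visible options */
    void (*filter)(AVFilterContext *ctx, AVFrame *dst,
                   int parity, int tff);
} YADIFContext;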
setfield and setrange filters are kept.