Comment by somat
I was pretty happy because I was able to actually do something in ffmpeg recently. It is this amazingly powerful tool, but every time I try to use it I get scared off by the inscrutable syntax. This time, though, as the mental miasma that usually kills my ffmpeg attempts was setting in, I noticed something in the filter docs: a single throwaway line about including files and the formatting of filters.
Anyway, long story short: instead of the usual terrifying inline ffmpeg filter tangle, the filter can be structured however you want and included from a dedicated file. It sounds petty, but I really think it was the thing that finally let me "crack" ffmpeg.
The secret sauce is the "/": "-/filter_complex file_name" reads the named file and uses its contents as the filter.
As I am pretty happy with it, I am going to inflict it on everyone here.
In motion_detect.filter:
[0:v]
split
[motion]
[original];
[motion]
scale=
w=iw/4:
h=-1,
format=
gbrp,
tmix=
frames=2
[camera];
[1:v]
[camera]
blend=
all_mode=darken,
tblend=
all_mode=difference,
boxblur=
lr=20,
maskfun=
low=3:
high=3,
negate,
blackframe=
amount=1,
nullsink;
[original]
null
And then some Python glue logic around the command:

ffmpeg -nostats -an -i ip_camera -i zone_mask.png -/filter_complex motion_detect.filter -f mpegts udp://127.0.0.1:8888
And there you have it: motion detection while staying in a single ffmpeg process. The glue logic watches ffmpeg's log output for the blackframe messages and saves the video.
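The glue is roughly this (a minimal sketch, not my exact script: the camera URL and the save step are placeholders, and I read stderr here, which is where ffmpeg normally writes filter log lines like blackframe's):

import re
import subprocess

cmd = [
    "ffmpeg", "-nostats", "-an",
    "-i", "ip_camera",  # placeholder for the real camera URL
    "-i", "zone_mask.png",
    "-/filter_complex", "motion_detect.filter",
    "-f", "mpegts", "udp://127.0.0.1:8888",
]

# blackframe logs lines like:
#   [Parsed_blackframe_8 @ 0x...] frame:42 pblack:99 pts:... t:1.40 type:P ...
blackframe = re.compile(r"Parsed_blackframe.*\bt:([0-9.]+)")

proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
for line in proc.stderr:
    m = blackframe.search(line)
    if m:
        t = float(m.group(1))
        print(f"motion at t={t:.2f}s")
        # placeholder: start/extend saving the udp://127.0.0.1:8888 stream here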
"[]" are named inputs and outputs
"," are pipes
";" ends a pipeline
Take input 0 and split it into two streams, "motion" and "original". The motion stream gets scaled down, converted to gbrp (later blends were not working on yuv data), temporally mixed with the previous two frames (to remove high-frequency motion), and sent to the "camera" stream. Take the zone mask image provided as input 1 and the "camera" stream, mask the camera stream, take the difference with the previous frame to bring out the motion, blur to expand the motion pixels, then mask to black/white and invert the image for correct blackframe analysis, which prints log messages when too many motion pixels are present. The "original" stream gets sent to the output for capture.
One odd thing is the mpegts: I tried a few more modern formats, but none of them "stream" as well as mpegts does. I will have to investigate further.
I could have, and probably should have, used OpenCV to do the same thing, but I wanted to see if ffmpeg could do it.