This document provides details on how to use the transflow
module for performing various effects based on optical flow transfer.
- Basic Flow Transfer
- Detailed Example
- Flow Estimation Methods
- Using Motion Vectors
- Flow Direction
- Flow Preprocessing
- Flow Transformations
- Multiple Flow Sources
- Accumulation Methods
- Accumulator Heatmap
- Accumulator Visualization
- Resetting Accumulator
- Generative Bitmap Sources
- Webcam Sources
- Bitmap Alteration
- Live Visualization
- Interrupting Processing
- Restart From Checkpoint
- Seek, Duration and Repeat
The simplest process consists in taking the motion from a video file and applying it to an image:
transflow flow.mp4 -b image.jpg -o output.mp4
The first argument flow.mp4
is the video to extract the optical flow from. The -b, --bitmap
argument specifies the "bitmap" source to apply the flow to, an image in this case. The -o, --output
argument specifies the path to the output video. The output video will match the framerate and duration of the flow source (minus the first frame, as computation requires two frames). When done, the output file is automatically opened, unless the -nx, --no-execute
flag is passed.
Video Bitmap Source. If the bitmap source is a video, the output will halt when the first source is exhausted. Output framerate is determined by the flow source, hence you might want to make sure both sources have matching framerates. FFmpeg's fps filter will help you with that.
Dimensions. Flow and bitmap sources should have the same dimensions. Again, FFmpeg's scale filter will help you with that. If the flow source is smaller than the bitmap source by an integer factor, it is scaled accordingly. Thus, a 320x180 video can be used as a flow source for a 1920x1080 bitmap. Non-integer scaling is not supported. The bitmap cannot be smaller than the flow.
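For instance, conforming the sources with FFmpeg's fps and scale filters could look like this (hypothetical file names, adjust framerate and dimensions to your sources):

```
# Match the bitmap framerate to a 30 fps flow source
ffmpeg -i bitmap.mp4 -filter:v fps=30 bitmap_30fps.mp4
# Upscale the flow source to the bitmap dimensions
ffmpeg -i flow.mp4 -vf scale=1920:1080 flow_1080p.mp4
```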
Output Filename. Unless the -re, --replace flag is passed, the program automatically generates unique output filenames by appending a numerical suffix, to avoid naming conflicts.
Output Format. Output codec can be specified with the -vc, --vcodec
argument. Default value is h264
. Possible values can be listed with the ffmpeg -codecs
command.
Flow Source | Bitmap Source | Result |
---|---|---|
River.mp4 | Deer.jpg | Output.mp4 |
The first step is to use a graphic editor to add a white frame around the deer. This creates the erasing effect when the white pixels move onto the colored ones.
With the editor, create a new image where you cut out the deer, color it white, and color everything else black. This creates the reset mask: while all pixels get displaced, the pixels forming the deer are healed again and again. A linear gradient can be used to make the head more resilient than the body.
Modified Bitmap Source | Reset Mask |
---|---|
Frame.png | Mask.png |
Then, all you need is this one command (assuming input files are in the assets folder, as in the basic repository structure):
transflow assets/River.mp4 -d forward -b assets/Frame.png -rm random -rk assets/Mask.png -ha 0:0:0:0 -o Output.mp4
- The first argument is the flow source, the river video.
- The -d argument switches the flow direction to forward, for a grainier result (see Flow Direction section).
- The -b argument specifies the bitmap source, the deer image with the white frame.
- The -rm argument sets the reset method to random (see Resetting Accumulator section).
- The -rk argument specifies the path to the mask image created earlier: the brighter its pixels are, the more likely they are to heal.
- The -ha argument is required for the effect to work. It forces the heatmap to always be zero, ensuring the reset effect occurs everywhere at every frame (see Accumulator Heatmap section).
A final touch is to control the flow scale, easing the start and the end with a peak flow in between. This can be achieved with the -ff argument (see Flow Transformations section). Simply add the following to the above command:
-ff "scale=max(0, -.0000061191*t**5+.0003680860*t**4-.0075620960*t**3+.0609758832*t**2-.0717236701*t+.0079797631)"
The formula is based on time t. The river video lasts about 30 seconds. Such formulas can be obtained via Lagrange interpolation; I published a hacky tool for that, the Online Lagrange Polynomial Editor.
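As a quick sanity check, you can sample the polynomial in Python before rendering; this is a minimal sketch that only assumes NumPy is installed:

```python
import numpy

# Easing polynomial from the -ff filter above, coefficients from degree 5 down to 0
coeffs = [-0.0000061191, 0.0003680860, -0.0075620960,
          0.0609758832, -0.0717236701, 0.0079797631]

# Sample the flow scale over the ~30-second river video
for t in range(0, 31, 5):
    print(f"t={t:2d}s scale={max(0, numpy.polyval(coeffs, t)):.3f}")
```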
By default, optical flow extraction relies on OpenCV's implementation of Gunnar Farneback's algorithm. Other methods can be used, such as Lukas-Kanade, Horn-Schunck or LiteFlowNet. To use a different method or change its parameters, the path to a JSON configuration file can be passed with the -cc, --cv-config argument. If the keyword window is passed to this argument, a Qt window shows up to tune parameters live, which combines nicely with live visualization.
Note
On Linux, you may have to install the libxcb-cursor0
package for the Qt window to work.
You may find sample config files in the configs folder. They use the following format:
```json
{
    "method": "farneback",
    "fb_pyr_scale": 0.5,
    "fb_levels": 3,
    "fb_winsize": 15,
    "fb_iterations": 3,
    "fb_poly_n": 5,
    "fb_poly_sigma": 1.2
}
```
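The configuration is then passed like so (the file name configs/farneback.json is an assumption matching the sample folder):

```
transflow flow.mp4 -cc configs/farneback.json -b image.jpg -o output.mp4
```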
Default method, fast, precise. Uses OpenCV implementation. To use it, add the "method": "farneback"
attribute in the config file.
Parameter | Default | Description |
---|---|---|
fb_pyr_scale | 0.5 | the image scale (<1) to build pyramids for each image; pyr_scale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one |
fb_levels | 3 | number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used |
fb_winsize | 15 | averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field |
fb_iterations | 3 | number of iterations the algorithm does at each pyramid level |
fb_poly_n | 5 | size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically poly_n=5 or 7 |
fb_poly_sigma | 1.2 | standard deviation of the Gaussian used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5 you can set poly_sigma=1.1, for poly_n=7 a good value would be poly_sigma=1.5 |
Slow, grainy (unless letting the algorithm converge, which is very slow). Custom implementation of the Horn-Schunck method. To use it, add the "method": "horn-schunck"
attribute in the config file.
Parameter | Default | Description |
---|---|---|
hs_alpha | 1 | regularization constant; larger values lead to smoother flow |
hs_iterations | 3 | maximum number of iterations the algorithm does; may stop earlier if convergence is achieved (see the hs_delta parameter); a large value (>100) is required for precise computations |
hs_decay | 0 | the initial flow estimation (before any iteration) is based on the previous flow scaled by hs_decay; set hs_decay=0 for no initialization; set hs_decay=1 for re-using the whole previous flow; set hs_decay=0.95 for a geometric decay; using hs_decay>0 introduces an inertia effect |
hs_delta | 1 | convergence threshold; stops when the L2 norm of the difference of the flows between two consecutive iterations drops below this value |
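A Horn-Schunck configuration aiming for convergence could look like this sketch (illustrative values, following the format above):

```json
{
    "method": "horn-schunck",
    "hs_alpha": 1,
    "hs_iterations": 200,
    "hs_delta": 0.5
}
```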
Slow if dense, (really) fast if sparse. Adapted from OpenCV implementation. To use it, add the "method": "lukas-kanade"
attribute in the config file.
Tip
Normally, Lukas-Kanade only computes optical flow for a fixed set of points. To obtain a dense field, we simply pass every pixel as a target, which is slow. Performance can be balanced with sparsity, by computing the flow only once every 2/3/4/… pixels and broadcasting the result to the whole macroblock. To do this, see the lk_step parameter.
Parameter | Default | Description |
---|---|---|
lk_window_size | 15 | size of the search window at each pyramid level |
lk_max_level | 2 | 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level), if set to 1, two levels are used, and so on; if pyramids are passed to input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel |
lk_step | 1 | size of macroblocks for estimating the flow; set lk_step=1 for a dense flow; set lk_step=16 for 16x16 macroblocks |
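A sparse Lukas-Kanade configuration might look like this (illustrative values; lk_step=4 computes the flow on 4x4 macroblocks for speed):

```json
{
    "method": "lukas-kanade",
    "lk_window_size": 15,
    "lk_max_level": 2,
    "lk_step": 4
}
```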
Requires the CuPy and PyTorch modules.
Statistical approach based on neural networks. Very accurate results, medium performance. Adapted from sniklaus/pytorch-liteflownet, which is adapted from twhui/LiteFlowNet, the official implementation of LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation by Tak-Wai Hui, Xiaoou Tang and Chen Change Loy (CVPR 2018).
Tip
Tested on Debian 12 (bookworm) and Windows 11 with Python 3.12, cupy==13.3.0
and torch==2.7.0
. Torch must be compiled with CUDA enabled: first download CUDA Toolkit then run the appropriate command from PyTorch Shortcuts. Make sure to select the correct CUDA version.
To speed up computation, you can use H264 motion vectors as the flow field. For this, you have to set the -mv, --use-mvs flag, and the video must be encoded in a specific way, to make sure frames are encoded relative to the previous frame only. Using FFmpeg, this can be achieved with the following command:
ffmpeg -i input.mp4 -refs 1 -bf 0 -g 9999 output.mp4
- -refs 1 forces only one parent per frame
- -bf 0 removes any B-frames (frames predicted from the future)
- -g 9999 reduces the number of I-frames (reference frames) in the video
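The re-encoded file can then be used as a flow source with the -mv flag, for instance (hypothetical file names, output.mp4 being the file produced by the FFmpeg command above):

```
transflow output.mp4 -mv -b image.jpg -o result.mp4
```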
Note
Using motion vectors this way can only produce a forward flow (see Flow Direction section).
OpenCV optical flow computation solves the following equation, where prev and next denote two consecutive frames and flow is the computed flow field (as stated in the OpenCV documentation):

prev(y, x) ~ next(y + flow(y, x)[1], x + flow(y, x)[0])
This is forward flow computation, ie. we know where to move pixels from the past frame to rebuild the next frame. This causes various issues: displacements must be rounded, and conflicts must be arbitrarily solved when a pixel leaves its place or when two collide.
Therefore, by default, the program uses backward flow computation, ie. it computes the flow from the next frame to the previous frame, solving:

next(y, x) ~ prev(y + flow(y, x)[1], x + flow(y, x)[0])
This way, we compute the origin of every pixel in the next frame, which solves most of these issues. Results look cleaner and more continuous with backward flow; forward flow looks grainier, dirtier.
You can specify the flow direction you want with -d, --direction {forward,backward}. If nothing is specified, backward is the default. If using motion vectors as the flow source, the direction is forced to forward.
Note
Forward direction is not compatible with the sum accumulator (see Accumulation Methods section).
By adding the -ef, --export-flow
flag, the program will save the extracted optical flow to a file with the .flow.zip
extension. This file can later be used as a flow source, instead of a video file:
transflow myvideo.flow.zip -b image.jpg -o output.mp4
If the -rf, --round-flow argument is specified, the output flow will be stored as integers instead of floats. This greatly reduces file size and processing time, at the cost of quantization artefacts.
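For instance, to render an output while exporting the rounded flow for later reuse (a sketch with hypothetical file names):

```
transflow myvideo.mp4 -ef -rf -b image.jpg -o output.mp4
transflow myvideo.flow.zip -b image.jpg -o output2.mp4
```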
The flow matrix can be modified by applying several filters, with the -ff, --flow-filters argument. You may specify a filter with the syntax name=arg1:arg2:arg3:…. Multiple filters can be specified; simply separate them with a semicolon (;).
Filter | Arguments | Description |
---|---|---|
scale | lambda(t) | Multiply the whole matrix by a value |
threshold | lambda(t) | Every flow vector with a magnitude below the threshold (in pixels) is set to 0 |
clip | lambda(t) | Every flow vector with a magnitude above the threshold (in pixels) is scaled down to this maximum magnitude |
polar | lambda(t,r,a), lambda(t,r,a) | Directly set flow values in polar coordinates; the first argument determines the radius, the second the angle, in radians |
Arguments can be floating-point constants, or Pythonic expressions based on some variables, evaluated at runtime. t is the current frame timestamp in seconds. r is the flow vector magnitude (the radius in polar coordinates). a is the flow vector angle in radians (the angle in polar coordinates). Python's math, random and numpy modules are available when evaluating the expression.
A few examples:
- -ff threshold=2 nullifies any vector with magnitude lower or equal to 2 pixels.
- -ff scale=2;clip=5 multiplies the flow by 2 and scales down vectors with magnitude greater than 5 pixels.
- -ff scale=1-math.exp(-.5*t) fakes a slow start in the first seconds.
- -ff polar=r:a does nothing.
- -ff polar=r:0 forces horizontal movement: to the left with backward flow, to the right with forward flow.
Note
The magnitude of flow vectors is computed as their L2 norm. Values are in pixels.
You can lock the flow (to mimic the P-frame duplication effect) with the -lo, --lock argument. Two modes are available, which can be specified with the -lm, --lock-mode argument.

- stay (default): the -lo argument is a list of couples of the form (time_start, duration). Timings correspond to output frame timestamps and are all expressed in seconds. When locked, the flow source is paused. When unlocked, it resumes from where it was paused. For instance, to lock the flow twice for one second, at two different moments: -lm stay -lo "(1,1),(4,1)".
- skip: the -lo argument is a Pythonic expression based on the variable t, the time in seconds. The math and random modules are available during evaluation. When locked, the flow source is still iterated. When unlocked, it skips all frames encountered while locked. The output of the expression is evaluated for truthiness: a falsy value (eg. 0) means the flow is unlocked, a truthy value (eg. 1) means it is locked. For instance, to lock the flow after two seconds for one second: -lm skip -lo "t>=2 and t<=3".
You can pass the path to an image file with the -fm, --flow-mask argument. The image luminance will be scaled between 0 (black) and 1 (white) and multiplied element-wise with the flow array.
Flow can be filtered with convolution kernels by specifying the path to a kernel file with the -fk, --flow-kernel
argument. A kernel file is a NumPy export of an ndarray
, in NumPy .npy
format. For instance, the following script generates a basic blur filter:
```python
import numpy

# Export a 3x3 box blur kernel (uniform weights summing to 1) as blur.npy
numpy.save("blur.npy", numpy.ones((3, 3)) / 9)
```
The script kernels.py can be used to generate basic kernels.
Multiple flow sources can be specified and merged. A first source must be specified as the first positional argument, as usual, and will be used to determine default metrics such as the output framerate. Extra flow sources can be specified using the -f, --extra-flow argument. You may provide one or more paths to valid flow inputs (video files, preprocessed archives, webcam indices). For each pair of frames, flows from all sources are merged according to the operation specified with the -sm, --flows-merging-function argument, which, by default, is the sum.
Merging Function | Description |
---|---|
absmax | Take the greatest (in magnitude) values across flows |
average | Scaled sum of all flows |
difference | Take the difference between the first flow and the sum of the others |
first | Only return the first flow (for debugging) |
maskbin | Multiply the first flow by the others, with their values mapped to either 0 (no movement) or 1 (some movement); a magnitude threshold of 0.2 pixels is used |
masklin | Multiply the first flow by the others, with their values converted to absolute values (so they serve as scaling factors) |
product | Product of all flows |
sum | Sum of all flows (default) |
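For instance, to average two flow sources (hypothetical file names):

```
transflow flow1.mp4 -f flow2.mp4 -sm average -b image.jpg -o output.mp4
```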
Flows are accumulated across time. You can specify the accumulation method with the -m, --acc-method argument. Possible values are map (default), sum, stack, crumble or canvas. The accumulator is in charge of transforming frames from the bitmap source.
Method | Description |
---|---|
map | Flows are applied to a quantized UV map. Looks grainy. |
sum | Flows (backward only, see Flow Direction section) are summed in a floating-point array. Looks smooth. |
stack | Pixels are considered as particles moving on a grid. Computation is VERY slow. |
crumble | Moved pixels leave an empty spot behind them. |
canvas | Moving pixels from the bitmap are pasted onto a canvas, and the flow is applied to the canvas. |
Background Color. The stack and crumble accumulators can contain empty spots, which are assigned a background color set with the -ab, --accumulator-background argument (white by default). If provided, the color must be expressed as a hex value. For instance, for a green chroma key, one may use the -ab 00ff00 argument.
Stack Parameters. An empty cell has the background color defined by the -ab, --accumulator-background argument. A non-empty cell has a color determined by the function passed to the -sc, --stack-composer argument. Possible values are:
- top: color of the last arrived pixel (default)
- add: all pixels in the stack are summed, and the value is clipped
- sub: subtractive composition, as in painting
- avg: average of all pixels in the stack
Canvas Parameters. The canvas accumulator has potential for generalizing a lot of features, though it is not ready yet. So far, you may specify the following arguments:
Argument | Default | Description |
---|---|---|
-ic, --initial-canvas | White | Either a hex color or a path to an image; defines the initial canvas image. |
-bm, --bitmap-mask | None | (Optional) Path to a black and white image. If set, only bitmap pixels from white regions in the mask are introduced; otherwise, every moving bitmap pixel is considered. |
-bi, --bitmap-introduction-flags | 5 | Bitwise combination of the flags listed below; eg. 1 pastes moving bitmap pixels onto the canvas, 2 pastes bitmap pixels from the mask, 3 does both. |
-cr, --crumble | False | Enable the crumbling effect: a moving pixel leaves an empty spot behind it. Moving pixels from the bitmap bypass this behavior. |
Bitmap Introduction Flags:
- MOTION (1): moving bitmap pixels are pasted onto the canvas.
- STATIC (2): all bitmap pixels from the bitmap mask are pasted onto the canvas.
- NO_OVERWRITE (4): moving bitmap pixels cannot be placed over a pixel introduced earlier, only in black space.

Default is MOTION | NO_OVERWRITE, ie. 5.
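For instance, a canvas setup pasting both moving and masked static pixels (MOTION | STATIC, ie. -bi 3) onto a black canvas might look like this (hypothetical file names):

```
transflow flow.mp4 -m canvas -b bitmap.mp4 -bm mask.png -bi 3 -ic 000000 -o output.mp4
```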
Accumulators perform some processing with the computed flow before applying it to the bitmap source. Most importantly, they compute a heatmap to know (roughly) which parts of the image are moving. This can be used for visualization (see Accumulator Visualization section) or to reduce computation time for reset effects (see Resetting Accumulator section).
The heatmap can either be discrete or continuous. The heatmap mode can be specified with the -hm, --heatmap-mode {discrete,continuous} argument. Default value is discrete. Depending on the mode, the value of the -ha, --heatmap-args argument is parsed differently.

Discrete Mode. The argument follows the form min:max:add:sub, all integers. The heatmap is a 2D array of integers, with values clipped between min and max (inclusive). At each frame, every pixel where the flow is non-zero is increased by add. Then, all pixels are decreased by sub. Default is 0:4:2:1.

Continuous Mode. The argument follows the form max:decay:threshold. The heatmap is a 2D array of floats. At each frame, the heatmap is multiplied by decay (<1). Then, values below threshold are set to 0. Then, the magnitude of the flow is added to the heatmap, and values are clipped between 0 and max.
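For instance, a continuous heatmap with max 10, decay 0.9 and threshold 0.1 (illustrative values) would be specified as:

```
transflow flow.mp4 -b image.jpg -hm continuous -ha 10:0.9:0.1 -o output.mp4
```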
Instead of outputting a transformed bitmap, you can visualize several internal streams. For this, you must NOT provide a -b, --bitmap
argument, and instead set one of the following flags:
Flag | Description |
---|---|
-oi, --output-intensity | Flow magnitude |
-oh, --output-heatmap | Accumulator heatmap (see Accumulator Heatmap section) |
-oa, --output-accumulator | Representation of the accumulator's internal state |
Scale. As values are absolute pixel lengths, they are scaled for rendering. Values for -oi and -oh are expected between 0 and 1. Values for -oa are expected between -1 and 1. Thus, you may specify a scaling factor with the -rs, --render-scale argument. Default is 0.1.
Colors. Color palettes are specified with the -rc, --render-colors argument, as hex values separated by commas. 1D renderings (flow magnitude and heatmap) are rendered using 2 colors, black and white by default. 2D renderings (accumulator internals) are rendered using 4 colors (think of it as north, east, south, west), which are, by default, yellow, blue, magenta and green.

Quantization. For 1D renderings, you can force the output to be binary (either one color or the other, without shades) by setting the -rb, --render-binary flag. This may help for some postprocessing operations.
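For instance, rendering the heatmap with a larger scaling factor (no -b argument, as explained above):

```
transflow flow.mp4 -oh -rs 0.5 -o heatmap.mp4
```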
Pixels can be reset (or healed, or reintroduced) over time. This does not work with the stack accumulator (see Accumulation Methods section). This setting is off by default, but you may enable it by specifying a reset mode with the -rm, --reset-mode argument. Possible values are off (default), random or linear. For the last two, two arguments can be specified to control their behavior:

- -ra, --reset-arg, a float (defaults to 0.1)
- -rk, --reset-mask, a path to an image
If specified, the reset mask image is loaded and converted to an array of floats by mapping luminance between 0 (black) and 1 (white).
Random Reset. At each frame, every pixel where the heatmap (see Accumulator Heatmap section) is at 0 rolls a random number between 0 and 1. If the value is below a threshold (either the -ra argument or the corresponding value in the reset mask, if passed), the pixel gets its original value back.
Linear Reset. At each frame, the difference between the current location of a pixel and its original location is computed. Static pixels are moved back in the direction of their original location with a speed controlled by the -ra
argument or the value in the reset mask array, if passed. 0 means no reset, 1 means instant reset.
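For instance, a slow linear reset (illustrative value for -ra):

```
transflow flow.mp4 -b image.jpg -rm linear -ra 0.05 -o output.mp4
```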
Note
Linear reset will not work with the crumble and canvas accumulators (see Accumulation Methods section).
The bitmap source argument (-b) can also be one of the following keywords:
Keyword | Description |
---|---|
color | Use a uniform random color image as bitmap |
noise | Use a random grey noise image as bitmap |
bwnoise | Use a random black and white noise image as bitmap |
cnoise | Use a random colored noise image as bitmap |
gradient | Use a random gradient of colors as bitmap |
You can also pass a hex color (eg. #00FF00) to use a specific uniform color image.
You can provide a seed with the -sd, --seed
parameter. It expects an integer. Multiple bitmap generations with the same seed will generate the same image.
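For instance, a reproducible black and white noise bitmap:

```
transflow flow.mp4 -b bwnoise -sd 42 -o output.mp4
```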
Flow or bitmap sources can use webcams. As webcams have no duration, you may have to press Ctrl+C to interrupt the program when done. You can specify the width and height of the stream to request from the webcam with the -sz, --size argument, with a value of the form WIDTHxHEIGHT. By default, the webcam's preferred option is selected.
OpenCV stream. Simplest option is to pass an integer as the source argument. This number should be the webcam index as referenced by OpenCV. You may use the list_cv2_webcams.py script to list available webcams.
PyAV stream. If you want to use webcam motion vectors as a flow source, you must specify the webcam using the avformat::name format. avformat should be a webcam API, often dshow on Windows or v4l2 on Linux. name is the webcam name as it appears in the output of the ffmpeg -list_devices true -f {dshow,v4l2} -i dummy command. Note that this name may require a prefix depending on the API you use. For instance, on Windows, DirectShow (dshow) requires strings of the form video=WEBCAM_NAME. Here is an example for viewing the intensity of the motion vectors that way:
transflow "dshow::video=Logitech Webcam C930e" -sz 1280x720 -mv -oi
Note
Motion Vectors in Webcam Streams. Most of the webcams I encountered do not encode video streams as H264, and thus do not provide the expected motion vector extraction feature. For instance, the above command simply yields a black screen. You can list the capabilities of your webcam with the following command:
ffmpeg -f dshow -list_options true -i video="Logitech Webcam C930e"
Tip
Though, I found a hacky solution that works at the cost of some latency (2-3 seconds, which may be acceptable depending on your objective). This only works on Linux, with the v4l2 backend and the v4l2-loopback software. Once installed:

- Run it with: sudo modprobe v4l2loopback
- Check which device it created with: ls /sys/devices/virtual/video4linux/
- Use FFmpeg to encode the raw webcam stream as H264 and send it to that device: ffmpeg -hide_banner -loglevel error -i /dev/video0 -vcodec libx264 -s "1280x720" -pix_fmt yuv420p -preset fast -r 30 -bf 0 -refs 1 -f v4l2 /dev/video4 (here /dev/video0 is the real webcam, and /dev/video4 is the v4l2-loopback device found with the previous command)
- In another terminal, run transflow with the v4l2-loopback device: transflow v4l2::/dev/video4 -sz 1280x720 -mv -oi
Altering the bitmap is a way to control how the output will look if the flow has the expansion property, ie. some pixels grow into regions that eventually cover large areas of the original bitmap. For instance, a video of a body of water in motion will drastically decrease the number of independent pixels. This means that forcing the color of those pixels will impact the whole output while only imperceptibly altering the input.
You can specify an alteration image with the -ba, --bitmap-alteration
argument. It takes a path to a PNG file. This image should be transparent (alpha channel set to 0) for pixels that should not be touched. Non-transparent pixels will be pasted onto the bitmap. For video bitmaps, the same alteration is applied on all frames.
In order to know in advance which pixels to alter, you may use the control.py script:
- First, use Transflow with the -cd flag to generate a mapping checkpoint (see Restart From Checkpoint section): transflow flow.mp4 -cd
- Then, call the control.py script and pass the checkpoint as argument. If you already know which bitmap image you're using, you might want to pass it as well: python control.py flow_12345.ckpt.zip image.jpg
- Edit the color of some sources and export the alteration image by hitting Ctrl+S (more details below).
- Use Transflow again and specify the bitmap alteration argument: transflow flow.mp4 -b image.jpg -ba flow_12345_1730000000000.png
In the control.py GUI, pixel sources appear in the header, ordered by decreasing target area: the first (top left) source covers the most area. Hovering over a source highlights its target zone. Clicking on it opens a color picker to edit its color. Other bindings:
- Left click: change color
- Right click: reset color (can be held down)
- Ctrl+R: reset all colors
- Ctrl+C: store the color currently pointed at in the buffer
- Ctrl+V: apply the buffered color to the region pointed at (can be held down)
- Ctrl+S: export alteration input as PNG
Note
Currently, this script only works with the Mapping Accumulator (see Accumulation Methods section). Also, if the loaded accumulator contains too many sources, the least important ones are ignored for better performance. This can be controlled with the -m, --max-sources-display argument.
Warning
This script is a hacky tool and may (will) not work in some contexts. Feel free to improve it and share your code!
If the -o argument is omitted, ie. no output file is provided, a window will show processed frames as they are produced. The window can be closed by pressing the ESC key. This allows for checking an output before going all-in on a one-hour render.
If the -o
argument is specified, you can still preview the output by setting the -po, --preview-output
flag.
During processing, a progress bar shows how many frames were encoded, and how many remain. You can interrupt this process at any time by pressing Ctrl+C once; the output video will be closed properly to produce a valid file.
If you set the -s, --safe flag, interrupting the processing will create a checkpoint file (with .ckpt.zip extension) alongside the flow video, to resume computation later on (see Restart From Checkpoint section). The same thing happens if an error occurs during processing. The -s, --safe flag also enables a history log file that stores commands and output files, to keep track of which file is which and how it was generated.
Checkpoints allow resuming computation at a given frame. A checkpoint contains the accumulator data at the frame it was exported at. This helps with lengthy processing or with recovering from errors. There are three ways of creating checkpoints:

- You can specify a frame interval with the -ce, --checkpoint-every argument, at which a checkpoint file (with .ckpt.zip extension) is exported.
- You can export a checkpoint for the last frame by setting the -cd, --checkpoint-end flag. This can be used with the control script (see Bitmap Alteration section).
- Checkpoints are automatically created when an interruption or an error occurs, if the -s, --safe flag is set (see Interrupting Processing section).

Checkpoint files can be passed as flow sources, as they contain data about which flow source was used and where to restart computation. Other arguments must be passed again to resume computation with the exact same settings.
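A sketch of the workflow (the checkpoint file name is illustrative; the actual name is generated by the program):

```
transflow flow.mp4 -b image.jpg -o output.mp4 -ce 100
transflow flow_12345.ckpt.zip -b image.jpg -o output.mp4
```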
The flow source can be seeked with the -ss, --seek argument. The value must be of the form HH:MM:SS or HH:MM:SS.FFF. The bitmap source can be seeked in the same way, with the -bss, --bitmap-seek argument.
A slice of the flow source can be selected with the -t, --duration argument, expressed as a duration from the starting point, in the same format as seeking timestamps. You can also specify an end timestamp instead, with the -to, --to argument and a value of the same format. The intent is similar to FFmpeg's seeking options.
If you want to use the same flow input multiple times in a loop, you may specify the -r, --repeat argument with an integer value for how many times you want to use the flow. Default value is 1 (ie. no repetition). If this value is 0, the flow source loops forever, until either the bitmap source is exhausted or the user interrupts the execution. The same feature is available for the bitmap source (if a video), with the -br, --bitmap-repeat argument; it has the same behavior, default value is 1, and 0 means infinite loop.
argument. It has the same behavior, default value is 1, 0 means infinite loop.