Run Inference

On any of the Docker containers, you can run the sample inference script (shown below as <inference_script>) to get an output video:

python3 <inference_script> --device DEVICE --input_video INPUT_VIDEO --out_dir OUT_DIR \
                [--model_path MODEL_PATH] [--label_map LABEL_MAP] [--threshold THRESHOLD] \
                [--input_width INPUT_WIDTH] [--input_height INPUT_HEIGHT] \
                [--out_width OUT_WIDTH] [--out_height OUT_HEIGHT]
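
For example, on the x86 container, an invocation might look like this (all paths and sizes here are illustrative placeholders):

    python3 <inference_script> --device x86 --input_video /data/videos/sample.mp4 \
        --out_dir /data/output --model_path /data/models/my_detector \
        --label_map utils/mscoco_label_map.pbtxt --threshold 0.5 \
        --input_width 300 --input_height 300 --out_width 1280 --out_height 720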


DEVICE should be one of x86, edgetpu, or jetson.

INPUT_VIDEO is the path to the input video file.

OUT_DIR is a directory in which the script will save the output video file.

MODEL_PATH is the path to the model file or directory. For x86 devices, it should be a directory that contains the saved_model directory; for edgetpu devices, a compiled TFLite file; and for jetson devices, a TensorRT engine file.

LABEL_MAP is a pbtxt file that contains a series of mappings connecting class IDs to the corresponding class names. For example, if your detector predicts 3 as an object's label, the label map lets the script translate that ID into its class name. If you pass --model_path, you should pass this argument too. A sample file can be found at utils/mscoco_label_map.pbtxt. If you use our Adaptive Learning service, the label map ships with the trained model files.
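
For illustration, label map entries follow the TensorFlow Object Detection API pbtxt format; the entry that maps class ID 3 to its class name in the sample COCO label map looks roughly like this:

    item {
      id: 3
      display_name: "car"
    }

With this entry, a detection labeled 3 is rendered as "car" in the output video.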

THRESHOLD is the detector's confidence threshold: detections with scores below this value are discarded, as sketched below.
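
As a minimal sketch of what the threshold does (the variable names are illustrative, not the script's actual internals):

    import numpy as np

    threshold = 0.5
    scores = np.array([0.9, 0.4, 0.75])        # per-detection confidence scores
    boxes = np.array([[0.1, 0.1, 0.5, 0.5],
                      [0.2, 0.2, 0.6, 0.6],
                      [0.3, 0.3, 0.7, 0.7]])   # normalized [ymin, xmin, ymax, xmax]

    keep = scores >= threshold                 # boolean mask of confident detections
    boxes, scores = boxes[keep], scores[keep]  # the 0.4 detection is dropped here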

INPUT_WIDTH and INPUT_HEIGHT are the width and height of the model's input.

OUT_WIDTH and OUT_HEIGHT are the width and height of the output video; the sketch below shows how both sizes come into play.
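
Here is a hedged per-frame sketch using OpenCV (the sample script's actual pipeline may differ): each frame is resized to the model's input size for detection, and the annotated frame is resized again to the requested output resolution before being written.

    import cv2

    input_width, input_height = 300, 300   # model input size (example values)
    out_width, out_height = 1280, 720      # output video resolution (example values)

    cap = cv2.VideoCapture("input.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"XVID"),
                             fps, (out_width, out_height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        model_input = cv2.resize(frame, (input_width, input_height))  # what the detector sees
        # ... run the detector on model_input and draw boxes onto frame ...
        writer.write(cv2.resize(frame, (out_width, out_height)))      # what gets saved

    cap.release()
    writer.release()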