Configuration Options¶
OpenKiwi supports an extensive range of options through its command line interface (options can also be set in a configuration file; see the note below). Every invocation starts with a `<pipeline>` command.
For the available pipelines see: CLI
Note: Args that start with '--' (e.g. --save-config) can also be set in a config file (specified via --config). The config file uses YAML syntax and must represent a YAML 'mapping' (for details, see http://learn.getgrav.org/advanced/yaml). If an arg is specified in more than one place, command line values override config file values, which override defaults.
For pipeline-specific options, see the documentation page of each pipeline.
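As a minimal sketch of the config-file mechanism (the train pipeline, file names, and values below are illustrative assumptions; config keys are assumed to mirror the long option names without the leading dashes):

# Write a small YAML config; keys mirror the long option names
cat > config.yaml <<EOF
seed: 42
gpu-id: 0
output-dir: runs/example
EOF

# Run the (assumed) train pipeline; flags given on the command line
# override values from the config file, which override the defaults
kiwi train --config config.yaml --seed 7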
General Options¶
These options are pipeline-independent and available in every pipeline. They are grouped into the following categories: random, gpu, I/O, save/load, and logging.
usage: kiwi <pipeline> [-h] [--seed SEED] [--gpu-id GPU_ID]
random¶
--seed | Random seed. Default: 42 |
gpu¶
--gpu-id | Use CUDA on the listed devices |
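For example, a run can be made reproducible and pinned to a specific CUDA device directly from the command line (the train pipeline and file names are assumptions):

# Fix the random seed and select GPU 0
kiwi train --config config.yaml --seed 42 --gpu-id 0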
usage: kiwi <pipeline> [-h] [--save-config SAVE_CONFIG] [-d] [-q]
I/O¶
--save-config | Save parsed configuration and arguments to the specified file |
-d, --debug | Output additional messages. Default: False |
-q, --quiet | Only output warning and error messages. Default: False |
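A hedged example of saving the fully parsed configuration for later reuse (pipeline and file names are assumptions):

# Parse config.yaml plus any command-line overrides, and write the
# resulting configuration to full_config.yaml; -q keeps the output terse
kiwi train --config config.yaml --save-config full_config.yaml -q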
usage: kiwi <pipeline> [-h] [--load-model LOAD_MODEL] [--save-data SAVE_DATA]
[--load-data LOAD_DATA] [--load-vocab LOAD_VOCAB]
save-load¶
--load-model | Directory containing a model.torch file to be loaded |
--save-data | Output dir for saving the preprocessed data files. |
--load-data | Input dir for loading the preprocessed data files. |
--load-vocab | Directory containing a vocab.torch file to be loaded |
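These options allow preprocessing to be done once and reused across runs. A sketch, assuming a train pipeline and placeholder directories:

# First run: preprocess the data and save it to a directory
kiwi train --config config.yaml --save-data preprocessed/

# Later runs: skip preprocessing by loading the saved data and vocabulary
kiwi train --config config.yaml --load-data preprocessed/ --load-vocab vocab/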
usage: kiwi <pipeline> [-h] [--log-interval LOG_INTERVAL]
[--mlflow-tracking-uri MLFLOW_TRACKING_URI]
[--experiment-name EXPERIMENT_NAME]
[--run-name RUN_NAME] [--run-uuid RUN_UUID]
[--output-dir OUTPUT_DIR]
[--mlflow-always-log-artifacts [MLFLOW_ALWAYS_LOG_ARTIFACTS]]
Logging¶
--log-interval | Log every k batches. Default: 100 |
--mlflow-tracking-uri | If using MLflow, logs model parameters, training metrics, and artifacts (files) to this MLflow server. Uses the localhost by default. Default: "mlruns/" |
--experiment-name | If using MLflow, it will log this run under this experiment name, which appears as a separate section in the UI. It will also be used in some messages and files. |
--run-name | If using MLflow, it will log this run under this run name, which appears as a separate item in the experiment. |
--run-uuid | If specified, MLflow/Default Logger will log metrics and params under this ID. If it exists, the run status will change to running. This ID is also used for creating this run’s output directory. (Run ID must be a 32-character hex string) |
--output-dir | Output several files for this run under this directory. If not specified, a directory under "runs" is created or reused based on the Run UUID. Files might also be sent to MLflow depending on the --mlflow-always-log-artifacts option. |
--mlflow-always-log-artifacts | If using MLflow, always log (send) artifacts (files) to the MLflow artifacts URI. By default (false), artifacts are only logged if MLflow is a remote server (as specified by the --mlflow-tracking-uri option). All generated files are always saved in --output-dir, so it might be considered redundant to copy them to a local MLflow server. If this is not the case, set this option to true. Default: False |
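A hedged example of logging a training run to an MLflow tracking server (the pipeline name, server address, and experiment/run names are placeholders):

# Log parameters, metrics, and (for a remote server) artifacts to MLflow
kiwi train --config config.yaml \
    --mlflow-tracking-uri http://mlflow.example.com:5000 \
    --experiment-name my-experiment \
    --run-name baseline \
    --output-dir runs/baseline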