API Reference

Plug-AI as an Application

Plug-AI can be used as a standalone application by calling it with the parameters described below:

python -m plug_ai --kwargs

Below are the arguments for plug_ai retrieved using:

$ python -m plug_ai -h
usage: __main__.py [-h] [--dataset DATASET] [--dataset_kwargs DATASET_KWARGS] [--preprocess PREPROCESS] [--preprocess_kwargs PREPROCESS_KWARGS] [--batch_size BATCH_SIZE] [--train_ratio TRAIN_RATIO]
                   [--val_ratio VAL_RATIO] [--limit_sample LIMIT_SAMPLE] [--shuffle SHUFFLE] [--drop_last DROP_LAST] [--model MODEL] [--model_kwargs MODEL_KWARGS] [--loop LOOP] [--loop_kwargs LOOP_KWARGS]
                   [--nb_epoch NB_EPOCH] [--device DEVICE] [--report_log REPORT_LOG] [--criterion CRITERION] [--criterion_kwargs CRITERION_KWARGS] [--metric METRIC] [--metric_kwargs METRIC_KWARGS]
                   [--optimizer OPTIMIZER] [--optimizer_kwargs OPTIMIZER_KWARGS] [--lr_scheduler LR_SCHEDULER] [--lr_scheduler_kwargs LR_SCHEDULER_KWARGS] [--config_file CONFIG_FILE]
                   [--export_config EXPORT_CONFIG] [--mode MODE] [--seed SEED] [--verbose VERBOSE]

optional arguments:
  -h, --help            show this help message and exit

Data arguments:
  --dataset DATASET     A dataset name from the list of datasets supported by Plug_ai
  --dataset_kwargs DATASET_KWARGS
                        The dictionary of arguments required by the dataset
  --preprocess PREPROCESS
                        A valid preprocessing pipeline name provided by plug_ai
  --preprocess_kwargs PREPROCESS_KWARGS
                        A dictionary of arguments passed to the preprocessing pipeline
  --batch_size BATCH_SIZE
                        Number of samples to load per batch
  --train_ratio TRAIN_RATIO
                        Float : The fraction of the dataset to use for training, the rest will be used for final evaluation
  --val_ratio VAL_RATIO
                        Float : The fraction of the train set to use for validation (hp tuning)
  --limit_sample LIMIT_SAMPLE
                        Index value at which to stop when considering the dataset
  --shuffle SHUFFLE     Boolean that indicates if the dataset should be shuffled at each epoch
  --drop_last DROP_LAST
                        Boolean that indicates if the last batch of an epoch should be left unused when incomplete.

Model arguments:
  --model MODEL         A model from the list of supported models, or a callable that instantiates a PyTorch/MONAI model
  --model_kwargs MODEL_KWARGS
                        All arguments that should be passed to the model callable

Execution arguments:
  --loop LOOP
  --loop_kwargs LOOP_KWARGS
  --nb_epoch NB_EPOCH
  --device DEVICE
  --report_log REPORT_LOG
  --criterion CRITERION
  --criterion_kwargs CRITERION_KWARGS
  --metric METRIC
  --metric_kwargs METRIC_KWARGS
  --optimizer OPTIMIZER
  --optimizer_kwargs OPTIMIZER_KWARGS
  --lr_scheduler LR_SCHEDULER
  --lr_scheduler_kwargs LR_SCHEDULER_KWARGS

Global arguments:
  --config_file CONFIG_FILE
                        Path : The config file to set parameters more easily
  --export_config EXPORT_CONFIG
                        Path : If given, save the full config (combining CLI and config file) at the given path
  --mode MODE           String : A mode between "TRAINING", "EVALUATION" and "INFERENCE"
  --seed SEED           Int : If given, seeds the random number generators for reproducibility
  --verbose VERBOSE     String or None : The desired verbosity level. None, "RESTRICTED" or "FULL"

You can provide all of the arguments at once through a configuration file using --config_file. When an argument is given both in the config file and on the command line, the command-line value takes precedence. This is useful when you work from a general config file and only want to test variations of a few parameters.
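For instance, a general config file could carry the shared settings while individual parameters are varied on the command line. The YAML format and the values below are assumptions for illustration; only the parameter names come from the argument list above.

```yaml
# Hypothetical config file (config.yaml); the exact format accepted by
# --config_file is not specified here, YAML is assumed for illustration.
dataset: MyDataset        # placeholder name, not a real catalogue entry
batch_size: 4
nb_epoch: 10
mode: TRAINING
```

Calling `python -m plug_ai --config_file config.yaml --batch_size 8` would then run with a batch size of 8, since console values take precedence over the file.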

Plug-AI as a package

Plug-AI can also be imported and used in your own Python code. This is useful if you want to use a module that is not in the catalogue of features while still reusing parts of plug_ai.

In this situation, you would import the three Plug-AI managers:

  • dataset manager

  • model manager

  • execution manager

You can provide many elements to the managers yourself.

For the dataset:

  • a dataset, as a class inheriting from PyTorch's Dataset

  • transformations, as a callable
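As a sketch of these two pieces, the names below are hypothetical. A real dataset would inherit from `torch.utils.data.Dataset`; a plain class implementing the same `__len__`/`__getitem__` protocol is shown here so the example runs without any dependencies.

```python
# Sketch of a custom dataset plus a transform callable, the two elements
# you could hand to Plug-AI's dataset manager. In a real project the
# class would subclass torch.utils.data.Dataset.

def to_float(sample):
    """A transform callable: here it simply casts every value to float."""
    return [float(x) for x in sample]

class CustomDataset:
    def __init__(self, samples, transform=None):
        self.samples = samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        sample = self.samples[idx]
        if self.transform is not None:
            sample = self.transform(sample)
        return sample

dataset = CustomDataset([[1, 2], [3, 4]], transform=to_float)
print(len(dataset))   # 2
print(dataset[0])     # [1.0, 2.0]
```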

For the model:

  • a Pytorch model

For the execution manager:

  • a training loop

  • a training step

  • an optimizer

  • a loss

  • a criterion
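To make these roles concrete, here is a minimal pure-Python sketch of the callables the execution manager expects: a loss, an optimizer update, a training step and a training loop. The names are hypothetical, and in Plug-AI these would operate on PyTorch models and optimizers rather than a single scalar weight.

```python
# Illustrative sketch (no PyTorch): fitting y = 2x with one weight.

def mse_loss(pred, target):            # the "loss" / criterion
    return (pred - target) ** 2

def sgd_update(w, grad, lr=0.1):       # the "optimizer": one SGD update
    return w - lr * grad

def train_step(w, x, y, lr=0.1):       # a "training step" on sample (x, y)
    pred = w * x
    grad = 2 * (pred - y) * x          # d/dw of (w * x - y) ** 2
    return sgd_update(w, grad, lr)

def train_loop(w, data, nb_epoch=50):  # the "training loop"
    for _ in range(nb_epoch):
        for x, y in data:
            w = train_step(w, x, y)
    return w

w = train_loop(0.0, [(1.0, 2.0), (2.0, 4.0)])
print(round(w, 2))  # converges toward 2.0
```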

Example: suppose you have a custom dataset that you want to use with the rest of Plug-AI. You would instantiate your dataset and pass it to the dataset manager, then set up the model and execution managers as usual.

custom_dataset = ...
dataset_manager = plug...(dataset=custom_dataset,
                          ...)
model_manager = ...
execution_manager = ...