Add a method

A method is a specific technique for solving the task's problem. It is compared against the baseline methods and the other methods in the task to determine which approach works best, depending on the type of dataset.

This guide will show you how to create a new Viash component. In the following, examples are shown for both Python and R. Note that the Label Projection task is used throughout this guide, so make sure to replace any occurrence of "label_projection" with your task of interest.

Tip

Make sure you have followed the “Getting started” guide.

Step 1: Create a new component

Use the create_component component to start creating a new method.

viash run src/common/create_component/config.vsh.yaml -- \
  --task label_projection \
  --type method \
  --name my_method_py \
  --language python

This will create a new folder at src/label_projection/methods/my_method_py containing a Viash config and a script.

src/label_projection/methods/my_method_py
    ├── script.py                    Script for running the method.
    ├── config.vsh.yaml              Config file for method.
    └── ...                          Optional additional resources.
viash run src/common/create_component/config.vsh.yaml -- \
  --task label_projection \
  --type method \
  --name my_method_r \
  --language r

This will create a new folder at src/label_projection/methods/my_method_r containing a Viash config and a script.

src/label_projection/methods/my_method_r
    ├── script.R                     Script for running the method.
    ├── config.vsh.yaml              Config file for method.
    └── ...                          Optional additional resources.

Change the --name to a unique name for your method. It must match the regex [a-z][a-z0-9_]* (snake case).

  • A config file contains metadata of the component and the dependencies required to run it. In steps 2 and 3 we will fill in the required information.
  • A script contains the code to run the method. In step 4 we will edit the script.
Tip

Use the command viash run src/common/create_component/config.vsh.yaml -- --help to get information on all of the parameters of the create_component component.

Step 2: Fill in metadata

The Viash config contains metadata about your method, the script used to run it, and the required dependencies.

Generated config file

This is what the config.vsh.yaml generated by the create_component component looks like:

Contents of config.vsh.yaml
# The API specifies which type of component this is.
# It contains specifications for:
#   - The input/output files
#   - Common parameters
#   - A unit test
__merge__: ../../api/comp_method.yaml

functionality:
  name: my_method_py

  # Metadata for your component (required)
  info:
    pretty_name: My Method Py
    summary: 'FILL IN: A one sentence summary of this method.'
    description: 'FILL IN: A (multiline) description of how this method works.'
    reference: bibtex_reference_key
    documentation_url: https://url.to/the/documentation
    repository_url: https://github.com/organisation/repository
    preferred_normalization: log_cpm

  # Component-specific parameters (optional)
  # arguments:
  #   - name: "--n_neighbors"
  #     type: "integer"
  #     default: 5
  #     description: Number of neighbors to use.

  # Resources required to run the component
  resources:
    # The script of your component
    - type: python_script
      path: script.py
platforms:
  - type: docker
    image: python:3.10
    # Add custom dependencies here
    setup:
      - type: python
        pypi: anndata~=0.8.0
  - type: nextflow
    directives:
      label: [midmem, midcpu]
Contents of config.vsh.yaml
# The API specifies which type of component this is.
# It contains specifications for:
#   - The input/output files
#   - Common parameters
#   - A unit test
__merge__: ../../api/comp_method.yaml

functionality:
  name: my_method_r

  # Metadata for your component (required)
  info:
    pretty_name: My Method R
    summary: 'FILL IN: A one sentence summary of this method.'
    description: 'FILL IN: A (multiline) description of how this method works.'
    reference: bibtex_reference_key
    documentation_url: https://url.to/the/documentation
    repository_url: https://github.com/organisation/repository
    preferred_normalization: log_cpm

  # Component-specific parameters (optional)
  # arguments:
  #   - name: "--n_neighbors"
  #     type: "integer"
  #     default: 5
  #     description: Number of neighbors to use.

  # Resources required to run the component
  resources:
    # The script of your component
    - type: r_script
      path: script.R
platforms:
  - type: docker
    image: eddelbuettel/r2u:22.04
    # Add custom dependencies here
    setup:
      - type: apt
        packages:
          - libhdf5-dev
          - libgeos-dev
          - python3
          - python3-pip
          - python3-dev
          - python-is-python3
      - type: python
        pypi: anndata~=0.8.0
      - type: r
        cran: anndata
  - type: nextflow
    directives:
      label: [midmem, midcpu]

Required metadata fields

Please edit the functionality.info section in the config file to fill in the necessary metadata.

functionality.name

A unique identifier for the method. Must be written in snake case. Example: my_new_method.

functionality.info.pretty_name

A label for the method used for visualisations and documentation. Example: "My new method".

functionality.info.summary

A one sentence summary of the method. Used for creating short overviews of the components in a task.

functionality.info.description

An explanation for how the method works. Used for creating reference documentation of a task.

functionality.info.reference

A bibtex reference key to the paper where the method is described.

functionality.info.documentation_url

The url to the documentation of the method.

functionality.info.repository_url

The repository url for the method.

functionality.info.preferred_normalization

Which normalization method a component prefers. Possible values are l1_sqrt, log_cpm, log_scran_pooling, sqrt_cpm. Each value corresponds to a normalization component in the directory src/datasets/normalization.

__merge__

The file specified in this field contains information regarding the input and output arguments of the component, as well as a unit test to ensure that the component is functioning properly. Normally you don’t need to change this if you gave the right arguments to the create_component component.
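
For illustration, a filled-in metadata section might look like the sketch below. All values here are hypothetical placeholders for a fictional method; replace them with the details of your own method and its publication:

functionality:
  name: my_knn_method
  info:
    pretty_name: My KNN Method
    summary: 'Classifies test cells by the majority label of their nearest neighbours in PCA space.'
    description: |
      A k-nearest-neighbour classifier is fit on the PCA embedding of the
      training data and used to transfer labels to the test data.
    reference: doe_2023_knn_label_transfer
    documentation_url: https://example.com/my_knn_method/docs
    repository_url: https://github.com/example-org/my_knn_method
    preferred_normalization: log_cpm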

Step 3: Add dependencies

Each component has its own set of dependencies, because different components might have conflicting dependencies.

Update the setup definition in the platforms section of the config file. This section describes the packages that need to be installed in the Docker image and are required for your method to run. Note that both anndata~=0.8.0 and pyyaml are necessary Python package dependencies.

Please check out this guide for more information on how to add extra package dependencies.
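
For example, if your Python method additionally needs scikit-learn, the generated Docker setup could be extended as sketched below. The scikit-learn entry is only an illustration; list whatever packages your method actually imports:

platforms:
  - type: docker
    image: python:3.10
    setup:
      - type: python
        pypi:
          - anndata~=0.8.0
          - pyyaml
          - scikit-learn
  - type: nextflow
    directives:
      label: [midmem, midcpu]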

Note

After making changes to the component's dependencies, you will need to rebuild the Docker container as follows:

viash run src/label_projection/methods/my_method_py/config.vsh.yaml -- \
  ---setup cachedbuild
Output
[notice] Building container 'ghcr.io/openproblems-bio/label_projection/methods/my_method_py:dev' with Dockerfile

Step 4: Edit script

A component’s script typically has five sections:

  1. Imports and libraries
  2. Argument values
  3. Read input data
  4. Generate results
  5. Write output data to file

This is what the script generated by the create_component component looks like:

Contents of script.py
import anndata as ad

## VIASH START
par = {
  'input_train': 'resources_test/label_projection/pancreas/train.h5ad',
  'input_test': 'resources_test/label_projection/pancreas/test.h5ad',
  'output': 'output.h5ad'
}
meta = {
  'functionality_name': 'my_method_py'
}
## VIASH END

print('Reading input files', flush=True)
input_train = ad.read_h5ad(par['input_train'])
input_test = ad.read_h5ad(par['input_test'])

print('Preprocess data', flush=True)
# ... preprocessing ...

print('Train model', flush=True)
# ... train model ...

print('Generate predictions', flush=True)
# ... generate predictions ...

print("Write output AnnData to file", flush=True)
output = ad.AnnData(
  obs={
    'label_pred': obs_label_pred
  },
  uns={
    'dataset_id': input_train.uns['dataset_id'],
    'normalization_id': input_train.uns['normalization_id'],
    'method_id': meta['functionality_name']
  }
)
output.write_h5ad(par['output'], compression='gzip')
Contents of script.R
library(anndata)

## VIASH START
par <- list(
  input_train = "resources_test/label_projection/pancreas/train.h5ad",
  input_test = "resources_test/label_projection/pancreas/test.h5ad",
  output = "output.h5ad"
)
meta <- list(
  functionality_name = "my_method_r"
)
## VIASH END

cat("Reading input files\n")
input_train <- anndata::read_h5ad(par[["input_train"]])
input_test <- anndata::read_h5ad(par[["input_test"]])

cat("Preprocess data\n")
# ... preprocessing ...

cat("Train model\n")
# ... train model ...

cat("Generate predictions\n")
# ... generate predictions ...

cat("Write output AnnData to file\n")
output <- anndata::AnnData(
  obs = list(
    label_pred = obs_label_pred
  ),
  uns = list(
    dataset_id = input_train$uns[["dataset_id"]],
    normalization_id = input_train$uns[["normalization_id"]],
    method_id = meta[["functionality_name"]]
  )
)
output$write_h5ad(par[["output"]], compression = "gzip")

The required sections are explained here in more detail:

a. Imports and libraries

In the top section of the script you can define which packages/libraries the method needs. If you add a new or different package, add the dependency to the setup field in config.vsh.yaml (see above).

b. Argument block

The Viash code block is designed to facilitate prototyping by letting you run the script directly with python script.py (or Rscript script.R for R users). Note that anything between “VIASH START” and “VIASH END” will be removed and replaced with a CLI argument parser when the component is built by Viash.

Here, the par dictionary contains all the arguments defined in the config.vsh.yaml file (including those from the __merge__ file). When adding an argument to the par dict, also add it to the arguments section of config.vsh.yaml, as illustrated in the sketch below.
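
For example, to expose the commented-out --n_neighbors argument from the generated config, uncomment that entry in the arguments section of config.vsh.yaml and add a matching value to the par dict so the script still runs stand-alone during prototyping. This is only an illustrative sketch; the parameter name and default are hypothetical:

## VIASH START
par = {
  'input_train': 'resources_test/label_projection/pancreas/train.h5ad',
  'input_test': 'resources_test/label_projection/pancreas/test.h5ad',
  'output': 'output.h5ad',
  'n_neighbors': 5  # matches the default in config.vsh.yaml
}
meta = {
  'functionality_name': 'my_method_py'
}
## VIASH END

print(f"Using {par['n_neighbors']} neighbors", flush=True)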

c. Read input data

This section reads any input AnnData files passed to the component.

d. Generate results

This is the most important section of your script, as it defines the core functionality provided by the component. It processes the input data to create results for the particular task at hand.
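
As a concrete illustration, the placeholder sections of the Python script above could be filled in with a simple k-nearest-neighbours classifier on the precomputed PCA embedding. This is only a sketch: it assumes scikit-learn has been added to the component's dependencies and that the inputs contain an X_pca embedding and a label column, as in the test resources shown in step 6:

from sklearn.neighbors import KNeighborsClassifier

print('Train model', flush=True)
# Fit a classifier on the PCA embedding and the known training labels
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(input_train.obsm['X_pca'], input_train.obs['label'])

print('Generate predictions', flush=True)
# Predicted labels for the test cells, used in the output section below
obs_label_pred = classifier.predict(input_test.obsm['X_pca'])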

e. Write output data to file

The output is stored in an AnnData object and then written to an .h5ad file. The format is specified by the API file referenced in the __merge__ field of the config file.

Step 5: Add resources (optional)

It is possible to add additional resources such as a file containing helper functions or other resources. Please visit this page for more information on how to do this.
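
As a brief sketch of what this can look like: a hypothetical helpers.py file could be bundled by listing it as an extra resource in config.vsh.yaml and importing it from the script. The helpers.py file and its normalize_counts() function are made up for this example, and the import relies on Viash copying additional resources next to the built script (available via meta['resources_dir']):

# In config.vsh.yaml, under functionality:
  resources:
    - type: python_script
      path: script.py
    - type: file
      path: helpers.py

# In script.py:
import sys
sys.path.append(meta['resources_dir'])
from helpers import normalize_counts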

Step 6: Try component

Your component’s API file contains the necessary unit tests to check whether your component works and whether its output is in the correct format.

You can test your component by using the following command:

viash test src/label_projection/methods/my_method_py/config.vsh.yaml
Output
Running tests in temporary directory: '/tmp/viash_test_knn7724992926334110594'
====================================================================
+/tmp/viash_test_knn7724992926334110594/build_executable/knn ---verbosity 6 ---setup cachedbuild
[notice] Building container 'ghcr.io/openproblems-bio/label_projection/methods/knn:test' with Dockerfile
[info] Running 'docker build -t ghcr.io/openproblems-bio/label_projection/methods/knn:test /tmp/viash_test_knn7724992926334110594/build_executable -f /tmp/viash_test_knn7724992926334110594/build_executable/tmp/dockerbuild-knn-RUQLSY/Dockerfile'
Sending build context to Docker daemon  39.94kB

Step 1/7 : FROM python:3.10
 ---> fc98d03e6037
Step 2/7 : RUN pip install --upgrade pip &&   pip install --upgrade --no-cache-dir "scikit-learn" "pyyaml" "anndata~=0.8.0"
 ---> Using cache
 ---> 1d35b64eb218
Step 3/7 : LABEL org.opencontainers.image.description="Companion container for running component label_projection/methods knn"
 ---> Using cache
 ---> f9833a51c1bc
Step 4/7 : LABEL org.opencontainers.image.created="2023-05-06T00:08:35Z"
 ---> Running in dd24ef37ae9c
Removing intermediate container dd24ef37ae9c
 ---> b283cb6e633d
Step 5/7 : LABEL org.opencontainers.image.source="https://github.com/openproblems-bio/openproblems-v2"
 ---> Running in 904956427efb
Removing intermediate container 904956427efb
 ---> 1133431e5786
Step 6/7 : LABEL org.opencontainers.image.revision="9438b8ad0cdd9cd2ed3ba6a01d0b4f075c059d64"
 ---> Running in a0fa67f1e5ef
Removing intermediate container a0fa67f1e5ef
 ---> fcd16d92a287
Step 7/7 : LABEL org.opencontainers.image.version="test"
 ---> Running in bd63b0e7032b
Removing intermediate container bd63b0e7032b
 ---> bdea220a417f
Successfully built bdea220a417f
Successfully tagged ghcr.io/openproblems-bio/label_projection/methods/knn:test
====================================================================
+/tmp/viash_test_knn7724992926334110594/test_check_method_config/test_executable
Load config data
Check general fields
Check info fields
All checks succeeded!
====================================================================
+/tmp/viash_test_knn7724992926334110594/test_run_and_check_adata/test_executable
>> Checking whether input files exist
>> Running script as test
Load input data
Fit to train data
Predict on test data
Write output to file
>> Checking whether output file exists
>> Reading h5ad files and checking formats
Reading and checking input_train
  AnnData object with n_obs × n_vars = 346 × 419
    obs: 'label', 'batch'
    var: 'hvg', 'hvg_score'
    uns: 'dataset_id', 'normalization_id'
    obsm: 'X_pca'
    layers: 'counts', 'normalized'
Reading and checking input_test
  AnnData object with n_obs × n_vars = 154 × 419
    obs: 'batch'
    var: 'hvg', 'hvg_score'
    uns: 'dataset_id', 'normalization_id'
    obsm: 'X_pca'
    layers: 'counts', 'normalized'
Reading and checking output
  AnnData object with n_obs × n_vars = 154 × 419
    obs: 'batch', 'label_pred'
    var: 'hvg', 'hvg_score'
    uns: 'dataset_id', 'method_id', 'normalization_id'
    obsm: 'X_pca'
    layers: 'counts', 'normalized'
All checks succeeded!
====================================================================
SUCCESS! All 2 out of 2 test scripts succeeded!
Cleaning up temporary directory

Visit “Run tests” for more information on running unit tests and how to interpret common error messages.

You can also run your component on local files using the viash run command. For example:

viash run src/label_projection/methods/my_method_py/config.vsh.yaml -- \
  --input_train resources_test/label_projection/pancreas/train.h5ad \
  --input_test resources_test/label_projection/pancreas/test.h5ad \
  --output output.h5ad

Next steps

If your component works, please create a pull request.