Add a control method
A control method is used to test the relative performance of all other methods, and also serves as a quality control for the pipeline as a whole. A control method can either be a positive control or a negative control. The positive and negative control methods set a maximum and minimum threshold for performance, so any new method should perform better than the negative control methods and worse than the positive control method. For example, in label projection a positive control might copy the ground-truth labels from the solution, while a negative control might simply predict the most common label.
This guide will show you how to create a new Viash component. In the following we will show examples for both Python and R. Note that the Label Projection task is used throughout the guide, so make sure to replace any occurrences of "label_projection"
with your task of interest.
Make sure you have followed the “Getting started” guide.
Step 1: Create a new component
Use the create_component
component to start creating a new control method.
viash run src/common/create_component/config.vsh.yaml -- \
--task label_projection \
--type control_method \
--name my_method_py \
--language python
This creates a new folder at src/tasks/label_projection/control_methods/my_method_py
containing a Viash config and a script.
tree src/tasks/label_projection/control_methods/my_method_py
├── script.py Script for running the method.
├── config.vsh.yaml Config file for method.
└── ... Optional additional resources.
viash run src/common/create_component/config.vsh.yaml -- \
--task label_projection \
--type control_method \
--name my_method_r \
--language r
Some tasks have multiple method subtypes (e.g. batch_integration
), which will require you to use a different value for --type
corresponding to the desired method subtype.
This creates a new folder at src/tasks/label_projection/control_methods/my_method_r
containing a Viash config and a script.
tree src/tasks/label_projection/control_methods/my_method_r
├── script.R Script for running the method.
├── config.vsh.yaml Config file for method.
└── ... Optional additional resources.
- A config file contains metadata of the component and the dependencies required to run it. In steps 2 and 3 we will fill in the required information.
- A script contains the code to run the method. In step 4 we will edit the script.
Step 2: Fill in metadata
The Viash config contains metadata of your method, which script is used to run it, and the required dependencies.
Generated config file
This is what the config.vsh.yaml
generated by the create_component
component looks like:
Contents of config.vsh.yaml
# The API specifies which type of component this is.
# It contains specifications for:
# - The input/output files
# - Common parameters
# - A unit test
__merge__: ../../api/comp_control_method.yaml
functionality:
  # A unique identifier for your component (required).
  # Can contain only lowercase letters or underscores.
  name: my_method_py

  # Metadata for your component
  info:
    # A relatively short label, used when rendering visualisations (required)
    label: My Method Py

    # A one sentence summary of how this method works (required). Used when
    # rendering summary tables.
    summary: "FILL IN: A one sentence summary of this method."

    # A multi-line description of how this component works (required). Used
    # when rendering reference documentation.
    description: |
      FILL IN: A (multi-line) description of how this method works.

    # Which normalisation method this component prefers to use (required).
    preferred_normalization: log_cpm

  # Component-specific parameters (optional)
  # arguments:
  #   - name: "--n_neighbors"
  #     type: "integer"
  #     default: 5
  #     description: Number of neighbors to use.

  # Resources required to run the component
  resources:
    # The script of your component (required)
    - type: python_script
      path: script.py
    # Additional resources your script needs (optional)
    # - type: file
    #   path: weights.pt

platforms:
  # Specifications for the Docker image for this component.
  - type: docker
    image: ghcr.io/openproblems-bio/base_python:1.0.1
    # Add custom dependencies here (optional). For more information, see
    # https://viash.io/reference/config/platforms/docker/#setup .
    # setup:
    #   - type: python
    #     packages: scib==1.1.3

  # This platform allows running the component natively
  - type: native

  # Allows turning the component into a Nextflow module / pipeline.
  - type: nextflow
    directives:
      label: [midmem, midcpu]
Contents of config.vsh.yaml
# The API specifies which type of component this is.
# It contains specifications for:
# - The input/output files
# - Common parameters
# - A unit test
__merge__: ../../api/comp_control_method.yaml
functionality:
  # A unique identifier for your component (required).
  # Can contain only lowercase letters or underscores.
  name: my_method_r

  # Metadata for your component
  info:
    # A relatively short label, used when rendering visualisations (required)
    label: My Method R

    # A one sentence summary of how this method works (required). Used when
    # rendering summary tables.
    summary: "FILL IN: A one sentence summary of this method."

    # A multi-line description of how this component works (required). Used
    # when rendering reference documentation.
    description: |
      FILL IN: A (multi-line) description of how this method works.

    # Which normalisation method this component prefers to use (required).
    preferred_normalization: log_cpm

  # Component-specific parameters (optional)
  # arguments:
  #   - name: "--n_neighbors"
  #     type: "integer"
  #     default: 5
  #     description: Number of neighbors to use.

  # Resources required to run the component
  resources:
    # The script of your component (required)
    - type: r_script
      path: script.R
    # Additional resources your script needs (optional)
    # - type: file
    #   path: weights.pt

platforms:
  # Specifications for the Docker image for this component.
  - type: docker
    image: ghcr.io/openproblems-bio/base_r:1.0.1
    # Add custom dependencies here (optional). For more information, see
    # https://viash.io/reference/config/platforms/docker/#setup .
    # setup:
    #   - type: r
    #     packages: tidyverse

  # This platform allows running the component natively
  - type: native

  # Allows turning the component into a Nextflow module / pipeline.
  - type: nextflow
    directives:
      label: [midmem, midcpu]
Required metadata fields
Please edit the functionality.info section in the config file to fill in the necessary metadata.
functionality.name
A unique identifier for the method. Must be written in snake case. Example: my_new_method.
functionality.info.label
A label for the method used for visualisations and documentation. Example: "My new method".
functionality.info.subtype
Whether the method is a "positive_control" or a "negative_control".
functionality.info.summary
A one sentence summary of the method. Used for creating short overviews of the components in a task.
functionality.info.description
An explanation for how the method works. Used for creating reference documentation of a task.
functionality.info.preferred_normalization
Which normalization method a component prefers. Possible values are l1_sqrt, log_cpm, log_scran_pooling, and sqrt_cpm. Each value corresponds to a normalization component in the directory src/datasets/normalization.
__merge__
The file specified in this field contains information regarding the input and output arguments of the component, as well as a unit test to ensure that the component is functioning properly. Normally you don’t need to change this if you gave the right arguments to the create_component
component.
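For example, a filled-in functionality.info section for a hypothetical positive control could look as follows (the values below are purely illustrative):

functionality:
  name: true_labels
  info:
    label: True Labels
    subtype: positive_control
    summary: "A positive control that copies the ground-truth labels from the solution."
    description: |
      This control method takes the cell type labels from the solution dataset
      and returns them as predictions, providing an upper bound on performance.
    preferred_normalization: log_cpm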
Step 3: Add dependencies
Each component has its own set of dependencies, because different components might have conflicting dependencies.
For your convenience we have created two base images that can be used for Python or R scripts. These images can be found in the OpenProblems GitHub repository base-images. Click on the packages to view the URL you need to use. You are not required to use these images, but if you use a custom one, make sure the required packages are installed so that OpenProblems works properly.
Update the setup
definition in the platforms
section of the config file. This section describes the packages that need to be installed in the Docker image and are required for your method to run.
If you’re using a custom image, use the following minimal setup:
# Minimal setup for a Python-based component
platforms:
  - type: docker
    image: your custom image
    setup:
      - type: apt
        packages:
          - procps
      - type: python
        packages:
          - anndata~=0.8.0
          - scanpy
          - pyyaml
          - requests
          - jsonschema
# Minimal setup for an R-based component
platforms:
  - type: docker
    image: your custom image
    setup:
      - type: apt
        packages:
          - procps
          - libhdf5-dev
          - libgeos-dev
          - python3
          - python3-pip
          - python3-dev
          - python-is-python3
      - type: python
        packages:
          - rpy2
          - anndata~=0.8.0
          - scanpy
          - pyyaml
          - requests
          - jsonschema
      - type: r
        packages:
          - anndata
          - BiocManager
Please check out this guide for more information on how to add extra package dependencies.
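For instance, adding an extra Python package on top of the base image might look like the following (scib is just the example dependency from the comments in the generated config):

platforms:
  - type: docker
    image: ghcr.io/openproblems-bio/base_python:1.0.1
    setup:
      - type: python
        packages: scib==1.1.3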
Tip: After making changes to the component's dependencies, you will need to rebuild the Docker container as follows:
viash run src/tasks/label_projection/control_methods/my_method_py/config.vsh.yaml -- \
---setup cachedbuild
Output
[notice] Building container 'ghcr.io/openproblems-bio/label_projection/control_methods/my_method_py:dev' with Dockerfile
Step 4: Edit script
A component’s script typically has five sections:
- Imports and libraries
- Argument values
- Read input data
- Generate results
- Write output data to file
Generated script
This is what the script generated by the create_component
component looks like:
Contents of script.py
import anndata as ad

## VIASH START
# Note: this section is auto-generated by viash at runtime. To edit it, make changes
# in config.vsh.yaml and then run `viash config inject config.vsh.yaml`.
par = {
    'input_train': 'resources_test/label_projection/pancreas/train.h5ad',
    'input_test': 'resources_test/label_projection/pancreas/test.h5ad',
    'input_solution': 'resources_test/label_projection/pancreas/solution.h5ad',
    'output': 'output.h5ad'
}
meta = {
    'functionality_name': 'my_method_py'
}
## VIASH END

print('Reading input files', flush=True)
input_train = ad.read_h5ad(par['input_train'])
input_test = ad.read_h5ad(par['input_test'])
input_solution = ad.read_h5ad(par['input_solution'])

print('Preprocess data', flush=True)
# ... preprocessing ...

print('Train model', flush=True)
# ... train model ...

print('Generate predictions', flush=True)
# ... generate predictions ...

print("Write output AnnData to file", flush=True)
output = ad.AnnData(
    obs={
        'label_pred': obs_label_pred
    },
    uns={
        'dataset_id': input_train.uns['dataset_id'],
        'normalization_id': input_train.uns['normalization_id'],
        'method_id': meta['functionality_name']
    }
)
output.write_h5ad(par['output'], compression='gzip')
Contents of script.R
library(anndata)

## VIASH START
par <- list(
  input_train = "resources_test/label_projection/pancreas/train.h5ad",
  input_test = "resources_test/label_projection/pancreas/test.h5ad",
  input_solution = "resources_test/label_projection/pancreas/solution.h5ad",
  output = "output.h5ad"
)
meta <- list(
  functionality_name = "my_method_r"
)
## VIASH END

cat("Reading input files\n")
input_train <- anndata::read_h5ad(par[["input_train"]])
input_test <- anndata::read_h5ad(par[["input_test"]])
input_solution <- anndata::read_h5ad(par[["input_solution"]])

cat("Preprocess data\n")
# ... preprocessing ...

cat("Train model\n")
# ... train model ...

cat("Generate predictions\n")
# ... generate predictions ...

cat("Write output AnnData to file\n")
output <- anndata::AnnData(
  obs = list(
    label_pred = obs_label_pred
  ),
  uns = list(
    dataset_id = input_train$uns[["dataset_id"]],
    normalization_id = input_train$uns[["normalization_id"]],
    method_id = meta[["functionality_name"]]
  )
)
output$write_h5ad(par[["output"]], compression = "gzip")
Required sections
Imports and libraries
In the top section of the script you can define which packages/libraries the method needs. If you add a new or different package, also add the dependency to the setup field in config.vsh.yaml (see above).
Argument block
The Viash code block is designed to facilitate prototyping by letting you execute the script directly with python script.py (or Rscript script.R for R users). Note that anything between "VIASH START" and "VIASH END" will be removed and replaced with a CLI argument parser when the component is built by Viash.
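If you change the arguments in the config file, you can regenerate this block by running viash config inject on the component's config, as mentioned in the generated comment above:

viash config inject src/tasks/label_projection/control_methods/my_method_py/config.vsh.yaml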
Here, the par dictionary contains all the arguments defined in the config.vsh.yaml file (including those from the __merge__ file). When adding an argument to the par dict, also add it to the arguments section of config.vsh.yaml.
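For example, if you enable the commented-out --n_neighbors argument from the generated config above, you could add it to the par dict for prototyping; after the component is built, Viash fills it in from the command line. A minimal sketch (the argument name is just the example from the config):

## VIASH START
# Hypothetical prototyping values; once built, `--n_neighbors 10` on the
# command line would populate par['n_neighbors'] automatically.
par = {
    'input_train': 'resources_test/label_projection/pancreas/train.h5ad',
    'input_test': 'resources_test/label_projection/pancreas/test.h5ad',
    'input_solution': 'resources_test/label_projection/pancreas/solution.h5ad',
    'output': 'output.h5ad',
    'n_neighbors': 5
}
## VIASH END

n_neighbors = par['n_neighbors']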
Read input data
This section reads any input AnnData files passed to the component.
Generate results
This is the most important section of your script, as it defines the core functionality provided by the component. It processes the input data to create results for the particular task at hand.
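As an illustration only (this is not part of the generated template), the "Generate results" step of a simple negative control such as majority voting might look like the sketch below, reusing the input objects from the template above:

# Sketch of a majority-vote negative control: predict the most common label
# from the training data for every cell in the test data.
majority_label = input_train.obs['label'].value_counts().idxmax()
obs_label_pred = [majority_label] * input_test.n_obs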
Write output data to file
The output is stored in an AnnData object and then written to an .h5ad file. The format is specified by the API file referenced in the __merge__ field of the config file.
Step 5: Try component
Your component’s API file contains the necessary unit tests to check whether your component works and the output is in the correct format.
You can test your component by using the following command:
viash test src/tasks/label_projection/control_methods/my_method_py/config.vsh.yaml
Output
Running tests in temporary directory: '/tmp/viash_test_majority_vote9554782346839397520'
====================================================================
+/tmp/viash_test_majority_vote9554782346839397520/build_executable/majority_vote ---verbosity 6 ---setup cachedbuild
[notice] Building container 'ghcr.io/openproblems-bio/label_projection/control_methods/majority_vote:test' with Dockerfile
[info] Running 'docker build -t ghcr.io/openproblems-bio/label_projection/control_methods/majority_vote:test /tmp/viash_test_majority_vote9554782346839397520/build_executable -f /tmp/viash_test_majority_vote9554782346839397520/build_executable/tmp/dockerbuild-majority_vote-YqMtF3/Dockerfile'
#0 building with "default" instance using docker driver
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 511B done
#2 DONE 0.0s
#3 [internal] load metadata for ghcr.io/openproblems-bio/base_python:1.0.1
#3 DONE 0.1s
#4 [1/2] FROM ghcr.io/openproblems-bio/base_python:1.0.1@sha256:54ab73ef47dee4b0ba32bcacab08b760351d118943e4b30d7476f9a85072303d
#4 DONE 0.0s
#5 [2/2] RUN :
#5 CACHED
#6 exporting to image
#6 exporting layers done
#6 writing image sha256:63922d0d1e0c8ca79b158d9627b81b94ac9d64f2b5b39a66dce0645d64bfde00 done
#6 naming to ghcr.io/openproblems-bio/label_projection/control_methods/majority_vote:test done
#6 DONE 0.0s
====================================================================
+/tmp/viash_test_majority_vote9554782346839397520/test_check_method_config/test_executable
Load config data
Check general fields
Check info fields
All checks succeeded!
====================================================================
+/tmp/viash_test_majority_vote9554782346839397520/test_run_and_check_adata/test_executable
>> Checking whether input files exist
>> Running script as test
Load data
Compute majority vote
Create prediction object
Write output to file
>> Checking whether output file exists
>> Reading h5ad files and checking formats
Reading and checking input_train
AnnData object with n_obs × n_vars = 400 × 500
obs: 'label', 'batch'
var: 'hvg', 'hvg_score'
uns: 'dataset_id', 'normalization_id'
obsm: 'X_pca'
layers: 'counts', 'normalized'
Reading and checking input_test
AnnData object with n_obs × n_vars = 100 × 500
obs: 'batch'
var: 'hvg', 'hvg_score'
uns: 'dataset_id', 'normalization_id'
obsm: 'X_pca'
layers: 'counts', 'normalized'
Reading and checking input_solution
AnnData object with n_obs × n_vars = 100 × 500
obs: 'label', 'batch'
var: 'hvg', 'hvg_score'
uns: 'dataset_id', 'normalization_id'
obsm: 'X_pca'
layers: 'counts', 'normalized'
Reading and checking output
AnnData object with n_obs × n_vars = 100 × 500
obs: 'batch', 'label_pred'
var: 'hvg', 'hvg_score'
uns: 'dataset_id', 'method_id', 'normalization_id'
obsm: 'X_pca'
layers: 'counts', 'normalized'
All checks succeeded!
====================================================================
SUCCESS! All 2 out of 2 test scripts succeeded!
Cleaning up temporary directory
Visit “Run tests” for more information on running unit tests and how to interpret common error messages.
You can also run your component on local files using the viash run
command. For example:
viash run src/tasks/label_projection/control_methods/my_method_py/config.vsh.yaml -- \
--input_train resources_test/label_projection/pancreas/train.h5ad \
--input_test resources_test/label_projection/pancreas/test.h5ad \
--output output.h5ad
Next steps
If your component works, please create a pull request.