1. Install metagwgs as described here: [installation doc](../docs/installation.md)
2. Get datasets: two datasets are currently available for these functional tests at `https://forgemia.inra.fr/genotoul-bioinfo/metagwgs-test-datasets.git`.
Replace "\<dataset\>" with either "small" or "mag":
Each step of metagwgs produces a series of files. We want to be able to determine if the modifications we perform on metagwgs have an impact on any of these files (presence, contents, format, ...). You'll find more info about how the files are tested at the end of this page.
To launch functional tests, you need to be located at the root of the folder where you want to perform the tests. There are two ways to launch functional tests (testing all steps up to 07_taxo_affi):
- by providing the results folder of a pipeline that has already been executed
- by providing a script which will launch the nextflow pipeline [see example](./launch_example.sh) (this example is designed for the "small" dataset with --min_contigs_cpm 1000, using slurm)
### Launch from a pipeline already executed
If you have already launched metagwgs (see the metagwgs README and usage documentation) on the test data:
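
In that case the functional tests only have to compare your existing results with the expected ones. A minimal sketch, assuming the runner is the Python script shipped in `functional_tests/` and that it takes the step to check plus the expected and observed result directories (option names and paths below are illustrative, check the script's `--help`):

```
# Illustrative invocation only: option names and paths are assumptions.
python /path/to/metagwgs/functional_tests/main.py \
  -step 07_taxo_affi \
  -exp_dir /path/to/expected_results \
  -obs_dir /path/to/your/results
```
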

### Launch by providing a script
The examples below use the slurm job manager and launch all 7 steps of metagwgs, to ensure all parts of main.nf work as intended.
1. Create a new directory (project-directory) containing a shell script to be used by functional tests:
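
A sketch of such a script, assuming a slurm submission and the "small" dataset; apart from `--min_contigs_cpm 1000` (see the note below), every path, module and parameter name is a placeholder to adapt to your installation (an adapted copy of [launch_example.sh](./launch_example.sh) would play the same role):

```
#!/bin/bash
# Hypothetical launch script, e.g. (project-directory)/launch_metagwgs.sh
#SBATCH -J metagwgs_functional_test
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G

module load bioinfo/Nextflow   # assumption: load nextflow however your cluster provides it

cd [work_dir]                  # directory where the pipeline runs (see the note below)

# "--reads" and the profile are assumptions: check the metagwgs usage documentation.
nextflow run /path/to/metagwgs/main.nf \
  -profile singularity \
  --reads "/path/to/metagwgs-test-datasets/small/*_{R1,R2}.fastq.gz" \
  --min_contigs_cpm 1000
```
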
*In this example, [work_dir] = "/home/pmartin2/work"*
*"--min_contigs_cpm 1000" is mandatory to obtain the same results as exp_dir for step 03_filtering*
2. Run the functional tests by providing the script:
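
A sketch of that call, under the same assumptions as above, with a hypothetical option pointing the runner at the launch script so that it executes the pipeline before comparing the results:

```
# Illustrative invocation only: option names and paths are assumptions.
python /path/to/metagwgs/functional_tests/main.py \
  -step 07_taxo_affi \
  -exp_dir /path/to/expected_results \
  -obs_dir ./results \
  --script ./launch_metagwgs.sh   # the hypothetical script created at step 1
```
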