1. Install metagwgs as described here: [installation doc](../docs/installation.md)
2. Get datasets: two datasets are currently available for these functional tests at `https://forgemia.inra.fr/genotoul-bioinfo/metagwgs-test-datasets.git`:
- the ones in the source code: [here](../test)
- those from the [nf-core/mag pipeline](https://github.com/nf-core/test-datasets/tree/mag/test_data)

Replace "\<dataset\>" with either "small" or "mag".

#TODO: direct link to the x files, or else indicate which files to download/copy from test_data
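The step above could be carried out as in the following sketch (assumptions: the repository is publicly clonable and contains one directory per dataset; `small` is used as the example value for "\<dataset\>"):

```shell
# Clone the test-datasets repository (URL from the step above)
git clone https://forgemia.inra.fr/genotoul-bioinfo/metagwgs-test-datasets.git

# Replace <dataset> with either "small" or "mag"
dataset=small

# Hypothetical: inspect the chosen dataset (exact layout may differ)
ls "metagwgs-test-datasets/${dataset}"
```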
Each step of metagwgs produces a series of files. We want to be able to determine if the modifications we perform on metagwgs have an impact on any of these files (presence, contents, format, ...). You'll find more info about how the files are tested at the end of this page.
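As a rough sketch of the kind of check involved (not the pipeline's actual test code; both paths below are hypothetical), comparing one produced file against its expected counterpart could look like:

```shell
# Hypothetical paths: an observed output file and its expected counterpart
obs="results/03_filtering/contigs.fa"
exp="exp_dir/03_filtering/contigs.fa"

if [ ! -f "$obs" ]; then
    echo "MISSING: $obs"    # presence check
elif cmp -s "$obs" "$exp"; then
    echo "OK: $obs"         # identical contents
else
    echo "DIFFER: $obs"     # contents changed
fi
```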
#TODO: add the usage of the script
To launch the functional tests, you need to be located at the root of the folder where you want to perform the tests. There are two ways to launch the functional tests (testing all steps up to 07_taxo_affi):
- by providing the results folder of a pipeline already executed
- by providing a script which will launch the nextflow pipeline [see example](./launch_example.sh) (this example is designed for the "small" dataset with `--min_contigs_cpm 1000`, using slurm)
### Launch from a pipeline already executed
If you have already launched metagwgs (see the metagwgs README and usage) on the test data:
*In this example, [work_dir] = "/home/pmartin2/work"*
*`--min_contigs_cpm 1000` is mandatory to obtain the same results as exp_dir for step 03_filtering*
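For orientation, a submission script in the spirit of [launch_example.sh](./launch_example.sh) might look like the sketch below. The `#SBATCH` values and all nextflow arguments except `--min_contigs_cpm 1000` are assumptions, not taken from the pipeline; check the metagwgs README and usage for the real parameter names:

```shell
#!/bin/bash
#SBATCH -J metagwgs_functional_test   # hypothetical job name
#SBATCH --cpus-per-task=8             # hypothetical resources
#SBATCH --mem=32G

# Hypothetical launch command; --min_contigs_cpm 1000 is required to
# match exp_dir at step 03_filtering (see the note above)
nextflow run metagwgs/main.nf \
    -profile slurm \
    --min_contigs_cpm 1000
```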