Commit 878e35c7 authored by Celine Noirot

review readme

parent 56f85288
## I. Pre-requisites
1. Install metagwgs as described here: [installation doc](../docs/installation.md)
2. Get datasets: two datasets are currently available for these functional tests,
- one in the source code: [here](../test)
- those from the [nf-core/mag pipeline](https://github.com/nf-core/test-datasets/tree/mag/test_data)
#TODO: direct link to the files, or otherwise indicate which files to download/copy from test_data
```
wget
```
3. Download the test datasets (expected results + test fastq) from [link-to-test-datasets].
## II. Functional tests
Each step of metagwgs produces a series of files. We want to be able to determine if the modifications we perform on metagwgs have an impact on any of these files (presence, contents, format, ...). You'll find more info about how the files are tested at the end of this page.
#TODO: add the script usage
Two datasets are currently available for these functional tests: test (from [metagwgs/test](https://forgemia.inra.fr/genotoul-bioinfo/metagwgs/-/tree/master/test)) and MAG (from [nf-core/test-datasets](https://github.com/nf-core/test-datasets/tree/mag/test_data)).
When launching the functional test script, the files contained in *exp_dir* (in ./test_expected_logs) are scanned and, for each possible file extension, a test is performed on each expected file against its observed version (in ./results).
There are two ways to launch the functional tests:
- by providing the results of a pipeline already executed
- by providing a script which launches the Nextflow pipeline (see example) #TODO: add an example submission file in the sources
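As a rough illustration, the scan described above could look like the following Python sketch. The extension-to-method mapping and the function name `collect_tests` are assumptions for the sake of the example, not the actual `main.py` implementation.

```python
import os

# Illustrative mapping from file extension to test method
# (see the "Test methods" section); the real table may differ.
TEST_BY_EXT = {
    ".fasta": "diff",
    ".gz": "zdiff",
    ".annotations": "no_header_diff",
    ".log": "cut_diff",
}

def collect_tests(exp_dir, obs_dir):
    """Walk exp_dir and pair each expected file with its observed twin."""
    pairs = []
    for root, _dirs, files in os.walk(exp_dir):
        for name in files:
            exp_path = os.path.join(root, name)
            rel = os.path.relpath(exp_path, exp_dir)
            obs_path = os.path.join(obs_dir, rel)
            ext = os.path.splitext(name)[1]
            # Fall back to the weakest check when the extension is unknown.
            method = TEST_BY_EXT.get(ext, "not_empty")
            pairs.append((method, exp_path, obs_path))
    return pairs
```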
## III. Launch test
Nextflow metagwgs can be launched on any cluster manager (sge, slurm, ...). The functional test script can use a provided script containing the command that launches Nextflow on a cluster.
### Launch with script
The examples below use the slurm job manager and launch all 7 steps of metagwgs to ensure all parts of main.nf work as intended.
1. Create a new directory (project-directory) containing a shell script to be used by functional tests:
```
#!/bin/bash
sbatch -W -p workq -J metagwgs --mem=6G \
...
```
*In this example, [work_dir] = "/home/pmartin2/work"*
*"--min_contigs_cpm 1000" is mandatory to obtain the same results as exp_dir for step 03_filtering*
2. Run the functional tests by providing the script:
```
cd project-directory
python [work_dir]/metaG/metagwgs/functional_tests/main.py -step 07_taxo_affi -exp_dir [work_dir]/test_expected_logs -obs_dir ./results --script launch_07_taxo_affi.sh
```
### Launch without script
If you have already launched metagwgs [see metagwgs README and usage] on the test data:
```
cd project-directory
python [work_dir]/metaG/metagwgs/functional_tests/main.py -step 07_taxo_affi -exp_dir [work_dir]/test_expected_logs -obs_dir ./results
```
## IV. Output
A ft_\[step\].log file is created for each step of metagwgs. It contains information about each test performed on the given files.
If a test resulted in 'Failed' instead of 'Passed', the stdout is printed in the log.
Sometimes files are not tested because they are present in exp_dir but not in obs_dir. In that case, a ft_\[step\].not_tested log is created containing the names of the missing files. In 02_assembly, two assembly programs can be used (metaspades and megahit), which results in this .not_tested log file. Files that are not tested are not counted in the missed count.
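A minimal Python sketch of how such a ft_\[step\].not_tested log could be produced (the helper name `report_not_tested` is hypothetical, not the pipeline's actual code):

```python
import os

def report_not_tested(exp_dir, obs_dir, step):
    """List expected files missing from obs_dir and log them (sketch)."""
    def relpaths(base):
        return {os.path.relpath(os.path.join(r, f), base)
                for r, _dirs, files in os.walk(base) for f in files}
    missing = sorted(relpaths(exp_dir) - relpaths(obs_dir))
    if missing:
        # One missing file name per line, as described above.
        with open("ft_%s.not_tested" % step, "w") as log:
            log.write("\n".join(missing) + "\n")
    return missing
```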
### Test methods
Five simple test methods are used:
- diff: plain bash difference between two files
`diff exp_path obs_path`
- zdiff: plain bash difference between two gzipped files
`zdiff exp_path obs_path`
- no_header_diff: ignore the headers of .annotations and .seed_orthologs files
`diff <(grep -w "^?#" exp_path) <(grep -w "^?#" obs_path)`
- cut_diff: exception for the cutadapt.log file (the first 5 lines are skipped)
`diff <(tail -n+6 exp_path) <(tail -n+6 obs_path)`
- not_empty: in Python, check that the file is not empty
`test = path.getsize(obs_path) > 0`
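The five methods above could be dispatched as in this illustrative Python sketch. `run_test` is a hypothetical helper, not the actual `main.py` code; the shell commands follow the list above.

```python
import gzip
import subprocess
from os import path

def run_test(method, exp_path, obs_path):
    """Return True when the observed file matches the expected one (sketch)."""
    if method == "diff":
        # Exit code 0 means the files are identical.
        return subprocess.call(["diff", exp_path, obs_path]) == 0
    if method == "zdiff":
        # Compare decompressed contents instead of shelling out to zdiff.
        with gzip.open(exp_path) as e, gzip.open(obs_path) as o:
            return e.read() == o.read()
    if method == "no_header_diff":
        cmd = 'diff <(grep -w "^?#" %s) <(grep -w "^?#" %s)' % (exp_path, obs_path)
        # Process substitution requires bash, not sh.
        return subprocess.call(["bash", "-c", cmd]) == 0
    if method == "cut_diff":
        cmd = "diff <(tail -n+6 %s) <(tail -n+6 %s)" % (exp_path, obs_path)
        return subprocess.call(["bash", "-c", cmd]) == 0
    if method == "not_empty":
        return path.getsize(obs_path) > 0
    raise ValueError("unknown test method: %s" % method)
```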