Commit 1422ec20 authored by Helene Rimbert

Merge branch 'helene.rimbert-master-patch-23301' into 'master'

Update README.md

See merge request !8
parents ed93ca70 0df66dc8
@@ -191,7 +191,18 @@ This will allow at most 32 subprocesses to run through the SLURM scheduler w
You can use a custom [cluster.json](cluster.json) JSON file to set up the SBATCH parameters for each rule, and use it with:
```console
$ snakemake -j 32 -u cluster.json --cluster "sbatch -J {cluster.jobName} -c {cluster.c} --mem {cluster.mem} -e {cluster.error} -o {cluster.output} -p debug" --verbose
$ snakemake -j 32 --cluster-config cluster-hpc2.json --cluster "sbatch -J {cluster.jobName} -c {cluster.c} --mem {cluster.mem} -e {cluster.error} -o {cluster.output} -p gdec" --verbose
```
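For reference, the file passed to `--cluster-config` is a JSON object with one entry per rule plus a `__default__` fallback (a Snakemake convention); its keys are whatever you reference as `{cluster.*}` placeholders in the `--cluster` string. The example below is only a minimal sketch: the job name, resources and the `mapping` rule name are placeholders, not the contents of the real cluster-hpc2.json.
```console
# illustrative only: resource values and rule names are placeholders
$ cat cluster-hpc2.json
{
    "__default__": {
        "jobName": "snakejob",
        "partition": "gdec",
        "c": 1,
        "mem": "8G",
        "time": "02:00:00",
        "error": "slurm-%j.err",
        "output": "slurm-%j.out"
    },
    "mapping": {
        "c": 8,
        "mem": "32G"
    }
}
```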
The pipeline comes with a conda environment file which can be used by Snakemake. To enable the use of conda:
```console
$ snakemake --use-conda -j 8 --cluster-config cluster-hpc2.json --cluster "sbatch -p {cluster.partition} -c {cluster.c} -N 1 -t {cluster.time} -J {cluster.jobName} --mem={cluster.mem} -e {cluster.error} -o {cluster.output}"
```
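With `--use-conda`, Snakemake builds and activates, for each rule that declares a `conda:` directive, the environment described in the referenced YAML file before running the job. The snippet below is only a sketch of the general layout of such a file; the file name and package list are placeholders, not the environment actually shipped with the pipeline.
```console
# hypothetical file name and packages, shown only to illustrate the format
$ cat envs/tools.yaml
channels:
  - bioconda
  - conda-forge
dependencies:
  - samtools
  - bedtools
```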
It is possible to force Snakemake to wait a defined amount of time for output files to appear, in case of latency on the filesystem of your cluster/server.
```console
# waiting 30 seconds after each job to check for output files
$ snakemake --latency-wait 30 [...]
```
You can generate a diagram of all the processes and dependencies of your analysis:
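A typical invocation (assuming Graphviz is installed; the output file name is arbitrary) pipes Snakemake's DAG output, which is in Graphviz dot format, into dot:
```console
$ snakemake --dag | dot -Tsvg > dag.svg
```
`snakemake --rulegraph` gives a more compact diagram, with one node per rule instead of one node per job.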