Run times may not be comparable between tools/runs if we don't ensure that the underlying computational conditions are comparable. For that, the evaluation would probably have to be executed on a dedicated server, with a task having exclusive access to that machine and input/output files placed on local storage (e.g. using Nextflow's `scratch true`).
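As a rough illustration, a minimal `nextflow.config` fragment for keeping task I/O on local storage might look like this (paths are placeholders):

```nextflow
// nextflow.config -- illustrative sketch only
process {
    scratch = true        // run each task in node-local scratch, copy outputs back afterwards
    // scratch = '/tmp'   // or point at a specific local filesystem (placeholder path)
}
```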
Cluster execution could be acceptable if we can ensure (see the sketch after this list):

* homogeneity of the nodes (explicit partition spec?)
* exclusive use of nodes
* use of local scratch space
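A hedged sketch of such a profile for SLURM (the partition name is a placeholder, and `--exclusive` assumes the scheduler allows whole-node allocation):

```nextflow
// nextflow.config -- hypothetical 'cluster' profile
profiles {
    cluster {
        process {
            executor       = 'slurm'
            queue          = 'benchmark'     // explicit partition of homogeneous nodes (placeholder name)
            clusterOptions = '--exclusive'   // request exclusive use of each node
            scratch        = true            // keep task I/O on node-local scratch
        }
    }
}
```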
Cloud (awsbatch) execution could be acceptable if we can ensure (see the sketch after this list):

* homogeneity of the nodes
* exclusive use of nodes
* use of local scratch space
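A comparable sketch for AWS Batch; the queue, bucket and region are placeholders, and homogeneity/exclusivity would have to come from a compute environment restricted to a single instance type with one task per node:

```nextflow
// nextflow.config -- hypothetical 'awsbatch' profile
profiles {
    awsbatch {
        process.executor = 'awsbatch'
        process.queue    = 'benchmark-queue'      // backed by a single, fixed instance type (placeholder)
        workDir          = 's3://my-bucket/work'  // placeholder bucket; inputs/outputs are staged to
                                                  // the instance's local storage by the executor
        aws.region       = 'eu-west-1'            // placeholder region
    }
}
```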
In addition, we must capture more of the task information via the execution trace, which should include the requested resources `cpus`, `memory`, `time` - more here.
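One way to record the requested resources next to the measured ones would be Nextflow's trace file, e.g. (field selection illustrative):

```nextflow
// nextflow.config -- trace including requested resources
trace {
    enabled = true
    file    = 'trace.txt'
    fields  = 'task_id,name,status,cpus,memory,time,realtime,%cpu,peak_rss'
    // requested: cpus, memory, time; measured: realtime, %cpu, peak_rss
}
```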
The CPU details can easily be picked up in the mapping process, e.g. `beforeScript 'cat /proc/cpuinfo > cpuinfo'`, which can be parsed downstream.
This is of limited value on its own for serious speed benchmarking, but may be useful as an indicative measure of speed in reports.
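A DSL2-style sketch of how the mapping process could record and expose the CPU details (process, tool and file names are hypothetical):

```nextflow
// Hypothetical mapping process capturing the node's CPU details
process mapReads {
    beforeScript 'cat /proc/cpuinfo > cpuinfo'   // dump CPU details before the task runs

    input:
    path reads

    output:
    path 'out.bam'
    path 'cpuinfo'    // parsed downstream, e.g. grep 'model name' cpuinfo

    script:
    """
    mapper --threads ${task.cpus} ${reads} > out.bam
    """
}
```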
rsuchecki added a commit to rsuchecki/repset that referenced this issue on Nov 22, 2019:
* bug fix
* params not captured in meta
* timestamp in results release tag
* date in release tag, release body update
* removed beers remnants
* release body update
* iso date/time in release tag
* release body formatting
* tidy-up release
* update
* updated docu
* added --subset option
* cleanup
* nf trace populated later #52