Description of the bug

Job failed because there wasn't enough memory allocated. This is more problematic than simply bumping the resources: every failed job spawned 20+ parallel subjobs that also ran out of memory, and with max attempts set to 2 this escalates quickly. We need to (a rough config sketch follows the list):

- update the memory for this module
- decrease retries to 1x
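A minimal Nextflow config sketch of those two changes, assuming the module can be targeted with a withName selector; the process name 'CHIPSEEKER_PEAKPLOT' and the memory value below are illustrative assumptions, not taken from the pipeline:

    process {
        withName: 'CHIPSEEKER_PEAKPLOT' {   // hypothetical selector; use the real process name
            memory        = '64.GB'         // illustrative value; raise from the current allocation
            errorStrategy = 'retry'
            maxRetries    = 1               // down from 2, so each failed subjob is retried at most once
        }
    }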
Command output:
GRanges object with 66127 ranges and 2 metadata columns:
           seqnames               ranges strand |    pvalue    qvalue
              <Rle>            <IRanges>  <Rle> | <numeric> <numeric>
      [1]      chr1        181444-181645      * |     15.18     14.75
      [2]      chr1        181913-182114      * |     10.54     10.18
      [3]      chr1        186155-186356      * |      5.85      5.59
      [4]      chr1        186835-187036      * |      3.20      3.03
      [5]      chr1        267916-268117      * |     64.14     63.30
      ...       ...                  ...    ... .       ...       ...
  [66123]      chrX  155651333-155651534      * |      3.81      3.62
  [66124]      chrX  155662420-155662621      * |      6.09      5.83
  [66125]      chrY    11085350-11085551      * |      2.64      2.50
  [66126]      chrY    11156173-11156374      * |     16.16     15.71
  [66127]      chrY    11156459-11156660      * |     12.06     11.67
-------
seqinfo: 24 sequences from an unspecified genome; no seqlengths
>> binning method is used...2024-02-23 00:28:55
>> preparing body regions by gene... 2024-02-23 00:28:55
>> preparing tag matrix by binning... 2024-02-23 00:28:55
>> preparing matrix with extension from (TSS-20%)~(TTS+20%)... 2024-02-23 00:28:55
>> 87 peaks(0.4930296%), having lengths smaller than 800bp, are filtered... 2024-02-23 00:29:15
Command error:
Warning messages:
1: In normalizePath("~") :
path[1]="/home/sevillas2": No such file or directory
2: package 'dplyr' was built under R version 4.3.2
Error in t.star[r, ] <- res[[r]] :
number of items to replace is not a multiple of replacement length
Calls: plotPeakProf2 ... plotAvgProf.binning.internal -> getTagCount -> getTagCiMatrix -> boot
In addition: Warning message:
In parallel::mclapply(seq_len(RR), fn, mc.cores = ncpus) :
scheduled cores 63, 68, 69, 71, 76, 80, 82, 95, 96, 100, 110, 119 did not deliver results, all values of the jobs will be affected
Execution halted
Command used and terminal output
Ran within the NF workflow, but this is the specific command that was run:

chipseeker_peakplot.R \
    --peak CTCF.gem.sorted.merged.consensus.bed \
    --outfile-prefix CTCF.gem \
    --genome-txdb TxDb.Hsapiens.UCSC.hg38.knownGene \
    --genome-annot org.Hs.eg.db