François-Xavier Stubbe
08/21/2025, 12:54 PM
James Fellows Yates
08/22/2025, 12:13 PM
Why call .first() after mixing module versions into ch_versions? Given .unique is run prior to passing to MultiQC, is there any overhead benefit to taking just the versions.yml from the first module invocation vs passing all versions.yml files and running unique?
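For reference, the two patterns being weighed look roughly like this (a minimal sketch of the nf-core template convention; FASTQC is just a placeholder module):

// Pattern A: take only the first task's versions.yml from each module
ch_versions = ch_versions.mix( FASTQC.out.versions.first() )

// Pattern B: mix every task's versions.yml and deduplicate once,
// before collating for MultiQC
ch_versions = ch_versions.mix( FASTQC.out.versions )
ch_collated_versions = ch_versions.unique().collectFile(name: 'collated_versions.yml')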
Sylvia Li
08/22/2025, 6:26 PM
def longpac_longpolish = SAMPLESHEETFILTERING.out.list_longpac_longPolish
def flattened_result = longpac_longpolish
.filter { value -> value instanceof List && !value.isEmpty() }
.flatMap()
flattened_result.view()
PACBIO_SUBWORKFLOW(flattened_result)
Calling .view() on it works fine, emitting:
[[id:Sample1, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA]
[[id:Sample2, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA]
but when I pass it to the subworkflow:
workflow PACBIO_SUBWORKFLOW {
take:
ch_input_full // channel: [ val(meta), files/data, files/data, files/data..etc ]
// bam_file
// polish
// gambitdb
// krakendb
main:
def ch_output = Channel.empty()
def ch_versions = Channel.empty()
println("hello")
ch_input_full.view()
It just prints "hello" and gets hung up; it never prints the channel values, just sits there. I don't understand why.
My nextflow.log also says all processes finished and all barriers passed:
Aug-22 13:23:17.907 [main] DEBUG nextflow.script.ScriptRunner - > Awaiting termination
Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await
Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await > all processes finished
Aug-22 13:23:17.908 [main] DEBUG nextflow.Session - Session await > all barriers passed
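A minimal sanity check for bisecting this (a hypothetical standalone script, not the actual pipeline code): if the hard-coded channel below does print inside the subworkflow, the hang lies in how the real input channel is constructed upstream rather than in the subworkflow itself.

workflow PACBIO_SUBWORKFLOW_TEST {
    take:
    ch_input_full

    main:
    // should print once per emitted item
    ch_input_full.view { "inside subworkflow: $it" }
}

workflow {
    // Channel.of([...]) emits the list as a single item
    PACBIO_SUBWORKFLOW_TEST( Channel.of( [ [id:'Sample1', polish:'long'], 'reads.fastq.gz' ] ) )
}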
Juan E. Arango Ossa
08/22/2025, 6:33 PM
I know about -ansi-log false, but I do want the ANSI output with the latest colored output.
I saw in this issue that @Phil Ewels suggested something with full names, as in the pic. Was this implemented? Can I get something like that with ANSI logs and the full process name, or at least a longer one?
As it is, it's still very challenging to read.
Sylvia Li
08/22/2025, 10:38 PM
Nour El Houda Barhoumi
08/23/2025, 3:02 PM
yanzi L.
08/26/2025, 10:59 PM
Fredrick
08/28/2025, 2:07 AM
FASTQ_ALIGN_BWA (
ch_samplesheet, // channel input reads: [ val(meta), [ path(reads) ] ]
PREPARE_REFERENCE_INDEXES.out.bwa_index, // channel BWA index: [ val(meta2), path(index) ]
true, // boolean value: true/false for sorting BAM files
fasta, // channel reference fasta: [ val(meta3), path(fasta) ]
)
ch_versions = ch_versions.mix( FASTQ_ALIGN_BWA.out.versions.first() )
ch_bam_bai = FASTQ_ALIGN_BWA.out.bam.join( FASTQ_ALIGN_BWA.out.bai, by: 0)
// Extract BAM and BAI channels from joined input
ch_bam = ch_bam_bai.map { meta, bam, bai -> [meta, bam] }
ch_bai = ch_bam_bai.map { meta, bam, bai -> [meta, bai] }
/*
MODULE: GATK4_ADDORREPLACEREADGROUPS
*/
GATK4_ADDORREPLACEREADGROUPS (
ch_bam,
fasta,
fasta_fai
)
ch_versions = ch_versions.mix(GATK4_ADDORREPLACEREADGROUPS.out.versions.first())
/*
MODULE: GATK4_MARKDUPLICATES
*/
// DEBUG SANITY CHECKS: create views for debugging
// GATK4_ADDORREPLACEREADGROUPS.out.bam.view { "GATK4_MARKDUPLICATES input BAM: $it" }
// fasta.map{ meta, fasta -> fasta }.view { "GATK4_MARKDUPLICATES input FASTA: $it" }
// fasta_fai.map{ meta, fai -> fai }.view { "GATK4_MARKDUPLICATES input FASTA_FAI: $it" }
GATK4_MARKDUPLICATES (
GATK4_ADDORREPLACEREADGROUPS.out.bam,
fasta.map{ meta, fasta -> fasta },
fasta_fai.map{ meta, fai -> fai}
)
ch_versions = ch_versions.mix(GATK4_MARKDUPLICATES.out.versions.first())
/*
MODULE: GATK4_CALIBRATEDRAGSTRMODEL
*/
// DEBUG SANITY CHECKS: create views for debugging
// GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai).view { "GATK4_CALIBRATEDRAGSTRMODEL input BAM+BAI: $it" }
// fasta.map{ meta, fasta -> fasta }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA: $it" }
// fasta_fai.map{ meta, fai -> fai }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA_FAI: $it" }
// genome_dict.view { "GATK4_CALIBRATEDRAGSTRMODEL input GENOME_DICT: $it" }
// str_table.view { "GATK4_CALIBRATEDRAGSTRMODEL input STR_TABLE: $it" }
GATK4_CALIBRATEDRAGSTRMODEL (
GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai),
fasta.map{ meta, fasta -> fasta },
fasta_fai.map{ meta, fai -> fai },
genome_dict.map{ meta, dict -> dict },
str_table
)
ch_versions = ch_versions.mix(GATK4_CALIBRATEDRAGSTRMODEL.out.versions.first())
/*
MODULE: GATK4_HAPLOTYPECALLER
Expected input:
tuple val(meta), path(input), path(input_index), path(intervals), path(dragstr_model)
tuple val(meta2), path(fasta)
tuple val(meta3), path(fai)
tuple val(meta4), path(dict)
tuple val(meta5), path(dbsnp)
tuple val(meta6), path(dbsnp_tbi)
*/
// note: use either `=` assignment or .set { }, not both; .set returns null
GATK4_MARKDUPLICATES.out.bam
.join(GATK4_MARKDUPLICATES.out.bai, by: 0, failOnMismatch: true)
.join(GATK4_CALIBRATEDRAGSTRMODEL.out.dragstr_model, by: 0, failOnMismatch: true)
.combine(bed)
.map { meta, bam, bai, model, bed -> [meta, bam, bai, bed, model] }
.set { ch_gatk_haplo_input }
// ch_gatk_haplo_input.view() { "GATK4_HAPLOTYPECALLER INPUT: $it" }
GATK4_HAPLOTYPECALLER (
ch_gatk_haplo_input,
fasta,
fasta_fai,
genome_dict,
dbsnp.map { meta, vcf -> [meta, vcf] },
dbsnp_tbi.map { tbi -> ["dbsnp_tbi", tbi] }
)
ch_versions = ch_versions.mix(GATK4_HAPLOTYPECALLER.out.versions.first())
Richard Francis
08/28/2025, 4:14 PM
Thiseas C. Lamnidis
08/29/2025, 9:29 AM
I'd like to print an extra log message on successful pipeline completion (in addition to the standard "Pipeline completed successfully" one).
At first I tried adding it to subworkflows/nf-core/utils_nfcore_pipeline/main.nf, but doing so breaks linting because the file differs from the remote. I could ignore this check in .nf-core.yml, but that seems dangerous, as it is generally a good idea to keep those important core functions checked, imo.
So I made my own copy of the completionSummary function, which I added directly within subworkflows/local/utils_nfcore_eager_pipeline/main.nf. It looks like this:
def easterEgg(monochrome_logs) {
def colors = logColours(monochrome_logs) as Map
if (workflow.stats.ignoredCount == 0) {
if (workflow.success) {
// <https://en.wiktionary.org/wiki/jw.f_pw>
log.info("-${colors.green}𓂻 𓅱 𓆑 𓊪 𓅱${colors.reset}-")
}
}
}
Here’s the code from the completionSummary function, for reference:
def completionSummary(monochrome_logs=true) {
def colors = logColours(monochrome_logs) as Map
if (workflow.success) {
if (workflow.stats.ignoredCount == 0) {
log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Pipeline completed successfully${colors.reset}-")
}
else {
log.info("-${colors.purple}[${workflow.manifest.name}]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-")
}
}
else {
log.info("-${colors.purple}[${workflow.manifest.name}]${colors.red} Pipeline completed with errors${colors.reset}-")
}
}
I then call my easterEgg function within PIPELINE_COMPLETION, directly after completionSummary, like so:
workflow PIPELINE_COMPLETION {
[...]
workflow.onComplete {
[...]
completionSummary(monochrome_logs)
easterEgg(monochrome_logs)
[...]
}
}
Considering it is essentially a copy of completionSummary, I would expect this to work, but instead I get this error:
-[nf-core/eager] Pipeline completed successfully-
ERROR ~ Failed to invoke `workflow.onComplete` event handler
-- Check script './workflows/../subworkflows/local/../../subworkflows/local/utils_nfcore_eager_pipeline/main.nf' at line: 190 or see '.nextflow.log' file for more details
It seems I cannot access the workflow object to check its .success or .stats.ignoredCount attributes. The error stays the same when I flip the order of the checks, so it seems I cannot access the workflow object altogether. Any ideas what is going on here? This is rather unintuitive.
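One hedged workaround sketch (an assumption, not a confirmed fix): instead of reading the implicit workflow object inside the copied function, pass the values it needs in as arguments from the handler, where workflow is definitely in scope:

// easterEgg no longer touches the implicit `workflow` object itself
def easterEgg(monochrome_logs, success, ignoredCount) {
    def colors = logColours(monochrome_logs) as Map
    if (success && ignoredCount == 0) {
        log.info("-${colors.green}𓂻 𓅱 𓆑 𓊪 𓅱${colors.reset}-")
    }
}

// ...and the onComplete handler resolves the attributes itself:
workflow.onComplete {
    completionSummary(monochrome_logs)
    easterEgg(monochrome_logs, workflow.success, workflow.stats.ignoredCount)
}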
Sam Sims
08/29/2025, 11:09 AM
java.lang.RuntimeException: Different Snapshot:
(left: committed snapshot, right: new run; | marks the differing lines)

"0": [
  [
    "cchf_test",
    "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"  |  "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
  ]
],
"warning_ch": [
  [
    "cchf_test",
    "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"  |  "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
  ]
],
"hcid.counts.csv:md5,c45ab01001988dc88e4469ae29a92448"  (identical on both sides)
In my test I am doing something like this:
assert snapshot(workflow.out, path("${outputDir}/cchf_test/qc/hcid.counts.csv")).match()
Interestingly, in this example the hcid.counts.csv file works fine -- it's just the outputs of workflow.out that seem to have this problem.
I might be missing something obvious, but I have been stumped for a while trying to figure this out, so I thought I'd see if anyone had any ideas.
Thanks 🙂
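One hedged way to sidestep this while debugging (the channel name is taken from the diff above; everything else is an assumption): snapshot a stable projection of the output rather than raw work-dir paths, e.g.

// snapshot sample id + file name instead of absolute paths
assert snapshot(
    workflow.out.warning_ch.collect { sample, json -> [ sample, json.toString().tokenize('/').last() ] },
    path("${outputDir}/cchf_test/qc/hcid.counts.csv")
).match()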
Cheyenne
08/29/2025, 12:29 PM
karima
09/01/2025, 1:03 PM
ERROR ~ Error executing process > 'NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:FASTQC (RAP1_UNINDUCED_REP1)'
Caused by:
Process requirement exceeds available memory -- req: 15 GB; avail: 14.8 GB
My machine specifications are: RAM: 14 GB and CPUs: 8
configuration file:
process {
cpus = 4
memory = '12 GB'
time = '12h'
withLabel:process_low {
cpus = 1
memory = '4 GB'
time = '2h'
}
withLabel:process_medium {
cpus = 2
memory = '6 GB'
time = '4h'
}
withLabel:process_high {
cpus = 4
memory = '12 GB'
time = '10h'
}
}
Could you please advise on the best way to successfully run the test dataset?
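One thing worth trying, assuming your rnaseq and Nextflow versions are recent enough to support it: nf-core pipelines read a resourceLimits setting, which caps every process request (including hard-coded ones like that 15 GB) at the machine's actual capacity, something a plain process-level memory default does not do:

process {
    // caps all requests, so an oversized ask is shrunk to fit the machine
    resourceLimits = [
        cpus: 8,
        memory: 14.GB,
        time: 24.h
    ]
}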
Fredrick
09/03/2025, 5:03 AM
Ugo Iannacchero
09/03/2025, 4:03 PM
Cheyenne
09/06/2025, 10:12 PM
Ugo Iannacchero
09/10/2025, 6:49 AM
My run fails at PICARD_MARKDUPLICATES.
[... terminated with an error exit status (1) -- Execution is retried (1)
... retried (2)
ERROR ~ Error executing process > '...:PICARD_MARKDUPLICATES (Tnaive_24h_act_repB_S4)'
Command executed:
picard -Xmx13107M MarkDuplicates \
--ASSUME_SORTED true --REMOVE_DUPLICATES false --VALIDATION_STRINGENCY LENIENT --TMP_DIR tmp \
--INPUT Tnaive_24h_act_repB_S4.bam \
--OUTPUT Tnaive_24h_act_repB_S4.md.bam \
--REFERENCE_SEQUENCE GRCh38.primary_assembly.genome.fa \
--METRICS_FILE Tnaive_24h_act_repB_S4.md.MarkDuplicates.metrics.txt
Command error (picard 3.3.0):
/usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
05:49:06.233 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-3.3.0-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
[Wed Sep 10 05:49:06 GMT 2025] MarkDuplicates --INPUT Tnaive_24h_act_repB_S4.bam --OUTPUT Tnaive_24h_act_repB_S4.md.bam --METRICS_FILE Tnaive_24h_act_repB_S4.md.MarkDuplicates.metrics.txt --REMOVE_DUPLICATES false --ASSUME_SORTED true --TMP_DIR tmp --VALIDATION_STRINGENCY LENIENT --REFERENCE_SEQUENCE GRCh38.primary_assembly.genome.fa --MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP 50000 --MAX_FILE_HANDLES_FOR_READ_ENDS_MAP 8000 --SORTING_COLLECTION_SIZE_RATIO 0.25 --TAG_DUPLICATE_SET_MEMBERS false --REMOVE_SEQUENCING_DUPLICATES false --TAGGING_POLICY DontTag --CLEAR_DT true --DUPLEX_UMI false --FLOW_MODE false --FLOW_DUP_STRATEGY FLOW_QUALITY_SUM_STRATEGY --FLOW_USE_END_IN_UNPAIRED_READS false --FLOW_USE_UNPAIRED_CLIPPED_END false --FLOW_UNPAIRED_END_UNCERTAINTY 0 --FLOW_UNPAIRED_START_UNCERTAINTY 0 --FLOW_SKIP_FIRST_N_FLOWS 0 --FLOW_Q_IS_KNOWN_END false --FLOW_EFFECTIVE_QUALITY_THRESHOLD 15 --ADD_PG_TAG_TO_READS true --DUPLICATE_SCORING_STRATEGY SUM_OF_BASE_QUALITIES --PROGRAM_RECORD_ID MarkDuplicates --PROGRAM_GROUP_NAME MarkDuplicates --READ_NAME_REGEX <optimized capture of last three ':' separated fields as numeric values> --OPTICAL_DUPLICATE_PIXEL_DISTANCE 100 --MAX_OPTICAL_DUPLICATE_SET_SIZE 300000 --VERBOSITY INFO --QUIET false --COMPRESSION_LEVEL 5 --MAX_RECORDS_IN_RAM 500000 --CREATE_INDEX false --CREATE_MD5_FILE false --help false --version false --showHidden false --USE_JDK_DEFLATER false --USE_JDK_INFLATER false
[Wed Sep 10 05:49:06 GMT 2025] Executing as root@fcada74b96e5 on Linux 3.10.0-1160.59.1.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 22.0.1-internal-adhoc.conda.src; Deflater: Intel; Inflater: Intel; Provider GCS is available; Picard version: Version:3.3.0
INFO 2025-09-10 05:49:06 MarkDuplicates Start of doWork freeMemory: 47987624; totalMemory: 58720256; maxMemory: 13748928512
INFO 2025-09-10 05:49:06 MarkDuplicates Reading input file and constructing read end information.
INFO 2025-09-10 05:49:06 MarkDuplicates Will retain up to 49814958 data points before spilling to disk.
[Wed Sep 10 05:49:06 GMT 2025] picard.sam.markduplicates.MarkDuplicates done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=713031680
To get help, see <http://broadinstitute.github.io/picard/index.html#GettingHelp>
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "htsjdk.samtools.SAMReadGroupRecord.getReadGroupId()" because the return value of "htsjdk.samtools.SAMRecord.getReadGroup()" is null
at picard.sam.markduplicates.MarkDuplicates.buildSortedReadEndLists(MarkDuplicates.java:558)
at picard.sam.markduplicates.MarkDuplicates.doWork(MarkDuplicates.java:270)
at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:281)
at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:105)
at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:115)
Work dir:
/storage-daredevil/sammyseq_nfcore/Analisi/Linfociti/CD4/51_bp/work/1d/bcc4f0e42fbfd4ef35d277841bb40a
Container:
quay.io/biocontainers/picard:3.3.0--hdfd78af_0
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
-- Check '.nextflow.log' file for details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
-- Check '.nextflow.log' file for details
I don’t have much experience with paired-end inputs, so I’d like to ask whether anyone recognizes this type of error. Could it mean that the pipeline currently doesn’t handle PE data correctly and therefore the code needs to be updated?
Thanks in advance
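For what it's worth, that NullPointerException is what Picard throws when a read has no RG tag (SAMRecord.getReadGroup() returns null), so the BAM likely lacks an @RG header line rather than being a paired-end problem per se. A hedged config sketch, with a hypothetical process selector for whichever aligner the pipeline uses, to make the aligner write read groups:

process {
    // hypothetical selector; adjust to the pipeline's actual alignment process
    withName: 'BWA_MEM' {
        ext.args = { "-R '@RG\\tID:${meta.id}\\tSM:${meta.id}\\tPL:ILLUMINA'" }
    }
}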
Lis Arend
09/10/2025, 7:25 AM
Benjamin Story
09/10/2025, 2:17 PM
export NXF_HOME=/mnt/sample; export NXF_OPTS='-Xms4g -Xmx6g -XX:+UseG1GC'; cd $NXF_HOME; echo $PWD; nextflow run /mnt/HDD2/test/vep_module/main.nf -with-docker quay.io/biocontainers/ensembl-vep:111.0--pl5321h2a3209d_0 --my_id 'sample' --vcf '/mnt/HDD2/sample/merge.vcf.gz'
I've been getting this intermittent Java crash since updating Java to version 17 a couple of weeks ago (late July), due to the requirements of Nextflow v25+. It worked before on Java 11 with zero crashes for over a year. All of this is on an Ubuntu server.
I'm launching around 20 tasks in parallel and usually they all work (one crash occurred the day I updated Java), so I thought maybe it was due to updating Nextflow. Since then everything had been running smoothly (at least 4 runs of 20 samples each). Now today I got 2 crashes (the server RAM was heavily used), so I thought maybe it was that. I killed all processes and relaunched, but then a random different process failed. Any thoughts on the source of this? Maybe some OOM I'm not understanding. I dropped the number of parallel processes from 20 to 8, but it still happened. Any ideas?
[2] "Downloading nextflow dependencies. It may require a few seconds, please wait .. \r\033[K"
[3] " N E X T F L O W ~ version 25.04.6"
[4] ""
[5] "Launching `/mnt/HDD2/test/vep_module/main.nf` [distraught_khorana] DSL2 - revision: 3a0ff5ed42"
[6] ""
[7] "#"
[8] "# A fatal error has been detected by the Java Runtime Environment:"
[9] "#"
[10] "# SIGSEGV (0xb) at pc=0x00007f7a8180e55a, pid=73649, tid=957"
[11] "#"
[12] "# JRE version: OpenJDK Runtime Environment (17.0.7+7) (build 17.0.7+7-Ubuntu-0ubuntu118.04)"
[13] "# Java VM: OpenJDK 64-Bit Server VM (17.0.7+7-Ubuntu-0ubuntu118.04, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)"
[14] "# Problematic frame:"
[15] "# C [ld-linux-x86-64.so.2+0x1d55a]"
[16] "#"
[17] "# Core dump will be written. Default location: Core dumps may be processed with \"/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E\" (or dumping to /mnt/HDD2/test/core.73649)"
[18] "#"
[19] "# An error report file with more information is saved as:"
[20] "# /mnt/HDD2/test/hs_err_pid73649.log"
[21] "#"
[22] "# If you would like to submit a bug report, please visit:"
[23] "# Unknown"
[24] "# The crash happened outside the Java Virtual Machine in native code."
[25] "# See problematic frame for where to report the bug."
[26] "#"
Eva Gunawan
09/12/2025, 4:45 PM
if (ch_no_ntc == "false") {
CREATE_REPORT (
stuff...
)
}
if (ch_no_ntc == "true") {
CREATE_REPORT_NO_NTC (
stuff...
)
}
When I view ch_no_ntc, it shows "true". But it seemingly skips the module regardless of meeting the if condition. I have even added the modules to another troubleshooting if statement where ch_no_ntc is being created:
if (ch_kraken_ntc == "empty" && ch_ntc_check == "empty" ) {
Channel.of("false")
.set{ ch_no_ntc }
CREATE_REPORT (
stuff...
)
} else {
Channel.of("true")
.set{ ch_no_ntc }
CREATE_REPORT_NO_NTC (
stuff...
)
}
For some reason, it still ends up being skipped regardless of where I put it. I can run both of the modules just fine outside of the if statements. Just as a test, I've tried using a param denoted in the nextflow.config. For example:
if (params.ntc_present == "true") {
CREATE_REPORT (
stuff...
)
}
if (params.ntc_present == "false") {
CREATE_REPORT_NO_NTC (
stuff...
)
}
Both modules work if there is a param set like this, but I need to determine if it is present inside the workflow itself. Any suggestions/advice? Thanks in advance 😄
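A likely explanation, with a sketch under assumed channel names: ch_no_ntc is a channel object, so ch_no_ntc == "true" compares a dataflow queue against a String while the workflow graph is being built, which is always false, and both if blocks are skipped; params work because they are plain values. One pattern is to gate each module on a filtered copy of the flag instead (ch_report_input is hypothetical here):

// each of these emits a single item only when the flag matches
ch_ntc_present = ch_no_ntc.filter { it == "false" }
ch_ntc_absent  = ch_no_ntc.filter { it == "true" }

// combine appends the flag to each input tuple and map drops it again,
// so a module only receives input when its flag channel is non-empty
CREATE_REPORT        ( ch_report_input.combine( ch_ntc_present ).map { it[0..-2] } )
CREATE_REPORT_NO_NTC ( ch_report_input.combine( ch_ntc_absent  ).map { it[0..-2] } )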
Luis Heinzlmeier
09/15/2025, 8:25 AM
I run nf-test test tests/default.nf.test --profile +singularity --update-snapshot. However, when I run nf-test in Codespaces, I get the following error message (I do not get this error when I run the pipeline locally):
Sep-14 11:48:46.861 [Actor Thread 66] ERROR nextflow.extension.OperatorImpl - @unknown
org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:654)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:161)
at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:57)
at org.yaml.snakeyaml.comments.CommentEventsCollector$1.peek(CommentEventsCollector.java:43)
at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:136)
at org.yaml.snakeyaml.comments.CommentEventsCollector.collectEvents(CommentEventsCollector.java:116)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:291)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:216)
at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
at org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:396)
at org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:361)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:329)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:218)
at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:141)
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:167)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:507)
at org.yaml.snakeyaml.Yaml.load(Yaml.java:448)
at nextflow.file.SlurperEx.load(SlurperEx.groovy:67)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at Script_5c4e8d4051efa81e.processVersionsFromYAML(Script_5c4e8d4051efa81e:82)
at jdk.internal.reflect.GeneratedMethodAccessor257.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:343)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at Script_5c4e8d4051efa81e$_softwareVersionsToYAML_closure2.doCall(Script_5c4e8d4051efa81e:101)
at jdk.internal.reflect.GeneratedMethodAccessor256.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321)
at nextflow.extension.MapOp$_apply_closure1.doCall(MapOp.groovy:56)
at jdk.internal.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:343)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:328)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:280)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1007)
at groovy.lang.Closure.call(Closure.java:433)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.onMessage(DataflowOperatorActor.java:108)
at groovyx.gpars.actor.impl.SDAClosure$1.call(SDAClosure.java:43)
at groovyx.gpars.actor.AbstractLoopingActor.runEnhancedWithoutRepliesOnMessages(AbstractLoopingActor.java:293)
at groovyx.gpars.actor.AbstractLoopingActor.access$400(AbstractLoopingActor.java:30)
at groovyx.gpars.actor.AbstractLoopingActor$1.handleMessage(AbstractLoopingActor.java:93)
at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:132)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Sep-14 11:48:46.891 [Actor Thread 66] DEBUG nextflow.Session - Session aborted -- Cause: while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
Test [5d0fca1c] '-profile test' Assertion failed:

assert workflow.success
       |        |
       workflow false
FAILED (534.124s)
Assertion failed:
1 of 2 assertions failed
Nextflow stdout:
ERROR ~ while parsing a block mapping
in 'reader', line 2, column 5:
echo mkdir -p failed for path /h ...
^
expected <block end>, but found '<scalar>'
in 'reader', line 2, column 79:
... /.config/matplotlib: [Errno 30] Read-only file system: '/home/gi ...
^
-- Check script '/workspaces/hadge/subworkflows/nf-core/utils_nfcore_pipeline/main.nf' at line: 82 or see '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for more details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: <https://nf-co.re/docs/usage/troubleshooting>
-- Check '/workspaces/hadge/.nf-test/tests/5d0fca1c9bc3a6b101ae0cb52e6a311a/meta/nextflow.log' file for details
Nextflow stderr:
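A hedged reading of the trace above: one module's versions.yml seems to contain the matplotlib warning text (mkdir -p failed ... Read-only file system) instead of valid YAML, which is what processVersionsFromYAML then chokes on. If so, pointing MPLCONFIGDIR at a writable location, e.g. via the config env scope, may silence the warning:

env {
    // writable location for matplotlib's config cache inside the container
    MPLCONFIGDIR = '/tmp/matplotlib'
}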
Nadia Sanseverino
09/15/2025, 2:48 PM
nf-core modules test deeptools/bigwigcompare, nf-core modules test deeptools/bigwigcompare --profile docker, and nf-core modules test deeptools/bigwigcompare --profile conda (all launched from the modules root dir) just run indefinitely.
But if I run nf-test test modules/nf-core/deeptools/bigwigcompare/tests/main.nf.test directly, it works. The tutorial page https://nf-co.re/docs/tutorials/tests_and_test_data/nf-test_comprehensive_guide has a broken link for '3. Testing modules', and the Modules tutorial doesn't say much about nf-test, so I can't find a reason why the nf-core modules test command doesn't work.
Thank you to anyone willing to look into this 😊
Andries van Tonder
09/15/2025, 3:06 PM
Helen Huang
09/15/2025, 9:04 PM
ERROR ~ Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- See log file for details
Here’s the error in the log:
DEBUG nextflow.processor.PublishDir - Failed to publish file: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/work/aa/48b96b1bc55c153b2885f0cf2f2ea6/samplesheet.valid.csv; to: /mnt/biggie/Signaling_Systems_Drive/Users/Helen/nf_atac_test/pipeline_info/samplesheet.valid.csv [copy] -- attempt: 4; reason: Input/output error
Ubuntu server: 18.04 LTS
Nextflow version: 25.04.6
SMB version: 3.0 (tried different versions, none worked)
The NAS was mounted with noperm, which means everyone can write without permission checks, so it’s not a permission issue. I understand NFS mounting might work better but we have reasons to use SMB mounting. (The NAS is mounted in many different Windows and Mac systems as well, so NFS mounting is going to cause issues.)
Thank you all! Our lab used to use pipelines we built ourselves, but now we want to move towards using nf-core pipelines.
Martin Rippin
09/16/2025, 11:35 AM
I have a channel structured like [ ['id1', 'id1/path/to/MetricsOutput.tsv', 'id1/path/to/RunCompletionStatus.xml'], ['id2', 'id2/path/to/MetricsOutput.tsv', 'id2/path/to/RunCompletionStatus.xml'], ... ]
I am struggling to define the structure correctly inside the process. I tried something like:
input:
tuple tuple(val(id), path(tsv), path(xml))
but that does not work. Also the files will be mounted with their basenames, which clash, and I don't know how to solve that either. Does anyone have an idea? I was thinking of just giving the root dir of all files and globbing inside the process, but maybe there is a more sophisticated way?
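Nested tuple(...) declarations aren't supported in process inputs, so one hedged sketch (process and channel names are hypothetical, and the stageAs wildcard semantics are an assumption worth verifying against the staging docs for your Nextflow version) is to restructure each emission into parallel lists:

ch_runs
    .map { rows ->
        [ rows.collect { it[0] },    // all ids
          rows.collect { it[1] },    // all MetricsOutput.tsv paths
          rows.collect { it[2] } ]   // all RunCompletionStatus.xml paths
    }
    .set { ch_grouped }

process SUMMARISE_RUNS {
    input:
    // stageAs gives each file a distinct staged path, avoiding basename clashes
    tuple val(ids), path(tsvs, stageAs: 'run*/MetricsOutput.tsv'), path(xmls, stageAs: 'run*/RunCompletionStatus.xml')

    script:
    """
    echo ${ids.join(' ')}
    """
}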
Hannes Kade
09/16/2025, 5:47 PM
.nextflow.log
[romantic_fourier] DSL2 - revision: adf043ce82
ERROR ~ Script compilation error
- file : /mnt/wsl/docker-desktop-bind-mounts/Ubuntu/bcd7a1f898c503385f2a83c3ba853c7acd3d7bb6b1ddd98b63de37dcda26623f/.nextflow.log
- cause: Unexpected input: ':' @ line 1, column 13.
Sep-16 18:36:53.621 [main] DEBUG nextflow.cli.Launcher - $> nextflow run .nextflow.log
^
1 error
Sylvia Li
09/16/2025, 10:17 PM
Joshua
09/17/2025, 3:43 AM
Hovakim Grabski
09/17/2025, 9:44 AM
Agrima Bhatt
09/17/2025, 12:35 PM
My PR is failing the nf-test / docker CI check on several test cases (e.g. 1/7, 2/7, etc.), but when I manually run the pipeline locally everything works fine. The pre-commit and linting checks pass, and some nf-test checks (like 6/7) are successful, but most fail after 1–3 minutes.
What could be causing these nf-test docker failures in CI, especially when the pipeline runs without issues on my machine?
Is there something specific I should check?
Any advice on debugging nf-test failures would be appreciated! My PR : https://github.com/nf-core/seqinspector/pull/127