# help

    Carla Pereira-Garcia

    08/08/2025, 9:53 AM
I am trying to run the nf-core/metatdenovo pipeline on the IFB cluster and it keeps crashing. If I post the error, could someone help me?

    Laura Helou

    08/11/2025, 6:33 AM
Hello, I have a dumb question. Where can I find which containers are used in an nf-core pipeline, and which versions, for traceability? I am currently trying to use a "simple nf-core pipeline", bamtofastq. Thank you very much for your help!

    Tyler Gross

    08/11/2025, 11:43 PM
Has anyone successfully integrated an nf-core pipeline into a larger, custom Nextflow pipeline? The goal is to be able to treat several nf-core pipelines as 'workflows' within a much larger neoantigen prediction pipeline I'm building. It requires a lot of steps, and existing nf-core pipelines (sarek, rnaseq, hlatyping, epitopeprediction) cover the majority of them. I've gotten it to work manually, but it could be done automatically since everything is connected.

    Luuk Harbers

    08/12/2025, 11:42 AM
I've run into the issue where certain channels are dropped after `-resume`, after a specific `join` operation in nf-core/scnanoseq (https://github.com/nf-core/scnanoseq/blob/05d705a301a262669c2252c890106e31a28a120e/subworkflows/local/quantify_scrna_isoquant.nf#L95C16-L95C17). I was reading some of the discussions and also found the gotcha explaining some of it here: https://midnighter.github.io/nextflow-gotchas/gotchas/join-on-map-fails-resume/ I was now simply wondering what the best way is to implement this in (nf-core) workflows. If it's the proposed solution in that gotcha, I'm a little bit lost on how to properly include that in a pipeline :') Edit: maybe slightly different issue (?)
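For reference, the workaround described in that gotcha is to join on a stable scalar key instead of the full meta map; a minimal sketch of how that could look inside a (sub)workflow (channel names here are hypothetical, not from scnanoseq):

```groovy
// Hypothetical channels: ch_bam = [ meta, bam ], ch_bai = [ meta, bai ].
// Joining on the full meta map can fail after -resume; keying on meta.id
// (a plain string) is stable across runs:
ch_bam
    .map { meta, bam -> [ meta.id, meta, bam ] }
    .join( ch_bai.map { meta, bai -> [ meta.id, bai ] } )
    .map { id, meta, bam, bai -> [ meta, bam, bai ] }
    .set { ch_bam_bai }
```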

    Yasset Perez Riverol

    08/12/2025, 4:49 PM
Has anyone faced this error in the CI/CD?
```
Installed distributions
Creating settings.xml with server-id: github
Overwriting existing file /home/runner/.m2/settings.xml
Run ./setup-nextflow/subaction
  with:
    version: 24.10.5
    all: false
  env:
    NXF_ANSI_LOG: false
    NXF_SINGULARITY_CACHEDIR: /home/runner/work/quantms/quantms/.singularity
    NXF_SINGULARITY_LIBRARYDIR: /home/runner/work/quantms/quantms/.singularity
    CAPSULE_LOG: none
    TEST_PROFILE: test_dda_id
    EXEC_PROFILE: docker
    JAVA_HOME: /opt/hostedtoolcache/Java_Zulu_jdk/17.0.16-8/x64
    JAVA_HOME_17_X64: /opt/hostedtoolcache/Java_Zulu_jdk/17.0.16-8/x64
Input version '24.10.5' resolved to Nextflow undefined
Error: Cannot read properties of undefined (reading 'includes')
Error: Could not run 'nextflow help'. Error: Unable to locate executable file: nextflow. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
```

    phuaxmb

    08/13/2025, 5:08 AM
Hi, I have a bit of a naive question. If we decide that we'd rather not run one of the tools currently set to run, cancel the pipeline, and resume it without said tool, does that work, or does the entire pipeline restart?

    shanshan wang

    08/13/2025, 10:25 AM
Hello, I am new here and I am trying to use nf-core/scrnaseq with STARsolo to run my scRNA-seq data. Because it is 150 bp paired-end sequencing, the barcode read length is not 28 (the default) but 150. What is the best practice for using customised parameters like:
```
--soloUMIfiltering - --soloMultiMappers EM --soloCBstart 1 --soloCBlen 16 --soloUMIstart 17 --soloUMIlen 12
```
with:
```
N E X T F L O W  ~ version 25.04.2
nextflow run nf-core/scrnaseq -r 2.0.0
```
Let me know if more info is required. Thank you!
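In nf-core pipelines, extra tool arguments are usually passed through `ext.args` in a custom config supplied with `-c`; a sketch for this case (the exact process selector name for this pipeline version is an assumption to be checked against the pipeline source):

```groovy
// custom.config -- pass with: nextflow run nf-core/scrnaseq -r 2.0.0 -c custom.config
process {
    withName: 'STARSOLO' {
        ext.args = '--soloUMIfiltering - --soloMultiMappers EM --soloCBstart 1 --soloCBlen 16 --soloUMIstart 17 --soloUMIlen 12'
    }
}
```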

    Rayan Hassaïne

    08/14/2025, 12:32 PM
Hi everyone, dumb question: can someone explain the impact of the `executor.perJobMemLimit` vs `executor.perTaskReserve` settings with the `lsf` executor? What would happen when setting one (or both) to true, and vice versa? Greatly appreciate it 🙏
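For context, both are LSF-specific settings in the executor scope; a sketch of where they live (semantics roughly: `perJobMemLimit` treats the memory request as a per-job rather than per-slot limit, `perTaskReserve` controls whether memory is reserved per task; check the Nextflow executor docs for your version):

```groovy
process.executor = 'lsf'

executor {
    perJobMemLimit = true    // interpret the memory limit per job, not per slot
    perTaskReserve = false   // whether the memory reservation is made per task
}
```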

    James Fellows Yates

    08/15/2025, 12:59 PM
Is there anyone here with Python scripting experience (pandas/numpy/scipy/matplotlib) who would be willing to look at a likely 'tiny' papercut bug in mag? https://github.com/nf-core/mag/issues/383 TL;DR: fixing a Python script to handle the case where you have a distance matrix of just one object (so there is no distance)?

    Zeyad Ashraf

    08/16/2025, 1:12 AM
    Hello everyone, I hope you are all doing well. I have never used Nextflow before (every time I look through the website, I get a little overwhelmed, haha), and I have some scripts in R/Python that I want to link up into a nice pipeline. Can someone suggest a good tutorial for doing so? Thank you for your time :D

    Victor

    08/18/2025, 9:33 AM
I am trying to download a pipeline and the associated containers for the rnavar pipeline, using `nf-core pipeline download`. However, I am using a Mac for this setup, and Singularity/Apptainer is required to pull the containers:
```
ERROR    Singularity/Apptainer is needed to pull images, but it is not installed or not in $PATH
```
Setting up Singularity/Apptainer on a Mac is... more work than I expected. Is there another way to pull the containers outside of the nf-core tools system?

    Ben

    08/19/2025, 11:22 AM
A question to the Python professionals here (which I am not, obviously): I read a number of posts regarding the use of custom (unpublished) Python projects inside a Nextflow pipeline. My problem seems to have a different root, though. My pipeline runs with Docker and I try to use officially released Docker images throughout. One of the tools, however, sets some parameters in a `params.py` module which is part of the tool's code base inside the Docker image. I tried adding a modified `params.py` to the `/bin` directory in the pipeline and adding this path to the `PYTHONPATH` variable. I checked that `PYTHONPATH` is correctly set and that `params.py` is accessible inside the Nextflow module. However, it does not seem to work, and the module (and the parameters) are still loaded from the original file. Any ideas why that is? Any hints on how to deal with tools that hard-code parameters (other than building custom Docker images)?
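One possible explanation (an assumption, not a diagnosis): Python puts the directory of the executed script first on `sys.path`, ahead of anything in `PYTHONPATH`, so a `params.py` sitting next to the tool's entry point wins over yours. If that is the cause, one blunt workaround is to overwrite the file inside the (writable) container at task start; a hedged sketch with hypothetical image, paths, and tool name:

```groovy
process TOOL_RUN {
    container 'the/official-image:1.0'  // hypothetical image

    input:
    tuple val(meta), path(reads)

    script:
    """
    # Overwrite the hard-coded parameters module shipped in the image
    # (hypothetical location /opt/tool/params.py) with the patched copy from bin/:
    cp ${projectDir}/bin/params.py /opt/tool/params.py
    tool_main ${reads}
    """
}
```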

    Fredrick

    08/20/2025, 1:48 AM
A naive question: I have created a minimal dataset to test the #C084Z8NLZFB pipeline at https://github.com/fmobegi/nf-core-test-datasets/tree/abotyper. The guiding notes (https://nf-co.re/docs/tutorials/adding_a_pipeline/test_data) suggest creating a new branch at https://github.com/nf-core/test-datasets/branches/active and then making a pull request targeting that branch. For some reason, I am unable to create this branch. Any pointers would be appreciated.

    Avani Bhojwani

    08/21/2025, 1:32 AM
I noticed that the results for the denovotranscript pipeline on the website are incomplete. The pipeline failed during the Trinity module because "Essential container in task exited". I'm guessing it has to do with spot instances. Is it possible to launch a full test manually with different settings to get around this?

    Luuk Harbers

    08/21/2025, 10:08 AM
Quick question regarding Nextflow configs. I thought that if I specify multiple configs (for instance `institutional` and `test`) in a specific order, one should overwrite the other for certain values? For instance, shouldn't the `test` profile's `resourceLimits` overwrite the ones specified in the institutional config if the `test` profile is specified later? Or am I misunderstanding?
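For reference, when profiles set colliding values, the one listed later on the command line should win; a minimal sketch (values hypothetical):

```groovy
// nextflow run nf-core/<pipeline> -profile institutional,test
// -> `test` is applied last, so its resourceLimits win for colliding keys.
profiles {
    institutional {
        process.resourceLimits = [ cpus: 64, memory: 512.GB, time: 96.h ]
    }
    test {
        process.resourceLimits = [ cpus: 4, memory: 15.GB, time: 6.h ]
    }
}
```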

    François-Xavier Stubbe

    08/21/2025, 12:54 PM
    Hey! I'm trying to dynamically set a scale_factor for deeptools_bamcoverage. Failing so far. Has anyone achieved this?
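One pattern that can work for dynamic, per-sample arguments is an `ext.args` closure evaluated in the task context; a sketch (assumes a `scale_factor` value has been stashed in the meta map upstream, which is a hypothetical name):

```groovy
process {
    withName: 'DEEPTOOLS_BAMCOVERAGE' {
        // The closure is evaluated per task, so meta is available:
        ext.args = { "--scaleFactor ${meta.scale_factor}" }
    }
}
```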

    James Fellows Yates

    08/22/2025, 12:13 PM
Is it still recommended/necessary to run `.first()` after mixing module versions into `ch_versions`? Given `.unique()` is run prior to passing to MultiQC, is there any overhead benefit to taking just the `versions.yml` from the first module invocation vs passing all `versions.yml` files and running unique?
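For context, the two patterns in question look like this (a sketch; FASTQC stands in for any module):

```groovy
// Pattern 1: only the first invocation's versions file enters the channel,
// keeping ch_versions small when a module runs once per sample:
ch_versions = ch_versions.mix( FASTQC.out.versions.first() )

// Pattern 2: mix every invocation and rely on the .unique() applied
// before the versions are handed to MultiQC:
ch_versions = ch_versions.mix( FASTQC.out.versions )
```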

    Sylvia Li

    08/22/2025, 6:26 PM
Workflow gets hung up on a subworkflow's `.view()` call; `view()` works fine outside of it. How I am calling the subworkflow:
```groovy
def longpac_longpolish = SAMPLESHEETFILTERING.out.list_longpac_longPolish
def flattened_result = longpac_longpolish
    .filter { value -> value instanceof List && !value.isEmpty() }
    .flatMap()
flattened_result.view()
PACBIO_SUBWORKFLOW(flattened_result)
```
It views() fine, emitting [[id:Sample1, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA] [[id:Sample2, polish:long, basecaller:NA], short1NA, short2NA, TestDatasetNfcore/Pacbio_illuminaPolish/PacbioSRR27591472.hifi.fastq.gz, assemblyNA], but when I pass it to the subworkflow:
```groovy
workflow PACBIO_SUBWORKFLOW {

    take:
    ch_input_full // channel: [ val(meta), files/data, files/data, files/data..etc ]
    // bam_file
    // polish
    // gambitdb
    // krakendb

    main:
    def ch_output = Channel.empty()
    def ch_versions = Channel.empty()
    println("hello")
    ch_input_full.view()
```
It just prints hello and gets hung up; it doesn't seem to ever print the channel values, just sits there. I don't understand why. My nextflow.log also says all processes finished, all barriers passed:
```
Aug-22 13:23:17.907 [main] DEBUG nextflow.script.ScriptRunner - > Awaiting termination
Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await
Aug-22 13:23:17.907 [main] DEBUG nextflow.Session - Session await > all processes finished
Aug-22 13:23:17.908 [main] DEBUG nextflow.Session - Session await > all barriers passed
```

    Juan E. Arango Ossa

    08/22/2025, 6:33 PM
As we know, process names get truncated with ANSI output. I know I can get full names if I use `-ansi-log false`, but I do want the ANSI output to have the latest coloured output. I saw in this issue @Phil Ewels was suggesting something with full names, as in the pic. Was this implemented? Can I get something like that with ANSI logs and the full process name, or at least a longer one? As it is, it's still very challenging to read.

    Sylvia Li

    08/22/2025, 10:38 PM
If I have 2 channels from channel factories, ch_1 = Channel.of(1,2,3) and ch_2 = Channel.of(4,5,6), and I input them into a subworkflow together, subworkflow(ch_1, ch_2), will they always emit in order? So the first value of ch_1 will be with the first value of ch_2, and so on?
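For what it's worth, relying on emission order across two independent queue channels is generally fragile; pairing items explicitly is the safer pattern. A sketch:

```groovy
ch_1 = Channel.of(1, 2, 3)
ch_2 = Channel.of(4, 5, 6)

// Order-based pairing (only safe when both channels emit deterministically):
ch_1.merge(ch_2).view()

// For [ meta, ... ] shaped channels, key-based pairing with join is more
// robust than relying on ordering:
// ch_a.join(ch_b)   // matches items on the first element
```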

    Nour El Houda Barhoumi

    08/23/2025, 3:02 PM
    Hello, I hope you are doing well. In my BAM file, I noticed that some reads with the same QNAME appear multiple times but have different MAPQ values (for example, one is 42 and the other is 0), and they also have different flags. If I remove the reads with MAPQ = 0, will this risk producing an unbalanced BAM file? Thank you.

    yanzi L.

    08/26/2025, 10:59 PM
I need help running nf-core/phaseimpute. I ran the test profile and got this error; it used to work for me.

    Fredrick

    08/28/2025, 2:07 AM
I need help with GATK4. With this workflow setup, GATK4_HAPLOTYPECALLER only runs on a single sample. Am I doing something wrong? I'm attempting some gymnastics here, since some of the gatk4 subtools take references with metadata while others don't. I realise there is a lot of joining and splitting involved as-is, but bear with me...
```groovy
FASTQ_ALIGN_BWA (
    ch_samplesheet,                                  // channel input reads: [ val(meta2), path(index) ]
    PREPARE_REFERENCE_INDEXES.out.bwa_index,         // channel BWA index: [ val(meta2), path(index) ]
    true,                                            // boolean value: true/false for sorting BAM files
    fasta,                                           // channel reference fasta: [ val(meta3), path(fasta) ]
)
ch_versions = ch_versions.mix( FASTQ_ALIGN_BWA.out.versions.first() )

ch_bam_bai = FASTQ_ALIGN_BWA.out.bam.join( FASTQ_ALIGN_BWA.out.bai, by: 0)

// Extract BAM and BAI channels from joined input
ch_bam = ch_bam_bai.map { meta, bam, bai -> [meta, bam] }
ch_bai = ch_bam_bai.map { meta, bam, bai -> [meta, bai] }

/*
MODULE: GATK4_ADDORREPLACEREADGROUPS
*/
GATK4_ADDORREPLACEREADGROUPS (
    ch_bam,
    fasta,
    fasta_fai
)
ch_versions = ch_versions.mix(GATK4_ADDORREPLACEREADGROUPS.out.versions.first())

/*
MODULE: GATK4_MARKDUPLICATES
*/

// DEBUG SANITY CHECKS: create views for debugging
// GATK4_ADDORREPLACEREADGROUPS.out.bam.view { "GATK4_MARKDUPLICATES input BAM: $it" }
// fasta.map{ meta, fasta -> fasta }.view { "GATK4_MARKDUPLICATES input FASTA: $it" }
// fasta_fai.map{ meta, fai -> fai }.view { "GATK4_MARKDUPLICATES input FASTA_FAI: $it" }

GATK4_MARKDUPLICATES (
    GATK4_ADDORREPLACEREADGROUPS.out.bam,
    fasta.map{ meta, fasta -> fasta },
    fasta_fai.map{ meta, fai -> fai }
)
ch_versions = ch_versions.mix(GATK4_MARKDUPLICATES.out.versions.first())

/*
MODULE: GATK4_CALIBRATEDRAGSTRMODEL
*/

// DEBUG SANITY CHECKS: create views for debugging
// GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai).view { "GATK4_CALIBRATEDRAGSTRMODEL input BAM+BAI: $it" }
// fasta.map{ meta, fasta -> fasta }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA: $it" }
// fasta_fai.map{ meta, fai -> fai }.view { "GATK4_CALIBRATEDRAGSTRMODEL input FASTA_FAI: $it" }
// genome_dict.view { "GATK4_CALIBRATEDRAGSTRMODEL input GENOME_DICT: $it" }
// str_table.view { "GATK4_CALIBRATEDRAGSTRMODEL input STR_TABLE: $it" }

GATK4_CALIBRATEDRAGSTRMODEL (
    GATK4_MARKDUPLICATES.out.bam.join(GATK4_MARKDUPLICATES.out.bai),
    fasta.map{ meta, fasta -> fasta },
    fasta_fai.map{ meta, fai -> fai },
    genome_dict.map{ meta, dict -> dict },
    str_table
)
ch_versions = ch_versions.mix(GATK4_CALIBRATEDRAGSTRMODEL.out.versions.first())

/*
MODULE: GATK4_HAPLOTYPECALLER
Expected input:
    tuple val(meta), path(input), path(input_index), path(intervals), path(dragstr_model)
    tuple val(meta2), path(fasta)
    tuple val(meta3), path(fai)
    tuple val(meta4), path(dict)
    tuple val(meta5), path(dbsnp)
    tuple val(meta6), path(dbsnp_tbi)
*/
GATK4_MARKDUPLICATES.out.bam
    .join(GATK4_MARKDUPLICATES.out.bai, by: 0, failOnMismatch: true)
    .join(GATK4_CALIBRATEDRAGSTRMODEL.out.dragstr_model, by: 0, failOnMismatch: true)
    .combine(bed)
    .map { meta, bam, bai, model, bed -> [meta, bam, bai, bed, model] }
    .set { ch_gatk_haplo_input }

// ch_gatk_haplo_input.view() { "GATK4_HAPLOTYPECALLER INPUT: $it" }
GATK4_HAPLOTYPECALLER (
    ch_gatk_haplo_input,
    fasta,
    fasta_fai,
    genome_dict,
    dbsnp.map { meta, vcf -> [meta, vcf] },
    dbsnp_tbi.map { tbi -> ["dbsnp_tbi", tbi] }
)
ch_versions = ch_versions.mix(GATK4_HAPLOTYPECALLER.out.versions.first())
```
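One common cause of a process running for only one sample (an assumption here, not a diagnosis of this code) is a reference passed as a single-element queue channel: its one item is consumed by the first task, leaving nothing for the remaining samples. Converting references to value channels makes them reusable; a sketch:

```groovy
// A one-element queue channel is consumed by the first task; .first() (or
// .collect()) turns it into a value channel that every task can read:
ch_fasta     = fasta.first()
ch_fasta_fai = fasta_fai.first()

GATK4_MARKDUPLICATES (
    GATK4_ADDORREPLACEREADGROUPS.out.bam,
    ch_fasta.map     { meta, fasta -> fasta },
    ch_fasta_fai.map { meta, fai -> fai }
)
```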

    Richard Francis

    08/28/2025, 4:14 PM
    Originally posted in #C07MFUQAR1B but reposted here in case anyone can provide assistance. Many thanks in advance.

    Thiseas C. Lamnidis

    08/29/2025, 9:29 AM
Hi all! I am trying to add a log message on successful pipeline completion (in addition to the standard `Pipeline completed successfully` one). At first I tried adding it to `subworkflows/nf-core/utils_nfcore_pipeline/main.nf`, but doing so breaks linting because the file differs from the remote. I could ignore this check in `.nf-core.yml`, but that seems dangerous, as it is generally a good idea to keep those important core functions checked, imo. So I made my own copy of the `completionSummary` function, which I added directly within `subworkflows/local/utils_nfcore_eager_pipeline/main.nf`. It looks like this:
```groovy
def easterEgg(monochrome_logs) {
    def colors = logColours(monochrome_logs) as Map
    if (workflow.stats.ignoredCount == 0) {
        if (workflow.success) {
            // https://en.wiktionary.org/wiki/jw.f_pw
            log.info("-${colors.green}𓂻 𓅱 𓆑 𓊪 𓅱${colors.reset}-")
        }
    }
}
```
Here's the code from the `completionSummary` function, for reference:
```groovy
def completionSummary(monochrome_logs=true) {
    def colors = logColours(monochrome_logs) as Map
    if (workflow.success) {
        if (workflow.stats.ignoredCount == 0) {
            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Pipeline completed successfully${colors.reset}-")
        }
        else {
            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-")
        }
    }
    else {
        log.info("-${colors.purple}[${workflow.manifest.name}]${colors.red} Pipeline completed with errors${colors.reset}-")
    }
}
```
I then call my `easterEgg` function within `PIPELINE_COMPLETION`, directly after `completionSummary`, like so:
```groovy
workflow PIPELINE_COMPLETION {
    [...]
    workflow.onComplete {
        [...]
        completionSummary(monochrome_logs)
        easterEgg(monochrome_logs)
        [...]
    }
}
```
Considering it is essentially a copy of `completionSummary`, I would expect this to work, but instead I get this error:
```
-[nf-core/eager] Pipeline completed successfully-
ERROR ~ Failed to invoke `workflow.onComplete` event handler

 -- Check script './workflows/../subworkflows/local/../../subworkflows/local/utils_nfcore_eager_pipeline/main.nf' at line: 190 or see '.nextflow.log' file for more details
```
It seems I cannot access the `workflow` object to check its `.success` or `.stats.ignoredCount` attributes. The error stays the same when I flip the order of the checks, so it seems I cannot access the `workflow` object altogether. Any ideas what is going on here? This is rather unintuitive.

    Sam Sims

    08/29/2025, 11:09 AM
Hi all! I am attempting to get nf-test snapshot testing working with GitHub Actions, but I am running into some issues with file paths that seem to be causing the snapshot to fail. Locally, nf-test seems to resolve a relative path to the output file, and that relative path is saved in the snapshot (which is what I should expect, I think?). However, when I run this in GH Actions I get a full file path that just points to a work directory by the looks of things (6b5fb1e4015fc9f93a37a33a917222c3), which I assume is causing the snapshot to fail, e.g.:
```
java.lang.RuntimeException: Different Snapshot:
  [                                                                     [
      {                                                                     {
          "0": [                                                                "0": [
              [                                                                     [
                  "cchf_test",                                                          "cchf_test",
                  "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"  |        "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
              ]                                                                     ]
          ],                                                                    ],
          "warning_ch": [                                                       "warning_ch": [
              [                                                                     [
                  "cchf_test",                                                          "cchf_test",
                  "3052518.warning.json:md5,1b59b4c73ec5eb7a87a2e6b1cc810e9a"  |        "/home/runner/work/scylla/scylla/.nf-test/tests/6b5fb1e4015fc9f93a37a33a917222c3
              ]                                                                     ]
          ]                                                                     ]
      },                                                                    },
      "hcid.counts.csv:md5,c45ab01001988dc88e4469ae29a92448"                "hcid.counts.csv:md5,c45ab01001988dc88e4469ae29a92448"
  ]                                                                     ]
```
In my test I am doing something like this:
```groovy
assert snapshot(workflow.out, path("${outputDir}/cchf_test/qc/hcid.counts.csv")).match()
```
Interestingly, it seems in this example the `hcid.counts.csv` file works fine; it's just the outputs of `workflow.out` that seem to have this problem. I might be missing something obvious, but I have been stumped for a while trying to figure this out, so I thought I'd see if anyone had any ideas. Thanks 🙂

    Cheyenne

    08/29/2025, 12:29 PM
    Does the nf-core/base docker image already include things like fastqc, samtools, etc. or do I need to add those separately to my dockerfile?

    karima

    09/01/2025, 1:03 PM
    Hi all! I am currently trying to run the nf-core/rnaseq test dataset as part of learning the pipeline. I am relatively new to Nextflow and nf-core workflows. While running the pipeline, I encountered the following error:
```
ERROR ~ Error executing process > 'NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:FASTQC (RAP1_UNINDUCED_REP1)'
Caused by:
  Process requirement exceeds available memory -- req: 15 GB; avail: 14.8 GB
```
My machine specifications are RAM: 14 GB and CPUs: 8. Configuration file:
```groovy
process {
    cpus   = 4
    memory = '12 GB'
    time   = '12h'
    withLabel:process_low    { cpus = 1; memory = '4 GB';  time = '2h'  }
    withLabel:process_medium { cpus = 2; memory = '6 GB';  time = '4h'  }
    withLabel:process_high   { cpus = 4; memory = '12 GB'; time = '10h' }
}
```
Could you please advise on the best way to successfully run the test dataset?
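The 15 GB request comes from the module's own resource label, which per-label overrides may not cap everywhere; recent nf-core templates support a global cap via `resourceLimits` (a sketch; on older pipeline versions the equivalent is the `--max_memory`/`--max_cpus` pipeline parameters):

```groovy
// custom.config -- pass with: nextflow run nf-core/rnaseq -profile test,docker -c custom.config
// Caps every process request at what the machine actually has:
process {
    resourceLimits = [ cpus: 8, memory: 13.GB, time: 12.h ]
}
```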

    Fredrick

    09/03/2025, 5:03 AM
    Hi everyone 👋 I’m looking to learn more about the pipelines used in pharmacogenomics analyses. If you work in this space, I’d love to hear what tools or workflows you rely on, especially for variant calling and interpretation. I'm particularly interested in panels that span across: • Pharmacogenes (e.g. CYP2D6) • Enzyme deficiencies (e.g. G6PD) • Primary immunodeficiencies (e.g. UBA1) • Hematologic disorders Do you use nf-core pipelines, custom workflows, or something else entirely? What sequencing formats do you normally use (short-read and long-read)? Thanks in advance for any insights and recommendations

    Ugo Iannacchero

    09/03/2025, 4:03 PM
    Hi, I was wondering when the weekly help-desk will come back for European Summer Time. Thanks!