# help
  • Sylvia Li
    10/08/2025, 6:07 PM
    If I am using a custom pipeline from someone else, that uses custom local modules. If i run nf-core modules update --all, would that mess with anything? is it recommended to do it to keep versions of nf-core modules updated?
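    A hedged note for context: nf-core modules update only rewrites modules installed under modules/nf-core/ and tracked in modules.json, so custom local modules in modules/local/ are left alone. Keeping the nf-core modules current is generally encouraged, but it is worth reviewing the proposed changes first, for example (flag names as in recent nf-core/tools releases; check nf-core modules update --help):
        # save the proposed changes to a patch file so they can be reviewed before updating
        nf-core modules update --all --save-diff modules_update.diff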
  • Priyanka Surana
    10/10/2025, 7:24 AM
    For a new in-house pipeline, each module takes between 8 seconds and 5 minutes. This is extremely inefficient on the HPC. Is there a way to push the entire pipeline into a single job instead of running each module separately? We cannot run anything on the head node. Reposting here from #C0364PKGWJE
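    One common pattern, sketched here with made-up resource numbers: submit the nextflow run command itself as a single scheduler job, and switch the tasks to the local executor so they all run inside that allocation instead of becoming individual cluster jobs.
        // single_job.config (hypothetical file name), used with: nextflow run ... -c single_job.config
        process {
            executor = 'local'          // run every task inside the current allocation
        }
        executor {
            // cap the local executor at the resources requested for the wrapper job
            cpus   = 32
            memory = '128 GB'
        }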
  • Susanne Jodoin
    10/13/2025, 2:49 PM
    Dear all, I would need help from anyone having experience with nextflow secrets in pipeline-level nf-tests. I'm working on a template update of metapep https://github.com/nf-core/metapep/pulls/159 and get different results in nf-test than before. Some tests need nextflow secrets for NCBI which have been created a while ago and to which I do not have access. So far I've added the secrets from the repo to the .github/actions/nf-test/action.yml and the workflows/nf-test.yml following sarek to make sure they are used in the CI tests. When creating snapshots with my own NCBI secrets on our infrastructure or in github copilot, the generated snapshots show the same md5sums as before the template update. But in the CI tests using the metapep NCBI secrets, the md5sums and file lengths differ. Hope I am not missing something obvious. Here is one example: Failing snapshot https://github.com/nf-core/metapep/actions/runs/18095105573/job/51484296262 Before template update https://github.com/nf-core/metapep/blob/master/tests/pipeline/test_mhcnuggets_2.nf.test.snap
  • Jon Bråte
    10/16/2025, 10:38 AM
    Hi all, I'm trying to incorporate the pipeline version or revision into a process that summarizes the results from the pipeline. I've looked at the way this information gets into the MultiQC report, but I'm doing something wrong. My summarize process launches an R-script, so I tried to define some environment variables inside the process that then will be available to the R-script. My manifest block:
    manifest {
        name            = 'folkehelseinstituttet/hcvtyper'
        author          = """Jon Bråte"""
        homePage        = 'https://github.com/folkehelseinstituttet/hcvtyper'
        description     = """Assemble and genotype HCV genomes from NGS data"""
        mainScript      = 'main.nf'
        nextflowVersion = '!>=23.04.0'
        version         = '1.1.1'  // Current release version
        doi             = ''
    }
    Current code for the process:
    process SUMMARIZE {
    
        label 'process_medium'
        errorStrategy 'terminate'
    
        // Environment with R tidyverse and seqinr packages from the conda-forge channel. Created using seqera containers.
        // URLs:
        // Docker image: https://wave.seqera.io/view/builds/bd-3536dd50a17de0ab_1?_gl=1*16bm7ov*_gcl_au*MTkxMjgxNTMwMi4xNzUzNzczOTQz
        // Singularity image: https://wave.seqera.io/view/builds/bd-88101835c4571845_1?_gl=1*5trzpp*_gcl_au*MTkxMjgxNTMwMi4xNzUzNzczOTQz
        conda "${moduleDir}/environment.yml"
        container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/2a/2a1764abd77b9638883a202b96952a48f46cb0ee6c4f65874b836b9455a674d1/data':
        'community.wave.seqera.io/library/r-gridextra_r-png_r-seqinr_r-tidyverse:3536dd50a17de0ab' }"
    
        // Set environment variables for summarize.R
        env "PIPELINE_NAME",      "${workflow.manifest.name}"
        env "PIPELINE_VERSION",   "${workflow.manifest.version}"                 // Primary version source
        env "PIPELINE_REVISION",  "${workflow.revision ?: ''}"                  // Secondary (branch/tag info)
        env "PIPELINE_COMMIT",    "${workflow.commitId ?: ''}"                  // Tertiary (commit info)
    
        input:
        path samplesheet
        val stringency_1
        val stringency_2
        path 'trimmed/'
        path 'kraken_classified/'
        path 'parsefirst_mapping/'
        path 'stats_withdup/'
        path 'stats_markdup/'
        path 'depth/'
        path 'blast/'
        path 'glue/'
        path 'id/'
        path 'variation/'
    
        output:
        path 'Summary.csv'      , emit: summary
        path '*mqc.csv'         , emit: mqc
        path '*png'             , emit: png
        path "versions.yml"     , emit: versions
    
        when:
        task.ext.when == null || task.ext.when
    
        script:
        def args = task.ext.args ?: ''
    
        """
        summarize.R \\
            $samplesheet \\
            $stringency_1 \\
            $stringency_2 \\
            $args
    But the R-script prints "HCVTyper (version unknown)". Relevant code part in the R-script:
    pipeline_name    <- Sys.getenv("PIPELINE_NAME", "HCVTyper") # Use HCVTyper if not set
    pipeline_version <- Sys.getenv("PIPELINE_VERSION", "")
    pipeline_rev     <- Sys.getenv("PIPELINE_REVISION", "")
    pipeline_commit  <- Sys.getenv("PIPELINE_COMMIT", "")
    
    # Precedence: version > revision > commit
    ver_label <- dplyr::coalesce(
      na_if(pipeline_version, ""),
      na_if(pipeline_rev, ""),
      na_if(pipeline_commit, "")
    )
    
    script_name_version <- if (!is.na(ver_label)) {
      paste(pipeline_name, ver_label)
    } else {
      paste(pipeline_name, "(version unknown)")
    }
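    A possible fix, sketched on the assumption that the env lines above were meant as directives: as far as I know Nextflow has no env process directive for setting variables (env is an input/output qualifier, plus a top-level config scope), so those values never reach the task environment and the R script sees nothing. One option is to export them at the top of the script block, where workflow.manifest is still accessible:
        script:
        def args = task.ext.args ?: ''
        """
        export PIPELINE_NAME='${workflow.manifest.name}'
        export PIPELINE_VERSION='${workflow.manifest.version}'
        export PIPELINE_REVISION='${workflow.revision ?: ''}'
        export PIPELINE_COMMIT='${workflow.commitId ?: ''}'

        summarize.R \\
            $samplesheet \\
            $stringency_1 \\
            $stringency_2 \\
            $args
        """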
  • Salim Bougouffa
    10/16/2025, 12:36 PM
    Hi there, I have a question. I ran the mag workflow using the coassemble option. The master job was killed due to the time limit, but the maxbin2 job carried on and finished. So I assumed that if I reran the master job using -resume, it would use the maxbin2 job that had already finished, but it did not: it submitted a new maxbin2 job. The one that was successful took almost 10 days to finish.
  • Paweł Ciurka
    10/17/2025, 4:15 PM
    Hi all! Is there any golden path for handling a resume-from-arbitrary-process case? A use case is that in some scenarios I need to rerun the last few processes even though an initial pass of the workflow completed correctly. Any guidance welcome 🙂
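    One workaround sketch, assuming the processes to redo are known by name (the names below are placeholders): disable caching for just those processes in a small config and rerun with -resume, so everything upstream is restored from the cache and only the selected steps are executed again.
        // rerun_last_steps.config (hypothetical), used with: nextflow run ... -c rerun_last_steps.config -resume
        process {
            withName: 'FINAL_REPORT|PLOT_RESULTS' {
                cache = false   // always re-run these, even under -resume
            }
        }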
  • Nour El Houda Barhoumi
    10/19/2025, 3:46 PM
    Hi everyone, I hope you are doing well. I would like to ask if the method I used for my functional enrichment analysis is correct. I took all the differentially expressed genes from my RNA-seq comparisons and annotated them using a custom functional classification called My_subclass. Then, for each subclass, I calculated the number of DE genes, the mean log2 fold change, and the fraction of genes belonging to that subclass relative to all DE genes. I visualized the results using bubble plots where the x axis represents the mean log2 fold change, the y axis represents the subclass, bubble size indicates the number of genes, and color indicates up or down regulation. This approach does not include any statistical testing like GSEA or Fisher's exact test; it is purely descriptive, to highlight trends in functional categories. I would like to know if this method is scientifically valid for interpreting functional enrichment or if it should be complemented with a statistical enrichment analysis.
  • Sylvia Li
    10/20/2025, 11:37 PM
    I have a python script that uses the output dir (once all processes are done), at the absolute end of a pipeline that analyses genomes - the script creates visualizations comparing all samples to each other (read quality, plasmids, etc.). I understand that publishDir is asynchronous, but now I am stuck: is there any way to wait until publishing to the output dir is done?
    • Maybe checking the number of subdirectories/files in the output dir would be a good workaround in my python script?
    • Maybe workflow.onComplete plus sleeping for a user-set amount of time before running the python script?
      ◦ The files that are being published aren't big, so maybe sleeping would be okay?
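    A design sketch rather than a publishDir workaround, with hypothetical names: make the comparison script a process of its own and feed it the collected upstream outputs, so the dataflow guarantees everything exists before it runs, independently of when publishDir finishes copying.
        process COMPARE_SAMPLES {
            input:
            path(reports, stageAs: 'reports/*')   // all per-sample results, collected

            output:
            path 'comparison/*'

            script:
            """
            mkdir -p comparison
            compare_samples.py --input reports/ --outdir comparison/
            """
        }

        // in the workflow body:
        // COMPARE_SAMPLES( per_sample_results_ch.collect() )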
  • Yeji Bae
    10/21/2025, 1:17 AM
    Hi, I am trying to run nf-core/scrnaseq_nf, but I'm facing a memory issue with the cellranger count step. I tried increasing the memory by specifying it in my config file as shown below, but it still runs with 15 GB.
    process {
      withName: 'CELLRANGER_COUNT' {
        cpus = 8
        memory = 36.GB //TODO: check the format 
        time = '24h'
      }
    }
    Here is my command for running the pipeline.
    nextflow \
      -log test_running_cellranger1017.log \
      run nf-core/scrnaseq \
      -c conf/rnaseq_sasquatch.config \
      -profile seattlechildrens,test \
      -params-file params/nf-core_scrnaseq_params_ex_cellranger.yml \
      -resume
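    A hedged guess at the cause: nf-core pipelines cap per-process requests at the pipeline's resource limits, so if the profiles in use set those limits low, a 36.GB request is silently reduced. Depending on the pipeline version the cap comes from process.resourceLimits (newer template) or the older --max_memory / check_max mechanism. A sketch that raises both the ceiling and the request (numbers are examples):
        process {
            resourceLimits = [ cpus: 16, memory: 256.GB, time: 72.h ]   // honoured by newer templates; otherwise pass e.g. --max_memory '256.GB'
            withName: '.*:CELLRANGER_COUNT' {
                cpus   = 8
                memory = 36.GB
                time   = 24.h
            }
        }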
  • Sylvia Li
    10/21/2025, 6:58 PM
    If I input 2 different channels of 2 different lengths into a subworkflow:
    workflow MAIN {
        channel1 = Channel.of(1, 2, 3, 4)      // size 4
        channel2 = Channel.of('A', 'B', 'C')    // size 3
        
        SUB_WORKFLOW(channel1, channel2)
    }
    
    workflow SUB_WORKFLOW {
        take:
        input1  // size 4
        input2  // size 3
        
        main:
        PROCESS_ONE(input1)  // will this run 4times?
        PROCESS_TWO(input2)  // Will this run 3 times?
    }
    Will the processes within the subworkflow run the number of times necessary for each separate channel? I'm worried, or maybe I misunderstood, that it will only run 3 times, since that is the smallest channel for input. Is that an issue only if they're both used in one process and they differ in size?
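    For what it's worth, a runnable sketch of the behaviour in question (process names are hypothetical): each process's run count follows its own input channel, and truncation to the shortest channel only applies when a single process takes several queue channels as separate inputs.
        process PROCESS_ONE {
            input:
            val x

            output:
            stdout

            script:
            "echo one:${x}"
        }

        process PROCESS_TWO {
            input:
            val y

            output:
            stdout

            script:
            "echo two:${y}"
        }

        workflow {
            ch1 = Channel.of(1, 2, 3, 4)     // PROCESS_ONE runs 4 times
            ch2 = Channel.of('A', 'B', 'C')  // PROCESS_TWO runs 3 times
            PROCESS_ONE(ch1)
            PROCESS_TWO(ch2)
            // Pairing by arrival order (and truncation to the shorter channel)
            // only happens when one process takes both queue channels as inputs.
        }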
  • Ammar Aziz
    10/22/2025, 4:50 AM
    Why do I get this error sometimes:
    ERROR ~ /tmp/nextflow-plugin-nf-schema-2.1.1.lock (Permission denied)
  • Ammar Aziz
    10/22/2025, 4:51 AM
    Possibly because I hit ctrl + c during nextflow startup/downloading?
  • Aivars Cīrulis
    10/22/2025, 1:36 PM
    Hey, when we run the newest sarek 3.6.0, either with or without the parabricks option, we get this error (somewhat similar to what we saw with sarek 3.5.1 and Sentieon, as I remember). Does someone know how to deal with it?
    * --msisensorpro_scan (/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensorPro/Homo_sapiens_assembly38.msisensor_scan.list): the file or directory '/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensorPro/Homo_sapiens_assembly38.msisensor_scan.list' does not exist
    * --msisensor2_models (/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensor2/models_hg38//): the file or directory '/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensor2/models_hg38//' does not exist
    Best wishes, Aivars
  • chase mateusiak
    10/23/2025, 12:14 AM
    what is the best way to handle a case where, when there are too many input files, they should be passed in via a file?
    ...
        input:
        tuple val(meta), path(peaks) // peaks can be a list of peak files
    
        output:
        tuple val(meta), path("*_merged.txt"), emit: merged
        path "versions.yml"                  , emit: versions
    
        when:
        task.ext.when == null || task.ext.when
    
        script:
        def args = task.ext.args ?: ''
        def prefix = task.ext.prefix ?: "${meta.id}"
        def peak_list = peaks instanceof List ? peaks : [peaks]
        def use_peak_file = peak_list.size() > 10
        def VERSION = '5.1'
        
        if (use_peak_file) {
            """
            # Create file list using a shell loop
            for peak_file in ${peaks.join(' ')}; do
                echo "\${peak_file}" >> ${prefix}_peak_files.txt
            done
    
            mergePeaks \\
                $args \\
                -file ${prefix}_peak_files.txt \\
                > ${prefix}_merged.txt
    
            cat <<-END_VERSIONS > versions.yml
            "${task.process}":
                homer: ${VERSION}
            END_VERSIONS
            """
        } else {
            """
            mergePeaks \\
                $args \\
                ${peaks.join(' ')} \\
                > ${prefix}_merged.txt
    
            cat <<-END_VERSIONS > versions.yml
            "${task.process}":
                homer: ${VERSION}
            END_VERSIONS
            """
        }
    ...
    That works with 4 files (reducing the threshold at which it uses the lookup file), but when I try it with 1000+ files, it doesn't. It also seems to be failing to symlink the files into the work dir, which doesn't happen when I run this on a reduced number of files (can't tell if that is true yet or not -- just dealing with it now)
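    A sketch of an alternative, assuming HOMER's mergePeaks -file option as used above: stage the peaks into a sub-directory and build the list file from a directory listing, so the individual file names never have to appear in the command script at all (which also sidesteps very long argument lists).
        input:
        tuple val(meta), path(peaks, stageAs: 'peaks/*')

        script:
        def args    = task.ext.args ?: ''
        def prefix  = task.ext.prefix ?: "${meta.id}"
        def VERSION = '5.1'
        """
        find peaks/ -type l -o -type f > ${prefix}_peak_files.txt

        mergePeaks \\
            $args \\
            -file ${prefix}_peak_files.txt \\
            > ${prefix}_merged.txt

        cat <<-END_VERSIONS > versions.yml
        "${task.process}":
            homer: ${VERSION}
        END_VERSIONS
        """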
  • Dylan Renard
    10/23/2025, 2:06 PM
    Anyone familiar with getting MASHWrapper running in nf-core pipelines? I'm working with paired Illumina reads and struggling with toggling between parallelized MASH results and a separate process for sequence pairs to run via mashwrapper. https://github.com/tiagofilipe12/mash_wrapper
  • Yang Pei
    10/24/2025, 1:14 PM
    Quick question about expected scheduling behaviour. I'm seeing downstream processes wait until all upstream tasks for a sample set finish before any downstream tasks start. For example:
    executor >  slurm (6)
    [4d/0abfaf] YPE…RBASE:FASTQC (sub_tes33d6) | 0 of 3
    [6a/6d8f03] YPE…BASE:FASTP (sub_spike0125) | 2 of 3
    [-        ] YPE…NCLE_FGBIO_PERBASE:BWA_MEM -

    executor >  slurm (6)
    [2a/cfa7f3] YPE…ASE:FASTQC (sub_spike0125) | 1 of 3
    [6a/6d8f03] YPE…BASE:FASTP (sub_spike0125) | 2 of 3
    [-        ] YPE…NCLE_FGBIO_PERBASE:BWA_MEM -

    executor >  slurm (9)
    [4d/0abfaf] YPE…RBASE:FASTQC (sub_tes33d6) | 3 of 3 ✔
    [fd/92aa1d] YPE…ERBASE:FASTP (sub_tes33d6) | 3 of 3 ✔
    [50/b7a700] YPE…BASE:BWA_MEM (sub_tes33d6) | 0 of 3
    It looks like BWA_MEM only starts once all 3 FASTP tasks for that sample have finished, rather than as each FASTP task completes. Is that expected Nextflow behaviour? Or is there a way to configure Nextflow (or Slurm/executor) to let downstream tasks be scheduled as each upstream task finishes (like a per-task streaming)?
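    A small sketch of where such a barrier usually comes from (channel wiring is placeholder): downstream tasks normally start as soon as each upstream task emits, unless something gathers the channel first.
        // streams: each BWA_MEM task can start as soon as its FASTP task finishes
        BWA_MEM( FASTP.out.reads )

        // barrier: collect() / toList() / groupTuple() without a known size
        // wait for every FASTP task before anything downstream starts
        BWA_MEM( FASTP.out.reads.collect() )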
  • Ícaro Castro
    10/28/2025, 2:47 PM
    Hi everyone, I was processing around 500 samples of 16S data using the nf-core/ampliseq pipeline on our university HPC. The run took more than two days and eventually stopped due to a connection timeout. When I tried to restart the run, I got the following error:
    (nextflow) icastro@sbcb-cbiot-05:~$ nextflow run nf-core/ampliseq -profile conda,test --outdir test_ampliseq
    
    N E X T F L O W   ~  version 25.04.3
    nextflow run nf-core/ampliseq -profile conda,test --outdir test_ampliseq
    ERROR ~ Unable to parse config file: '/home/icastro/.nextflow/assets/nf-core/ampliseq/nextflow.config'
    
      Cannot read config file include: https://raw.githubusercontent.com/nf-core/configs/master/pipeline/ampliseq.config
    The same error now happens with any nf-core pipeline I try to run, for example:
    (nextflow) icastro@sbcb-cbiot-05:~$ nextflow run nf-core/sarek -profile conda,test --outdir test
    
     N E X T F L O W   ~  version 25.04.3
    
    ERROR ~ Unable to parse config file: '/home/icastro/.nextflow/assets/nf-core/sarek/nextflow.config'
    
      Cannot read config file include: https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config
    It seems Nextflow can't fetch the nf-core configs from GitHub anymore. Has anyone experienced this issue or knows how to fix it? I've already tried removing and reinstalling nf-core pipelines with nextflow pull, but the error persists. Thanks in advance for your help! 🙏🏼
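    A hedged workaround while the network problem is investigated (the path is a placeholder): recent templates skip the remote include entirely when the NXF_OFFLINE environment variable is set, and the shared configs location can be pointed at a local clone of nf-core/configs via the custom_config_base parameter.
        // in a config file passed with -c, or pass --custom_config_base on the command line
        params {
            custom_config_base = '/path/to/local/clone/of/nf-core/configs'
        }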
  • Aytac Oksuzoglu
    10/29/2025, 9:26 AM
    Hello, I have a small problem in a long-read pipeline. I would love to get advice. This is my starting command:
    nextflow run nf-core/mag \
    -profile apptainer \
    --input samplesheet.csv \
    --outdir ./mag_output \
    --binqc_tool checkm2 \
    --gtdb_db /db/GTDB/r226/gtdbtk_r226_data.tar.gz \
    -r 5.0.0 \
    -resume
    The pipeline works great until PROKKA. Then this error pops up. I could not solve it, is there any advice for me? Error msg:
    -[nf-core/mag] Pipeline completed with errors-
    ERROR ~ Error executing process > 'NFCORE_MAG:mag:PROKKA (METAMDBG-CONCOCT-PAEA-B4_9)'
    Caused by:
      Process NFCORE_MAG:mag:PROKKA (METAMDBG-CONCOCT-PAEA-B4_9) terminated with an error exit status (2)
    Command executed:
      prokka \
          --metagenome \
          --cpus 2 \
          --prefix METAMDBG-CONCOCT-PAEA-B4_9 \
          \
          \
          METAMDBG-CONCOCT-PAEA-B4_9.fa

      cat <<-END_VERSIONS > versions.yml
      "NFCORE_MAG:mag:PROKKA":
          prokka: $(echo $(prokka --version 2>&1) | sed 's/^.*prokka //')
      END_VERSIONS
    Command exit status:
      2
    Command output:
      (empty)
    Command error:
      [145315] Determined blastp version is 002016 from 'blastp: 2.16.0+'
      [145315] Looking for 'cmpress' - found /opt/conda/bin/cmpress
      [145315] Determined cmpress version is 001001 from '# INFERNAL 1.1.5 (Sep 2023)'
      [145315] Looking for 'cmscan' - found /opt/conda/bin/cmscan
      [145315] Determined cmscan version is 001001 from '# INFERNAL 1.1.5 (Sep 2023)'
      [145315] Looking for 'egrep' - found /usr/bin/egrep
      [145315] Looking for 'find' - found /usr/bin/find
      [145315] Looking for 'grep' - found /usr/bin/grep
      [145315] Looking for 'hmmpress' - found /opt/conda/bin/hmmpress
      [145315] Determined hmmpress version is 003004 from '# HMMER 3.4 (Aug 2023); http://hmmer.org/'
      [145315] Looking for 'hmmscan' - found /opt/conda/bin/hmmscan
      [145315] Determined hmmscan version is 003004 from '# HMMER 3.4 (Aug 2023); http://hmmer.org/'
      [145315] Looking for 'java' - found /opt/conda/bin/java
      [145315] Looking for 'makeblastdb' - found /opt/conda/bin/makeblastdb
      [145315] Determined makeblastdb version is 002016 from 'makeblastdb: 2.16.0+'
      [145315] Looking for 'minced' - found /opt/conda/bin/minced
      [145315] Determined minced version is 004002 from 'minced 0.4.2'
      [145315] Looking for 'parallel' - found /opt/conda/bin/parallel
      [145316] Determined parallel version is 20241122 from 'GNU parallel 20241122'
      [145316] Looking for 'prodigal' - found /opt/conda/bin/prodigal
      [145316] Determined prodigal version is 002006 from 'Prodigal V2.6.3: February, 2016'
      [145316] Looking for 'prokka-genbank_to_fasta_db' - found /opt/conda/bin/prokka-genbank_to_fasta_db
      [145316] Looking for 'sed' - found /usr/bin/sed
      [145316] Looking for 'tbl2asn' - found /opt/conda/bin/tbl2asn
      [145316] Determined tbl2asn version is 025007 from 'tbl2asn 25.7 arguments:'
      [145316] Using genetic code table 11.
      [145316] Loading and checking input file: METAMDBG-CONCOCT-PAEA-B4_9.fa
      [145316] Wrote 1 contigs totalling 5578 bp.
      [145316] Predicting tRNAs and tmRNAs
      [145316] Running: aragorn -l -gc11 -w METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.fna
      [145316] Found 0 tRNAs
      [145316] Predicting Ribosomal RNAs
      [145316] Running Barrnap with 2 threads
      [145316] Found 0 rRNAs
      [145316] Skipping ncRNA search, enable with --rfam if desired.
      [145316] Total of 0 tRNA + rRNA features
      [145316] Searching for CRISPR repeats
      [145316] Found 0 CRISPRs
      [145316] Predicting coding sequences
      [145316] Contigs total 5578 bp, so using meta mode
      [145316] Running: prodigal -i METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.fna -c -m -g 11 -p meta -f sco -q
      [145316] Found 5 CDS
      [145316] Connecting features back to sequences
      [145316] Not using genus-specific database. Try --usegenus to enable it.
      [145316] Annotating CDS, please be patient.
      [145316] Will use 2 CPUs for similarity searching.
      [145317] There are still 5 unannotated CDS left (started with 5)
      [145317] Will use blast to search against /opt/conda/db/kingdom/Bacteria/IS with 2 CPUs
      [145317] Running: cat METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.IS\.tmp\.67\.faa | parallel --gnu --plain -j 2 --block 366 --recstart '>' --pipe blastp -query - -db /opt/conda/db/kingdom/Bacteria/IS -evalue 1e-30 -qcov_hsp_perc 90 -num_threads 1 -num_descriptions 1 -num_alignments 1 -seg no > METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.IS\.tmp\.67\.blast 2> /dev/null
      [145317] Could not run command: cat METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.IS\.tmp\.67\.faa | parallel --gnu --plain -j 2 --block 366 --recstart '>' --pipe blastp -query - -db /opt/conda/db/kingdom/Bacteria/IS -evalue 1e-30 -qcov_hsp_perc 90 -num_threads 1 -num_descriptions 1 -num_alignments 1 -seg no > METAMDBG\-CONCOCT\-PAEA\-B4_9\/METAMDBG\-CONCOCT\-PAEA\-B4_9\.IS\.tmp\.67\.blast 2> /dev/null
    Work dir:
      /scratch/shire/data/nj/projects/mosquito_ubiome_aging/preliminary/20251022_aytac_trial_analysis/work/64/757b4928116431c1597396b80a1475
    Container:
      /gen/lnxdata/nf-core/community.wave.seqera.io-library-prokka_openjdk-10546cadeef11472.img
    Tip: when you have fixed the problem you can continue the execution adding the option -resume to the run command line -- Check '.nextflow.log' file for details
    ERROR ~ Pipeline failed. Please refer to troubleshooting docs: https://nf-co.re/docs/usage/troubleshooting -- Check '.nextflow.log' file for details
    [Mon 27 Oct 184010 CET 2025] Finished workflow.
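    Only a hedged guess, since the failing step is the blastp call that prokka launches through GNU parallel and its stderr is discarded to /dev/null: this kind of failure is often a resource problem inside the container, so giving the task more memory/CPUs and letting it retry is one cheap thing to try (numbers are guesses).
        process {
            withName: '.*:PROKKA' {
                cpus          = 8
                memory        = { 16.GB * task.attempt }
                errorStrategy = 'retry'
                maxRetries    = 2
            }
        }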
  • Carolina Albuquerque Massena Ribeiro
    10/29/2025, 3:05 PM
    Hi everyone! Could someone please clarify whether the output file salmon.merged.gene_counts.tsv is already normalized in any way? Thank you!
  • mina ming
    10/29/2025, 3:17 PM
    Could somebody please help me figure out why I get this error when running the rnaseq pipeline on one sample?
    Caused by:
    Process NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:FQ_LINT (S1) terminated with an error exit status (1)
    Command executed:
    fq lint \
    --disable-validator P001 \
    034_1_S1_R1_001.fastq.gz 034_1_S1_R2_001.fastq.gz > S1.fq_lint.txt
    cat <<-END_VERSIONS > versions.yml
    "NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:FQ_LINT":
    fq: $(echo $(fq lint --version | sed 's/fq-lint //g'))
    END_VERSIONS
    Command exit status:
    1
    This is my code:
    nextflow run nf-core/rnaseq \
    -profile singularity \
    -process.executor slurm \
    --input /users/fi0001/singlesample.csv \
    --outdir /parallel_scratch/$USER/nfcore/rnaseq_S1/results \
    --genome GRCh38 \
    --start_from_fastq_qc false \
    --skip_fq_lint true \
    --save_trimmed \
    -work-dir /parallel_scratch/$USER/nfcore/rnaseq_S1/work \
    -with-report -with-trace -with-timeline
  • Abdoulie Kanteh
    10/30/2025, 10:36 AM
    Hi... has anyone used nf-core/seqcoverage? Do you know if it is still working? I am running into issues like: "WARN: Cannot read project manifest -- Cause: Remote resource not found: https://api.github.com/repos/nf-core/seqcoverage/contents/nextflow.config Remote resource not found: https://api.github.com/repos/nf-core/seqcoverage/contents/main.nf". Any help will be appreciated.
  • Jon Bråte
    10/30/2025, 12:15 PM
    Hi, in my pipeline I suddenly got a strange result where it appears that two samples got mixed up halfway through the pipeline. It seems that the wrong meta.id was attached to the wrong file from SAMTOOLS_DEPTH, even though I don't think that's possible. I did a number of -resumes during the run, but I don't have the work directory anymore so I can't exactly trace down the error. My question is: are there tools (e.g. in nf-core tools) or methods to check that a pipeline does not have any wrong logic or other errors that could allow these things to happen? For example, tracing every single file and how it passes through the pipeline?
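    Not a full provenance tool, but a debugging sketch using real operators (channel and emit names follow your pipeline): tagging channels with dump() prints every meta/file pair as it flows past when the pipeline is run with -dump-channels, which makes joins and mix-ups visible without keeping the work directory.
        SAMTOOLS_DEPTH.out.tsv
            .dump(tag: 'samtools_depth')   // printed only when run with -dump-channels
            .set { depth_ch }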
  • Jorge Gonzalez
    10/31/2025, 12:19 PM
    Hi everyone, I'm a new user of Nextflow. I'm running the dev version of nf-core/genomeassembler with Apptainer on our SLURM cluster (Leftraru) in Chile, but I keep running into a consistent error during the container pulling phase. Error message:
    Cannot invoke "nextflow.util.Duration.toMillis()" because "this.pullTimeout" is null
    I've tried setting apptainer.pullTimeout at the user level (e.g., in .config and through environment variables), but the error persists.
    1. Has anyone seen this specific pullTimeout is null error before? Was it fixed by having the cluster admin define a default value in the global Nextflow config?
    2. Is there a known user-level workaround to force initialization of this variable when the system default is missing?
    I've already contacted our cluster support team, but I'm hoping to get some quick insights from the community in the meantime. Thanks a lot for any guidance - still learning my way around Nextflow!
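    A user-level sketch, assuming the value simply is not reaching the apptainer scope: pullTimeout is a Nextflow config setting rather than an environment variable, so it needs to sit inside the apptainer block of a config file the run actually loads (e.g. ~/.nextflow/config or a file passed with -c).
        apptainer {
            enabled     = true
            pullTimeout = '60 min'   // value is an example; the normal default is 20 min
        }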
  • Dylan Renard
    10/31/2025, 8:40 PM
    Hi team, I'm trying to run the mashwrapper Nextflow pipeline from the CDC inside an existing nf-core pipeline. Any advice for running external Nextflow pipelines or modules within your own Nextflow code? Best, Dylan
  • Nour El Houda Barhoumi
    11/02/2025, 11:58 PM
    Hello, I hope you are doing well. I would like to ask for your feedback on the approach I used to compare transcriptional responses between strains. For each stress condition, I identified differentially expressed genes and examined the pairwise overlap of up-regulated and down-regulated genes between two strains at a time, while excluding genes that were also regulated in the third strain under the same condition. To determine whether the observed overlaps reflected coordinated regulation rather than coincidence, I calculated the odds ratio and the corresponding Fisher's exact test p-value for each pairwise comparison. This analysis was focused only on the number of shared versus strain-specific DEGs, without considering functional pathways or enrichment analysis at this stage. I would like to confirm whether this approach is appropriate for assessing similarity and divergence in transcriptional responses based purely on DEG overlap counts? Thank you
  • Nadia Sanseverino
    11/03/2025, 2:54 PM
    Hi all! I haven't found similar topics yet, so in the meantime I'll drop a request for help here: I'm trying to update modules in a pipeline. I had no issues with the first two, but now I only get an error that I think is related to a Python version. I can't paste the whole output, but I have it if needed. I have the latest versions of Nextflow and nf-core/tools, and I have Python 3.13 ... please send help
    (new-dev) nadiunix@LAPTOP-FAG8G0FQ:~/sammyseq$ nf-core modules update untar
    ...
    TypeError: unhashable type: 'dict'
  • Kathryn Greiner-Ferris
    11/03/2025, 6:54 PM
    Question.. my pipeline stopped midway because I ran out of storage. Is there a way to resume from where the pipeline stopped?
    executor > local (393)
    [-        ] NFC…DEX_BISMARK_BWAMETH:GUNZIP -
    [7c/6feebd] NFC…ex/reference_genome.fasta) | 1 of 1 ✔️
    [f9/d43039] NFC…AT_FASTQ (SYNSC-741point2) | 57 of 57 ✔️
    [81/777e70] NFC…Q:FASTQC (SYNSC-738point2) | 57 of 57 ✔️
    [7a/0f6c02] NFC…IMGALORE (SYNSC-741point2) | 57 of 57 ✔️
    [aa/bdf0cd] NFC…RK_ALIGN (SYNSC-741point2) | 57 of 57 ✔️
    [0c/de53d8] NFC…UPLICATE (SYNSC-737point4) | 56 of 57
    [f2/23fc15] NFC…OLS_SORT (SYNSC-730point4) | 37 of 56
    [63/514907] NFC…LS_INDEX (SYNSC-742point1) | 36 of 37
    [4a/82f621] NFC…XTRACTOR (SYNSC-742point4) | 10 of 56
    [11/3977b9] NFC…CYTOSINE (SYNSC-742point4) | 8 of 10
    [b8/7580fa] NFC…K_REPORT (SYNSC-739point2) | 9 of 10
    [-        ] NFC…UP_BISMARK:BISMARK_SUMMARY -
    [-        ] NFC…QMETHYLSEQQUALIMAP_BAMQC | 0 of 37
    [-        ] NFC…ETHYLSEQMETHYLSEQMULTIQC -
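    Assuming the work directory is still intact after freeing up space, re-running the exact same command from the same launch directory with -resume should pick up from the cached tasks, e.g.:
        nextflow run <pipeline> <your original options> -resume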
  • James Fellows Yates
    11/04/2025, 11:00 AM
    The Nextflow syntax/language server is complaining about calling meta in prefix and args etc. in our modules.config. Any suggestions how else to use these variables within the modules.config file 'properly'?
  • James Fellows Yates
    11/04/2025, 11:01 AM
    Ah no wait, I just need to wrap the whole thing in a closure, ignore me
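    For the record, the closure pattern the language server is happy with (process name and meta fields are placeholders):
        process {
            withName: 'FASTP' {
                // wrapping the value in a closure defers evaluation to task time,
                // when meta and task.ext are actually available
                ext.prefix = { "${meta.id}.trimmed" }
            }
        }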