# help
  • n

    Nadia Sanseverino

    09/18/2025, 2:39 PM
    Hello everybody! I tried to implement a parameter to choose between two different extensions of the same file (one binary `.bigwig` and one readable `.bedgraph`). Citing the modules guidelines: "All _non-mandatory_ command-line tool _non-file_ arguments MUST be provided as a string via the $task.ext.args variable". Test-writing seems to suggest that I need a nextflow.config to successfully launch tests. I need a kind soul to take a look at my snippets and confirm I'm all set to update my branch. • from main.nf
    Copy code
        input:
        tuple val(meta) , path(bigwig1)     , path(bigwig2)
        tuple val(meta2), path(blacklist)
        
        output:
        tuple val(meta), path("*.{bw,bedgraph}"), emit: output
        path "versions.yml"                     , emit: versions
    
        when:
        task.ext.when == null || task.ext.when
    
        script:
        def args = task.ext.args                                  ?: ""
        def prefix = task.ext.prefix                              ?: "${meta.id}"
        def blacklist_cmd = blacklist                             ? "--blackListFileName ${blacklist}" : ""        
        def extension = args.contains("--outFileFormat bedgraph") ? "bedgraph"                         : "bw"
        
        """
        bigwigCompare \\
            --bigwig1 $bigwig1 \\
            --bigwig2 $bigwig2 \\
            --outFileName ${prefix}.${extension} \\
            --numberOfProcessors $task.cpus \\
            $blacklist_cmd \\
            $args
    
        cat <<-END_VERSIONS > versions.yml
        "${task.process}":
            deeptools: \$(bigwigCompare --version | sed -e "s/bigwigCompare //g")
        END_VERSIONS
        """
    • from main.nf.tests
    Copy code
    test("homo_sapiens - 2 bigwig files - bedgraph output") {
    
            config "./nextflow.config"
    
            when {
                params {
                    deeptools_bigwigcompare_args = '--outFileFormat bedgraph'
                }
                process {
                    """
                    def bigwig1 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S2.RPKM.bw', checkIfExists: true)
                    def bigwig2 = file(params.modules_testdata_base_path + 'genomics/homo_sapiens/illumina/bigwig/test_S3.RPKM.bw', checkIfExists: true)
    
                    input[0] = [
                        [ id:'test' ],
                        bigwig1, 
                        bigwig2
                    ]
                    input[1] = [
                        [ id:'no_blacklist' ],
                        []
                    ]
                    """
                }
            }
    
            then {
                assertAll(
                    { assert process.success },
                    { assert snapshot(process.out.output,                                
                                      process.out.versions)
                                      .match()
                    }
                )
            }
        }
    • from nextflow.config
    Copy code
    process {
      withName: 'DEEPTOOLS_BIGWIGCOMPARE' {
          ext.args = params.deeptools_bigwigcompare_args
      }
    }
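    For reference, a roughly equivalent per-test config (a sketch, assuming the module is only exercised through nf-test) sets ext.args directly instead of routing it through a pipeline-level parameter:
    Copy code
    process {
        withName: 'DEEPTOOLS_BIGWIGCOMPARE' {
            ext.args = '--outFileFormat bedgraph'
        }
    }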
    s
    • 2
    • 4
  • c

    Chenyu Jin (Amend)

    09/19/2025, 12:19 PM
    Hey all, I encountered a problem when I have many files that I want to run in parallel in the same process. If I take them in with each() in the process, they are not staged as files the way path() inputs are. How can I process each path?
    Copy code
    workflow {
        files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true).view()
        index_reference(files, params.threads)
    }
    
    process index_reference {
        input:
        each(input_ref)
        val(threads)
    
    ...
    }
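    For comparison, a minimal sketch of the usual pattern (assuming the goal is one task per file): declare the input with path() and let the channel drive the parallelism. each is meant for repeating a task over a plain list value, and it does not stage the value as a file the way path() does.
    Copy code
    workflow {
        files = Channel.fromPath("${params.input_dir}/*", checkIfExists: true)
        index_reference(files)
    }
    
    process index_reference {
        cpus params.threads                       // or rely on task.cpus set via config
    
        input:
        path(input_ref)
    
        output:
        path("${input_ref}.idx")                  // hypothetical output name
    
        script:
        """
        my_indexer -t ${task.cpus} ${input_ref}   # my_indexer is a placeholder command
        """
    }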
    a
    • 2
    • 5
  • e

    eparisis

    09/22/2025, 2:27 PM
    Hi there! Is there a way to rename files inside a channel and gzip/gunzip them with a Nextflow function inside the workflow code, or is passing them through a process the only way?
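    For the renaming half, one sketch (an assumption about the use case): the stageAs option renames a file when it is staged into a process; actual gzip/gunzip of the content generally still goes through a process (or plain Groovy IO).
    Copy code
    // 'sample_renamed.fastq.gz' is a made-up target name
    process EXAMPLE {
        input:
        path(reads, stageAs: 'sample_renamed.fastq.gz')
    
        script:
        """
        gunzip -c sample_renamed.fastq.gz > sample_renamed.fastq
        """
    }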
    n
    t
    • 3
    • 3
  • e

    Evangelos Karatzas

    09/23/2025, 7:10 AM
    Is there currently a problem with AWS tests for pipeline release PRs? https://github.com/nf-core/proteinfamilies/actions/runs/17918528153/job/51008881450?pr=114
    m
    m
    r
    • 4
    • 16
  • m

    Megan Justice

    09/24/2025, 4:12 PM
    Hey, all! I'm running some NF pipelines in an AWS EC2 instance and am having issues with speed / throughput. Is anyone knowledgeable about optimizing pipelines on AWS that could help me out?
    e
    • 2
    • 1
  • s

    shaojun sun

    09/24/2025, 5:39 PM
    Hi there! Is there a pipeline to do WES analysis? Thanks!
    r
    • 2
    • 2
  • f

    Fabian Egli

    09/26/2025, 5:44 AM
    How can I select the architecture for a quay.io image and apply that patch to a workflow when running it?
  • f

    Fabian Egli

    09/26/2025, 8:15 AM
    I'm experiencing a process getting killed in a Docker container and don't know how to figure out why it is being killed. Does anyone here know? First I thought it was a resource limit issue, but the error I got did not indicate that.
    command.sh: line xx:   282 Killed [...] command -with -parameters [...]
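    For what it's worth, a bare "Killed" (signal 9, exit status 137) inside a container is most often the kernel OOM killer hitting the cgroup memory limit, even when Docker prints no memory error; dmesg on the host, or docker inspect showing OOMKilled: true, can confirm it. A sketch of the usual mitigation (the selector name is a placeholder):
    Copy code
    process {
        withName: 'MY_PROCESS' {                      // placeholder selector
            memory        = 16.GB                     // raise the container memory limit
            errorStrategy = 'retry'
            maxRetries    = 2
            // memory     = { 8.GB * task.attempt }   // dynamic variant, grows on retry
        }
    }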
    • 1
    • 1
  • j

    James Fellows Yates

    09/26/2025, 9:11 AM
    Any fans of docker and symlink 'nesting' in Nextflow processes? (help!) https://community.seqera.io/t/how-to-handle-in-nextflow-docker-mounting-of-symlinked-files-within-a-symlinked-directory/2381 (I made a reprex!)
    n
    m
    • 3
    • 13
  • a

    Andrea Bagnacani

    09/26/2025, 2:26 PM
    Dear all, I'm using nf-core/samtools/merge to merge some BAM files. The input channel that I provide to this process has meta fields `id` and `sample_name`. The former is used by samtools merge to infer the file prefix for merging, while the latter is used in my pipeline to keep provenance info. When I run my pipeline, this performs the merge as intended. However, when I run `nf-core/samtools/merge`'s stub test, `meta.sample_name` ends up being interpreted as the relative path to a Docker mount point, and since Docker mount points must be absolute, the stub test is (in my case) bound to fail:
    Copy code
    $ nf-test test tests/01.stub.nf.test --profile docker
    ...
    Command exit status:
        125
      Command output:
        (empty)
      Command error:
        docker: Error response from daemon: invalid volume specification: 'HG00666:HG00666': invalid mount config for type "volume": invalid mount path: 'HG00666' mount path must be absolute
        Run 'docker run --help' for more information
    From `.command.run`:
    Copy code
    ...
    nxf_launch() {
        docker run -i --cpu-shares 2048 --memory 12288m -e "NXF_TASK_WORKDIR" -e "NXF_DEBUG=${NXF_DEBUG:=0}" \
            \
            -v HG00666:HG00666 \  # <-- meta.sample_name becomes a mount point
            \
        -v /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8:/home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8 -w "$NXF_TASK_WORKDIR" -u $(id -u):$(id -g) --name $NXF_BOXID quay.io/biocontainers/samtools:1.22.1--h96c455f_0 /bin/bash -c "eval $(nxf_container_env); /bin/bash /home/user1/src/nf-ont-vc/.nf-test/tests/5e6015530fd10b4314bec7ef1809a11/work/bb/8f8ce8297ea4c0263e765dcdffacc8/.command.run nxf_trace"
    }
    ...
    How do I make samtools merge ignore `meta.sample_name` when the Docker CLI is built?
    n
    m
    +2
    • 5
    • 12
  • q

    Quentin Blampey

    09/30/2025, 2:41 PM
    Hello! I have one process that requires writing to the $HOME directory. I fixed it for Docker with `containerOptions = ''`, but for singularity I still receive an error saying `Read-only file system`. Does anyone know how to fix that?
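    One possible workaround to try (a sketch, not a confirmed fix for this setup): give the Singularity/Apptainer container an ephemeral writable overlay, or point $HOME at a writable location for the task.
    Copy code
    process {
        withName: 'MY_PROCESS' {                  // placeholder selector
            // Singularity/Apptainer: allow writes inside the image via a tmpfs overlay
            containerOptions = '--writable-tmpfs'
        }
    }
    // alternatively, export a writable HOME into all task environments
    // (assumption: the tool honours $HOME)
    // env {
    //     HOME = '/tmp'
    // }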
    a
    t
    p
    • 4
    • 23
  • k

    Kurayi

    10/02/2025, 8:31 PM
    Non-Nextflow-related question here: is there any comprehensive website that lists PhD offers in Europe?
    j
    m
    +2
    • 5
    • 12
  • q

    Quentin Blampey

    10/08/2025, 1:19 PM
    Hi everyone, I'm developing a pipeline that works on objects stored as zarr directories. In short, it means that whenever a process creates a new output, it's a subdirectory inside this .zarr directory. Everything works well for "standard" usage (e.g., Docker / Singularity on an HPC / standard file system), but I have some staging issues on AWS Batch specifically. When one process updates the zarr (i.e. creates a new subdir), the new subdir is not passed to the following process, although I specify it in my inputs/outputs (and, again, it works nicely when not on the cloud). Has anyone faced similar issues? Do you have any idea how to fix it?
    t
    • 2
    • 1
  • l

    Luuk Harbers

    10/08/2025, 3:22 PM
    Caching question: We are taking a set of files from GitHub using the igenomes config (and getAttribute), just like we normally do with fasta files etc. These files are on GitHub LFS, so we specified them like this in the config (as opposed to raw.githubusercontent.com):
    Copy code
    gnomad          = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_gnomad.vcf.gz"
    dbsnp           = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_dbsnp.vcf.gz"
    onekgenomes     = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_1kgenomes.vcf.gz"
    colors          = "https://github.com/IntGenomicsLab/test-datasets/raw/refs/heads/main/ClairSTO-pon/final_colors.vcf.gz"
    This works perfectly and downloads them fine. However, the caching doesn't work and I'm unsure why. It always restages the files from GitHub LFS, which results in processes not caching properly on resume. I'll put a nextflow.log snippet with hashes in a reply here.
    t
    • 2
    • 4
  • s

    Sylvia Li

    10/08/2025, 6:07 PM
    If I am using a custom pipeline from someone else that uses custom local modules, and I run nf-core modules update --all, would that mess with anything? Is it recommended, to keep the versions of nf-core modules updated?
    p
    • 2
    • 4
  • p

    Priyanka Surana

    10/10/2025, 7:24 AM
    For a new in-house pipeline, each module takes between 8 s and 5 min. This is extremely inefficient on the HPC. Is there a way to push the entire pipeline into a single job instead of running each module separately? We cannot run anything on the head node. Reposting here from #C0364PKGWJE
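    A common pattern for this (a sketch, assuming Slurm or similar and that one node provides enough CPUs/memory): submit the nextflow run itself as a single batch job and switch the executor to local inside it, so all tasks run within that one allocation instead of being submitted individually.
    Copy code
    // singlejob.config (hypothetical name), loaded with -c from inside the wrapping batch job
    process.executor = 'local'          // run every task inside the current allocation
    executor {
        cpus   = 16                     // cap concurrency at the job's allocation
        memory = 64.GB
    }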
    m
    u
    +6
    • 9
    • 47
  • s

    Susanne Jodoin

    10/13/2025, 2:49 PM
    Dear all, I would need help from anyone having experience with nextflow secrets in pipeline-level nf-tests. I'm working on a template update of metapep https://github.com/nf-core/metapep/pulls/159 and get different results in nf-test than before. Some tests need nextflow secrets for NCBI which have been created a while ago and to which I do not have access. So far I've added the secrets from the repo to the .github/actions/nf-test/action.yml and the workflows/nf-test.yml following sarek to make sure they are used in the CI tests. When creating snapshots with my own NCBI secrets on our infrastructure or in github copilot, the generated snapshots show the same md5sums as before the template update. But in the CI tests using the metapep NCBI secrets, the md5sums and file lengths differ. Hope I am not missing something obvious. Here is one example: Failing snapshot https://github.com/nf-core/metapep/actions/runs/18095105573/job/51484296262 Before template update https://github.com/nf-core/metapep/blob/master/tests/pipeline/test_mhcnuggets_2.nf.test.snap
    m
    p
    • 3
    • 9
  • j

    Jon Bråte

    10/16/2025, 10:38 AM
    Hi all, I'm trying to incorporate the pipeline version or revision into a process that summarizes the results from the pipeline. I've looked at the way this information gets into the MultiQC report, but I'm doing something wrong. My summarize process launches an R script, so I tried to define some environment variables inside the process that will then be available to the R script. My manifest block:
    Copy code
    manifest {
        name            = 'folkehelseinstituttet/hcvtyper'
        author          = """Jon Bråte"""
    homePage        = 'https://github.com/folkehelseinstituttet/hcvtyper'
        description     = """Assemble and genotype HCV genomes from NGS data"""
    mainScript      = 'main.nf'
        nextflowVersion = '!>=23.04.0'
        version         = '1.1.1'  // Current release version
        doi             = ''
    }
    Current code for the process:
    Copy code
    process SUMMARIZE {
    
        label 'process_medium'
        errorStrategy 'terminate'
    
        // Environment with R tidyverse and seqinr packages from the conda-forge channel. Created using seqera containers.
        // URLs:
        // Docker image: https://wave.seqera.io/view/builds/bd-3536dd50a17de0ab_1?_gl=1*16bm7ov*_gcl_au*MTkxMjgxNTMwMi4xNzUzNzczOTQz
        // Singularity image: https://wave.seqera.io/view/builds/bd-88101835c4571845_1?_gl=1*5trzpp*_gcl_au*MTkxMjgxNTMwMi4xNzUzNzczOTQz
        conda "${moduleDir}/environment.yml"
        container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
            'https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/2a/2a1764abd77b9638883a202b96952a48f46cb0ee6c4f65874b836b9455a674d1/data' :
            'community.wave.seqera.io/library/r-gridextra_r-png_r-seqinr_r-tidyverse:3536dd50a17de0ab' }"
    
        // Set environment variables for summarize.R
        env "PIPELINE_NAME",      "${workflow.manifest.name}"
        env "PIPELINE_VERSION",   "${workflow.manifest.version}"                 // Primary version source
        env "PIPELINE_REVISION",  "${workflow.revision ?: ''}"                  // Secondary (branch/tag info)
        env "PIPELINE_COMMIT",    "${workflow.commitId ?: ''}"                  // Tertiary (commit info)
    
        input:
        path samplesheet
        val stringency_1
        val stringency_2
        path 'trimmed/'
        path 'kraken_classified/'
        path 'parsefirst_mapping/'
        path 'stats_withdup/'
        path 'stats_markdup/'
        path 'depth/'
        path 'blast/'
        path 'glue/'
        path 'id/'
        path 'variation/'
    
        output:
        path 'Summary.csv'      , emit: summary
        path '*mqc.csv'         , emit: mqc
        path '*png'             , emit: png
        path "versions.yml"     , emit: versions
    
        when:
        task.ext.when == null || task.ext.when
    
        script:
        def args = task.ext.args ?: ''
    
        """
        summarize.R \\
            $samplesheet \\
            $stringency_1 \\
            $stringency_2 \\
            $args
    But the R-script prints "HCVTyper (version unknown)". Relevant code part in the R-script:
    Copy code
    pipeline_name    <- Sys.getenv("PIPELINE_NAME", "HCVTyper") # Use HCVTyper if not set
    pipeline_version <- Sys.getenv("PIPELINE_VERSION", "")
    pipeline_rev     <- Sys.getenv("PIPELINE_REVISION", "")
    pipeline_commit  <- Sys.getenv("PIPELINE_COMMIT", "")
    
    # Precedence: version > revision > commit
    ver_label <- dplyr::coalesce(
      na_if(pipeline_version, ""),
      na_if(pipeline_rev, ""),
      na_if(pipeline_commit, "")
    )
    
    script_name_version <- if (!is.na(ver_label)) {
      paste(pipeline_name, ver_label)
    } else {
      paste(pipeline_name, "(version unknown)")
    }
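    One thing to check (stated as an assumption, not a certainty): as far as I know, env is an input/output qualifier and a config scope in Nextflow, not a process directive, so those four env "PIPELINE_*" lines are ignored and the variables never reach the task environment. A minimal sketch of exporting them inside the script block instead (the env config scope would be another option):
    Copy code
    script:
    def args = task.ext.args ?: ''
    """
    export PIPELINE_NAME='${workflow.manifest.name}'
    export PIPELINE_VERSION='${workflow.manifest.version}'
    export PIPELINE_REVISION='${workflow.revision ?: ''}'
    export PIPELINE_COMMIT='${workflow.commitId ?: ''}'
    
    summarize.R \\
        $samplesheet \\
        $stringency_1 \\
        $stringency_2 \\
        $args
    """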
    n
    a
    • 3
    • 9
  • s

    Salim Bougouffa

    10/16/2025, 12:36 PM
    Hi there, I have a question. I ran the mag workflow using the coassemble option. The master job was killed due to the time limit, but the maxbin2 job carried on and finished. So I assumed that if I reran the master job using -resume, it would use the maxbin2 job that had already finished, but it did not: it submitted a new maxbin2 job. The one that was successful took almost 10 days to finish. Thread in Slack conversation
    a
    • 2
    • 3
  • p

    Paweł Ciurka

    10/17/2025, 4:15 PM
    Hi all! Is there any golden path for handling a resume-from-arbitrary-process case? A use case is that in some scenarios I need to rerun the last few processes even though an initial pass of the workflow completed correctly. Any guidance welcome 🙂
    p
    • 2
    • 6
  • n

    Nour El Houda Barhoumi

    10/19/2025, 3:46 PM
    Hi everyone, I hope you are doing well. I would like to ask if the method I used for my functional enrichment analysis is correct. I took all the differentially expressed genes from my RNA-seq comparisons and annotated them using a custom functional classification called My_subclass. Then, for each subclass, I calculated the number of DE genes, the mean log2 fold change, and the fraction of genes belonging to that subclass relative to all DE genes. I visualized the results using bubble plots where the x axis represents the mean log2 fold change, the y axis represents the subclass, bubble size indicates the number of genes, and color indicates up- or down-regulation. This approach does not include any statistical testing like GSEA or Fisher's exact test; it is purely descriptive, to highlight trends in functional categories. I would like to know if this method is scientifically valid for interpreting functional enrichment or if it should be complemented with a statistical enrichment analysis.
    n
    • 2
    • 2
  • s

    Sylvia Li

    10/20/2025, 11:37 PM
    I have a python script that uses the output dir (once all processes are done) at the absolute end of a pipeline that analyses genomes; the script creates visualizations comparing all samples to each other (read quality, plasmids, etc.). I understand that publishDir is asynchronous, but now I am stuck: is there any way to wait until publishing to the output dir is done?
    • Maybe checking the number of subdirectories/files in the output dir would be a good workaround in my python script?
    • Maybe use workflow.onComplete and make it sleep for a user-set amount of time before running the python script?
      ◦ The files being published aren't big, so maybe sleeping would be okay?
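    One alternative sketch (placeholder names throughout): instead of reading the published directory, make the comparison script its own final process and pass it the collected outputs, so the dependency is explicit and nothing has to wait on publishDir.
    Copy code
    // COMPARE_SAMPLES and summarize_samples.py are placeholders
    process COMPARE_SAMPLES {
        input:
        path('results/*')                 // all per-sample result files, staged together
    
        output:
        path('comparison')                // assumes the script writes into comparison/
    
        script:
        """
        summarize_samples.py results/ comparison/
        """
    }
    
    workflow {
        // ... existing processes producing per-sample results ...
        // per_sample_results is a placeholder channel of those outputs
        COMPARE_SAMPLES(per_sample_results.collect())
    }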
    p
    a
    • 3
    • 10
  • y

    Yeji Bae

    10/21/2025, 1:17 AM
    Hi, I am trying to run nf-core/scrnaseq_nf, but I'm facing a memory issue with the cellranger count step. I tried increasing the memory by specifying it in my config file as shown below, but it still runs with 15 GB.
    Copy code
    process {
      withName: 'CELLRANGER_COUNT' {
        cpus = 8
        memory = 36.GB //TODO: check the format 
        time = '24h'
      }
    }
    Here is my command for running the pipeline.
    Copy code
    nextflow \
      -log test_running_cellranger1017.log \
      run nf-core/scrnaseq \
      -c conf/rnaseq_sasquatch.config \
      -profile seattlechildrens,test \
      -params-file params/nf-core_scrnaseq_params_ex_cellranger.yml \
      -resume
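    One likely cause (an assumption based on the 15 GB figure): nf-core test profiles typically set resourceLimits around [cpus: 4, memory: 15.GB, time: 1.h], which silently caps any per-process request. A sketch of lifting the cap alongside the per-process settings in the custom config:
    Copy code
    process {
        // lift the caps imposed by the test profile (needs a recent Nextflow, >= 24.04)
        resourceLimits = [ cpus: 8, memory: 72.GB, time: 48.h ]
    
        withName: 'CELLRANGER_COUNT' {
            cpus   = 8
            memory = 36.GB
            time   = 24.h
        }
    }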
    t
    • 2
    • 1
  • s

    Sylvia Li

    10/21/2025, 6:58 PM
    If I input 2 different channels of 2 different lengths into a subworkflow:
    Copy code
    workflow MAIN {
        channel1 = Channel.of(1, 2, 3, 4)      // size 4
        channel2 = Channel.of('A', 'B', 'C')    // size 3
        
        SUB_WORKFLOW(channel1, channel2)
    }
    
    workflow SUB_WORKFLOW {
        take:
        input1  // size 4
        input2  // size 3
        
        main:
        PROCESS_ONE(input1)  // will this run 4times?
        PROCESS_TWO(input2)  // Will this run 3 times?
    }
    Will the processes within the subworkflow run the number of times necessary for each separate channel? I'm worried, or maybe I misunderstood, that it will only run 3 times, since that is the smallest input channel. Is that an issue only if they're both used in one process and they differ in size?
    p
    • 2
    • 1
  • a

    Ammar Aziz

    10/22/2025, 4:50 AM
    Why do I get this error sometimes:
    Copy code
    ERROR ~ /tmp/nextflow-plugin-nf-schema-2.1.1.lock (Permission denied)
    n
    t
    • 3
    • 8
  • a

    Ammar Aziz

    10/22/2025, 4:51 AM
    Possibly because I pressed ctrl + c during Nextflow startup/downloading?
  • a

    Aivars Cīrulis

    10/22/2025, 1:36 PM
    Hey, when we run the newest sarek 3.6.0, either with or without the parabricks option, we get this error (somewhat similar to what we got with sarek 3.5.1 and Sentieon, as I remember). Does someone know how to deal with it?
    * --msisensorpro_scan (/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensorPro/Homo_sapiens_assembly38.msisensor_scan.list): the file or directory '/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensorPro/Homo_sapiens_assembly38.msisensor_scan.list' does not exist
    * --msisensor2_models (/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensor2/models_hg38//): the file or directory '/mnt/tier2/project/p200971/batchMeluxina/igenomes/Homo_sapiens/GATK/GRCh38/Annotation/MSIsensor2/models_hg38//' does not exist
    Best wishes, Aivars
    #️⃣ 1
    c
    m
    • 3
    • 2
  • c

    chase mateusiak

    10/23/2025, 12:14 AM
    what is the best way to handle a case where, when there are too many input files, they should be passed in via a file?
    Copy code
    ...
        input:
        tuple val(meta), path(peaks) // peaks can be a list of peak files
    
        output:
        tuple val(meta), path("*_merged.txt"), emit: merged
        path "versions.yml"                  , emit: versions
    
        when:
        task.ext.when == null || task.ext.when
    
        script:
        def args = task.ext.args ?: ''
        def prefix = task.ext.prefix ?: "${meta.id}"
        def peak_list = peaks instanceof List ? peaks : [peaks]
        def use_peak_file = peak_list.size() > 10
        def VERSION = '5.1'
        
        if (use_peak_file) {
            """
            # Create file list using a shell loop
            for peak_file in ${peaks.join(' ')}; do
                echo "\${peak_file}" >> ${prefix}_peak_files.txt
            done
    
            mergePeaks \\
                $args \\
                -file ${prefix}_peak_files.txt \\
                > ${prefix}_merged.txt
    
            cat <<-END_VERSIONS > versions.yml
            "${task.process}":
                homer: ${VERSION}
            END_VERSIONS
            """
        } else {
            """
            mergePeaks \\
                $args \\
                ${peaks.join(' ')} \\
                > ${prefix}_merged.txt
    
            cat <<-END_VERSIONS > versions.yml
            "${task.process}":
                homer: ${VERSION}
            END_VERSIONS
            """
        }
    ...
    That works with 4 files (after reducing the threshold at which it switches to the file list), but when I try it with 1000+ files, it doesn't. It also seems to be failing to symlink the files into the work dir, which doesn't happen when I run this on a reduced number of files (can't tell if that is true yet or not, just dealing with it now).
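    One small variation worth trying (a sketch that always goes through the file list, dropping the size threshold): build the list from the Groovy list with a heredoc, so no shell loop over a long expansion is needed and the staged file names are written one per line regardless of how many inputs there are.
    Copy code
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def peak_list = peaks instanceof List ? peaks : [peaks]
    def VERSION = '5.1'
    """
    cat > ${prefix}_peak_files.txt << 'END_PEAKS'
    ${peak_list.join('\n')}
    END_PEAKS
    
    mergePeaks \\
        $args \\
        -file ${prefix}_peak_files.txt \\
        > ${prefix}_merged.txt
    
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        homer: ${VERSION}
    END_VERSIONS
    """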
    s
    • 2
    • 3
  • d

    Dylan Renard

    10/23/2025, 2:06 PM
    Anyone familiar with getting MASHWrapper running on Nextflow/nf-core pipelines? I'm working with paired Illumina reads and struggling with toggling between parallelized MASH results and a separate process for sequence pairs to run via mashwrapper. https://github.com/tiagofilipe12/mash_wrapper
    n
    • 2
    • 2
  • y

    Yang Pei

    10/24/2025, 1:14 PM
    Quick question about expected scheduling behaviour. I’m seeing downstream processes wait until all upstream tasks for a sample set finish before any downstream tasks start. For example:
    Copy code
    executor >  slurm (6)
    [4d/0abfaf] YPE…RBASE:FASTQC (sub_tes33d6) | 0 of 3
    [6a/6d8f03] YPE…BASE:FASTP (sub_spike0125) | 2 of 3
    [-        ] YPE…NCLE_FGBIO_PERBASE:BWA_MEM -
    
    executor >  slurm (6)
    [2a/cfa7f3] YPE…ASE:FASTQC (sub_spike0125) | 1 of 3
    [6a/6d8f03] YPE…BASE:FASTP (sub_spike0125) | 2 of 3
    [-        ] YPE…NCLE_FGBIO_PERBASE:BWA_MEM -
    
    executor >  slurm (9)
    [4d/0abfaf] YPE…RBASE:FASTQC (sub_tes33d6) | 3 of 3 ✔
    [fd/92aa1d] YPE…ERBASE:FASTP (sub_tes33d6) | 3 of 3 ✔
    [50/b7a700] YPE…BASE:BWA_MEM (sub_tes33d6) | 0 of 3
    It looks like BWA_MEM only starts once all 3 FASTP tasks for that sample have finished, rather than as each FASTP task completes. Is that expected Nextflow behaviour? Or is there a way to configure Nextflow (or Slurm/executor) to let downstream tasks be scheduled as each upstream task finishes (like a per-task streaming)?
    n
    t
    • 3
    • 6