# nf-core for RNA-seq

In this part, we are going to create the same pipeline for analysing RNA-seq data as the one we built with Nextflow here.

As a reminder, the pipeline is composed of the following steps:

```mermaid
flowchart TB
    subgraph Workflow
    A[fastq]
    A --> B([fastqc])
    A --> C([trim galore])
    C --> D([hisat2])
    B --> E([MultiQC])
    C --> E
    D --> E
    E --> F[html]
    end
```
We are going to use the GitHub Codespace from the previous section for this part.
We are going to:

1. create a new project using the nf-core template
2. download the nf-core modules needed
3. create our own MultiQC module using the nf-core module template
4. assemble the pipeline
Let's get started!
## Create a new project using the nf-core template
The first thing we are going to do is to create a new pipeline that we're going to name `RNAseq`.
So let’s create a new directory for our project:
```bash
mkdir nf4-science/nf-core-rnaseq
cd nf4-science/nf-core-rnaseq
```
We are going to reuse the data provided in the Nextflow for RNA-seq training. To do so, let's create a symbolic link to that training's data:

```bash
ln -s ../rnaseq/data/
```
Check that everything is working well:
```bash
ls -lh
```

This command should display the link that we just created:

```console
total 0
lrwxrwxrwx 1 root root 15 Sep 11 14:03 data -> ../rnaseq/data/
```
We can see that the system is indicating that the `data` entry is a symbolic link (the `l` at the beginning of `lrwxrwxrwx`) and that it's pointing to `../rnaseq/data/`. `..` is a relative path component indicating that the `rnaseq` directory containing the `data` directory can be found in the parent directory of our current working directory. That's why we call these paths relative: they are relative to the directory we are currently in.
To know your current working directory, you can use the `pwd` command:

```console
$ pwd
/workspaces/training/nf4-science/nf-core-rnaseq
```
Now, let's create our pipeline using the nf-core template:

```bash
nf-core pipelines create
```
This command will display a wizard to help you create the pipeline. Click on the Let's go! button at the end of the page. It will then prompt you to choose the pipeline type: click on Custom. Now fill in the form with the following content:
- GitHub organisation: `core`
- Workflow name: `RNAseq`
- A short description of your pipeline: `A basic RNAseq pipeline using nf-core template`
- Name of the main author(s): `< YOUR NAME >`
Then click on Next. Select the following configurations:
- Add testing profiles
- Use nf-core components
- Use nf-schema
- Add configuration files
- Add documentation
Then click on Continue and on the Finish button. Wait until the pipeline is created, then click on Continue. Finally, click on Finish without creating a repo, and on Close.
Now you can look at the content of the repository created:
```console
$ tree core-rnaseq/
core-rnaseq/
├── assets
│   ├── samplesheet.csv
│   └── schema_input.json
├── conf
│   ├── base.config
│   ├── modules.config
│   ├── test.config
│   └── test_full.config
├── docs
│   ├── output.md
│   ├── README.md
│   └── usage.md
├── main.nf
├── modules.json
├── nextflow.config
├── nextflow_schema.json
├── README.md
├── subworkflows
│   ├── local
│   │   └── utils_nfcore_rnaseq_pipeline
│   │       └── main.nf
│   └── nf-core
│       ├── utils_nextflow_pipeline
│       │   ├── main.nf
│       │   ├── meta.yml
│       │   └── tests
│       │       ├── main.function.nf.test
│       │       ├── main.function.nf.test.snap
│       │       ├── main.workflow.nf.test
│       │       ├── nextflow.config
│       │       └── tags.yml
│       ├── utils_nfcore_pipeline
│       │   ├── main.nf
│       │   ├── meta.yml
│       │   └── tests
│       │       ├── main.function.nf.test
│       │       ├── main.function.nf.test.snap
│       │       ├── main.workflow.nf.test
│       │       ├── main.workflow.nf.test.snap
│       │       ├── nextflow.config
│       │       └── tags.yml
│       └── utils_nfschema_plugin
│           ├── main.nf
│           ├── meta.yml
│           └── tests
│               ├── main.nf.test
│               ├── nextflow.config
│               └── nextflow_schema.json
└── workflows
    └── rnaseq.nf
```
There are a lot of files! We are going to explore and change the important ones later.
Now go inside the newly created directory:
```bash
cd core-rnaseq/
```
and check that the pipeline works as expected:
```bash
nextflow run . -profile docker,test --outdir ../core-results
```
## Download the nf-core modules needed
Now, we are going to download the nf-core modules that we need for the pipeline. You can check all available nf-core modules on their website here.
You can also do it directly from your terminal:

```bash
nf-core modules list remote # list all available modules
```
You can also add a pattern to this command in order to see if a particular module is present:
```bash
nf-core modules list remote fastq # list the modules containing the pattern `fastq`
```
```console
                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 3.3.2 - https://nf-co.re

INFO     Modules available from https://github.com/nf-core/modules.git (master) matching pattern 'fastq':
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Module Name                ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ...                        │
│ cat/fastq                  │
│ cellranger/mkfastq         │
│ cellrangerarc/mkfastq      │
│ cellrangeratac/mkfastq     │
│ fastqc                     │
│ fastqdl                    │
│ ...                        │
│ wipertools/fastqwiper      │
└────────────────────────────┘
```
To list the locally installed modules, you can launch:
```bash
nf-core modules list local
```
It should display something like this, indicating that you don’t have any module installed in your pipeline yet.
```console
                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 3.3.2 - https://nf-co.re

INFO     Repository type: pipeline
INFO     Reinstalling modules found in 'modules.json' but missing from directory:
INFO     Modules installed in '.':
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━┓
┃ Module Name ┃ Repository ┃ Version SHA ┃ Message ┃ Date ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━┩
└─────────────┴────────────┴─────────────┴─────────┴──────┘
```
Now, let's check if a `fastqc` module is available at nf-core.

```bash
nf-core modules list remote fastqc
```
```console
                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 3.3.2 - https://nf-co.re

INFO     Modules available from https://github.com/nf-core/modules.git (master) matching pattern 'fastqc':
┏━━━━━━━━━━━━━┓
┃ Module Name ┃
┡━━━━━━━━━━━━━┩
│ fastqc      │
└─────────────┘
```
Ok! It seems that the `fastqc` module indeed exists, but does it do a quality check on fastq data? To find out, we are going to use the `nf-core modules info [MODULE]` command:

```bash
nf-core modules info fastqc
```
```console
                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 3.3.2 - https://nf-co.re

INFO     Reinstalling modules found in 'modules.json' but missing from directory:

🌐 Repository: https://github.com/nf-core/modules.
📖 Description: Run FastQC on sequenced reads

[INPUT OUTPUT DESCRIPTION]

💻  Installation command: nf-core modules install fastqc
```
The `nf-core modules info` command gives us a lot of information: a description of the module and the inputs and outputs it produces. Finally, it gives us the command to install the module.
As you can see, the fastqc module corresponds to what we want to do, so we can download it with:

```bash
nf-core modules install fastqc
```
This command should display:
```console
...
INFO     Reinstalling modules found in 'modules.json' but missing from directory:
INFO     Installing 'fastqc'
INFO     Use the following statement to include this module:

 include { FASTQC } from '../modules/nf-core/fastqc/main'
```
It gives us a helpful message at the end: the statement to include and use this module. We are going to use it later in the `workflows/rnaseq.nf` file.
Let’s check what this command added in our directory:
```bash
tree modules
```

```console
modules
└── nf-core
    └── fastqc
        ├── environment.yml
        ├── main.nf
        ├── meta.yml
        └── tests
            ├── main.nf.test
            └── main.nf.test.snap
```
- `environment.yml`: contains data about the conda environment needed to make the module work
- `meta.yml`: contains metadata about the module
- `main.nf`: contains the code of the `fastqc` process
- `tests/`: data needed to test whether the module works properly
We still need to install the other needed modules:
```bash
# To know exactly why we are downloading these, you can use
# nf-core modules list remote [module]
# nf-core modules info [module]
nf-core modules install trimgalore
nf-core modules install hisat2/build
nf-core modules install hisat2/align
```
We are not going to install the MultiQC module: if you check the module here, you will see that it needs many optional input files that we are not going to use. Optional input files are a bit tricky to handle; you can check here to learn how to deal with them.
Let’s check that the modules needed are installed locally by running this command:
```bash
nf-core modules list local
```

This should display:
```console
INFO     Repository type: pipeline
INFO     Modules installed in '.':
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Module Name  ┃ Repository      ┃ Version SHA ┃ Message                                ┃ Date       ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ fastqc       │ nf-core/modules │ 41dfa3f     │ update meta.yml of all modules (#8747) │ 2025-07-07 │
│ hisat2/align │ nf-core/modules │ 41dfa3f     │ update meta.yml of all modules (#8747) │ 2025-07-07 │
│ hisat2/build │ nf-core/modules │ 41dfa3f     │ update meta.yml of all modules (#8747) │ 2025-07-07 │
│ trimgalore   │ nf-core/modules │ 41dfa3f     │ update meta.yml of all modules (#8747) │ 2025-07-07 │
└──────────────┴─────────────────┴─────────────┴────────────────────────────────────────┴────────────┘
```
## Create our own MultiQC module using the nf-core module template
There is still one module missing: the `multiqc` module. We are going to create it using the nf-core module template. To create a module, run the following command:

```bash
nf-core modules create
```
It will prompt you with questions about your module in order to create it:
- Name of tool/subtool: `mymultiqc`
- Do you want to enter a different Bioconda package name? [y/n]: `y`
- Name of Bioconda package: `multiqc`
- GitHub Username: (@author): `<@username>`
- Process resource label: `process_single`
- Will the module require a meta map of sample information? [y/n] (y): `n`
During this process, you may have seen that nf-core found a Bioconda MultiQC package: `bioconda::multiqc=1.31`. You can see it in the file `modules/local/mymultiqc/environment.yml`.
It also prompted you to choose a process label. These labels are used by nf-core to identify the resources a process requires. You can check the `conf/base.config` file, which contains the default configuration: many labels are defined there and used to specify the resources needed by the modules carrying them.
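For example, here is roughly how the `process_single` label we just picked is wired up in the template's `conf/base.config` (a sketch; the exact values in your generated file may differ):

```groovy
process {
    // Any process declaring `label 'process_single'` gets these defaults
    withLabel:process_single {
        cpus   = { 1                   }
        memory = { 6.GB * task.attempt }
        time   = { 4.h  * task.attempt }
    }
}
```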
Many configuration files exist as part of nf-core for specific institutional clusters: you can check them out here. For example, a config file for the CBPsmn is available among the nf-core config files. To use it, simply launch your pipeline with:

```bash
nextflow run . -profile psmn,test --outdir ../core-results
```

Note that this will not work in the GitHub Codespace.
So it's time to change the content of the newly created module. Open the file `modules/local/mymultiqc/main.nf`; we are going to update its content. This file contains many comments at the top that can help you build a module the nf-core way. We are going to remove them for better readability.
The new first lines of the file:

```groovy
process MYMULTIQC {
    tag '$bam'
    label 'process_single'
    ...
```
Now let's remove the line `tag '$bam'`. It is useful to know which data a module is processing, but since we are going to launch this process only once, it is not necessary.
You can see the following lines:

```groovy
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
    'https://depot.galaxyproject.org/singularity/YOUR-TOOL-HERE':
    'biocontainers/YOUR-TOOL-HERE' }"
```
These lines indicate that if we are using `singularity` as the container engine (and not `docker`), we should pull a Singularity container; otherwise we pull a Docker container. Replace them with the following, which is the code found in the real MultiQC module:

```groovy
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
    'https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/ef/eff0eafe78d5f3b65a6639265a16b89fdca88d06d18894f90fcdb50142004329/data' :
    'community.wave.seqera.io/library/multiqc:1.31--1efbafd542a23882' }"
```
Then we are going to change the input section. Replace it with this one:

```groovy
input:
path(reports)
```

This tells Nextflow that the process takes only report file paths as input, with no additional information.
Now let's update the output section to this one:

```groovy
output:
path "multiqc.html", emit: report
path "multiqc_data", emit: data
path "versions.yml", emit: versions
```

We are going to get the MultiQC HTML report from the `mymultiqc` process and the data directory it produces. We are also going to retrieve the version of MultiQC used.
Now let's update the script section to this one:

```groovy
script:
"""
multiqc . -n multiqc.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$(multiqc --version)
END_VERSIONS
"""
```

This is the main part of the module. It executes the MultiQC command in the process working directory, which will contain all the report files, and outputs a file named `multiqc.html`. It also produces a `versions.yml` file containing the version of MultiQC used.
Finally, let's update the stub part of the MultiQC module:

```groovy
stub:
"""
touch multiqc.html
mkdir multiqc_data
touch multiqc_data/multiqc.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$(multiqc --version)
END_VERSIONS
"""
```
The `stub` section replaces the actual process script when the `-stub` command-line option is enabled. This makes it easier to prototype the workflow logic without running the real commands. If a Nextflow pipeline is executed with the `-stub` option and a process has no `stub` section defined, then its `script` section is executed instead.
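For example, once the pipeline is assembled at the end of this chapter, a quick dry run could look like this (a sketch; the flags mirror the run commands used below):

```bash
# -stub executes the stub sections instead of the real commands
nextflow run . -profile docker,test --outdir ../core-results -stub
```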
The final code of the module should look like this:
```groovy
process MYMULTIQC {
    label 'process_single'

    // TODO nf-core: See section in main README for further information regarding finding and adding container addresses to the section below.
    conda "${moduleDir}/environment.yml"
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://community-cr-prod.seqera.io/docker/registry/v2/blobs/sha256/ef/eff0eafe78d5f3b65a6639265a16b89fdca88d06d18894f90fcdb50142004329/data' :
        'community.wave.seqera.io/library/multiqc:1.31--1efbafd542a23882' }"

    input:
    path(reports)

    output:
    path "multiqc.html", emit: report
    path "multiqc_data", emit: data
    path "versions.yml", emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    """
    multiqc . -n multiqc.html

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        multiqc: \$(multiqc --version)
    END_VERSIONS
    """

    stub:
    """
    touch multiqc.html
    mkdir multiqc_data
    touch multiqc_data/multiqc.log

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        multiqc: \$(multiqc --version)
    END_VERSIONS
    """
}
```
That's it for the `mymultiqc` module. Now let's pull all the modules together in order to execute them in a pipeline!
## Assemble the pipeline
In order to make the pipeline execute all the modules, we are going to update the file `workflows/rnaseq.nf`. This workflow is called by the main workflow located in `main.nf`. Let's open this last file.
In this file we can see that a subworkflow called `PIPELINE_INITIALISATION` is called prior to the `CORE_RNASEQ` workflow that executes our pipeline. This subworkflow checks whether the parameters given to the pipeline match the parameter schema defined in `nextflow_schema.json`. Also, if we check what `PIPELINE_INITIALISATION` does in `subworkflows/local/utils_nfcore_rnaseq_pipeline/main.nf`, we can see that it processes the input file given to the pipeline.
To visualise exactly what we have in the `ch_samplesheet` channel in the `workflows/rnaseq.nf` file, it is good practice to use the `view` operator. Let's add the following line at line 22:
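```groovy
ch_samplesheet.view() // inspect what flows through the channel
```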
Then run the pipeline:

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv
```

This should throw an error:
```console
ERROR ~ Validation of pipeline parameters failed!

 -- Check '.nextflow.log' file for details
The following invalid input values have been detected:

* --input (../data/single-end.csv): Validation of file failed:
        -> Entry 1: Missing required field(s): sample
        -> Entry 2: Missing required field(s): sample
        -> Entry 3: Missing required field(s): sample
        -> Entry 4: Missing required field(s): sample
        -> Entry 5: Missing required field(s): sample
        -> Entry 6: Missing required field(s): sample
```
This is because our input file does not have the required field `sample`! Instead, it has a `sample_id` field. You can check that with the following command:

```bash
head -n 1 ../data/single-end.csv
```
We can fix this error either by changing the input file header or by updating the schema defined in `assets/schema_input.json`. We are going to use the latter option. Let's open the `assets/schema_input.json` file and replace `sample` with `sample_id` at lines 10 and 31.
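After the edit, the renamed property should look roughly like this (a sketch based on the nf-core template; your generated file may differ slightly, and line 31 is the matching entry in the schema's `required` list):

```json
"sample_id": {
    "type": "string",
    "pattern": "^\\S+$",
    "errorMessage": "Sample name must be provided and cannot contain spaces",
    "meta": ["id"]
}
```

The `"meta": ["id"]` entry is what maps the `sample_id` column onto the `id` key of the meta map we will meet below.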
Now let's run the pipeline again:

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv
```

Ok! Everything seems to work fine! You should see the following output:
```console
[[id:ENCSR000COQ1, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000COQ1_1.fastq.gz]]
[[id:ENCSR000COQ2, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000COQ2_1.fastq.gz]]
[[id:ENCSR000COR1, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000COR1_1.fastq.gz]]
[[id:ENCSR000COR2, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000COR2_1.fastq.gz]]
[[id:ENCSR000CPO1, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000CPO1_1.fastq.gz]]
[[id:ENCSR000CPO2, single_end:true], [/workspaces/training/nf4-science/rnaseq/data/reads/ENCSR000CPO2_1.fastq.gz]]
```
We have a channel composed of two-element tuples. The last value of each tuple is a `fastq` file and the first value is the metadata associated with this `fastq` file. It's a Groovy map with two keys: `id`, which was defined in the `sample_id` column of the input CSV file, and `single_end`, set to `true`, which was inferred by the `PIPELINE_INITIALISATION` subworkflow.
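This meta map is the backbone of nf-core modules: processes carry it alongside the data and use it for naming and branching. A hypothetical minimal process (not part of our pipeline) showing typical usage:

```groovy
process EXAMPLE {
    tag "${meta.id}"    // shows e.g. ENCSR000COQ1 in the execution log

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path("*.txt")

    script:
    def mode = meta.single_end ? 'single-end' : 'paired-end'   // branch on metadata
    """
    echo "processing ${meta.id} as ${mode}" > ${meta.id}.txt
    """
}
```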
Ok, so the structure of the elements of our channel implies that the modules using it should have an input of the following form:

```groovy
input:
tuple val(meta), path(reads)
```
Thankfully, all nf-core modules are built this way. For example, look at the `modules/nf-core/fastqc/main.nf` and `modules/nf-core/trimgalore/main.nf` files: they indeed declare the expected input.
So we are good to go to add our modules to the `workflows/rnaseq.nf` file! In order to use them, we first need to import them. We can do so by adding the following lines at line 8 of the `workflows/rnaseq.nf` file:
```groovy
include { FASTQC       } from '../modules/nf-core/fastqc/main'
include { TRIMGALORE   } from '../modules/nf-core/trimgalore/main'
include { HISAT2_BUILD } from '../modules/nf-core/hisat2/build/main'
include { HISAT2_ALIGN } from '../modules/nf-core/hisat2/align/main'
include { MYMULTIQC    } from '../modules/local/mymultiqc/main'
```
Now that we have seen that we can use the `ch_samplesheet` channel as input to our `FASTQC` and `TRIMGALORE` processes, let's add them to the pipeline! Remove the `ch_samplesheet.view()` line and add the following code instead:

```groovy
FASTQC(ch_samplesheet)
TRIMGALORE(ch_samplesheet)
```
Let's see if everything works as expected:

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv -resume
```
Oh no, we have an error! But Nextflow is helping us solve it. You can see the following statements in the error message:

```console
ERROR ~ Error executing process > 'CORE_RNASEQ:RNASEQ:FASTQC (ENCSR000COQ2)'

Caused by:
  Process requirement exceeds available CPUs -- req: 4; avail: 2
```
It seems that our FASTQC process wants to use more CPUs than are available. Let's look at the `FASTQC` and `TRIMGALORE` processes. We can see in `modules/nf-core/fastqc/main.nf` that the label `process_medium` is used, and in `modules/nf-core/trimgalore/main.nf` that the label `process_high` is used.

If we check their configuration in `conf/base.config`, we have the following resources set for those labels:
```groovy
withLabel:process_medium {
    cpus   = { 6     * task.attempt }
    memory = { 36.GB * task.attempt }
    time   = { 8.h   * task.attempt }
}
withLabel:process_high {
    cpus   = { 12    * task.attempt }
    memory = { 72.GB * task.attempt }
    time   = { 16.h  * task.attempt }
}
```
That's way too much for a test environment! We could change the values specified here, but that would alter the default resources available to the processes when we run on real data. Instead, we are going to update the resource limits that apply in our test environment: the `resourceLimits` setting caps whatever the labels request. To do this, open the `conf/test.config` file and update the resource limits:
```groovy
process {
    resourceLimits = [
        cpus: 1,
        memory: '1.GB',
        time: '1.h'
    ]
}
```
Let's try to make the pipeline work!

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv -resume
```
Awesome! Everything is working fine! Now let's update `workflows/rnaseq.nf` to add the `hisat2/build` and `hisat2/align` processes, in order to build the HISAT2 index and map the reads to a genome, respectively. To do that, we must create a parameter to supply the reference genome to the pipeline.
Let's add the following code after the creation of the empty channel for the process versions:

```groovy
ch_genome = Channel.fromPath(params.genome, checkIfExists: true).map { it -> [[id: it.baseName], it] }
ch_genome.view()
```

The `map` wraps the genome file in the same `[meta, file]` shape the nf-core HISAT2 modules expect, using the file's base name as `id`.
After pipeline execution with...

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv -resume --genome ../data/genome.fa
```

... you should see the following output:
```console
[90/57ae96] CORE_RNASEQ:RNASEQ:FASTQC (ENCSR000CPO2)     [100%] 6 of 6, cached: 6 ✔
[6e/88f619] CORE_RNASEQ:RNASEQ:TRIMGALORE (ENCSR000CPO2) [100%] 6 of 6, cached: 6 ✔
[[id:genome], /workspaces/training/nf4-science/nf-core-rnaseq/data/genome.fa]
-[core/rnaseq] Pipeline completed successfully-
```
Great, now let's add the `HISAT2_BUILD` process. We can see in `modules/nf-core/hisat2/build/main.nf` that this process takes three input files, of which the last two are optional (see lines 40-41). As there is currently no easy way to handle optional inputs, we are going to use the method described here.
Let's create two empty files:

```bash
touch assets/NOFILE
touch assets/NOFILE2
```
Then let's create two dummy channels just after the definition of `ch_genome`:

```groovy
dummy_channel1 = Channel.fromPath("assets/NOFILE").map { it -> ["none1", it] }
dummy_channel2 = Channel.fromPath("assets/NOFILE2").map { it -> ["none2", it] }
```
Then add the `HISAT2_BUILD` process like this, after the `TRIMGALORE` process:

```groovy
HISAT2_BUILD(ch_genome.collect(), dummy_channel1.collect(), dummy_channel2.collect())
```

The `.collect()` calls turn these one-shot queue channels into value channels, so their content can be reused as many times as needed.
Then we need to add the `HISAT2_ALIGN` process in order to map the reads. If we check the file `modules/nf-core/hisat2/align/main.nf`, we can see that this process takes three input arguments:
- The trimmed reads
- The indexed genome
- An optional file with the splice sites
So let's add the following line after the `HISAT2_BUILD` process:

```groovy
HISAT2_ALIGN(TRIMGALORE.out.reads, HISAT2_BUILD.out.index.collect(), dummy_channel1.collect())
```
Let's run the pipeline to see if everything works fine:

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv -resume --genome ../data/genome.fa
```
Finally, let's run MultiQC on all the report files produced by the previous processes. After the `HISAT2_ALIGN` call in `workflows/rnaseq.nf`, add the following code to build a channel containing all the report files and launch the `MYMULTIQC` process:
```groovy
FASTQC.out.zip.mix(
    FASTQC.out.html,
    TRIMGALORE.out.log,
    TRIMGALORE.out.html,
    TRIMGALORE.out.zip,
    HISAT2_ALIGN.out.summary
).map { it -> it[1] }.flatten().set { ch_reports }

MYMULTIQC(ch_reports.collect())
```
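If the channel operations above look opaque: `mix` merges all the report channels into one, `map` keeps only the file part of each `[meta, file]` tuple, `flatten` splits file lists into individual files, and `collect` gathers everything into a single emission so `MYMULTIQC` runs exactly once. A minimal standalone sketch of the same pattern, with made-up file names:

```groovy
// Standalone illustration of the mix/map/flatten/collect pattern
workflow {
    a = Channel.of([[id: 's1'], file('r1.html')])                    // one report
    b = Channel.of([[id: 's2'], [file('r2.zip'), file('r3.zip')]])   // a list of reports

    a.mix(b)                  // merge the two channels
        .map { it -> it[1] }  // drop the meta map, keep the file part
        .flatten()            // split the list into individual files
        .collect()            // gather all files into one list emission
        .view()               // => [r1.html, r2.zip, r3.zip]
}
```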
Check that everything is working fine!

```bash
nextflow run . -profile docker,test --outdir ../core-results --input ../data/single-end.csv -resume --genome ../data/genome.fa
```
You will see some warnings when you launch the pipeline. That's because we still have to update the `nextflow_schema.json` file and add the following lines under the `properties` of `input_output_options`:
"genome": {
"type": "string",
"format": "file-path",
"description": "A genome file in fasta format"
},
"hisat2_build_memory": {
"type": "string",
"description": "The memory build for hisat2"
},
"seq_center": {
"type": "boolean",
},
"save_unaligned": {
"type": "boolean",
"description": "true to save unaligned read into a fastq file false else"
}
Then we can add default values in the `nextflow.config` file at line 14:

```groovy
genome              = null
hisat2_build_memory = null
seq_center          = false
save_unaligned      = false
```
This should suppress the warnings.