All command line parameters accepted by all tools in the GATK.
This is a list of options and parameters that are generally available to all tools in the GATK.
There may be a few restrictions, which are indicated in individual argument descriptions. For example, the -BQSR argument is only meant to be used with a subset of tools, and the -pedigree argument will only be used effectively by a subset of tools as well. Some arguments conflict with others, and some conversely are dependent on others. This is all indicated in the detailed argument descriptions, so be sure to read those in their entirety rather than just skimming the one-line summary in the table.
This class is the GATK engine itself, which manages map/reduce data access and runs walkers.
We run command line GATK programs using this class. It gets the command line arguments, parses them, and hands the GATK engine all of the parsed information. Pretty much anything dealing with the underlying system should go here; the GATK engine should deal with any data-related information.
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
| Argument name(s) | Default value | Summary |
| --- | --- | --- |
|  | NA | Name of the tool to run |
|  | NA | Input covariates table file for on-the-fly base quality score recalibration |
|  | NA | One or more genomic intervals to exclude from processing |
|  | NA | Input file containing sequence data (SAM or BAM) |
|  | NA | One or more genomic intervals over which to operate |
|  | NA | Exclude read groups based on tags |
|  | NA | Reference sequence file |
|  | NA | Set the logging location |
|  | NA | Type of BAQ calculation to apply in the engine |
|  | 40.0 | BAQ gap open penalty |
|  | -1 | Assign a default base quality |
|  | NA | Target coverage threshold for downsampling to coverage |
|  | NA | Fraction of reads to downsample to |
|  | NA | Type of read downsampling to employ at a given locus |
|  | NA | GATK key file required to run with -et NO_ET |
|  | -1.0 | Global Qscore Bayesian prior to use for BQSR |
|  | NA | Interval merging rule for abutting intervals |
|  | 0 | Amount of padding (in bp) to add to each interval |
|  | NA | Set merging approach to use for combining interval inputs |
|  | NA | Set the minimum level of logging |
|  | -1 | Stop execution cleanly as soon as maxRuntime has been reached |
|  | NA | Unit of time used by maxRuntime |
|  | NA | Total number of BAM file handles to keep open simultaneously |
|  | 1 | Number of CPU threads to allocate per data thread |
|  | 1 | Number of data threads to allocate to this analysis |
|  | NA | Pedigree files for samples |
|  | NA | Pedigree string for samples |
|  | NA | Validation strictness for pedigree information |
|  | NA | Write GATK runtime performance log to this file |
|  | NA | Run reporting mode |
|  | 6 | Don't recalibrate bases with quality scores less than this threshold (with -BQSR) |
|  | NA | Number of reads per SAM file to buffer in memory |
|  | NA | Filters to apply to reads before analysis |
|  | NA | Tag to identify this GATK run as part of a group of runs |
|  | NA | Enable unsafe operations: nothing will be checked at runtime |
|  | NA | How strict should we be with validation |
|  | NA | Ignore warnings about base quality score encoding |
|  | NA | Turn off on-the-fly creation of indices for output BAM files |
|  | NA | Disable printing of base insertion and deletion tags (with -BQSR) |
|  | NA | Emit the OQ tag with the original base qualities (with -BQSR) |
|  | NA | Fix mis-encoded base quality scores |
|  | NA | Enable on-the-fly creation of md5s for output BAM files |
|  | NA | Generate the help message |
|  | NA | Keep program records in the SAM header |
|  | NA | Enable threading efficiency monitoring |
|  | NA | Always output all the records in VCF FORMAT fields, even if some are missing |
|  | NA | Use a non-deterministic random seed |
|  | NA | Refactor cigar string with NDN elements to one element |
|  | NA | Remove program records from the SAM header |
|  | NA | Just output sites without genotypes (i.e. only the first 8 columns of the VCF) |
|  | NA | Use the base quality scores from the OQ tag |
|  | NA | Output version information |
|  | NA | Compression level to use for writing BAM files (0 - 9, higher is more compressed) |
|  | NA | Rename sample IDs on-the-fly at runtime using the provided mapping file |
|  | -1 | Parameter to pass to the VCF/BCF IndexCreator |
|  | NA | Type of IndexCreator to use for VCF/BCF indices |
|  | NA | If provided, output BAM files will be simplified to include just key reads for downstream variation discovery analyses (removing duplicates, PF-, non-primary reads), as well as stripping all extended tags from the kept reads except the read group identifier |
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
Ignore warnings about base quality score encoding
This flag tells GATK to ignore warnings when encountering base qualities that are too high and that seemingly indicate a problem with the base quality encoding of the BAM file. You should only use this if you really know what you are doing; otherwise you could seriously mess up your data and ruin your analysis.
Name of the tool to run
A complete list of tools (sometimes also called walkers because they "walk" through the data to perform analyses) is available in the online documentation.
Compression level to use for writing BAM files (0 - 9, higher is more compressed)
Type of BAQ calculation to apply in the engine
The --baq argument is an enumerated type (CalculationMode), which can have one of the following values:
BAQ gap open penalty
Phred-scaled gap open penalty for BAQ calculation. Although the default value is 40, a value of 30 may be better for whole genome call sets.
double [ [ 0 ∞ ] ]
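For reference, a Phred-scaled penalty Q encodes an error (here, gap-opening) probability of 10^(-Q/10). A quick sketch of that relationship (illustrative Python, not GATK code):

```python
def phred_to_prob(q):
    """Convert a Phred-scaled value Q to the probability it encodes."""
    return 10 ** (-q / 10.0)

# The default gap open penalty of 40 corresponds to a prior
# gap-opening probability of 1e-4; the suggested whole-genome
# value of 30 corresponds to a more permissive 1e-3.
default_prob = phred_to_prob(40)   # 0.0001
wgs_prob = phred_to_prob(30)       # 0.001
```

Lowering the penalty from 40 to 30 thus makes gap openings ten times more likely a priori, which is why 30 may work better for whole genome call sets.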
Input covariates table file for on-the-fly base quality score recalibration
Enables on-the-fly recalibrate of base qualities, intended primarily for use with BaseRecalibrator and PrintReads (see Best Practices workflow documentation). The covariates tables are produced by the BaseRecalibrator tool. Please be aware that you should only run recalibration with the covariates file created on the same input bam(s).
Assign a default base quality
If reads are missing some or all base quality scores, this value will be used for all base quality scores. By default this is set to -1 to disable default base quality assignment.
byte [ [ 0 127 ] ]
Turn off on-the-fly creation of indices for output BAM files.
Disable printing of base insertion and deletion tags (with -BQSR)
Turns off printing of the base insertion and base deletion tags when using the -BQSR argument. Only the base substitution qualities will be produced.
Target coverage threshold for downsampling to coverage
The principle of this downsampling type is to downsample reads to a given capping threshold coverage. Its purpose is to get rid of excessive coverage, because above a certain depth, having additional data is not informative and imposes unreasonable computational costs. The downsampling process takes two different forms depending on the type of analysis it is used with. For locus-based traversals (LocusWalkers like UnifiedGenotyper and ActiveRegionWalkers like HaplotypeCaller), downsample_to_coverage controls the maximum depth of coverage at each locus. For read-based traversals (ReadWalkers like BaseRecalibrator), it controls the maximum number of reads sharing the same alignment start position. For ReadWalkers you will typically need to use much lower dcov values than you would with LocusWalkers to see an effect. Note that this downsampling option does not produce an unbiased random sampling from all available reads at each locus: instead, the primary goal of the to-coverage downsampler is to maintain an even representation of reads from all alignment start positions when removing excess coverage. For a truly unbiased random sampling of reads, use -dfrac instead. Also note that the coverage target is an approximate goal that is not guaranteed to be met exactly: the downsampling algorithm will under some circumstances retain slightly more or less coverage than requested.
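The ReadWalker form of this cap (at most N reads per alignment start position) can be sketched as follows. This is an illustrative simplification, not the actual GATK downsampler, which additionally balances representation across start positions when trimming excess coverage; the `(start, name)` read representation is hypothetical:

```python
from collections import defaultdict

def downsample_by_start(reads, max_per_start):
    """Keep at most max_per_start reads sharing the same alignment
    start position. Reads are hypothetical (start, name) tuples;
    input order is preserved among the kept reads."""
    seen = defaultdict(int)
    kept = []
    for start, name in reads:
        if seen[start] < max_per_start:
            seen[start] += 1
            kept.append((start, name))
    return kept
```

Note that, as the text above explains, this kind of capping is not an unbiased random sample of the reads; for that, use -dfrac instead.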
Fraction of reads to downsample to
Reads will be downsampled so the specified fraction remains; e.g. if you specify -dfrac 0.25, three-quarters of the reads will be removed, and the remaining one quarter will be used in the analysis. This method of downsampling is truly unbiased and random. It is typically used to simulate the effect of generating different amounts of sequence data for a given sample. For example, you can use this in a pilot experiment to evaluate how much target coverage you need to aim for in order to obtain enough coverage in all loci of interest.
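The fractional behavior described above can be sketched as an independent coin flip per read (illustrative Python, not GATK code; the seed parameter is an assumption for reproducibility):

```python
import random

def downsample_fraction(reads, fraction, seed=None):
    """Keep each read independently with probability `fraction`,
    giving a truly unbiased random sample, as with -dfrac."""
    rng = random.Random(seed)
    return [r for r in reads if rng.random() < fraction]

# With -dfrac 0.25, roughly three-quarters of the reads are removed.
kept = downsample_fraction(list(range(10000)), 0.25, seed=42)
```

Because every read is sampled independently, the retained count is only approximately `fraction` times the input size, not an exact quota.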
Type of read downsampling to employ at a given locus
There are several ways to downsample reads, i.e. to remove reads from the pile of reads that will be used for analysis. See the documentation of the individual downsampling options for details on how they work. Note that many GATK tools specify a default downsampling type and target, but this behavior can be overridden from the command line using the downsampling arguments.
The --downsampling_type argument is an enumerated type (DownsampleType), which can have one of the following values:
Emit the OQ tag with the original base qualities (with -BQSR)
By default, the OQ tag is not emitted when using the -BQSR argument. Use this flag to include OQ tags in the output BAM file. Note that this may result in a significant file size increase.
One or more genomic intervals to exclude from processing
Use this option to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL chr1 or -XL chr1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals). Additionally, you can also specify a ROD file (such as a VCF file) in order to exclude specific positions from the analysis based on the records present in the file (e.g. -XL file.vcf).
Fix mis-encoded base quality scores
By default the GATK assumes that base quality scores start at Q0 == ASCII 33 according to the SAM specification. However, encoding in some datasets (especially older Illumina ones) starts at Q64. This argument will fix the encodings on the fly (as the data is read in) by subtracting 31 from every quality score. Note that this argument should NEVER be used by default; you should only use it when you have confirmed that the quality scores in your data are not in the correct encoding.
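The fix amounts to shifting each quality character from the Phred+64 encoding down to the Phred+33 encoding the SAM specification assumes, i.e. subtracting 31 from every ASCII value, as stated above. A minimal sketch (illustrative Python, not GATK code):

```python
def fix_quality_encoding(qual_string):
    """Convert a Phred+64 encoded quality string to Phred+33
    by subtracting 31 from each character's ASCII value."""
    return "".join(chr(ord(c) - 31) for c in qual_string)

# A Q30 base encoded as Phred+64 is chr(64 + 30) == '^';
# after the shift it becomes chr(33 + 30) == '?'.
```

Applying this to data that is already Phred+33 would corrupt the scores, which is why the argument should only be used after confirming the encoding really is wrong.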
GATK key file required to run with -et NO_ET
Please see the "phone_home" argument below and the online documentation FAQs for more details on the key system and how to request a key.
Enable on-the-fly creation of md5s for output BAM files.
Global Qscore Bayesian prior to use for BQSR
If specified, this value will be used as the prior for all mismatch quality scores instead of the actual reported quality score.
double [ [ -∞ ∞ ] ]
Generate the help message
This will produce a help message in the terminal with general usage information, listing available arguments as well as tool-specific information if applicable.
Input file containing sequence data (SAM or BAM)
An input file containing sequence data mapped to a reference, in SAM or BAM format, or a text file containing a list of input files (with extension .list). Note that the GATK requires an accompanying index for each SAM or BAM file. Please see our online documentation for more details on input formatting requirements.
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.
The --interval_merging argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
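The default (merge abutting or overlapping intervals into one continuous interval) can be sketched as follows. This is an illustrative simplification assuming 1-based inclusive `(start, end)` tuples on a single contig, not GATK code:

```python
def merge_abutting(intervals):
    """Merge (start, end) intervals that overlap or abut
    (previous end + 1 == next start) into continuous intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + 1:
            # Abuts or overlaps the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

With merging disabled, intervals like 1-100 and 101-200 would instead be processed as two separate intervals.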
Amount of padding (in bp) to add to each interval
Use this to add padding to the intervals specified using -L and/or -XL. For example, '-L chr1:100' with a padding value of 20 would turn into '-L chr1:80-120'. This is typically used to add padding around exons when analyzing exomes. The general Broad exome calling pipeline uses 100 bp padding by default.
int [ [ 0 ∞ ] ]
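The padding arithmetic from the example above (a sketch in Python; the clamp at position 1 is an assumption to keep padded intervals within valid 1-based coordinates):

```python
def pad_interval(contig, start, end, padding):
    """Expand a 1-based interval by `padding` bp on each side,
    not going below position 1."""
    return (contig, max(1, start - padding), end + padding)

# '-L chr1:100' (a single position) with --interval_padding 20
# becomes chr1:80-120.
padded = pad_interval("chr1", 100, 100, 20)
```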
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis on positions for which there is a record in a VCF, but restrict this to just those on chromosome 20, you would do -L chr20 -L file.vcf -isr INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval_set_rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
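The combination logic described above, sketched over abstract sets of positions (illustrative Python, not GATK code; real intervals are ranges rather than position sets):

```python
def combine_intervals(include_sets, rule, exclude=frozenset()):
    """Combine per-argument -L sets with UNION or INTERSECTION,
    then subtract the -XL set (which is always UNION-merged)."""
    if rule == "UNION":
        combined = set().union(*include_sets)
    elif rule == "INTERSECTION":
        combined = set(include_sets[0]).intersection(*include_sets[1:])
    else:
        raise ValueError("unknown interval set rule: %s" % rule)
    return combined - set(exclude)
```

For the -L chr20 -L file.vcf -isr INTERSECTION example above, the first set would be all of chromosome 20 and the second the VCF's record positions, leaving only the VCF positions on chromosome 20.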
One or more genomic intervals over which to operate
Use this option to perform the analysis over only part of the genome. This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -L chr1 or -L chr1:100-200) or by loading in a file containing a list of intervals (e.g. -L myFile.intervals). Additionally, you can also specify a ROD file (such as a VCF file) in order to perform the analysis at specific positions based on the records present in the file (e.g. -L file.vcf). Finally, you can also use this to perform the analysis on the reads that are completely unmapped in the BAM file (i.e. those without a reference contig) by specifying -L unmapped.
Keep program records in the SAM header
Some tools discard program records from the SAM header by default. Use this argument to override that behavior and keep program records in the SAM header.
Set the logging location
File to save the logging output.
Set the minimum level of logging
Setting INFO gets you INFO up to FATAL, setting ERROR gets you ERROR and FATAL level logging, and so on.
Stop execution cleanly as soon as maxRuntime has been reached
This will truncate the run but without exiting with a failure. By default the value is interpreted in minutes, but this can be changed with the maxRuntimeUnits argument.
long [ [ -∞ ∞ ] ]
Unit of time used by maxRuntime
The --maxRuntimeUnits argument is an enumerated type (TimeUnit), which can have one of the following values:
Enable threading efficiency monitoring
Enable GATK to monitor its own threading efficiency, at an itsy-bitsy tiny cost (< 0.1%) in runtime because of turning on the JavaBean. This is largely for debugging purposes. Note that this argument is not compatible with -nt, it only works with -nct.
Always output all the records in VCF FORMAT fields, even if some are missing
The VCF specification permits missing records to be dropped from the end of FORMAT fields, so long as GT is always output. This option prevents GATK from performing that trimming.
For example, given a FORMAT of GT:AD:DP:PL, GATK will by default emit ./. for a variant with no reads present (i.e. the AD, DP, and PL fields are trimmed). If you specify -writeFullFormat, this record would be emitted as ./.:.:.:. instead.
Use a non-deterministic random seed
If this flag is enabled, the random numbers generated will be different in every run, causing GATK to behave non-deterministically.
Total number of BAM file handles to keep open simultaneously
Number of CPU threads to allocate per data thread
Each CPU thread operates the map cycle independently, but may run into earlier scaling problems with IO than data threads. Has the benefit of not requiring X times as much memory per thread as data threads do, but rather only a constant overhead. See online documentation FAQs for more information.
int [ [ 1 ∞ ] ]
Number of data threads to allocate to this analysis
Each data thread contains N CPU threads (see -nct) and processes the data in a fully parallel manner, increasing the memory usage of GATK by a factor of M data threads. Data threads generally scale extremely effectively, up to 24 cores. See online documentation FAQs for more information.
Integer [ [ 1 ∞ ] ]
Pedigree files for samples
Reads PED file-formatted tabular text files describing meta-data about the samples being processed in the GATK.
The PED file is a white-space (space or tab) delimited file: the first six columns are mandatory:

- Family ID
- Individual ID
- Paternal ID
- Maternal ID
- Sex (1=male; 2=female; other=unknown)
- Phenotype
The IDs are alphanumeric: the combination of family and individual ID should uniquely identify a person. A PED file must have 1 and only 1 phenotype in the sixth column. The phenotype can be either a quantitative trait or an affection status column: GATK will automatically detect which type (i.e. based on whether a value other than 0, 1, 2 or the missing genotype code is observed).
If an individual's sex is unknown, then any character other than 1 or 2 can be used.
You can add a comment to a PED or MAP file by starting the line with a # character. The rest of that line will be ignored. Therefore, do not start any family IDs with this character.
Affection status should be coded:

- -9 missing
- 0 missing
- 1 unaffected
- 2 affected
If any value outside of -9, 0, 1, 2 is detected, the phenotype values are interpreted as string phenotype values. In this case -9 uniquely represents the missing value.
Genotypes (column 7 onwards) cannot be specified to the GATK.
For example, here are two individuals (one row = one person):
FAM001 1 0 0 1 2
FAM001 2 0 0 1 2
Each -ped argument can be tagged with NO_FAMILY_ID, NO_PARENTS, NO_SEX, NO_PHENOTYPE to tell the GATK PED parser that the corresponding fields are missing from the ped file.
Note that most GATK walkers do not use pedigree information. Walkers that require pedigree data should clearly indicate so in their arguments and will throw errors if required pedigree information is missing.
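The six-column record layout and the phenotype-type auto-detection described above can be sketched as follows (illustrative Python, not the GATK PED parser; the helper names are hypothetical):

```python
def parse_ped_line(line):
    """Parse one whitespace-delimited PED record into its six
    mandatory fields; genotype columns beyond the sixth, if
    present, are ignored (the GATK does not accept them)."""
    fields = line.split()
    family, individual, father, mother, sex, phenotype = fields[:6]
    return {"family": family, "individual": individual,
            "father": father, "mother": mother,
            "sex": sex, "phenotype": phenotype}

def phenotype_is_quantitative(records):
    """Auto-detection sketch: the column is treated as affection
    status only if every value is one of -9, 0, 1, 2; any other
    value makes the phenotypes quantitative/string-valued."""
    return any(r["phenotype"] not in ("-9", "0", "1", "2")
               for r in records)
```

Parsing the two example rows above yields two affected (phenotype 2) male (sex 1) founders (both parent IDs 0) in family FAM001.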
Pedigree string for samples
Inline PED records (see the -ped argument). Each -pedString STRING can contain one or more valid PED records (see -ped) separated by semi-colons. Each pedString supports all of the tags that -ped supports.
Validation strictness for pedigree information
How strict should we be in parsing the PED files?
The --pedigreeValidationType argument is an enumerated type (PedigreeValidationType), which can have one of the following values:
Write GATK runtime performance log to this file
The file name for the GATK performance log output, or null if you don't want to generate the detailed performance logging table. This table is suitable for importing into R or any other analysis software that can read tsv files.
Run reporting mode
By default, GATK generates a run report that is uploaded to a cloud-based service. This report contains basic statistics about the run (which tool was used, whether the run was successful etc.) that help us for debugging and development. Up to version 3.2-2 the run report contains a record of the username and hostname associated with the run, but it does **NOT** contain any information that could be used to identify patient data. Nevertheless, if your data is subject to stringent confidentiality clauses (no outside communication) or if your run environment is not connected to the internet, you can disable the reporting system by setting this option to "NO_ET". You will also need to request a key using the online request form on our website (see FAQs).
The --phone_home argument is an enumerated type (PhoneHomeOption), which can have one of the following values:
Don't recalibrate bases with quality scores less than this threshold (with -BQSR)
This flag tells GATK not to modify quality scores less than this value. Instead they will be written out unmodified in the recalibrated BAM file. In general it's unsafe to change quality scores below Q6, since base callers use these low values to indicate random or bad bases. For example, Illumina writes Q2 bases when the machine has really gone wrong. This would be fine in and of itself, but when you select a subset of these reads based on their ability to align to the reference and their dinucleotide effect, your Q2 bin can be elevated to Q8 or Q10, leading to issues downstream.
int [ [ 0 ∞ ] ]
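The preservation behavior can be sketched as a simple guard around whatever recalibration would otherwise be applied (illustrative Python, not GATK code; `recalibrated` stands in for a hypothetical recalibration lookup):

```python
def recalibrate_preserving_low(quals, recalibrated, threshold=6):
    """Apply a recalibration function to each base quality, but
    leave qualities below the threshold untouched (the behavior
    this argument controls, default threshold 6)."""
    return [q if q < threshold else recalibrated(q) for q in quals]

# Q2 bases pass through unmodified; Q10 and Q40 get recalibrated.
out = recalibrate_preserving_low([2, 10, 5, 40], lambda q: q + 1)
```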
Number of reads per SAM file to buffer in memory
Filters to apply to reads before analysis
Reads that fail the specified filters will not be used in the analysis. Multiple filters can be specified separately, e.g. you can do -rf MalformedRead -rf BadCigar and so on. Available read filters are listed in the online tool documentation. Note that the read name format is e.g. MalformedReadFilter, but at the command line the filter name should be given without the Filter suffix; e.g. -rf MalformedRead (NOT -rf MalformedReadFilter, which is not recognized by the program). Note also that some read filters are applied by default for some analysis tools; this is specified in each tool's documentation. The default filters cannot be disabled.
Exclude read groups based on tags
This will filter out read groups matching the specified tag:value strings, or a .txt file containing such filter strings one per line.
refactor cigar string with NDN elements to one element
This flag tells GATK to refactor cigar strings with NDN elements to one element. It is intended primarily for use in RNAseq pipelines, since the problem can come up when using an RNAseq aligner such as TopHat2 with provided transcriptomes. You should only use this if you know that your reads have that problem.
Reference sequence file
The reference genome against which the sequence data was mapped. The GATK requires an index file and a dictionary file accompanying the reference (please see the online documentation FAQs for more details on these files). Although this argument is indicated as being optional, almost all GATK tools require a reference in order to run. Note also that while GATK can in theory process genomes from any organism with any number of chromosomes or contigs, it is not designed to process draft genome assemblies and performance will decrease as the number of contigs in the reference increases. We strongly discourage the use of unfinished genome assemblies containing more than a few hundred contigs. Contig numbers in the thousands will most probably cause memory-related crashes.
Remove program records from the SAM header
Some tools keep program records in the SAM header by default. Use this argument to override that behavior and discard program records from the SAM header.
Rename sample IDs on-the-fly at runtime using the provided mapping file
On-the-fly sample renaming works only with single-sample BAM and VCF files. Each line of the mapping file must contain the absolute path to a BAM or VCF file, followed by whitespace, followed by the new sample name for that BAM or VCF file. The sample name may contain non-tab whitespace, but leading or trailing whitespace will be ignored. The engine will verify at runtime that each BAM/VCF targeted for sample renaming has only a single sample specified in its header (though, in the case of BAM files, there may be multiple read groups for that sample).
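The mapping-file format described above (absolute path, whitespace, new sample name, where the name may contain non-tab internal whitespace) can be sketched as follows (illustrative Python, not GATK code):

```python
def parse_sample_rename_map(text):
    """Parse a sample-renaming mapping file: each non-empty line is
    an absolute BAM/VCF path, whitespace, then the new sample name.
    Internal whitespace in the name is kept; leading and trailing
    whitespace is stripped."""
    mapping = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        path, new_name = line.split(None, 1)
        mapping[path] = new_name.strip()
    return mapping
```

The engine-side check that each renamed file contains exactly one sample in its header is omitted here, since it requires reading the BAM/VCF headers themselves.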
If provided, output BAM files will be simplified to include just key reads for downstream variation discovery analyses (removing duplicates, PF-, non-primary reads), as well as stripping all extended tags from the kept reads except the read group identifier
Just output sites without genotypes (i.e. only the first 8 columns of the VCF)
Tag to identify this GATK run as part of a group of runs
The GATKRunReport supports (as of GATK 2.2) tagging GATK runs with an arbitrary tag that can be used to group together runs during later analysis. One use of this capability is to tag runs as GATK performance tests, so that the performance of the GATK over time can be assessed from the logs directly. Note that the tags do not conform to any ontology, so you are free to use any tags that you might find meaningful.
Enable unsafe operations: nothing will be checked at runtime
For expert users only who know what they are doing. We do not support usage of this argument, so we may refuse to help you if you use it and something goes wrong. The one exception to this rule is ALLOW_N_CIGAR_READS, which is necessary for RNAseq analysis.
The --unsafe argument is an enumerated type (TYPE), which can have one of the following values:
Use the base quality scores from the OQ tag
This flag tells GATK to use the original base qualities (that were in the data before BQSR/recalibration) which are stored in the OQ tag, if they are present, rather than use the post-recalibration quality scores. If no OQ tag is present for a read, the standard qual score will be used.
How strict should we be with validation
Keep in mind that if you set this to LENIENT, we may refuse to provide you with support if anything goes wrong.
The --validation_strictness argument is an enumerated type (ValidationStringency), which can have one of the following values:
Parameter to pass to the VCF/BCF IndexCreator
This is either the bin width or the number of features per bin, depending on the indexing strategy
int [ [ -∞ ∞ ] ]
Type of IndexCreator to use for VCF/BCF indices
Specify the Tribble indexing strategy to use for VCFs.

- LINEAR creates a LinearIndex with bins of equal width, specified by the Bin Width parameter
- INTERVAL creates an IntervalTreeIndex with bins containing an equal number of features, specified by the Features Per Bin parameter
- DYNAMIC_SEEK attempts to optimize for minimal seek time by choosing an appropriate strategy and parameter (user-supplied parameter is ignored)
- DYNAMIC_SIZE attempts to optimize for minimal index size by choosing an appropriate strategy and parameter (user-supplied parameter is ignored)
The --variant_index_type argument is an enumerated type (GATKVCFIndexType), which can have one of the following values:
Output version information
Use this to check the version number of the GATK executable you are invoking. Note that the version number is always included in the output at the start of every run as well as any error message.
GATK version 3.3-0-g37228af built at 2014/10/24 14:40:51. GTD: NA