Documentation

CommandLineGATK

All command line parameters accepted by all tools in the GATK.

Category Engine Parameters (available to all tools)


Overview

Info for general users

This is a list of options and parameters that are generally available to all tools in the GATK.

There may be a few restrictions, which are indicated in the individual argument descriptions. For example, the -BQSR argument is only meant to be used with a subset of tools, and the -pedigree argument will only be used effectively by a subset of tools as well. Some arguments conflict with others, and some conversely depend on others. This is all indicated in the detailed argument descriptions, so be sure to read those in their entirety rather than just skimming the one-line summary in the table.

Info for developers

This class is the GATK engine itself, which manages map/reduce data access and runs walkers.

We run command line GATK programs using this class. It gets the command-line arguments, parses them, and hands the GATK all of the parsed-out information. Pretty much anything dealing with the underlying system should go here; the GATK engine should deal with any data-related information.


Command-line Arguments

CommandLineGATK specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the argument details section further down.

Each entry lists the argument name(s), the default value in parentheses, and a one-line summary.

Required Parameters
--analysis_type / -T  (default: NA)  Name of the tool to run

Optional Inputs
--BQSR  (default: NA)  Input covariates table file for on-the-fly base quality score recalibration
--excludeIntervals / -XL  (default: NA)  One or more genomic intervals to exclude from processing
--input_file / -I  (default: [])  Input file containing sequence data (SAM or BAM)
--intervals / -L  (default: NA)  One or more genomic intervals over which to operate
--read_group_black_list / -rgbl  (default: NA)  Exclude read groups based on tags
--reference_sequence / -R  (default: NA)  Reference sequence file

Optional Outputs
--log_to_file / -log  (default: NA)  Set the logging location

Optional Parameters
--baq  (default: OFF)  Type of BAQ calculation to apply in the engine
--baqGapOpenPenalty / -baqGOP  (default: 40.0)  BAQ gap open penalty
--defaultBaseQualities / -DBQ  (default: -1)  Assign a default base quality
--downsample_to_coverage / -dcov  (default: NA)  Target coverage threshold for downsampling to coverage
--downsample_to_fraction / -dfrac  (default: NA)  Fraction of reads to downsample to
--downsampling_type / -dt  (default: NA)  Type of read downsampling to employ at a given locus
--gatk_key / -K  (default: NA)  GATK key file required to run with -et NO_ET
--globalQScorePrior  (default: -1.0)  Global Qscore Bayesian prior to use for BQSR
--interval_merging / -im  (default: ALL)  Interval merging rule for abutting intervals
--interval_padding / -ip  (default: 0)  Amount of padding (in bp) to add to each interval
--interval_set_rule / -isr  (default: UNION)  Set merging approach to use for combining interval inputs
--logging_level / -l  (default: INFO)  Set the minimum level of logging
--maxRuntime  (default: -1)  Stop execution cleanly as soon as maxRuntime has been reached
--maxRuntimeUnits  (default: MINUTES)  Unit of time used by maxRuntime
--num_bam_file_handles / -bfh  (default: NA)  Total number of BAM file handles to keep open simultaneously
--num_cpu_threads_per_data_thread / -nct  (default: 1)  Number of CPU threads to allocate per data thread
--num_threads / -nt  (default: 1)  Number of data threads to allocate to this analysis
--pedigree / -ped  (default: [])  Pedigree files for samples
--pedigreeString / -pedString  (default: [])  Pedigree string for samples
--pedigreeValidationType / -pedValidationType  (default: STRICT)  Validation strictness for pedigree information
--performanceLog / -PF  (default: NA)  Write GATK runtime performance log to this file
--phone_home / -et  (default: AWS)  Run reporting mode
--preserve_qscores_less_than / -preserveQ  (default: 6)  Don't recalibrate bases with quality scores less than this threshold (with -BQSR)
--read_buffer_size / -rbs  (default: NA)  Number of reads per SAM file to buffer in memory
--read_filter / -rf  (default: [])  Filters to apply to reads before analysis
--tag  (default: NA)  Tag to identify this GATK run as part of a group of runs
--unsafe / -U  (default: NA)  Enable unsafe operations: nothing will be checked at runtime
--validation_strictness / -S  (default: SILENT)  How strict should we be with validation

Optional Flags
--allow_potentially_misencoded_quality_scores / -allowPotentiallyMisencodedQuals  (default: false)  Ignore warnings about base quality score encoding
--disable_indel_quals / -DIQ  (default: false)  Disable printing of base insertion and deletion tags (with -BQSR)
--emit_original_quals / -EOQ  (default: false)  Emit the OQ tag with the original base qualities (with -BQSR)
--fix_misencoded_quality_scores / -fixMisencodedQuals  (default: false)  Fix mis-encoded base quality scores
--help / -h  (default: false)  Generate the help message
--keep_program_records / -kpr  (default: false)  Keep program records in the SAM header
--monitorThreadEfficiency / -mte  (default: false)  Enable threading efficiency monitoring
--nonDeterministicRandomSeed / -ndrs  (default: false)  Use a non-deterministic random seed
--refactor_NDN_cigar_string / -fixNDN  (default: false)  refactor cigar string with NDN elements to one element
--remove_program_records / -rpr  (default: false)  Remove program records from the SAM header
--useOriginalQualities / -OQ  (default: false)  Use the base quality scores from the OQ tag
--version  (default: false)  Output version information

Advanced Parameters
--sample_rename_mapping_file  (default: NA)  Rename sample IDs on-the-fly at runtime using the provided mapping file
--variant_index_parameter  (default: -1)  Parameter to pass to the VCF/BCF IndexCreator
--variant_index_type  (default: DYNAMIC_SEEK)  Type of IndexCreator to use for VCF/BCF indices

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--allow_potentially_misencoded_quality_scores / -allowPotentiallyMisencodedQuals

Ignore warnings about base quality score encoding
This flag tells GATK to ignore warnings when encountering base qualities that are too high and that seemingly indicate a problem with the base quality encoding of the BAM file. You should only use this if you really know what you are doing; otherwise you could seriously mess up your data and ruin your analysis.

boolean  false


--analysis_type / -T

Name of the tool to run
A complete list of tools (sometimes also called walkers because they "walk" through the data to perform analyses) is available in the online documentation.

String  (required)


--baq / -baq

Type of BAQ calculation to apply in the engine

The --baq argument is an enumerated type (CalculationMode), which can have one of the following values:

OFF
CALCULATE_AS_NECESSARY
RECALCULATE

CalculationMode  OFF


--baqGapOpenPenalty / -baqGOP

BAQ gap open penalty
Phred-scaled gap open penalty for BAQ calculation. Although the default value is 40, a value of 30 may be better for whole genome call sets.

double  40.0  [0, ∞)


--BQSR / -BQSR

Input covariates table file for on-the-fly base quality score recalibration
Enables on-the-fly recalibration of base qualities, intended primarily for use with BaseRecalibrator and PrintReads (see Best Practices workflow documentation). The covariates tables are produced by the BaseRecalibrator tool. Please be aware that you should only run recalibration with the covariates file created on the same input bam(s).

File
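For illustration, a typical two-step use of this argument might look like the following sketch, assuming the standard GATK 3 invocation syntax; the file names are placeholders, and -knownSites and -o are options of the BaseRecalibrator and PrintReads tools rather than engine arguments:

   # Step 1 (hypothetical): build the covariates table with BaseRecalibrator
   java -jar GenomeAnalysisTK.jar -T BaseRecalibrator \
      -R reference.fasta -I input.bam \
      -knownSites dbsnp.vcf -o recal.table

   # Step 2 (hypothetical): apply on-the-fly recalibration while writing a new BAM
   java -jar GenomeAnalysisTK.jar -T PrintReads \
      -R reference.fasta -I input.bam \
      -BQSR recal.table -o recalibrated.bam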


--defaultBaseQualities / -DBQ

Assign a default base quality
If reads are missing some or all base quality scores, this value will be used for all base quality scores. By default this is set to -1 to disable default base quality assignment.

byte  -1  [0, 127]


--disable_indel_quals / -DIQ

Disable printing of base insertion and deletion tags (with -BQSR)
Turns off printing of the base insertion and base deletion tags when using the -BQSR argument. Only the base substitution qualities will be produced.

boolean  false


--downsample_to_coverage / -dcov

Target coverage threshold for downsampling to coverage
The principle of this downsampling type is to downsample reads to a given capping threshold coverage. Its purpose is to get rid of excessive coverage, because above a certain depth, having additional data is not informative and imposes unreasonable computational costs. The downsampling process takes two different forms depending on the type of analysis it is used with.

For locus-based traversals (LocusWalkers like UnifiedGenotyper and ActiveRegionWalkers like HaplotypeCaller), downsample_to_coverage controls the maximum depth of coverage at each locus. For read-based traversals (ReadWalkers like BaseRecalibrator), it controls the maximum number of reads sharing the same alignment start position. For ReadWalkers you will typically need to use much lower dcov values than you would with LocusWalkers to see an effect.

Note that this downsampling option does not produce an unbiased random sampling from all available reads at each locus: instead, the primary goal of the to-coverage downsampler is to maintain an even representation of reads from all alignment start positions when removing excess coverage. For a truly unbiased random sampling of reads, use -dfrac instead. Also note that the coverage target is an approximate goal that is not guaranteed to be met exactly: the downsampling algorithm will under some circumstances retain slightly more or less coverage than requested.

Integer
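For example, a hypothetical UnifiedGenotyper run that caps per-locus coverage at roughly 200x might look like this sketch (the file names and the tool's -o output argument are placeholders for your own setup):

   java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
      -R reference.fasta -I input.bam \
      -dcov 200 -o output.vcf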


--downsample_to_fraction / -dfrac

Fraction of reads to downsample to
Reads will be downsampled so the specified fraction remains; e.g. if you specify -dfrac 0.25, three-quarters of the reads will be removed, and the remaining one quarter will be used in the analysis. This method of downsampling is truly unbiased and random. It is typically used to simulate the effect of generating different amounts of sequence data for a given sample. For example, you can use this in a pilot experiment to evaluate how much target coverage you need to aim for in order to obtain enough coverage in all loci of interest.

Double
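For example, a hypothetical run keeping an unbiased random 25% of the reads (file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
      -R reference.fasta -I input.bam \
      -dfrac 0.25 -o output.vcf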


--downsampling_type / -dt

Type of read downsampling to employ at a given locus
There are several ways to downsample reads, i.e. to remove reads from the pile of reads that will be used for analysis. See the documentation of the individual downsampling options for details on how they work. Note that many GATK tools specify a default downsampling type and target, but this behavior can be overridden from the command line using the downsampling arguments.

The --downsampling_type argument is an enumerated type (DownsampleType), which can have one of the following values:

NONE
ALL_READS
BY_SAMPLE

DownsampleType


--emit_original_quals / -EOQ

Emit the OQ tag with the original base qualities (with -BQSR)
By default, the OQ tag is not emitted when using the -BQSR argument. Use this flag to include OQ tags in the output BAM file. Note that this may result in a significant file size increase.

boolean  false


--excludeIntervals / -XL

One or more genomic intervals to exclude from processing
Use this option to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL chr1 or -XL chr1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals). Additionally, you can also specify a ROD file (such as a VCF file) in order to exclude specific positions from the analysis based on the records present in the file (e.g. -XL file.vcf).

List[IntervalBinding[Feature]]
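For example, a hypothetical run over chromosome 20 that skips a set of blacklisted regions might combine -L and -XL as follows (file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
      -R reference.fasta -I input.bam \
      -L chr20 -XL blacklist.intervals -o output.vcf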


--fix_misencoded_quality_scores / -fixMisencodedQuals

Fix mis-encoded base quality scores
By default the GATK assumes that base quality scores start at Q0 == ASCII 33 according to the SAM specification. However, encoding in some datasets (especially older Illumina ones) starts at Q64. This argument will fix the encodings on the fly (as the data is read in) by subtracting 31 from every quality score. Note that this argument should NEVER be used by default; you should only use it when you have confirmed that the quality scores in your data are not in the correct encoding.

boolean  false


--gatk_key / -K

GATK key file required to run with -et NO_ET
Please see the "phone_home" argument below and the online documentation FAQs for more details on the key system and how to request a key.

File


--globalQScorePrior / -globalQScorePrior

Global Qscore Bayesian prior to use for BQSR
If specified, this value will be used as the prior for all mismatch quality scores instead of the actual reported quality score.

double  -1.0  (-∞, ∞)


--help / -h

Generate the help message
This will produce a help message in the terminal with general usage information, listing available arguments as well as tool-specific information if applicable.

Boolean  false


--input_file / -I

Input file containing sequence data (SAM or BAM)
An input file containing sequence data mapped to a reference, in SAM or BAM format, or a text file containing a list of input files (with extension .list). Note that the GATK requires an accompanying index for each SAM or BAM file. Please see our online documentation for more details on input formatting requirements.

List[String]  []
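For example, a hypothetical list file named bams.list could contain one input path per line and be passed directly to -I (the paths, file names, and tool choice are placeholders):

   /path/to/sample1.bam
   /path/to/sample2.bam

   java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
      -R reference.fasta -I bams.list -o output.vcf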


--interval_merging / -im

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval_merging argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval_padding / -ip

Amount of padding (in bp) to add to each interval
Use this to add padding to the intervals specified using -L and/or -XL. For example, '-L chr1:100' with a padding value of 20 would turn into '-L chr1:80-120'. This is typically used to add padding around exons when analyzing exomes. The general Broad exome calling pipeline uses 100 bp padding by default.

int  0  [0, ∞)


--interval_set_rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis on positions for which there is a record in a VCF, but restrict this to just those on chromosome 20, you would do -L chr20 -L file.vcf -isr INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval_set_rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION
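Putting the inline INTERSECTION example above into a full command line might look like this hypothetical sketch (the tool choice and file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
      -R reference.fasta -I input.bam \
      -L chr20 -L file.vcf -isr INTERSECTION -o output.vcf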


--intervals / -L

One or more genomic intervals over which to operate
Use this option to perform the analysis over only part of the genome. This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -L chr1 or -L chr1:100-200) or by loading in a file containing a list of intervals (e.g. -L myFile.intervals). Additionally, you can also specify a ROD file (such as a VCF file) in order to perform the analysis at specific positions based on the records present in the file (e.g. -L file.vcf). Finally, you can also use this to perform the analysis on the reads that are completely unmapped in the BAM file (i.e. those without a reference contig) by specifying -L unmapped.

List[IntervalBinding[Feature]]
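For example, a hypothetical intervals file (here called myFile.intervals, as in the description above) could list one interval per line and be passed with -L (the tool choice and file names are placeholders):

   chr1:100-200
   chr2:1000-2000

   java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
      -R reference.fasta -I input.bam \
      -L myFile.intervals -o output.vcf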


--keep_program_records / -kpr

Keep program records in the SAM header
Some tools discard program records from the SAM header by default. Use this argument to override that behavior and keep program records in the SAM header.

boolean  false


--log_to_file / -log

Set the logging location
File to save the logging output.

String


--logging_level / -l

Set the minimum level of logging
Setting INFO gets you INFO up to FATAL, setting ERROR gets you ERROR and FATAL level logging, and so on.

String  INFO


--maxRuntime / -maxRuntime

Stop execution cleanly as soon as maxRuntime has been reached
This will truncate the run but without exiting with a failure. By default the value is interpreted in minutes, but this can be changed with the maxRuntimeUnits argument.

long  -1  (-∞, ∞)
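For example, a hypothetical run that stops cleanly after at most 12 hours would combine this argument with maxRuntimeUnits (described below); the tool choice and file names are placeholders:

   java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
      -R reference.fasta -I input.bam \
      -maxRuntime 12 -maxRuntimeUnits HOURS -o output.vcf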


--maxRuntimeUnits / -maxRuntimeUnits

Unit of time used by maxRuntime

The --maxRuntimeUnits argument is an enumerated type (TimeUnit), which can have one of the following values:

NANOSECONDS
MICROSECONDS
MILLISECONDS
SECONDS
MINUTES
HOURS
DAYS

TimeUnit  MINUTES


--monitorThreadEfficiency / -mte

Enable threading efficiency monitoring
Enable GATK to monitor its own threading efficiency, at an itsy-bitsy tiny cost (< 0.1%) in runtime because of turning on the JavaBean. This is largely for debugging purposes. Note that this argument is not compatible with -nt; it only works with -nct.

Boolean  false


--nonDeterministicRandomSeed / -ndrs

Use a non-deterministic random seed
If this flag is enabled, the random numbers generated will be different in every run, causing GATK to behave non-deterministically.

boolean  false


--num_bam_file_handles / -bfh

Total number of BAM file handles to keep open simultaneously

Integer


--num_cpu_threads_per_data_thread / -nct

Number of CPU threads to allocate per data thread
Each CPU thread runs the map cycle independently, but may run into I/O scaling problems earlier than data threads do. CPU threads have the benefit of not requiring X times as much memory per thread the way data threads do, but rather only a constant overhead. See the online documentation FAQs for more information.

int  1  [1, ∞)


--num_threads / -nt

Number of data threads to allocate to this analysis
Each data thread contains N CPU threads (see -nct) and processes the data in a completely data-parallel fashion, so running M data threads increases the memory usage of the GATK roughly M-fold. Data threads generally scale extremely effectively, up to 24 cores. See the online documentation FAQs for more information.

Integer  1  [1, ∞)
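For example, a hypothetical run using 4 data threads with 2 CPU threads each (8 cores total) might look like this; note that not every tool supports both options, and the tool choice and file names are placeholders:

   java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
      -R reference.fasta -I input.bam \
      -nt 4 -nct 2 -o output.vcf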


--pedigree / -ped

Pedigree files for samples

Reads PED file-formatted tabular text files describing meta-data about the samples being processed in the GATK.

The PED file is a white-space (space or tab) delimited file; the first six columns are mandatory:

  • Family ID
  • Individual ID
  • Paternal ID
  • Maternal ID
  • Sex (1=male; 2=female; other=unknown)
  • Phenotype

The IDs are alphanumeric: the combination of family and individual ID should uniquely identify a person. A PED file must have 1 and only 1 phenotype in the sixth column. The phenotype can be either a quantitative trait or an affection status column: GATK will automatically detect which type (i.e. based on whether a value other than 0, 1, 2 or the missing genotype code is observed).

If an individual's sex is unknown, then any character other than 1 or 2 can be used.

You can add a comment to a PED or MAP file by starting the line with a # character. The rest of that line will be ignored. Therefore, do not start any family IDs with this character.

Affection status should be coded:

  • -9 missing
  • 0 missing
  • 1 unaffected
  • 2 affected

If any value outside of -9, 0, 1, 2 is detected, the phenotype values are interpreted as string phenotype values rather than affection status. In this case -9 uniquely represents the missing value.

Genotypes (column 7 onwards) cannot be specified to the GATK.

For example, here are two individuals (one row = one person):

   FAM001  1  0 0  1  2
   FAM001  2  0 0  1  2
 

Each -ped argument can be tagged with NO_FAMILY_ID, NO_PARENTS, NO_SEX, NO_PHENOTYPE to tell the GATK PED parser that the corresponding fields are missing from the ped file.

Note that most GATK walkers do not use pedigree information. Walkers that require pedigree data should clearly indicate so in their arguments and will throw errors if required pedigree information is missing.

List[File]  []
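For example, a hypothetical run with a pedigree-aware walker such as PhaseByTransmission might pass the PED records above in a file called family.ped; the tool choice, its -V/-o arguments, and the file names here are illustrative:

   java -jar GenomeAnalysisTK.jar -T PhaseByTransmission \
      -R reference.fasta -V input.vcf \
      -ped family.ped -o phased.vcf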


--pedigreeString / -pedString

Pedigree string for samples
Inline PED records (see the -ped argument). Each -pedString STRING can contain one or more valid PED records (see -ped) separated by semi-colons. Each pedString supports the same tags as -ped.

List[String]  []


--pedigreeValidationType / -pedValidationType

Validation strictness for pedigree information
How strict should we be in parsing the PED files?

The --pedigreeValidationType argument is an enumerated type (PedigreeValidationType), which can have one of the following values:

STRICT
Require that, if a pedigree file is provided, all samples in the VCF or BAM files have a corresponding entry in the pedigree file(s).
SILENT
Do not enforce any overlap between the VCF/BAM samples and the pedigree data

PedigreeValidationType  STRICT


--performanceLog / -PF

Write GATK runtime performance log to this file
The file name for the GATK performance log output, or null if you don't want to generate the detailed performance logging table. This table is suitable for importing into R or any other analysis software that can read tsv files.

File


--phone_home / -et

Run reporting mode
By default, GATK generates a run report that is uploaded to a cloud-based service. This report contains basic statistics about the run (which tool was used, whether the run was successful etc.) that help us with debugging and development. Up to version 3.2-2 the run report contains a record of the username and hostname associated with the run, but it does **NOT** contain any information that could be used to identify patient data. Nevertheless, if your data is subject to stringent confidentiality clauses (no outside communication) or if your run environment is not connected to the internet, you can disable the reporting system by setting this option to "NO_ET". You will also need to request a key using the online request form on our website (see FAQs).

The --phone_home argument is an enumerated type (PhoneHomeOption), which can have one of the following values:

NO_ET
Disable phone home
AWS
Forces the report to go to S3
STDOUT
Force output to STDOUT. For debugging only

PhoneHomeOption  AWS
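For example, a hypothetical run with reporting disabled, using a key file obtained through the online request form (the tool choice and file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T CountReads \
      -R reference.fasta -I input.bam \
      -et NO_ET -K my_institution.key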


--preserve_qscores_less_than / -preserveQ

Don't recalibrate bases with quality scores less than this threshold (with -BQSR)
This flag tells GATK not to modify quality scores less than this value. Instead they will be written out unmodified in the recalibrated BAM file. In general it's unsafe to change quality scores below 6, since base callers use these values to indicate random or bad bases. For example, Illumina writes Q2 bases when the machine has really gone wrong. This would be fine in and of itself, but when you select a subset of these reads based on their ability to align to the reference and their dinucleotide effect, your Q2 bin can be elevated to Q8 or Q10, leading to issues downstream.

int  6  [0, ∞)


--read_buffer_size / -rbs

Number of reads per SAM file to buffer in memory

Integer


--read_filter / -rf

Filters to apply to reads before analysis
Reads that fail the specified filters will not be used in the analysis. Multiple filters can be specified separately, e.g. you can do -rf MalformedRead -rf BadCigar and so on. Available read filters are listed in the online tool documentation. Note that the filter class name format is e.g. MalformedReadFilter, but at the command line the filter name should be given without the Filter suffix; e.g. -rf MalformedRead (NOT -rf MalformedReadFilter, which is not recognized by the program). Note also that some read filters are applied by default for some analysis tools; this is specified in each tool's documentation. The default filters cannot be disabled.

List[String]  []
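For example, a hypothetical command applying the two filters mentioned above (the tool choice and file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper \
      -R reference.fasta -I input.bam \
      -rf MalformedRead -rf BadCigar -o output.vcf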


--read_group_black_list / -rgbl

Exclude read groups based on tags
This will filter out read groups matching a tag:string pair (e.g. SM:sample1). You can also provide a .txt file containing the filter strings, one per line.

List[String]
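For example, a hypothetical command excluding all read groups from a given sample; the tool choice and file names are placeholders, and the same tag:string entries could instead be listed one per line in a .txt file passed to -rgbl:

   java -jar GenomeAnalysisTK.jar -T PrintReads \
      -R reference.fasta -I input.bam \
      -rgbl SM:sample1 -o filtered.bam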


--refactor_NDN_cigar_string / -fixNDN

refactor cigar string with NDN elements to one element
This flag tells GATK to refactor cigar strings containing NDN elements into a single element. It is intended primarily for use in an RNAseq pipeline, since the problem can come up when using an RNAseq aligner such as TopHat2 with a provided transcriptome. You should only use this if you know that your reads have that problem.

boolean  false


--reference_sequence / -R

Reference sequence file
The reference genome against which the sequence data was mapped. The GATK requires an index file and a dictionary file accompanying the reference (please see the online documentation FAQs for more details on these files). Although this argument is indicated as being optional, almost all GATK tools require a reference in order to run. Note also that while GATK can in theory process genomes from any organism with any number of chromosomes or contigs, it is not designed to process draft genome assemblies and performance will decrease as the number of contigs in the reference increases. We strongly discourage the use of unfinished genome assemblies containing more than a few hundred contigs. Contig numbers in the thousands will most probably cause memory-related crashes.

File


--remove_program_records / -rpr

Remove program records from the SAM header
Some tools keep program records in the SAM header by default. Use this argument to override that behavior and discard program records from the SAM header.

boolean  false


--sample_rename_mapping_file / -sample_rename_mapping_file

Rename sample IDs on-the-fly at runtime using the provided mapping file
On-the-fly sample renaming works only with single-sample BAM and VCF files. Each line of the mapping file must contain the absolute path to a BAM or VCF file, followed by whitespace, followed by the new sample name for that BAM or VCF file. The sample name may contain non-tab whitespace, but leading or trailing whitespace will be ignored. The engine will verify at runtime that each BAM/VCF targeted for sample renaming has only a single sample specified in its header (though, in the case of BAM files, there may be multiple read groups for that sample).

File
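For example, a hypothetical mapping file (the paths and new sample names are placeholders) might contain the following lines and be passed to --sample_rename_mapping_file:

   /data/project/sample1.bam   NewSampleA
   /data/project/subject2.vcf   NewSampleB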


--tag / -tag

Tag to identify this GATK run as part of a group of runs
The GATKRunReport supports (as of GATK 2.2) tagging GATK runs with an arbitrary tag that can be used to group together runs during later analysis. One use of this capability is to tag runs as GATK performance tests, so that the performance of the GATK over time can be assessed from the logs directly. Note that the tags do not conform to any ontology, so you are free to use any tags that you might find meaningful.

String  NA


--unsafe / -U

Enable unsafe operations: nothing will be checked at runtime
For expert users only who know what they are doing. We do not support usage of this argument, so we may refuse to help you if you use it and something goes wrong. The one exception to this rule is ALLOW_N_CIGAR_READS, which is necessary for RNAseq analysis.

The --unsafe argument is an enumerated type (TYPE), which can have one of the following values:

ALLOW_N_CIGAR_READS
ALLOW_UNINDEXED_BAM
ALLOW_UNSET_BAM_SORT_ORDER
NO_READ_ORDER_VERIFICATION
ALLOW_SEQ_DICT_INCOMPATIBILITY
LENIENT_VCF_PROCESSING
ALL

TYPE
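For example, the RNAseq exception mentioned above might be used in a hypothetical command like this (the tool choice and file names are placeholders):

   java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
      -R reference.fasta -I rnaseq.bam \
      -U ALLOW_N_CIGAR_READS -o output.vcf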


--useOriginalQualities / -OQ

Use the base quality scores from the OQ tag
This flag tells GATK to use the original base qualities (that were in the data before BQSR/recalibration) which are stored in the OQ tag, if they are present, rather than use the post-recalibration quality scores. If no OQ tag is present for a read, the standard qual score will be used.

Boolean  false


--validation_strictness / -S

How strict should we be with validation
Keep in mind that if you set this to LENIENT, we may refuse to provide you with support if anything goes wrong.

The --validation_strictness argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--variant_index_parameter / -variant_index_parameter

Parameter to pass to the VCF/BCF IndexCreator
This is either the bin width or the number of features per bin, depending on the indexing strategy

int  -1  (-∞, ∞)


--variant_index_type / -variant_index_type

Type of IndexCreator to use for VCF/BCF indices
Specify the Tribble indexing strategy to use for VCFs.

LINEAR creates a LinearIndex with bins of equal width, specified by the Bin Width parameter.
INTERVAL creates an IntervalTreeIndex with bins containing an equal number of features, specified by the Features Per Bin parameter.
DYNAMIC_SEEK attempts to optimize for minimal seek time by choosing an appropriate strategy and parameter (the user-supplied parameter is ignored).
DYNAMIC_SIZE attempts to optimize for minimal index size by choosing an appropriate strategy and parameter (the user-supplied parameter is ignored).

The --variant_index_type argument is an enumerated type (GATKVCFIndexType), which can have one of the following values:

DYNAMIC_SEEK
DYNAMIC_SIZE
LINEAR
INTERVAL

GATKVCFIndexType  DYNAMIC_SEEK


--version / -version

Output version information
Use this to check the version number of the GATK executable you are invoking. Note that the version number is always included in the output at the start of every run as well as any error message.

Boolean  false



GATK version 3.2-2-gec30cee built at 2014/07/17 17:54:48.