Tagged with #faq
6 documentation articles | 1 announcement | 0 forum discussions

Created 2012-10-25 20:40:16 | Updated 2013-09-13 18:04:45 | Tags: official faq basic analyst intro gatk-lite lite
Comments (1)

Please note that GATK-Lite was retired in February 2013 when version 2.4 was released. See the announcement here.

You probably know by now that GATK-Lite is a free-for-everyone and completely open-source version of the GATK (licensed under the original MIT license).

But what's in the box? What can GATK-Lite do -- or rather, what can it not do that the full version (let's call it GATK-Full) can? And what does that mean exactly, in terms of functionality, reliability and power?

To really understand the differences between GATK-Lite and GATK-Full, you need some more information on how the GATK works, and how we work to develop and improve it.

First, you need to understand the two core components of the GATK: the engine and the tools (see picture below).

As explained here, the engine handles all the common work that's related to data access, conversion and traversal, as well as high-performance computing features. The engine is supported by an infrastructure of software libraries. If the GATK were a car, that would be the engine and chassis. What we call the *tools* are attached on top of that, and they provide the various analytical and processing functionalities like variant calling and base or variant recalibration. On your car, that would be headlights, airbags and so on.

Core GATK components

Second, you need to understand how we develop the GATK, and what that means for how improvements are shared (or not) between Lite and Full.

We do all our development work on a single codebase. This means that everything --the engine and all tools-- is on one common workbench. There are not different versions that we work on in parallel -- that would be crazy to manage! That's why the version numbers of GATK-Lite and GATK-Full always match: if the latest GATK-Full version is numbered 2.1-13, then the latest GATK-Lite is also numbered 2.1-13.

The most important consequence of this setup is that when we make improvements to the infrastructure and engine, the same improvements end up in both GATK-Lite and GATK-Full. So in terms of the power, speed and robustness that are determined by the engine, there is no difference between them.

For the tools, it's a little more complicated -- but not much. When we "build" the GATK binaries (the .jar files), we put everything from the workbench into the Full build, but we only put a subset into the Lite build. Note that this Lite subset is pretty big -- it contains all the tools that were previously available in GATK 1.x versions, and always will. We also reserve the right to add, at our discretion, previews or not-fully-featured versions of the new Full tools to the Lite build.

So there are two basic types of differences between the tools available in the Lite and Full builds (see picture below).

  1. We have a new tool that performs a brand new function (which wasn't available in GATK 1.x), and we only include it in the Full build.

  2. We have a tool that has some new add-on capabilities (which weren't possible in GATK 1.x); we put the tool in both the Lite and the Full build, but the add-ons are only available in the Full build.

Tools in Lite vs. Full

Reprising the car analogy, GATK-Lite and GATK-Full are like two versions of the same car -- the basic version and the fully-equipped one. They both have the exact same engine, and most of the equipment (tools) is the same -- for example, they both have the same airbag system, and they both have headlights. But there are a few important differences:

  1. The GATK-Full car comes with a GPS (sat-nav for our UK friends), for which the Lite car has no equivalent. You could buy a portable GPS unit from a third-party store for your Lite car, but it might not be as good, and certainly not as convenient, as the Full car's built-in one.

  2. Both cars have windows of course, but the Full car has power windows, while the Lite car doesn't. The Lite windows can open and close, but you have to operate them by hand, which is much slower.

So, to summarize:

  • The underlying engine is exactly the same in both GATK-Lite and GATK-Full.

  • Most functionalities are available in both builds, performed by the same tools.

  • Some functionalities are available in both builds, but they are performed by different tools, and the tool in the Full build is better.

  • New, cutting-edge functionalities are only available in the Full build, and there is no equivalent in the Lite build.

We hope this clears up some of the confusion surrounding GATK-Lite. If not, please leave a comment and we'll do our best to clarify further!

Created 2012-08-11 04:16:24 | Updated 2013-01-15 02:59:32 | Tags: official faq basic analyst intervals
Comments (6)

1. What file formats do you support for interval lists?

We support three types of interval lists, as mentioned here. Interval lists should preferentially be formatted as Picard-style interval lists, with an explicit sequence dictionary, as this prevents accidental misuse (e.g. hg18 intervals on an hg19 file). Note that this file is 1-based, not 0-based (first position in the genome is position 1).
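
For illustration, a minimal Picard-style interval list looks roughly like this (the contig names, lengths and targets below are placeholders; the @HD/@SQ header lines come from the reference's sequence dictionary, and each record is a tab-separated contig, start, end, strand and name, with 1-based coordinates):

    @HD     VN:1.0  SO:coordinate
    @SQ     SN:1    LN:249250621
    @SQ     SN:2    LN:243199373
    1       10001   10500   +       target_1
    2       20001   20750   +       target_2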

2. I have two (or more) sequencing experiments with different target intervals. How can I combine them?

One relatively easy way to combine your intervals is to use the online tool Galaxy, using the Get Data -> Upload command to upload your intervals, and the Operate on Genomic Intervals command to compute the intersection or union of your intervals (depending on your needs).

Created 2012-08-11 04:08:04 | Updated 2012-10-18 15:00:51 | Tags: official faq basic analyst vcf developer variants
Comments (0)

1. What file formats do you support for variant callsets?

We support the Variant Call Format (VCF) for variant callsets. No other file formats are supported.

2. How can I know if my VCF file is valid?

VCFTools contains a validation tool that will allow you to verify it.

3. Are you planning to include any converters from different formats or allow different input formats than VCF?

No, we like VCF and we think it's important to have a good standard format. Multiplying formats just makes life hard for everyone, both developers and analysts.

Created 2012-08-11 02:40:28 | Updated 2013-03-25 18:27:55 | Tags: official faq basic queue developer scala
Comments (1)

1. What is Scala?

Scala is a combination of an object-oriented framework and a functional programming language. For a good introduction see the free online book Programming Scala.

The following are extremely brief answers to frequently asked questions about Scala that often pop up when first viewing or editing QScripts. For more information on Scala, there are a multitude of resources available around the web, including the Scala home page and the online Scala Doc.

2. Where do I learn more about Scala?

  • http://www.scala-lang.org
  • http://programming-scala.labs.oreilly.com
  • http://www.scala-lang.org/docu/files/ScalaByExample.pdf
  • http://devcheatsheet.com/tag/scala/
  • http://davetron5000.github.com/scala-style/index.html

3. What is the difference between var and val?

var declares a variable whose value you can later modify, while val declares a value that cannot be reassigned, similar to final in Java.
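
For example:

val x = 1
// x = 2   // does not compile: reassignment to val
var y = 1
y = 2      // fine: a var can be reassigned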

4. What is the difference between Scala collections and Java collections? / Why do I get the error: type mismatch?

Because the GATK and Queue are a mix of Scala and Java, you'll sometimes run into problems when you need a Scala collection but a Java collection is returned instead.

   MyQScript.scala:39: error: type mismatch;
     found   : java.util.List[java.lang.String]
     required: scala.List[String]
        val wrapped: List[String] = TextFormattingUtils.wordWrap(text, width)

Use the implicit definitions in JavaConversions to automatically convert the basic Java collections to and from Scala collections.

import collection.JavaConversions._
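
For example, with those implicits in scope a Java list returned by a utility method can be used as a Scala collection directly, or copied into an immutable scala.List when one is required (a minimal sketch; the Arrays.asList call just stands in for whatever Java API you are dealing with):

import collection.JavaConversions._

val javaList: java.util.List[String] = java.util.Arrays.asList("a", "b", "c")
val asSeq: Seq[String] = javaList           // implicitly wrapped as a Scala Seq
val asList: List[String] = javaList.toList  // explicit copy into an immutable scala.List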

Scala has a very rich collections framework which you should take the time to enjoy. One of the first things you'll notice is that the default Scala collections are immutable, which means you should treat them as you would a String. When you want to 'modify' an immutable collection you need to capture the result of the operation, often assigning the result back to the original variable.

var str = "A"
str + "B"
println(str) // prints: A
str += "C"
println(str) // prints: AC

var set = Set("A")
set + "B"
println(set) // prints: Set(A)
set += "C"
println(set) // prints: Set(A, C)

5. How do I append to a list?

Use the :+ operator for a single value.

  var myList = List.empty[String]
  myList :+= "a"
  myList :+= "b"
  myList :+= "c"

Use ++ for appending a list.

  var myList = List.empty[String]
  myList ++= List("a", "b", "c")

6. How do I add to a set?

Use the + operator.

  var mySet = Set.empty[String]
  mySet += "a"
  mySet += "b"
  mySet += "c"

7. How do I add to a map?

Use the + and -> operators.

  var myMap = Map.empty[String,Int]
  myMap += "a" -> 1
  myMap += "b" -> 2
  myMap += "c" -> 3

8. What are Option, Some, and None?

Option is a Scala generic type that holds either a value (Some) or no value (None). Queue often uses it to represent primitives that may be null.

  var myNullableInt1: Option[Int] = Some(1)
  var myNullableInt2: Option[Int] = None
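
To read the value back out without risking a NullPointerException, getOrElse (or pattern matching) is the usual approach:

  val value1 = myNullableInt1.getOrElse(0)  // 1
  val value2 = myNullableInt2.getOrElse(0)  // 0, because myNullableInt2 is None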

9. What is _ / What is the underscore?

François Armand's slide deck is a good introduction: http://www.slideshare.net/normation/scala-dreaded

To quote from his slides:

Give me a variable name but
- I don't care of what it is
- and/or
- don't want to pollute my namespace with it

10. How do I format a String?

Use the .format() method.

This Java snippet:

String formatted = String.format("%s %d", myString, myInt);

In Scala would be:

val formatted = "%s %d".format(myString, myInt)

11. Can I use Scala Enumerations as QScript @Arguments?

No. Currently Scala's Enumeration class does not interact with the Java reflection API in a way that could be used for Queue command line arguments. You can use Java enums if, for example, you are importing a Java-based walker's enum type.

If/when we find a workaround for Queue we'll update this entry. In the meantime try using a String.
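
For example, a quick sketch of the String workaround (the argument name and values are purely illustrative):

  @Argument(doc="analysis mode, e.g. SNP or INDEL", shortName="mode")
  var mode: String = "SNP"

  // Inside script(), convert the String to whatever Java enum the walker expects,
  // e.g. SomeWalkerEnum.valueOf(mode) -- the enum name here is hypothetical.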

Created 2012-08-11 02:28:42 | Updated 2015-02-10 23:51:33 | Tags: faq queue developer qscript
Comments (1)

1. Many of my GATK functions are set up with the same Reference, Intervals, etc. Is there a quick way to reuse these values for the different analyses in my pipeline?


  • Create a trait that extends from CommandLineGATK.
  • In the trait, copy common values from your qscript.
  • Mix the trait into instances of your classes.

For more information, see the ExampleUnifiedGenotyper.scala or examples of using Scala's traits/mixins illustrated in the QScripts documentation.
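
For illustration, here is a minimal sketch of that pattern, modeled loosely on ExampleUnifiedGenotyper.scala; treat the specific field and class names as assumptions to check against the Queue-generated GATK extensions rather than exact API:

  import java.io.File
  import org.broadinstitute.sting.queue.QScript
  import org.broadinstitute.sting.queue.extensions.gatk._

  class MyPipeline extends QScript {
    qscript =>

    @Input(doc="Reference fasta", shortName="R")
    var referenceFile: File = _

    // Common values shared by every GATK command in this pipeline.
    trait CommonArguments extends CommandLineGATK {
      this.reference_sequence = qscript.referenceFile
      this.memoryLimit = 2
    }

    def script() {
      // Mixing the trait into each instance applies the shared settings automatically.
      val genotyper = new UnifiedGenotyper with CommonArguments
      genotyper.input_file :+= new File("my.bam")
      genotyper.out = new File("raw.vcf")
      add(genotyper)
    }
  }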

2. How do I accept a list of arguments to my QScript?

In your QScript, define a var list and annotate it with @Argument. Initialize the value to Nil.

@Argument(doc="filter names", shortName="filter")
var filterNames: List[String] = Nil

On the command line specify the arguments by repeating the argument name.

-filter filter1 -filter filter2 -filter filter3

Then once your QScript is run, the command line arguments will be available for use in the QScript's script method.

  def script {
     var myCommand = new MyFunction
     myCommand.filters = this.filterNames
     add(myCommand)
  }

For a full example of command line arguments see the QScripts documentation.

3. What is the best way to run a utility method at the right time?

Wrap the utility with an InProcessFunction. If your functionality is reusable code, you should add it to Sting Utils with unit tests and then invoke your new function from your InProcessFunction. Computationally or memory-intensive functions should NOT be implemented as InProcessFunctions; wrap them in Queue CommandLineFunctions instead.

    class MySplitter extends InProcessFunction {
      @Input(doc="file to split")
      var in: File = _

      @Output(doc="split output files")
      var out: List[File] = Nil

      def run() {
         StingUtilityMethod.quickSplitFile(in, out)
      }
    }

    var splitter = new MySplitter
    splitter.in = new File("input.txt")
    splitter.out = List(new File("out1.txt"), new File("out2.txt"))
    add(splitter)

See Queue CommandLineFunctions for more information on how @Input and @Output are used.

4. What is the best way to write a list of files?

Create an instance of a ListWriterFunction and add it in your script method.

import org.broadinstitute.sting.queue.function.ListWriterFunction

val writeBamList = new ListWriterFunction
writeBamList.inputFiles = bamFiles
writeBamList.listFile = new File("myBams.list")
add(writeBamList)

5. How do I add optional debug output to my QScript?

Queue contains a trait mixin you can use to add Log4J support to your classes.

Add the import for the trait Logging to your QScript.

import org.broadinstitute.sting.queue.util.Logging

Mix the trait into your class.

class MyScript extends QScript with Logging {

Then use the mixed in logger to write debug output when the user specifies -l DEBUG.

logger.debug("This will only be displayed when debugging is enabled.")

6. I updated Queue and now I'm getting java.lang.NoClassDefFoundError / java.lang.AbstractMethodError

Try ant clean.

Queue relies on a lot of Scala traits / mixins. These dependencies are not always picked up by the Scala/Java compilers, leading to partially implemented classes. If that doesn't work, please let us know on the forum.

7. Do I need to create directories in my QScript?

No. QScript will create all parent directories for outputs.

8. How do I specify the -W 240 for the LSF hour queue at the Broad?

Queue's LSF dispatcher automatically looks up and sets the maximum runtime for whichever LSF queue is specified. If you set your -jobQueue/.jobQueue to hour then you should see something like this under bjobs -l:

240.0 min of gsa3

9. Can I run Queue with GridEngine?

Queue GridEngine functionality is community supported. See here for full details: Queue with Grid Engine.

10. How do I pass advanced java arguments to my GATK commands, such as remote debugging?

The easiest way to do this at the moment is to mix in a trait.

First define a trait which adds your java options:

  trait RemoteDebugging extends JavaCommandLineFunction {
    override def javaOpts = super.javaOpts + " -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
  }

Then mix in the trait to your walker and otherwise run it as normal:

  val printReadsDebug = new PrintReads with RemoteDebugging
  printReadsDebug.reference_sequence = "my.fasta"
  // continue setting up your walker...

11. Why does Queue log "Running jobs. ... Done." but doesn't actually run anything?

If you see something like the following, it means that Queue believes that it previously successfully generated all of the outputs.

INFO 16:25:55,049 QCommandLine - Scripting ExampleUnifiedGenotyper 
INFO 16:25:55,140 QCommandLine - Added 4 functions 
INFO 16:25:55,140 QGraph - Generating graph. 
INFO 16:25:55,164 QGraph - Generating scatter gather jobs. 
INFO 16:25:55,714 QGraph - Removing original jobs. 
INFO 16:25:55,716 QGraph - Adding scatter gather jobs. 
INFO 16:25:55,779 QGraph - Regenerating graph. 
INFO 16:25:55,790 QGraph - Running jobs. 
INFO 16:25:55,853 QGraph - 0 Pend, 0 Run, 0 Fail, 10 Done 
INFO 16:25:55,902 QCommandLine - Done 

Queue will not re-run the job if a .done file is found for all of the outputs, e.g.: /path/to/.output.file.done. You can either remove the specific .done files yourself, or use the -startFromScratch command line option.

Created 2012-08-02 14:05:29 | Updated 2014-12-17 17:05:58 | Tags: variantrecalibrator bundle vqsr applyrecalibration faq
Comments (27)

This document describes the resource datasets and arguments that we recommend for use in the two steps of VQSR (i.e. the successive application of VariantRecalibrator and ApplyRecalibration), based on our work with human genomes, to comply with the GATK Best Practices. The recommendations detailed in this document take precedence over any others you may see elsewhere in our documentation (e.g. in Tutorial articles, which are only meant to illustrate usage, or in past presentations, which may be out of date).

The document covers:

  • Explanation of resource datasets
  • Important notes about annotations
  • Important notes about exome experiments
  • Argument recommendations for VariantRecalibrator
  • Argument recommendations for ApplyRecalibration

These recommendations are valid for use with calls generated by both the UnifiedGenotyper and HaplotypeCaller. In the past we made a distinction in how we processed the calls from these two callers, but now we treat them the same way. These recommendations will probably not work properly on calls generated by other (non-GATK) callers.

Note that VQSR must be run twice in succession in order to build a separate error model for SNPs and INDELs (see the VQSR documentation for more details).

Explanation of resource datasets

The human genome training, truth and known resource datasets mentioned in this document are all available from our resource bundle.

If you are working with non-human genomes, you will need to find or generate at least truth and training resource datasets with properties corresponding to those described below. To generate your own resource set, one idea is to first do an initial round of SNP calling and only use those SNPs which have the highest quality scores. These sites which have the most confidence are probably real and could be used as truth data to help disambiguate the rest of the variants in the call set. Another idea is to try using several SNP callers in addition to the UnifiedGenotyper or HaplotypeCaller, and use those sites which are concordant between the different methods as truth data. In either case, you'll need to assign your set a prior likelihood that reflects your confidence in how reliable it is as a truth set. We recommend Q10 as a starting value, which you can then experiment with to find the most appropriate value empirically. There are many possible avenues of research here. Hopefully the model reporting plots that are generated by the recalibration tools will help facilitate this experimentation.

Resources for SNPs

  • True sites training resource: HapMap

    This resource is a SNP call set that has been validated to a very high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). We will also use these sites later on to choose a threshold for filtering variants based on sensitivity to truth sites. The prior likelihood we assign to these variants is Q15 (96.84%).

  • True sites training resource: Omni

    This resource is a set of polymorphic SNP sites produced by the Omni genotyping array. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).

  • Non-true sites training resource: 1000G
    This resource is a set of high-confidence SNP sites produced by the 1000 Genomes Project. The program will consider that the variants in this resource may contain true variants as well as false positives (truth=false), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q10 (90%).

  • Known sites resource, not used in training: dbSNP
    This resource is a call set that has not been validated to a high degree of confidence (truth=false). The program will not use the variants in this resource to train the recalibration model (training=false). However, the program will use these to stratify output metrics such as Ti/Tv ratio by whether variants are present in dbSNP or not (known=true). The prior likelihood we assign to these variants is Q2 (36.90%).

Resources for Indels

  • Known and true sites training resource: Mills
    This resource is an Indel call set that has been validated to a high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).
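
For reference, the percentages quoted next to the Q values above follow directly from the Phred scale (a standard conversion, not something specific to the VQSR):

    confidence = 1 - 10^(-Q/10)

so Q15 corresponds to 96.84%, Q12 to 93.69%, Q10 to 90%, and Q2 to 36.90%.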

Important notes about annotations

Some of the annotations included in the recommendations given below might not be the best for your particular dataset. In particular, the following caveats apply:

  • Depth of coverage (the DP annotation invoked by Coverage) should not be used when working with exome datasets since there is extreme variation in the depth to which targets are captured! In whole genome experiments this variation is indicative of error but that is not the case in capture experiments.

  • You may have seen HaplotypeScore mentioned in older documents. That is a statistic produced by UnifiedGenotyper that should only be used if you called your variants with UG. This statistic isn't produced by the HaplotypeCaller because that mathematics is already built into the likelihood function itself when calling full haplotypes with HC.

  • The InbreedingCoeff is a population-level statistic that requires at least 10 samples in order to be computed. For projects with fewer samples, or projects that include many closely related samples (such as a family), please omit this annotation from the command line.

Important notes for exome capture experiments

In our testing we've found that in order to achieve the best exome results one needs to use an exome SNP and/or indel callset with at least 30 samples. For users with experiments containing fewer exome samples there are several options to explore:

  • Add additional samples for variant calling, either by sequencing additional samples or using publicly available exome bams from the 1000 Genomes Project (this option is used by the Broad exome production pipeline). Be aware that you cannot simply add VCFs from the 1000 Genomes Project. You must either call variants from the original BAMs jointly with your own samples, or (better) use the reference model workflow to generate GVCFs from the original BAMs, and perform joint genotyping on those GVCFs along with your own samples' GVCFs with GenotypeGVCFs.

  • You can also try using the VQSR with the smaller variant callset, but experiment with argument settings (try adding --maxGaussians 4 to your command line, for example). You should only do this if you are working with a non-model organism for which there are no available genomes or exomes that you can use to supplement your own cohort.

Argument recommendations for VariantRecalibrator

The variant quality score recalibrator builds an adaptive error model using known variant sites and then applies this model to estimate the probability that each variant is a true genetic variant or a machine artifact. One major improvement from previous recommended protocols is that hand filters do not need to be applied at any point in the process now. All filtering criteria are learned from the data itself.

Common, base command line

This is the first part of the VariantRecalibrator command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.

java -Xmx4g -jar GenomeAnalysisTK.jar \
   -T VariantRecalibrator \
   -R path/to/reference/human_g1k_v37.fasta \
   -input raw.input.vcf \
   -recalFile path/to/output.recal \
   -tranchesFile path/to/output.tranches \
   -nt 4 \

SNP specific recommendations

For SNPs we use both HapMap v3.3 and the Omni chip array from the 1000 Genomes Project as training data. In addition we take the highest confidence SNPs from the project's callset. These datasets are available in the GATK resource bundle.

   -resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf \
   -resource:omni,known=false,training=true,truth=true,prior=12.0 1000G_omni2.5.b37.sites.vcf \
   -resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G_phase1.snps.high_confidence.vcf \
   -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
   -an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an SOR -an DP -an InbreedingCoeff \
   -mode SNP \

Please note that these recommendations are formulated for whole-genome datasets. For exomes, we do not recommend using DP for variant recalibration (see below for details of why).

Note also that, for the above to work, the input VCF needs to be annotated with the corresponding values (QD, FS, DP, etc.). If any of these values are somehow missing, then VariantAnnotator needs to be run first so that VariantRecalibrator can run properly.

Also, using the provided sites-only truth data files is important here as parsing the genotypes for VCF files with many samples increases the runtime of the tool significantly.

You may notice that these recommendations no longer include the --numBadVariants argument. That is because we have removed this argument from the tool, as the VariantRecalibrator now determines the number of variants to use for modeling "bad" variants internally based on the data.

Indel specific recommendations

When modeling indels with the VQSR we use a training dataset that was created at the Broad by strictly curating the (Mills, Devine, Genome Research, 2011) dataset as well as adding in very high-confidence indels from the 1000 Genomes Project. This dataset is available in the GATK resource bundle.

   --maxGaussians 4 \
   -resource:mills,known=false,training=true,truth=true,prior=12.0 Mills_and_1000G_gold_standard.indels.b37.sites.vcf \
   -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
   -an QD -an DP -an FS -an SOR -an ReadPosRankSum -an MQRankSum -an InbreedingCoeff \
   -mode INDEL \

Note that indels use a different set of annotations than SNPs. Most annotations related to mapping quality have been removed since there is a conflation with the length of an indel in a read and the degradation in mapping quality that is assigned to the read by the aligner. This covariation is not necessarily indicative of being an error in the same way that it is for SNPs.

You may notice that these recommendations no longer include the --numBadVariants argument. That is because we have removed this argument from the tool, as the VariantRecalibrator now determines the number of variants to use for modeling "bad" variants internally based on the data.

Argument recommendations for ApplyRecalibration

The power of the VQSR is that it assigns a calibrated probability to every putative mutation in the callset. The user is then able to decide at what point on the theoretical ROC curve their project wants to live. Some projects, for example, are interested in finding every possible mutation and can tolerate a higher false positive rate. On the other hand, some projects want to generate a ranked list of mutations that they are very certain are real and well supported by the underlying data. The VQSR provides the necessary statistical machinery to effectively apply this sensitivity/specificity tradeoff.

Common, base command line

This is the first part of the ApplyRecalibration command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.

 java -Xmx3g -jar GenomeAnalysisTK.jar \
   -T ApplyRecalibration \
   -R reference/human_g1k_v37.fasta \
   -input raw.input.vcf \
   -tranchesFile path/to/input.tranches \
   -recalFile path/to/input.recal \
   -o path/to/output.recalibrated.filtered.vcf \

SNP specific recommendations

For SNPs we use HapMap 3.3 and the Omni 2.5M chip as our truth set. We typically seek to achieve 99.5% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.

   --ts_filter_level 99.5 \
   -mode SNP \

Indel specific recommendations

For indels we use the Mills / 1000 Genomes indel truth set described above. We typically seek to achieve 99.0% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.

   --ts_filter_level 99.0 \
   -mode INDEL \

Created 2012-07-19 18:49:06 | Updated 2012-08-12 13:53:41 | Tags: official gatk2 faq
Comments (1)

GATK 2.0

On July 23rd, 2012, the Genome Sequencing and Analysis (GSA) team will release a beta of GATK 2.0. GATK 2.0 includes all of the original GATK 1.x tools as well as many newer and more advanced tools for error modeling, data compression, and variant calling:

  • Base quality score recalibration (BQSR) v2, an upgrade to BQSR that generates a base substitution, insertion, and deletion error model.
  • ReduceReads, a BAM compression algorithm that reduces file sizes by 20x-100x while preserving all information necessary for accurate SNP and indel calling. ReduceReads enables the GATK to call tens of thousands of deeply sequenced NGS samples simultaneously.
  • The HaplotypeCaller, a multi-sample local de novo assembly and integrated SNP, indel, and short SV caller.
  • Powerful extensions to the Unified Genotyper to support variant calling of pooled samples, mitochondrial DNA, and non-diploid organisms. Additionally, the extended Unified Genotyper introduces a novel error modeling approach that uses a reference sample to build a site-specific error model for SNPs and indels that vastly improves calling accuracy.

GATK 2.0 is moving to a mixed open/closed-source model

The complete GATK 2.0 suite will be distributed as a binary only, without source code for the newest tools. We plan to release the source code for these tools, but the timeframe for this is unclear. The GATK engine and programming libraries will remain open source under the MIT license, as they currently are for GATK 1.0. The current GATK 1.0 tool chain, now called GATK-lite, will remain open source under the MIT license and will be distributed as a companion binary to the full GATK binary. GATK-lite includes the original base quality score recalibrator (BQSR), indel realigner, unified genotyper v1, and VQSR v2.

GATK 2.0 beta

The GATK 2.0 tools are under active development, but they have matured to the point that non-Broad academics and researchers are welcome to use them. We appreciate feedback on their use, both successes and failures. Please be aware that the GATK 2.0 tool chain may be unstable, slow, not scalable, or poorly documented, and the tools may not interact seamlessly with each other or with other tools in the suite, so they could require more effort from users. With these caveats, these tools provide radically improved calling sensitivity, specificity, and performance, so they are worth the exposure as beta software.

GATK licensing during and after the beta

GATK 2.0 is being released under a software license that permits non-commercial research use only. Until the beta ends and the full GATK 2.0 suite is officially launched, commercial activities should use the unrestricted GATK-lite version.

In the fall we intend to release the full version of GATK 2.0. The full version will be free to use for non-commercial entities, just like the beta. A commercial license will be required for commercial entities. This commercial version will include commercial-grade support for installation, configuration, and documentation, as well as long-term support for each commercial release.

Frequently Asked Questions (FAQ)


What does it mean that GATK 2.0 is a beta version? Is it safe to use these tools? Yes, the GATK 2.0 tools are actually quite stable and have been (relatively) widely used by the GSA team to make variation calls for large-scale projects like 1000 Genomes and T2D-GENES. They are beta because they haven’t been used outside of the Broad Institute, and so are likely to have bugs and other usability issues we will need to address over time. Furthermore, they are all evolving rapidly as we improve them and use them in more demanding settings, and so they are expected to change significantly over the next few years as they are perfected, just as happened with the GATK 1.0 suite of tools.

How can I find out best practices for using GATK 2.0 tools? The best place is our newly revised GATK Best Practice v4 guide, which will be finalized on July 23rd here:


Where can I find out more information about the new GATK 2.0 tools? The best place is in our slide archive, where the GSA team has collected many presentations detailing the evolution and analysis of the new 2.0 tools.


How do I download GATK 2.0? From the new GATK website.

How do you support GATK 2.0 tools? Using the new GATK support forum:


Content from the old GetSatisfaction support forum will be transitioned over to the new GATK forums over the next few weeks. These new forums should help users find the answers to their questions much more easily (the new forum is indexed by Google) as well as allow users to more easily follow each other's posts and answer each other's questions.

What specifically is in the new GATK 2.0 beta license? Please see the new GATK2 license terms here, in its temporary home:


When GATK 2.0 is released to github the new terms will be available:


How do you administer the new GATK 2.0 license? Once you register with the new GATK 2.0 forums, you can download the full GATK 2.0 and agree to the new license terms.

Will you continue to support retired tools like the first version of BQSR? No, in the medium-term (next few months) retired tools will no longer be supported. In the short-term, yes, we will continue to respond to support requests on the new forums while people transition to GATK 2.0.

What is GATK-lite? GATK-lite is a subset of the full GATK 2.0 release that is free-to-use for all entities, including commercial ones. It includes all of the capabilities (if not the exact tools) from GATK 1.6 but none of the exclusive 2.0 tools.

For the tech-savvy, GATK-lite is the binary distribution corresponding to the public GATK source released on github. Everything in GATK-lite is licensed under the MIT license.

GATK-Lite includes the entire GATK map/reduce programming framework, all associated GATK libraries, and the vast majority of GATK tools. For example, most of the Unified Genotyper, all of Indel Realigner, CombineVariants, SelectVariants, and VariantEval are all part of GATK-Lite. All of GATK-Queue is also part of GATK-Lite.

The best way to think about GATK (implicitly -full) and GATK-lite is that both GATK and GATK-lite are GATK2, but -lite includes only the open sourced framework, library, and tools. GATK2 is effectively a superset of GATK-lite, built from the same source code, that includes a few closed-source premium tools.

GATK-Lite isn't a dead-end branch of GATK1. All GATK-Lite infrastructure will be fully supported -- to the same degree as GATK1 -- by the GSA team, as we will rely on these tools day-in and day-out. GATK-Lite will evolve in lock-step with the full GATK: GATK-Lite and GATK(-full) will carry the same release numbers and will be pushed out by the GSA group simultaneously. As we add new file formats to the GATK (BCF2, for example), these changes will go into the core of GATK and be available through both GATK and GATK-Lite.


I’m an academic researcher and I’d like to use GATK 2.0; what do I need to do?

Just download the full GATK jar from our new website, after agreeing to the GATK 2.0 license terms.

I run a genome sequencing center / NGS core facility at an academic institution, can I install GATK 2.0 for my users to run?

Yes, you can upgrade to GATK 2.0, assuming your institution is an academic non-commercial entity. The GATK 2.0 beta license explicitly allows the installation in a pipeline.

I work in a clinical lab at a hospital, and we use GATK 1.0 tools in our diagnostics lab, can I use GATK 2.0?

Yes and no. If the lab engages solely in non-commercial activities (such as clinical research) then yes. If the lab “sells” diagnostic services to health care providers then no, you’ll need to wait until the commercial license is available to upgrade. If the lab does a mix of both, you are welcome to provide a 2.0 pipeline for non-commercial uses while maintaining a 1.0 pipeline for commercial users, until you can upgrade to the commercial license.

I work in a government facility, can I use GATK 2.0?

Yes, the GATK development was and is supported by U.S. federal research grants that entitle government researchers to use the GATK.

I work in a commercial entity (i.e., for-profit company), can I use GATK 2.0?

No, you’ll need to wait until a commercial license version is available later this year. The only exception is answered in the next question.

Even though I work for a commercial entity, I’d like to evaluate GATK 2.0 in a non-production way; is that permitted under the GATK 2.0 beta?

Yes, we view evaluating and exploring the new tools in the GATK 2.0 as “research,” but this requires a special license that can only be obtained from the Broad’s business development office. Please contact Issi Rozen (irozen@) to obtain this commercial evaluation license. Note that this license is only valid until the full commercial release, which will likely have its own trial version for commercial entities.

I built a portal that provides access directly to GATK 2.0 tools, how does the new GATK 2.0 license affect me?

As the GATK 2.0 beta license forbids redistribution of GATK 2.0 tools, you must ensure that these tools are only accessible to users within your institution. You are welcome to install and provide access to tools in GATK-lite, though. GATK-lite contains all of the code -- with a completely non-restrictive MIT license -- available in the latest GATK 1.6 release.

We are actively interested in defining reasonable use terms for third-party pipelines, so please contact Mark DePristo (depristo@) to discuss the matter further.

I work at an academic non-commercial institution, and I built a NGS pipeline that runs GATK tools on sequencing data. We often distribute BAMs and VCFs processed by GATK to our collaborators both within and external to the institution, how does the new GATK 2.0 license affect me?

Very little. The GATK 2.0 license allows academic non-commercial institutions to install, run, and distribute GATK-based results. Commercial institutions, however, are not permitted to use the GATK 2.0 beta, so can only do this using the unrestricted GATK-lite distribution. Note that when the commercial version of the GATK becomes available in late 2012, commercial institutions will have the opportunity to run and distribute the results of GATK 2.0 tools.

I work at an academic institution, and we conduct sponsored research on behalf of commercial entities, how does the new GATK 2.0 license affect me? The GATK 2.0 license explicitly allows an academic non-commercial entity to run GATK 2.0 as part of sponsored research projects.

I make NGS instruments and have embedded GATK in my instrument, how does the new GATK 2.0 license affect me?

The current GATK 2.0 license forbids redistribution of GATK 2.0 binaries, so you will not be able to download GATK 2.0 and redistribute it on your instruments. Of course, you are welcome to redistribute GATK-lite. We envision that instrument manufacturers will be able to purchase a commercial redistribution license when the full commercial GATK version is available in late 2012.

We are actively interested in defining reasonable use terms for instrument manufacturers who want to embed GATK, so please contact Mark DePristo (depristo@) to discuss the matter further.

I compile and distribute an NGS analysis suite that includes the GATK. How does the new GATK 2.0 license affect me?

The new GATK 2.0 license forbids redistribution of the GATK 2.0 tools. You will not be able to include GATK 2.0 tools in your distribution going forward. You are welcome to install and distribute the tools in GATK-lite, though, which is effectively what you have been doing with GATK 1.6. We are actively interested in defining reasonable use terms for third-party redistribution of the premium GATK2 suite, so please contact Mark DePristo (depristo@) to discuss the matter further.

I took the GATK and rewrote parts of the engine or individual tools to make them faster or better. How does the new 2.0 license affect me?

Very little. The GATK map/reduce programming framework and all associated libraries will continue to be available at github under the MIT license. Any improvements you made to the framework will continue to be viable and can be made available to the community in any way you see fit (see below for additional details).

Most of the GATK 1.6 tools remain in the new GATK-lite distribution on github as well, so improvements to any of those tools will remain valuable, and can be redistributed freely as the GATK-lite tools all have an MIT license.

Applying your engine optimizations to the new, protected GATK 2.0 tools can only be explored through a formal collaboration with the GATK team.

I built several NGS tools on top of the GATK, how does the new GATK 2.0 license affect me?

The GATK map/reduce programming framework and all associated libraries will continue to be available at github under the MIT license (i.e., the distribution known as GATK-Lite). We recommend you use and redistribute the GATK framework along with any independently written tools in any way and under any license you choose via the GATK-Lite github distribution mechanism. Several Broad Institute tools from the Cancer Genome Analysis team are distributed in just this way.

See questions about GATK-Lite for more details as well, as this covers the framework and many associated tools in more detail.

Commercial GATK

What features will the commercial version of the GATK have?

The commercial version of the GATK aims to be a slower evolving, better documented, and better supported version of the one released by the GSA team at the Broad. This means that the commercial version will not contain all of the bleeding edge features of the non-commercial GATK release. Moreover, the commercial version will not contain any significant additional features not available in the non-commercial version.

That said, the commercial version of the GATK will come with vastly better support than the non-commercial version. This includes

  • More extensive installation and use documentation.
  • “Consulting” services to help configure and optimize GATK for a particular NGS sequencing process.
  • A phone number to call when something is broken along with a service-level agreement that guarantees a response to the problem within a reasonable timeframe.
  • Long-term support for each released version of the commercial GATK, a particularly important feature for users interested in using the GATK in certified processes, such as CLIA.

Will the commercial version include features not available in the non-commercial version of the GATK? No. See “What will be the relationship between the commercial and GSA released version of the GATK?” below for more information.

Will I be able to purchase the commercial version of the GATK even if I work in a non-commercial entity? Yes, you can. With the commercial version you will gain access to the improved documentation and support.

Why did you decide to restrict the GATK 2.0 beta to non-commercial entities? Because commercial entities will be required to purchase a software license to use the full GATK 2.0 suite.

When will the commercial version be available? In late 2012.

How much will the commercial version of the GATK cost? The specific pricing model has yet to be determined.

What will be the relationship between the commercial and GSA released version of the GATK? The GSA team will continue to release GATK versions on the 6-8 week timeframe, following the standard 2.0, 2.1, 2.2, etc. version convention. These will be available to non-commercial entities. The commercial version will evolve at a slower rate and aggregate many GSA GATK versions into larger commercial releases with much more extensive configuration and use documentation. Unlike the GSA released versions, where the current release is the only supported one, the commercial versions will each be supported for much longer periods of time.

Source code

Why did you decide to make parts of the GATK closed source? About a year ago we started to develop the tools that ultimately became part of the binary-only GATK 2.0, including BQSRv2, the advanced UG modules, ReduceReads, and the HaplotypeCaller. From the start these tools were kept private to the master GATK repository, as they were all completely unstable, unusable, and unpublished. As they have evolved into their now usable forms, we wanted to share these tools with the community as soon as possible, before any papers, patents, or other forms of intellectual property protection were in place. Releasing binary versions allows us to share our capabilities early while ensuring some IP protection.

Additionally, a closed source model allows us more flexibility in the software licensing terms we enforce with the GATK 2.0. In particular reserving a subset of closed source tools protected by a non-commercial use license allows us to ensure that the research community has access to GATK tools as quickly as possible while preserving the value of a version of GATK licensed to commercial entities.

What GATK source code is available through github? The released GATK source code on github includes the latest GATK programming framework and most GATK tools, including everything from GATK 1.x, as well as associated test files and build scripts. The only materials not pushed to github from the master GATK repository are private tools (not shared in source or binary form) and protected tools (available in binary form only).

Can I get a copy of the source code for the new GATK 2.0 tools like ReduceReads? No, the source code for some of the new GATK 2.0 tools is not being released; those tools are only available in binary form (i.e., as compiled Java bytecode).

Are you open to collaboration to obtain access to GATK 2.0 tool source code? Yes, several long-term close collaborators have access to the full GATK repository with public and private libraries. Please contact Mark DePristo (depristo@) to discuss this possibility further.

The GATK makes use of open source libraries, how do you comply with their license restrictions in a closed source GATK? We provide, upon request, a GenomeAnalysisTK.jar file built without pre-packaging any of our dependencies, which can be used to independently link the master GATK jar to any version of our dependencies.

Will you ever make the new GATK 2.0 tools open source? Yes, over time we plan to migrate closed source tools into the open source branch of the GATK.

What source is included with GATK-Lite? GATK-Lite is basically everything in GATK 1.6, including the entire GATK programming framework and all associated libraries. It also includes the vast majority of tools in the GATK -- only a few select, premium analysis tools like the HaplotypeCaller are exclusive to the full version of the GATK. Improvements to the GATK framework, libraries, and Lite tools will all continue to be developed and released as part of the GATK-lite distribution.
