Tagged with #variantrecalibration
1 documentation article | 0 announcements | 8 forum discussions

Created 2012-07-23 16:49:34 | Updated 2015-05-15 08:33:12 | Tags: variantrecalibrator vqsr applyrecalibration vcf callset variantrecalibration
Comments (23)

This document describes what Variant Quality Score Recalibration (VQSR) is designed to do, and outlines how it works under the hood. For command-line examples and recommendations on what specific resource datasets and arguments to use for VQSR, please see this FAQ article.

As a complement to this document, we encourage you to watch the workshop videos available on our Events webpage.

Slides that explain the VQSR methodology in more detail as well as the individual component variant annotations can be found here in the GSA Public Drop Box.

Detailed information about command line options for VariantRecalibrator can be found here.

Detailed information about command line options for ApplyRecalibration can be found here.


The purpose of variant recalibration is to assign a well-calibrated probability to each variant call in a call set. This enables you to generate highly accurate call sets by filtering based on this single estimate for the accuracy of each call.

The approach taken by variant quality score recalibration is to develop a continuous, covarying estimate of the relationship between SNP call annotations (QD, SB, HaplotypeScore, HRun, for example) and the probability that a SNP is a true genetic variant versus a sequencing or data processing artifact. This model is determined adaptively based on "true sites" provided as input (typically HapMap 3 sites and sites found to be polymorphic on the Omni 2.5M SNP chip array, for humans). This adaptive error model can then be applied to both known and novel variation discovered in the call set of interest to evaluate the probability that each call is real. The score that gets added to the INFO field of each variant is called the VQSLOD. It is the log odds of being a true variant versus being false under the trained Gaussian mixture model.
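To make the VQSLOD concrete: it is the log of the ratio of the probability of a variant's annotation values under the model trained on "true sites" to the probability under the model of artifacts. The following is a toy one-dimensional Python sketch only; the real model is multi-dimensional and the mixture parameters here are invented for illustration:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a one-dimensional normal distribution."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def vqslod(x, good_model, bad_model):
    """Log odds of the annotation value under the 'good' model
    (trained on truth sites) versus the 'bad' (artifact) model."""
    p_good = sum(w * gaussian_pdf(x, m, s) for w, m, s in good_model)
    p_bad = sum(w * gaussian_pdf(x, m, s) for w, m, s in bad_model)
    return math.log(p_good / p_bad)

# Hypothetical 1D mixtures for a single annotation such as QD,
# given as (weight, mean, standard deviation) components:
good = [(0.7, 20.0, 4.0), (0.3, 12.0, 3.0)]
bad = [(1.0, 3.0, 2.0)]

print(vqslod(18.0, good, bad))  # positive: looks like a true variant
print(vqslod(2.0, good, bad))   # negative: looks like an artifact
```

A variant whose annotations fall near the heart of the "good" clusters gets a large positive VQSLOD, while one that looks more like the artifact distribution gets a negative score.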

The variant recalibrator evaluates variants in a two-step process, each step performed by a distinct tool:

  • VariantRecalibrator
    Create a Gaussian mixture model by looking at the annotation values over a high-quality subset of the input call set, and then evaluate all input variants against that model. This step produces a recalibration file.

  • ApplyRecalibration
    Apply the model parameters to each variant in the input VCF files, producing a recalibrated VCF file in which each variant is annotated with its VQSLOD value. In addition, this step filters the calls based on the new LOD score, annotating the FILTER field of variants that don't meet the specified LOD threshold.

Please see the VQSR tutorial for step-by-step instructions on running these tools.

How VariantRecalibrator works in a nutshell

The tool takes the overlap of the training/truth resource sets and of your callset. It models the distribution of these variants relative to the annotations you specified, and attempts to group them into clusters. Then it uses the clustering to assign VQSLOD scores to all variants. Variants that are closer to the heart of a cluster will get a higher score than variants that are outliers.

How ApplyRecalibration works in a nutshell

During the first part of the recalibration process, variants in your callset were given a score called VQSLOD. At the same time, variants in your training sets were also ranked by VQSLOD. When you specify a tranche sensitivity threshold with ApplyRecalibration, expressed as a percentage (e.g. 99.9%), the program finds the VQSLOD value above which 99.9% of the variants in the training callset are included, and uses that value as a threshold to filter your variants. Variants above the threshold pass the filter, so their FILTER field will contain PASS. Variants below the threshold are filtered out; they are still written to the output file, but their FILTER field contains the name of the tranche they belong to. So VQSRTrancheSNP99.90to100.00 means that the variant fell in the range of VQSLODs corresponding to the remaining 0.1% of the training set, which is essentially considered false positive.
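The threshold-picking logic described above can be sketched as follows. This is a simplified Python illustration, not ApplyRecalibration's actual code; the scores and tranche name are invented:

```python
def tranche_threshold(training_vqslods, sensitivity):
    """VQSLOD cutoff above which `sensitivity` fraction of the
    training variants are included."""
    ranked = sorted(training_vqslods, reverse=True)
    # index of the last training variant still inside the target sensitivity
    k = max(1, int(round(sensitivity * len(ranked))))
    return ranked[k - 1]

def apply_filter(vqslod, threshold, tranche_name="VQSRTrancheSNP99.90to100.00"):
    """Label a variant PASS or with the name of its failing tranche."""
    return "PASS" if vqslod >= threshold else tranche_name

training = [9.1, 7.4, 6.8, 5.2, 4.9, 3.3, 2.0, 1.1, 0.4, -5.6]
cutoff = tranche_threshold(training, 0.90)  # keep 90% of training variants
print(cutoff)
print(apply_filter(5.0, cutoff))   # above the cutoff -> PASS
print(apply_filter(-2.0, cutoff))  # below the cutoff -> tranche name
```

Note that the threshold is derived from the training variants' scores, not from your callset directly; your variants are then filtered against that threshold.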

Interpretation of the Gaussian mixture model plots

The variant recalibration step fits a Gaussian mixture model to the contextual annotations given to each variant. By fitting this probability model to the training variants (variants considered to be true-positives), a probability can be assigned to the putative novel variants (some of which will be true-positives, some of which will be false-positives). It is useful for users to see how the probability model was fit to their data. Therefore a modeling report is automatically generated each time VariantRecalibrator is run (in the above command line the report will appear as path/to/output.plots.R.pdf). For every pair-wise combination of annotations used in modeling, a 2D projection of the Gaussian mixture model is shown.

The figure shows one page of an example Gaussian mixture model report that is automatically generated by the VQSR from the example HiSeq call set. This page shows the 2D projection of mapping quality rank sum test versus Haplotype score by marginalizing over the other annotation dimensions in the model.

In each page there are four panels which show different ways of looking at the 2D projection of the model. The upper left panel shows the probability density function that was fit to the data. The 2D projection was created by marginalizing over the other annotation dimensions in the model via random sampling. Green areas show locations in the space that are indicative of being high quality while red areas show the lowest probability areas. In general putative SNPs that fall in the red regions will be filtered out of the recalibrated call set.

The remaining three panels give scatter plots in which each SNP is plotted in the two annotation dimensions as points in a point cloud. The scale for each dimension is in normalized units. The data for the three panels is the same but the points are colored in different ways to highlight different aspects of the data. In the upper right panel SNPs are colored black and red to show which SNPs are retained and filtered, respectively, by applying the VQSR procedure. The red SNPs didn't meet the given truth sensitivity threshold and so are filtered out of the call set. The lower left panel colors SNPs green, grey, and purple to give a sense of the distribution of the variants used to train the model. The green SNPs are those which were found in the training sets passed into the VariantRecalibrator step, while the purple SNPs are those which were found to be furthest away from the learned Gaussians and thus given the lowest probability of being true. Finally, the lower right panel colors each SNP by their known/novel status with blue being the known SNPs and red being the novel SNPs. Here the idea is to see if the annotation dimensions provide a clear separation between the known SNPs (most of which are true) and the novel SNPs (most of which are false).

An example of good clustering for SNP calls from the tutorial dataset is shown to the right. The plot shows that the training data forms a distinct cluster at low values for each of the two statistics shown (haplotype score and mapping quality bias). As the SNPs fall off the distribution in either one or both of the dimensions they are assigned a lower probability (that is, move into the red region of the model's PDF) and are filtered out. This makes sense as not only do higher values of HaplotypeScore indicate a lower chance of the data being explained by only two haplotypes but also higher values for mapping quality bias indicate more evidence of bias between the reference bases and the alternative bases. The model has captured our intuition that this area of the distribution is highly enriched for machine artifacts and putative variants here should be filtered out!

Tranches and the tranche plot

The recalibrated variant quality score provides a continuous estimate of the probability that each variant is true, allowing one to partition the call sets into quality tranches. The main purpose of the tranches is to establish thresholds within your data that correspond to certain levels of sensitivity relative to the truth sets. The idea is that with well-calibrated variant quality scores, you can generate call sets in which each variant doesn't have to have a hard answer as to whether it is in or out of the set. If a very high accuracy call set is desired then one can use the highest tranche, but if a larger, more complete call set is a higher priority then one can dip down into lower and lower tranches. These tranches are applied to the output VCF file using the FILTER field. In this way you can choose to use some of the filtered records or only use the PASSing records.
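Selecting records by FILTER status, as described above, might look like this in a downstream script. This is a minimal Python sketch with made-up positions; the tranche names follow the VQSR naming pattern but are example values:

```python
# (site, FILTER) pairs as they might appear after ApplyRecalibration:
records = [
    ("1:1000", "PASS"),
    ("1:2000", "VQSRTrancheSNP99.00to99.90"),
    ("1:3000", "VQSRTrancheSNP99.90to100.00"),
]

def select(records, keep_tranches=()):
    """Keep PASSing records, plus any explicitly requested tranches."""
    wanted = {"PASS", *keep_tranches}
    return [site for site, filt in records if filt in wanted]

print(select(records))                                  # strictest call set
print(select(records, ["VQSRTrancheSNP99.00to99.90"]))  # more sensitive call set
```

Keeping only PASS gives the most specific call set; admitting lower tranches trades specificity for sensitivity.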

The first tranche (from the bottom, with the highest value of truth sensitivity but usually the lowest values of novel Ti/Tv) is exceedingly specific but less sensitive, and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. Downstream applications can select more specific or more sensitive call sets in a principled way, or incorporate the recalibrated quality scores directly, weighting individual variant calls by their probability of being real rather than analyzing only a fixed subset of calls. An example tranche plot, automatically generated by the VariantRecalibrator walker, is shown below.

This is an example of a tranches plot generated for a HiSeq call set. The x-axis gives the number of novel variants called while the y-axis shows two quality metrics -- novel transition to transversion ratio and the overall truth sensitivity.

Note that the tranches plot is not applicable for indels and will not be generated when the tool is run in INDEL mode.

Ti/Tv-free recalibration

We use a Ti/Tv-free approach to variant quality score recalibration. This approach requires an additional truth data set, and cuts the VQSLOD at given sensitivities to the truth set. It has several advantages over the Ti/Tv-targeted approach:

  • The truth sensitivity (TS) approach gives you back the novel Ti/Tv as a QC metric
  • The truth sensitivity (TS) approach is conceptually cleaner than deciding on a novel Ti/Tv target for your dataset
  • The TS approach is easier to explain and defend, as saying "I took called variants until I found 99% of my known variable sites" is easier than "I took variants until I dropped my novel Ti/Tv ratio to 2.07"

We have used HapMap 3.3 sites as the truth set (genotypes_r27_nr.b37_fwd.vcf), but other high-quality sets of sites (~99% truly variable in the population) should work just as well. In our experience with HapMap, 99% is a good threshold, as the remaining 1% of sites often exhibit unusual features, like being close to indels or actually being MNPs, and so receive a low VQSLOD score.
Note that the expected Ti/Tv is still an available argument but it is only used for display purposes.

Finally, a couple of Frequently Asked Questions

- Can I use the variant quality score recalibrator with my small sequencing experiment?

This tool expects thousands of variant sites in order to achieve decent modeling with the Gaussian mixture model. Whole-exome call sets work well, but anything smaller than that scale may run into difficulties.

One piece of advice is to turn down the number of Gaussians used during training. This can be accomplished by adding --maxGaussians 4 to your command line.

maxGaussians is the maximum number of different "clusters" (=Gaussians) of variants the program is "allowed" to try to identify. Lowering this number forces the program to group variants into a smaller number of clusters, which means there will be more variants in each cluster -- hopefully enough to satisfy the statistical requirements. Of course, this decreases the level of discrimination that you can achieve between variant profiles/error modes. It's all about trade-offs; and unfortunately if you don't have a lot of variants you can't afford to be very demanding in terms of resolution.

- Why don't all the plots get generated for me?

The most common problem related to this is not having Rscript accessible in your environment path. Rscript is the command-line version of R that gets installed right alongside the main R program. We also make use of the ggplot2 library, so please be sure to install that package as well. See the Common Problems section of the Guide for more details.


Created 2015-05-07 13:45:51 | Updated | Tags: variantrecalibration
Comments (10)

Hi team, I have a VCF callset file generated using HaplotypeCaller in --emitRefConfidence GVCF mode with subsequent GenotypeGVCFs. I used the generated output.vcf file as input for VariantRecalibrator. The command:

java -jar $GATK_HOME/GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -R $REFERENCE \
    -input exome_set_output.vcf \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 $GOLD_STANDARD_HAPMAP \
    -resource:omni,known=false,training=true,truth=true,prior=12.0 $GOLD_STANDARD_OMNI \
    -resource:1000G,known=false,training=true,truth=false,prior=10.0 $GOLD_STANDARD_1000G \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 $GOLD_STANDARD_DBSNP \
    -an DP -an QD -an FS -an SOR -an MQ -an MQRankSum -an ReadPosRankSum -an InbreedingCoeff \
    -mode SNP \
    -tranche 100.0 -tranche 99.9 \
    -recalFile exome_set_output_SNP.recal \
    -tranchesFile recal_SNP.tranches \
    -rscriptFile recal_SNP_plots.R

But I get the following error: ##### ERROR MESSAGE: Bad input: Values for DP annotation not detected for ANY training variant in the input callset. VariantAnnotator may be used to add these annotations.

My input vcf file looks like this:

#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT RTN005 RTN007 RTN009 RTN024 RTN028 RTN038 RTN039 RTN045 RTN051 RTN097 RTN102 RTN108 RTN122 RTN126 RTN127 RTN133
1 762273 . G A 23942.06 . AC=31;AF=0.969;AN=32;BaseQRankSum=-9.420e-01;ClippingRankSum=-6.500e-02;DP=743;FS=2.053;GQ_MEAN=139.56;GQ_STDDEV=44.68;InbreedingCoeff=-0.0323;MLEAC=31;MLEAF=0.969;MQ=42.48;MQ0=0;MQRankSum=-2.517e+00;NCC=0;QD=32.27;ReadPosRankSum=-1.485e+00 GT:AD:DP:GQ:PL 1/1:0,42:42:99:1426,126,0 1/1:0,28:28:84:945,84,0 1/1:0,69:69:99:2430,208,0 1/1:0,38:38:99:1295,114,0 1/1:0,54:54:99:1876,162,0 1/1:0,28:28:84:977,84,0 1/1:0,37:37:99:1282,111,0 1/1:0,65:65:99:2207,195,0 1/1:0,46:46:99:1572,138,0 1/1:0,45:45:99:1565,135,0 1/1:0,60:60:99:2019,180,0 1/1:0,52:52:99:1758,156,0 1/1:0,69:69:99:2404,208,0 1/1:0,19:19:57:657,57,0 1/1:0,41:41:99:1413,123,0 0/1:40,9:49:99:152,0,1320
1 762353 . G C 59.07 . AC=1;AF=0.031;AN=32;BaseQRankSum=0.111;ClippingRankSum=-5.560e-01;DP=321;FS=0.000;GQ_MEAN=51.81;GQ_STDDEV=19.63;InbreedingCoeff=-0.0328;MLEAC=1;MLEAF=0.031;MQ=42.47;MQ0=0;MQRankSum=0.779;NCC=0;QD=1.97;ReadPosRankSum=0.501 GT:AD:DP:GQ:PL 0/0:22,0:22:63:0,63,945 0/0:23,0:23:60:0,60,832 0/0:15,0:15:23:0,23,574 0/0:22,0:22:60:0,60,900 0/0:26,0:26:63:0,63,945 0/0:8,0:8:21:0,21,303 0/0:15,0:15:42:0,42,489 0/0:26,0:26:60:0,60,900 0/0:11,0:11:30:0,30,450 0/0:23,0:23:63:0,63,945 0/1:25,5:30:95:95,0,783 0/0:23,0:23:60:0,60,900 0/0:29,0:29:63:0,63,945 0/0:20,0:20:60:0,60,685 0/0:15,0:15:36:0,36,540 0/0:13,0:13:30:0,30,450
1 861630 . G A 1958.44 . AC=22;AF=0.688;AN=32;BaseQRankSum=-1.380e+00;ClippingRankSum=0.198;DP=88;FS=7.101;GQ_MEAN=24.38;GQ_STDDEV=23.34;InbreedingCoeff=0.0754;MLEAC=25;MLEAF=0.781;MQ=60.00;MQ0=0;MQRankSum=0.00;NCC=0;QD=25.43;ReadPosRankSum=0.720 GT:AD:DP:GQ:PL 0/0:3,0:3:0:0,0,51 0/1:4,2:6:46:46,0,146 1/1:0,4:4:12:133,12,0 1/1:0,4:4:12:133,12,0 1/1:0,6:6:18:197,18,0 1/1:0,2:2:6:64,6,0 1/1:0,7:7:21:209,21,0 1/1:0,8:8:24:264,24,0 1/1:0,7:7:21:232,21,0 0/1:4,3:7:76:76,0,137 0/0:4,0:4:0:0,0,93 0/1:2,4:6:64:113,0,64 0/1:2,5:7:51:135,0,51 1/1:0,10:10:30:321,30,0 1/1:0,3:3:9:100,9,0 0/0:4,0:4:0:0,0,88

Can someone point me to what I'm doing wrong? I can see that there are DP values in the vcf file so I don't understand why it complains there aren't any annotations.
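As a quick sanity check (a minimal Python sketch with a hypothetical helper, not an official GATK diagnostic), one can confirm which annotations are actually present at the site level: VariantRecalibrator reads the -an annotations from the INFO column rather than the per-sample FORMAT fields, and the error specifically concerns variants that overlap the training resources.

```python
def info_annotations(vcf_record):
    """Parse the INFO column of a tab-separated VCF record into a dict."""
    info = vcf_record.split("\t")[7]
    annotations = {}
    for field in info.split(";"):
        key, _, value = field.partition("=")
        annotations[key] = value
    return annotations

# Abbreviated version of the first record above (tab-separated):
record = "1\t762273\t.\tG\tA\t23942.06\t.\tDP=743;FS=2.053;MQ=42.48;QD=32.27"
ann = info_annotations(record)
print("DP" in ann, ann["DP"])  # True 743
```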

Thanks very much, Tesa

Created 2014-11-17 06:13:22 | Updated | Tags: variantrecalibration
Comments (6)

Hi, I tried to recalibrate my rawvariants.vcf generated by HaplotypeCaller.

This error message shows up:

ERROR MESSAGE: Input files /mnt/ScratchPool/gisv71/hapmap_3.3.hg19.vcf and reference have incompatible contigs: Relative ordering of overlapping contigs differs, which is unsafe.
ERROR /mnt/ScratchPool/gisv71/hapmap_3.3.hg19.vcf contigs = [chrM, chr1, chr2, chr3, chr4, chr5, chr6, chr7, chr8, chr9, chr10, chr11, chr12, chr13, chr14, chr15, chr16, chr17, chr18, chr19, chr20, chr21, chr22, chrX, chrY]
ERROR reference contigs = [chr1, chr2, chr3, chr4, chr5, chr6, chr7, chr8, chr9, chr10, chr11, chr12, chr13, chr14, chr15, chr16, chr17, chr18, chr19, chr20, chr21, chr22, chrM, chrX, chrY, chr1_gl000191_random, chr1_gl000192_random, chr4_gl000193_random, chr4_gl000194_random, chr7_gl000195_random, chr8_gl000196_random, chr8_gl000197_random, chr9_gl000198_random, chr9_gl000199_random, chr9_gl000200_random, chr9_gl000201_random, chr11_gl000202_random, chr17_gl000203_random, chr17_gl000204_random, chr17_gl000205_random, chr17_gl000206_random, chr18_gl000207_random, chr19_gl000208_random, chr19_gl000209_random, chr21_gl000210_random]

I guess I need to fix the contig order in my hapmap resource file, don't I? But I don't know how to do it. I tried vcftools (http://vcftools.sourceforge.net/perl_module.html#vcf-sort), but it did not work. Can you give me any suggestions?
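For reference, the "relative ordering of overlapping contigs" check that fails here can be sketched in a few lines of Python (a simplified illustration, not GATK's actual code, using truncated contig lists from the error above). The usual fix is to re-sort the resource VCF so its contigs follow the reference's sequence dictionary, which Picard's SortVcf tool can do:

```python
def relative_order_compatible(contigs_a, contigs_b):
    """Check that contigs present in both lists appear in the same relative order."""
    shared = set(contigs_a) & set(contigs_b)
    order_a = [c for c in contigs_a if c in shared]
    order_b = [c for c in contigs_b if c in shared]
    return order_a == order_b

# Truncated contig lists from the error message above:
resource = ["chrM", "chr1", "chr2", "chr21", "chr22", "chrX", "chrY"]
reference = ["chr1", "chr2", "chr21", "chr22", "chrM", "chrX", "chrY"]

print(relative_order_compatible(resource, reference))  # False: chrM is ordered differently
```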

Many thanks before.

Kind regards, Angelica

Created 2014-10-29 13:07:43 | Updated 2014-10-29 13:11:42 | Tags: applyrecalibration tribble variantrecalibration gatk3
Comments (2)

Hi there,

I'm having a problem using GATK's ApplyRecalibration tool using this command:

java -jar "GenomeAnalysisTK.jar" \
    -T "ApplyRecalibration" \
    -R human_g1k_v37.fasta \
    -input fileContents.vcf \
    --ts_filter_level "99.9" \
    --mode "SNP" \
    --recal_file dataset_1139.dat_0.recal \
    --tranches_file dataset_1140.dat_0.tranches \
    --out dataset_1142.dat_0.vcf

This is the error:

ERROR ------------------------------------------------------------------------------------------
ERROR A USER ERROR has occurred (version 3.3-0-g37228af):
ERROR This means that one or more arguments or inputs in your command are incorrect.
ERROR The error message below tells you what is the problem.
ERROR If the problem is an invalid argument, please check the online documentation guide
ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
ERROR Visit our website and forum for extensive documentation and answers to
ERROR commonly asked questions http://www.broadinstitute.org/gatk
ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
ERROR MESSAGE: Invalid command line: No tribble type was provided on the command line and the type of the file could not be determined dynamically. Please add an explicit type tag :NAME listing the correct type from among the supported types:
ERROR Name FeatureType Documentation
ERROR BCF2 VariantContext (this is an external codec and is not documented within GATK)
ERROR VCF VariantContext (this is an external codec and is not documented within GATK)
ERROR VCF3 VariantContext (this is an external codec and is not documented within GATK)
ERROR ------------------------------------------------------------------------------------------

I tried several solutions: I added a :NAME,VCF tag before the vcf file; I tried using another vcf file; I validated the vcf file I'm using with GATK's ValidateVariants and it passed with no problem; I tried different versions of GATK including the current version 3.3; and I tried compressing and indexing the vcf file using bgzip+tabix. But the problem still persists.

The weird thing is that in the log info I get this message before the error: INFO 14:21:11,775 ArgumentTypeDescriptor - Dynamically determined type of fileContents.vcf to be VCF

Any suggestions are appreciated.

Thanks in advance.


Created 2014-08-06 18:43:51 | Updated | Tags: variantrecalibration
Comments (3)

From GATK 2.7 to 3.2, we noticed that the FILTER field changed when using VariantRecalibration. Previously, we had filter names corresponding to the model built by VariantRecalibration. Now it's either PASS or LOW_VQSLOD.

Is there a way to have the ApplyRecalibration walker assign the tranche name as the filter, i.e. VQSRTrancheINDEL99.00to99.90, VQSRTrancheINDEL99.90to100.00, VQSRTrancheSNP99.00to99.90, VQSRTrancheSNP99.90to100.00,

instead of only LOW_VQSLOD?


Created 2013-12-31 03:59:15 | Updated | Tags: variantrecalibration nullpointerexception
Comments (3)

I am following the GATK best practices workflow on Linux. I am using chr21.fa from NCBI as the reference; the .vcf files are taken from the resource bundle, and dbsnp.vcf is from NCBI. I am working on the VariantRecalibrator step, and it gives me a NullPointerException. I am attaching the console error I got (which also contains the command) and the raw_variants.vcf generated in the previous step by the HaplotypeCaller tool, in txt format. Please let me know what the issue might be.

Created 2012-11-16 07:11:18 | Updated 2013-01-07 20:02:39 | Tags: snp indels variantrecalibration
Comments (1)

Hi All, I have used GATK UnifiedGenotyper to generate a raw.vcf file. Now I want to use GATK VQSR to get a more accurate result ,and I follow this protocol:

snp.model <- BuildErrorModelWithVQSR(raw.vcf, SNP);
indel.model <- BuildErrorModelWithVQSR(raw.vcf, INDEL);
recalibratedSNPs.rawIndels.vcf <- ApplyRecalibration(raw.vcf, snp.model, SNP);
analysisReady.vcf <- ApplyRecalibration(recalibratedSNPs.rawIndels.vcf, indel.model, INDEL);

I want to know: would it be better to separate the SNPs and INDELs before running VQSR, like this:

SNP.raw.vcf , INDEL.raw.vcf <- SeperateSNPINDEL(raw.vcf);
snp.model <- BuildErrorModelWithVQSR(SNP.raw.vcf, SNP);
indel.model <- BuildErrorModelWithVQSR( INDEL.raw.vcf, INDEL);
SNP_analysisReady.vcf <- ApplyRecalibration(SNP.raw.vcf, snp.model, SNP);
INDEL_analysisReady.vcf <- ApplyRecalibration(INDEL.raw.vcf, indel.model, INDEL);

Thanks a lot !

Created 2012-10-19 13:24:52 | Updated 2012-10-22 14:25:47 | Tags: vqsr variantrecalibration
Comments (1)

When performing VQSR, the variants in the data set are intersected with the training set. May I know whether all of the overlapping variants are used in training, or are they downsampled?

Created 2012-08-29 20:00:22 | Updated 2013-01-07 19:59:32 | Tags: non-human variantrecalibration mouse
Comments (3)

Dear all,

I was calling SNPs from mouse samples using GATK and was at the "Variant quality score recalibration" step. The VariantRecalibrator walker asked me to provide training sets for mouse SNPs. The only SNP data (for the UCSC mm9 assembly) I can find right now is dbSNP. So I tried to run VariantRecalibrator like this:

java -Xmx4g -jar GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -R Refseq.fa \
    -input snps.raw.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=6.0 snp128.vcf \
    -an QD -an HaplotypeScore -an MQRankSum -an ReadPosRankSum -an FS -an MQ -an InbreedingCoeff \
    -mode BOTH \
    -recalFile output.recal \
    -tranchesFile output.tranches \
    -rscriptFile output.plots.R

However, the program asked for more:

ERROR MESSAGE: Invalid command line: No training set found! Please provide sets of known polymorphic loci marked with the training=true ROD binding tag. For example, -resource:hapmap,VCF,known=false,training=true,truth=true,prior=12.0 hapmapFile.vcf
ERROR ------------------------------------------------------------------------------------------

I was wondering if I can change the parameters by setting both the training/truth to true:

-resource:dbsnp,known=true,training=true,truth=true,prior=6.0 snp128.vcf

or should I ignore the "-resource" option at the cost of being less accurate?

Any suggestions would be greatly appreciated.