# Variant Quality Score Recalibration (VQSR)

#### Objective

Recalibrate variant quality scores and produce a callset filtered for the desired levels of sensitivity and specificity.


#### Caveats

This document provides a typical usage example including parameter values. However, the values given may not be representative of the latest Best Practices recommendations. When in doubt, please consult the FAQ document on VQSR training sets and parameters, which overrides this document. See that document also for caveats regarding exome vs. whole-genome analysis design.

#### Steps

1. Prepare recalibration parameters for SNPs
a. Specify which call sets the program should use as resources to build the recalibration model
b. Specify which annotations the program should use to evaluate the likelihood of SNPs being real
c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches

2. Build the SNP recalibration model

3. Apply the desired level of recalibration to the SNPs in the call set

4. Prepare recalibration parameters for Indels
a. Specify which call sets the program should use as resources to build the recalibration model
b. Specify which annotations the program should use to evaluate the likelihood of Indels being real
c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches
d. Determine additional model parameters

5. Build the Indel recalibration model

6. Apply the desired level of recalibration to the Indels in the call set

### 1. Prepare recalibration parameters for SNPs

#### a. Specify which call sets the program should use as resources to build the recalibration model

For each training set, we use key-value tags to qualify whether the set contains known sites, training sites, and/or truth sites. We also use a tag to specify the prior likelihood that those sites are true (using the Phred scale).

• True sites training resource: HapMap

This resource is a SNP call set that has been validated to a very high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). We will also use these sites later on to choose a threshold for filtering variants based on sensitivity to truth sites. The prior likelihood we assign to these variants is Q15 (96.84%).

• True sites training resource: Omni

This resource is a set of polymorphic SNP sites produced by the Omni genotyping array. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).

• Non-true sites training resource: 1000G

This resource is a set of high-confidence SNP sites produced by the 1000 Genomes Project. The program will consider that the variants in this resource may contain true variants as well as false positives (truth=false), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q10 (90%).

• Known sites resource, not used in training: dbSNP

This resource is a SNP call set that has not been validated to a high degree of confidence (truth=false). The program will not use the variants in this resource to train the recalibration model (training=false). However, the program will use these to stratify output metrics such as Ti/Tv ratio by whether variants are present in dbsnp or not (known=true). The prior likelihood we assign to these variants is Q2 (36.90%).

The default prior likelihood assigned to all other variants is Q2 (36.90%). This low value reflects the fact that the philosophy of the GATK callers is to produce a large, highly sensitive callset that needs to be heavily refined through additional filtering.
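The Phred-scale priors quoted above map to probabilities as p = 1 - 10^(-Q/10). A quick sketch in plain Python (no GATK involved) that reproduces the percentages given for each resource:

```python
# Convert a Phred-scaled prior (the prior= value of a -resource tag) into
# the probability that a site in that resource is a true variant.
def phred_prior_to_prob(q: float) -> float:
    """p = 1 - 10^(-Q/10), the standard Phred scale."""
    return 1.0 - 10.0 ** (-q / 10.0)

# Priors used above: HapMap Q15, Omni Q12, 1000G Q10, dbSNP/default Q2.
for name, q in [("hapmap", 15), ("omni", 12), ("1000G", 10), ("dbsnp", 2)]:
    print(f"{name}: Q{q} -> {phred_prior_to_prob(q):.2%}")
# -> hapmap 96.84%, omni 93.69%, 1000G 90.00%, dbsnp 36.90%
```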

#### b. Specify which annotations the program should use to evaluate the likelihood of SNPs being real

These annotations are included in the information generated for each variant call by the caller. If an annotation is missing (typically because it was omitted from the calling command) it can be added using the VariantAnnotator tool.

• Coverage (DP)
Total (unfiltered) depth of coverage. Note that this statistic should not be used with exome datasets; see caveat detailed in the VQSR arguments FAQ doc.

• QualByDepth (QD)
Variant confidence (from the QUAL field) / unfiltered depth of non-reference samples.

• FisherStrand (FS)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the StrandOddsRatio (SOR) annotation.

• StrandOddsRatio (SOR)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the FisherStrand (FS) annotation.

• MappingQualityRankSumTest (MQRankSum)
The rank sum test for mapping qualities. Note that the mapping quality rank sum test cannot be calculated for sites without a mixture of reads showing both the reference and alternate alleles.

• ReadPosRankSumTest (ReadPosRankSum)
The rank sum test for the distance from the end of the reads. If the alternate allele is only seen near the ends of reads, this is indicative of error. Note that the read position rank sum test cannot be calculated for sites without a mixture of reads showing both the reference and alternate alleles.

• RMSMappingQuality (MQ)
Estimation of the overall mapping quality of reads supporting a variant call.

• InbreedingCoeff
Evidence of inbreeding in a population. See caveats regarding population size and composition detailed in the VQSR arguments FAQ doc.

#### c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches

• First tranche threshold 100.0

• Second tranche threshold 99.9

• Third tranche threshold 99.0

• Fourth tranche threshold 90.0

Tranches are essentially slices of variants, ranked by VQSLOD, bounded by the threshold values specified in this step. The threshold values themselves refer to the sensitivity we can obtain when we apply them to the call sets that the program uses to train the model. The idea is that the lowest tranche is highly specific but less sensitive (there are very few false positives but potentially many false negatives, i.e. missing calls), and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. This allows us to filter variants based on how sensitive we want the call set to be, rather than applying hard filters and then only evaluating how sensitive the call set is using post hoc methods.
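The tranche mechanism can be sketched in a few lines: rank the truth-set variants by VQSLOD and read off the score that retains the desired fraction. The scores below are invented for illustration; the real tool derives them from its trained model.

```python
# Sketch of the tranche idea: rank truth-set variants by VQSLOD, then read
# off the score that retains the desired fraction (the tranche threshold).
def tranche_cutoff(truth_vqslods, sensitivity):
    """VQSLOD cutoff that keeps `sensitivity` (0-1) of the truth sites."""
    ranked = sorted(truth_vqslods, reverse=True)
    keep = max(1, int(round(sensitivity * len(ranked))))
    return ranked[keep - 1]

# Invented VQSLOD scores for ten truth-set variants:
truth = [9.1, 7.4, 6.0, 5.2, 4.8, 3.3, 2.1, 0.5, -1.2, -4.0]
for s in (0.90, 0.99, 1.00):
    print(f"{s:.0%} truth sensitivity -> VQSLOD cutoff {tranche_cutoff(truth, s)}")
```

Lower sensitivity targets give higher (stricter) VQSLOD cutoffs, which is exactly why the 90.0 tranche is the most specific and the 100.0 tranche the most sensitive.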

### 2. Build the SNP recalibration model

#### Action

Run the following GATK command:

java -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R reference.fa \
-input raw_variants.vcf \
-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap.vcf \
-resource:omni,known=false,training=true,truth=true,prior=12.0 omni.vcf \
-resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.vcf \
-an DP \
-an QD \
-an FS \
-an SOR \
-an MQ \
-an MQRankSum \
-an ReadPosRankSum \
-an InbreedingCoeff \
-mode SNP \
-tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
-recalFile recalibrate_SNP.recal \
-tranchesFile recalibrate_SNP.tranches \
-rscriptFile recalibrate_SNP_plots.R

#### Expected Result

This creates several files. The most important file is the recalibration report, called recalibrate_SNP.recal, which contains the recalibration data. This is what the program will use in the next step to generate a VCF file in which the variants are annotated with their recalibrated quality scores. There is also a file called recalibrate_SNP.tranches, which contains the quality score thresholds corresponding to the tranches specified in the original command. Finally, if your installation of R and the other required libraries was done correctly, you will also find some PDF files containing plots. These plots illustrate the distribution of variants according to certain dimensions of the model.

For detailed instructions on how to interpret these plots, please refer to the VQSR method documentation and presentation videos.

### 3. Apply the desired level of recalibration to the SNPs in the call set

#### Action

Run the following GATK command:

java -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference.fa \
-input raw_variants.vcf \
-mode SNP \
--ts_filter_level 99.0 \
-recalFile recalibrate_SNP.recal \
-tranchesFile recalibrate_SNP.tranches \
-o recalibrated_snps_raw_indels.vcf

#### Expected Result

This creates a new VCF file, called recalibrated_snps_raw_indels.vcf, which contains all the original variants from the original raw_variants.vcf file, but now the SNPs are annotated with their recalibrated quality scores (VQSLOD) and are either marked PASS or tagged with a tranche name in the FILTER field, depending on whether or not they are included in the selected tranche.

Here we are taking the second lowest of the tranches specified in the original recalibration command. This means that we are applying to our data set the level of sensitivity that would allow us to retrieve 99% of true variants from the truth training sets of HapMap and Omni SNPs. If we wanted to be more specific (and therefore have less risk of including false positives, at the risk of missing real sites) we could take the very lowest tranche, which would only retrieve 90% of the truth training sites. If we wanted to be more sensitive (and therefore less specific, at the risk of including more false positives) we could take the higher tranches. In our Best Practices documentation, we recommend taking the second highest tranche (99.9%) which provides the highest sensitivity you can get while still being acceptably specific.

### 4. Prepare recalibration parameters for Indels

#### a. Specify which call sets the program should use as resources to build the recalibration model

For each training set, we use key-value tags to qualify whether the set contains known sites, training sites, and/or truth sites. We also use a tag to specify the prior likelihood that those sites are true (using the Phred scale).

• Known and true sites training resource: Mills

This resource is an Indel call set that has been validated to a high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).

The default prior likelihood assigned to all other variants is Q2 (36.90%). This low value reflects the fact that the philosophy of the GATK callers is to produce a large, highly sensitive callset that needs to be heavily refined through additional filtering.

#### b. Specify which annotations the program should use to evaluate the likelihood of Indels being real

These annotations are included in the information generated for each variant call by the caller. If an annotation is missing (typically because it was omitted from the calling command) it can be added using the VariantAnnotator tool.

• Coverage (DP)
Total (unfiltered) depth of coverage. Note that this statistic should not be used with exome datasets; see caveat detailed in the VQSR arguments FAQ doc.

• QualByDepth (QD)
Variant confidence (from the QUAL field) / unfiltered depth of non-reference samples.

• FisherStrand (FS)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the StrandOddsRatio (SOR) annotation.

• StrandOddsRatio (SOR)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the FisherStrand (FS) annotation.

• MappingQualityRankSumTest (MQRankSum)
The rank sum test for mapping qualities. Note that the mapping quality rank sum test cannot be calculated for sites without a mixture of reads showing both the reference and alternate alleles.

• ReadPosRankSumTest (ReadPosRankSum)
The rank sum test for the distance from the end of the reads. If the alternate allele is only seen near the ends of reads, this is indicative of error. Note that the read position rank sum test cannot be calculated for sites without a mixture of reads showing both the reference and alternate alleles.

• InbreedingCoeff
Evidence of inbreeding in a population. See caveats regarding population size and composition detailed in the VQSR arguments FAQ doc.

#### c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches

• First tranche threshold 100.0

• Second tranche threshold 99.9

• Third tranche threshold 99.0

• Fourth tranche threshold 90.0

Tranches are essentially slices of variants, ranked by VQSLOD, bounded by the threshold values specified in this step. The threshold values themselves refer to the sensitivity we can obtain when we apply them to the call sets that the program uses to train the model. The idea is that the lowest tranche is highly specific but less sensitive (there are very few false positives but potentially many false negatives, i.e. missing calls), and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. This allows us to filter variants based on how sensitive we want the call set to be, rather than applying hard filters and then only evaluating how sensitive the call set is using post hoc methods.

#### d. Determine additional model parameters

• Maximum number of Gaussians (-maxGaussians) 4

This is the maximum number of Gaussians (i.e. clusters of variants that have similar properties) that the program should try to identify when it runs the variational Bayes algorithm that underlies the machine learning method. In essence, this limits the number of different "profiles" of variants that the program will try to identify. This number should only be increased for datasets that include very many variants.
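To give a feel for what capping the number of Gaussians means, here is a toy one-dimensional mixture fit by plain EM with the component count capped, loosely analogous to --maxGaussians. The real tool fits a variational Bayes mixture over many annotation dimensions at once; the data values and initialization below are invented purely for illustration.

```python
import math

def _pdf(x, mu, sd):
    # Density of a 1-D Gaussian at x.
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def em_gmm_1d(data, max_gaussians=2, iters=50):
    """Fit up to `max_gaussians` 1-D Gaussians by EM; returns (weight, mean, sd) triples."""
    data = sorted(data)
    n, k = len(data), max_gaussians
    w = [1.0 / k] * k
    mu = [data[(i * n) // k + n // (2 * k)] for i in range(k)]  # quantile-spread init
    sd = [1.0] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[j] * _pdf(x, mu[j], sd[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate each component from its weighted points.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
            sd[j] = max(math.sqrt(var), 1e-3)
    return sorted(zip(w, mu, sd), key=lambda c: c[1])

# Two well-separated "profiles" of invented annotation values:
values = [-0.5, -0.3, 0.1, 0.2, 0.4, 9.6, 9.8, 9.9, 10.2, 10.4]
components = em_gmm_1d(values, max_gaussians=2)
for w_, m_, s_ in components:
    print(f"weight={w_:.2f} mean={m_:.2f} sd={s_:.2f}")
```

With the cap at 2, each cluster of values gets its own Gaussian; raising the cap on such small data would just let the model chase noise, which is why small callsets want a small --maxGaussians.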

### 5. Build the Indel recalibration model

#### Action

Run the following GATK command:

java -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R reference.fa \
-input recalibrated_snps_raw_indels.vcf \
-resource:mills,known=true,training=true,truth=true,prior=12.0 mills.vcf \
-an QD \
-an DP \
-an FS \
-an SOR \
-an MQRankSum \
-an ReadPosRankSum \
-an InbreedingCoeff \
-mode INDEL \
-tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
--maxGaussians 4 \
-recalFile recalibrate_INDEL.recal \
-tranchesFile recalibrate_INDEL.tranches \
-rscriptFile recalibrate_INDEL_plots.R

#### Expected Result

This creates several files. The most important file is the recalibration report, called recalibrate_INDEL.recal, which contains the recalibration data. This is what the program will use in the next step to generate a VCF file in which the variants are annotated with their recalibrated quality scores. There is also a file called recalibrate_INDEL.tranches, which contains the quality score thresholds corresponding to the tranches specified in the original command. Finally, if your installation of R and the other required libraries was done correctly, you will also find some PDF files containing plots. These plots illustrate the distribution of variants according to certain dimensions of the model.

For detailed instructions on how to interpret these plots, please refer to the online GATK documentation.

### 6. Apply the desired level of recalibration to the Indels in the call set

#### Action

Run the following GATK command:

java -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference.fa \
-input recalibrated_snps_raw_indels.vcf \
-mode INDEL \
--ts_filter_level 99.0 \
-recalFile recalibrate_INDEL.recal \
-tranchesFile recalibrate_INDEL.tranches \
-o recalibrated_variants.vcf

#### Expected Result

This creates a new VCF file, called recalibrated_variants.vcf, which contains all the original variants from the original recalibrated_snps_raw_indels.vcf file, but now the Indels are also annotated with their recalibrated quality scores (VQSLOD) and are either marked PASS or tagged with a tranche name in the FILTER field, depending on whether or not they are included in the selected tranche.

Here we are taking the second lowest of the tranches specified in the original recalibration command. This means that we are applying to our data set the level of sensitivity that would allow us to retrieve 99% of true variants from the Mills truth training set of indels. If we wanted to be more specific (and therefore have less risk of including false positives, at the risk of missing real sites) we could take the very lowest tranche, which would only retrieve 90% of the truth training sites. If we wanted to be more sensitive (and therefore less specific, at the risk of including more false positives) we could take the higher tranches. In our Best Practices documentation, we recommend taking the second highest tranche (99.9%), which provides the highest sensitivity you can get while still being acceptably specific.

### VQSR training sets and arguments FAQ

This document describes the resource datasets and arguments that we recommend for use in the two steps of VQSR (i.e. the successive application of VariantRecalibrator and ApplyRecalibration), based on our work with human genomes, to comply with the GATK Best Practices. The recommendations detailed in this document take precedence over any others you may see elsewhere in our documentation (e.g. in Tutorial articles, which are only meant to illustrate usage, or in past presentations, which may be out of date).

The document covers:

• Explanation of resource datasets
• Important notes about exome experiments
• Argument recommendations for VariantRecalibrator
• Argument recommendations for ApplyRecalibration

These recommendations are valid for use with calls generated by both the UnifiedGenotyper and HaplotypeCaller. In the past we made a distinction in how we processed the calls from these two callers, but now we treat them the same way. These recommendations will probably not work properly on calls generated by other (non-GATK) callers.

Note that VQSR must be run twice in succession in order to build a separate error model for SNPs and INDELs (see the VQSR documentation for more details).

### Explanation of resource datasets

The human genome training, truth and known resource datasets mentioned in this document are all available from our resource bundle.

If you are working with non-human genomes, you will need to find or generate at least truth and training resource datasets with properties corresponding to those described below. To generate your own resource set, one idea is to first do an initial round of SNP calling and only use those SNPs which have the highest quality scores. These sites which have the most confidence are probably real and could be used as truth data to help disambiguate the rest of the variants in the call set. Another idea is to try using several SNP callers in addition to the UnifiedGenotyper or HaplotypeCaller, and use those sites which are concordant between the different methods as truth data. In either case, you'll need to assign your set a prior likelihood that reflects your confidence in how reliable it is as a truth set. We recommend Q10 as a starting value, which you can then experiment with to find the most appropriate value empirically. There are many possible avenues of research here. Hopefully the model reporting plots that are generated by the recalibration tools will help facilitate this experimentation.
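The first bootstrapping idea above (keep only the highest-quality calls from an initial round of calling) amounts to a simple QUAL filter over the callset. The VCF records and the cutoff fraction in this sketch are made up; in practice you would tune the fraction and validate the resulting set before trusting it as truth data.

```python
# Keep only the calls whose QUAL (VCF column 6) ranks in the top fraction
# of an initial callset, as a crude bootstrap truth set.
def top_quality_sites(vcf_lines, keep_fraction=0.1):
    """Return the data lines with the highest QUAL values."""
    records = [l for l in vcf_lines if l and not l.startswith("#")]
    records.sort(key=lambda l: float(l.split("\t")[5]), reverse=True)
    n = max(1, int(len(records) * keep_fraction))
    return records[:n]

# Simplified, invented VCF records:
vcf = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t100\t.\tA\tG\t2310.5\t.\t.",
    "1\t250\t.\tC\tT\t35.2\t.\t.",
    "1\t400\t.\tG\tA\t880.0\t.\t.",
    "1\t700\t.\tT\tC\t12.9\t.\t.",
]
print(top_quality_sites(vcf, keep_fraction=0.25))
```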

#### Resources for SNPs

• True sites training resource: HapMap
This resource is a SNP call set that has been validated to a very high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). We will also use these sites later on to choose a threshold for filtering variants based on sensitivity to truth sites. The prior likelihood we assign to these variants is Q15 (96.84%).

• True sites training resource: Omni
This resource is a set of polymorphic SNP sites produced by the Omni genotyping array. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).

• Non-true sites training resource: 1000G
This resource is a set of high-confidence SNP sites produced by the 1000 Genomes Project. The program will consider that the variants in this resource may contain true variants as well as false positives (truth=false), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q10 (90%).

• Known sites resource, not used in training: dbSNP
This resource is a call set that has not been validated to a high degree of confidence (truth=false). The program will not use the variants in this resource to train the recalibration model (training=false). However, the program will use these to stratify output metrics such as Ti/Tv ratio by whether variants are present in dbsnp or not (known=true). The prior likelihood we assign to these variants is Q2 (36.90%).

#### Resources for Indels

• Known and true sites training resource: Mills
This resource is an Indel call set that has been validated to a high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).

Some of the annotations included in the recommendations given below might not be the best for your particular dataset. In particular, the following caveats apply:

• Depth of coverage (the DP annotation invoked by Coverage) should not be used when working with exome datasets since there is extreme variation in the depth to which targets are captured! In whole genome experiments this variation is indicative of error but that is not the case in capture experiments.

• You may have seen HaplotypeScore mentioned in older documents. That is a statistic produced by UnifiedGenotyper that should only be used if you called your variants with UG. This statistic isn't produced by the HaplotypeCaller because that mathematics is already built into the likelihood function itself when calling full haplotypes with HC.

• The InbreedingCoeff is a population level statistic that requires at least 10 samples in order to be computed. For projects with fewer samples, or that include many closely related samples (such as a family), please omit this annotation from the command line.

### Important notes for exome capture experiments

In our testing we've found that in order to achieve the best exome results one needs to use an exome SNP and/or indel callset with at least 30 samples. For users with experiments containing fewer exome samples there are several options to explore:

• Add additional samples for variant calling, either by sequencing additional samples or using publicly available exome bams from the 1000 Genomes Project (this option is used by the Broad exome production pipeline). Be aware that you cannot simply add VCFs from the 1000 Genomes Project. You must either call variants from the original BAMs jointly with your own samples, or (better) use the reference model workflow to generate GVCFs from the original BAMs, and perform joint genotyping on those GVCFs along with your own samples' GVCFs with GenotypeGVCFs.

• You can also try using the VQSR with the smaller variant callset, but experiment with argument settings (try adding --maxGaussians 4 to your command line, for example). You should only do this if you are working with a non-model organism for which there are no available genomes or exomes that you can use to supplement your own cohort.

### Argument recommendations for VariantRecalibrator

The variant quality score recalibrator builds an adaptive error model using known variant sites and then applies this model to estimate the probability that each variant is a true genetic variant or a machine artifact. One major improvement from previous recommended protocols is that hand filters do not need to be applied at any point in the process now. All filtering criteria are learned from the data itself.

#### Common, base command line

This is the first part of the VariantRecalibrator command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.

java -Xmx4g -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R path/to/reference/human_g1k_v37.fasta \
-input raw.input.vcf \
-recalFile path/to/output.recal \
-tranchesFile path/to/output.tranches \
-nt 4 \
[SPECIFY TRUTH AND TRAINING SETS] \
[SPECIFY WHICH ANNOTATIONS TO USE IN MODELING] \
[SPECIFY WHICH CLASS OF VARIATION TO MODEL] \

#### SNP specific recommendations

For SNPs we use both HapMap v3.3 and the Omni chip array from the 1000 Genomes Project as training data. In addition we take the highest confidence SNPs from the project's callset. These datasets are available in the GATK resource bundle.

-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf \
-resource:omni,known=false,training=true,truth=true,prior=12.0 1000G_omni2.5.b37.sites.vcf \
-resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G_phase1.snps.high_confidence.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
-an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an SOR -an DP -an InbreedingCoeff \
-mode SNP \

Please note that these recommendations are formulated for whole-genome datasets. For exomes, we do not recommend using DP for variant recalibration (see below for details of why).

Note also that, for the above to work, the input VCF needs to be annotated with the corresponding values (QD, FS, DP, etc.). If any of these values are missing, run VariantAnnotator first so that VariantRecalibrator can run properly.

Also, using the provided sites-only truth data files is important here as parsing the genotypes for VCF files with many samples increases the runtime of the tool significantly.
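The reason sites-only files are faster is structural: a sites-only VCF record stops after the 8 fixed columns (CHROM through INFO), so the FORMAT and per-sample genotype columns never have to be parsed. A minimal sketch of the conversion, using an invented record:

```python
# Strip genotype columns from a VCF line to make it sites-only.
def to_sites_only(line):
    """Keep only the 8 fixed VCF columns; ## meta-lines pass through unchanged."""
    if line.startswith("##"):
        return line
    return "\t".join(line.split("\t")[:8])

full = "1\t100\trs123\tA\tG\t50\tPASS\tAC=2\tGT:DP\t0/1:30\t1/1:25"
print(to_sites_only(full))
```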

You may notice that these recommendations no longer include the --numBadVariants argument. That is because we have removed this argument from the tool, as the VariantRecalibrator now determines the number of variants to use for modeling "bad" variants internally based on the data.

#### Indel specific recommendations

When modeling indels with the VQSR we use a training dataset that was created at the Broad by strictly curating the (Mills, Devine, Genome Research, 2011) dataset as well as adding in very high confidence indels from the 1000 Genomes Project. This dataset is available in the GATK resource bundle.

--maxGaussians 4 \
-resource:mills,known=false,training=true,truth=true,prior=12.0 Mills_and_1000G_gold_standard.indels.b37.sites.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
-an QD -an DP -an FS -an SOR -an ReadPosRankSum -an MQRankSum -an InbreedingCoeff \
-mode INDEL \

Note that indels use a different set of annotations than SNPs. Most annotations related to mapping quality have been removed since there is a conflation with the length of an indel in a read and the degradation in mapping quality that is assigned to the read by the aligner. This covariation is not necessarily indicative of being an error in the same way that it is for SNPs.


### Argument recommendations for ApplyRecalibration

The power of the VQSR is that it assigns a calibrated probability to every putative mutation in the callset. The user is then able to decide at what point on the theoretical ROC curve their project wants to live. Some projects, for example, are interested in finding every possible mutation and can tolerate a higher false positive rate. On the other hand, some projects want to generate a ranked list of mutations that they are very certain are real and well supported by the underlying data. The VQSR provides the necessary statistical machinery to effectively apply this sensitivity/specificity tradeoff.

#### Common, base command line

This is the first part of the ApplyRecalibration command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.

java -Xmx3g -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference/human_g1k_v37.fasta \
-input raw.input.vcf \
-tranchesFile path/to/input.tranches \
-recalFile path/to/input.recal \
-o path/to/output.recalibrated.filtered.vcf \
[SPECIFY THE DESIRED LEVEL OF SENSITIVITY TO TRUTH SITES] \
[SPECIFY WHICH CLASS OF VARIATION WAS MODELED] \

#### SNP specific recommendations

For SNPs we use HapMap 3.3 and the Omni 2.5M chip as our truth set. We typically seek to achieve 99.5% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.

--ts_filter_level 99.5 \
-mode SNP \

#### Indel specific recommendations

For indels we use the Mills / 1000 Genomes indel truth set described above. We typically seek to achieve 99.0% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.

--ts_filter_level 99.0 \
-mode INDEL \


### Overview of the VQSR method

This document describes what Variant Quality Score Recalibration (VQSR) is designed to do, and outlines how it works under the hood. For command-line examples and recommendations on what specific resource datasets and arguments to use for VQSR, please see the FAQ document on VQSR training sets and arguments.

As a complement to this document, we encourage you to watch the workshop videos available on our Events webpage.

Slides that explain the VQSR methodology in more detail as well as the individual component variant annotations can be found in the GSA Public Drop Box.

Detailed information about command line options for VariantRecalibrator and ApplyRecalibration can be found in the respective tool documentation.

### Introduction

The purpose of variant recalibration is to assign a well-calibrated probability to each variant call in a call set. This enables you to generate highly accurate call sets by filtering based on this single estimate for the accuracy of each call.

The approach taken by variant quality score recalibration is to develop a continuous, covarying estimate of the relationship between SNP call annotations (QD, SB, HaplotypeScore, HRun, for example) and the probability that a SNP is a true genetic variant versus a sequencing or data processing artifact. This model is determined adaptively based on "true sites" provided as input (typically HapMap 3 sites and those sites found to be polymorphic on the Omni 2.5M SNP chip array, for humans). This adaptive error model can then be applied to both known and novel variation discovered in the call set of interest to evaluate the probability that each call is real. The score that gets added to the INFO field of each variant is called the VQSLOD. It is the log odds ratio of being a true variant versus being false under the trained Gaussian mixture model.

The variant recalibrator contrastively evaluates variants in a two-step process, each step performed by a distinct tool:

• VariantRecalibrator
Create a Gaussian mixture model by looking at the annotation values over a high-quality subset of the input call set, and then evaluate all input variants against that model. This step produces a recalibration file.

• ApplyRecalibration
Apply the model parameters to each variant in the input VCF files, producing a recalibrated VCF file in which each variant is annotated with its VQSLOD value. In addition, this step filters the calls based on this new LOD score, marking the FILTER field of variants that don't meet the specified LOD threshold.

Please see the VQSR tutorial for step-by-step instructions on running these tools.

#### How VariantRecalibrator works in a nutshell

The tool takes the overlap of the training/truth resource sets and of your callset. It models the distribution of these variants relative to the annotations you specified, and attempts to group them into clusters. Then it uses the clustering to assign VQSLOD scores to all variants. Variants that are closer to the heart of a cluster will get a higher score than variants that are outliers.

#### How ApplyRecalibration works in a nutshell

During the first part of the recalibration process, variants in your callset were given a score called VQSLOD. At the same time, variants in your training sets were also ranked by VQSLOD. When you specify a tranche sensitivity threshold with ApplyRecalibration, expressed as a percentage (e.g. 99.9%), the program looks up the VQSLOD value above which 99.9% of the variants in the training callset are included. It then uses that VQSLOD value as a threshold to filter your variants. Variants above the threshold pass the filter, so the FILTER field will contain PASS. Variants below the threshold are filtered out; they are still written to the output file, but the FILTER field holds the name of the tranche they belong to. So VQSRTrancheSNP99.90to100.00 means that the variant fell in the range of VQSLODs corresponding to the remaining 0.1% of the training set, which are basically considered false positives.
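The threshold mechanics described above can be sketched as follows. This is a toy illustration, not GATK code: the scores, the helper names, and the hard-coded tranche name are all invented for the example.

```python
def threshold_for_sensitivity(truth_vqslods, sensitivity):
    """Return the VQSLOD above which `sensitivity` of truth variants remain."""
    ranked = sorted(truth_vqslods, reverse=True)
    keep = max(1, round(sensitivity * len(ranked)))
    return ranked[keep - 1]

def apply_filter(callset_vqslods, cutoff, tranche_name="VQSRTrancheSNP99.90to100.00"):
    """Label each call PASS, or with the name of the failing tranche."""
    return ["PASS" if v >= cutoff else tranche_name for v in callset_vqslods]

# Invented VQSLOD scores for ten truth-set variants.
truth = [9.1, 8.4, 7.7, 6.2, 5.0, 4.4, 3.1, 2.0, 1.2, -4.0]
cutoff = threshold_for_sensitivity(truth, 0.90)  # keep 90% of truth sites
print(cutoff)
print(apply_filter([8.0, 0.5, 3.0], cutoff))
```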

## Interpretation of the Gaussian mixture model plots

The variant recalibration step fits a Gaussian mixture model to the contextual annotations given to each variant. By fitting this probability model to the training variants (variants considered to be true-positives), a probability can be assigned to the putative novel variants (some of which will be true-positives, some of which will be false-positives). It is useful for users to see how the probability model was fit to their data. Therefore a modeling report is automatically generated each time VariantRecalibrator is run (in the above command line the report will appear as path/to/output.plots.R.pdf). For every pair-wise combination of annotations used in modeling, a 2D projection of the Gaussian mixture model is shown.
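For a single Gaussian component, projecting onto two annotation dimensions is exact: keep the matching entries of the mean vector and the 2x2 block of the covariance matrix (the report marginalizes the full mixture by random sampling, but the idea is the same). A sketch with invented values:

```python
def marginal_2d(mean, cov, i, j):
    """2D marginal of a multivariate Gaussian: keep the matching means
    and the 2x2 block of the covariance matrix."""
    m = [mean[i], mean[j]]
    c = [[cov[i][i], cov[i][j]], [cov[j][i], cov[j][j]]]
    return m, c

# Invented 3-annotation Gaussian component (the dimensions could stand for
# QD, MQRankSum, HaplotypeScore; all numbers are made up).
mean = [15.0, 0.0, 1.2]
cov = [[16.0, 1.0, 0.5],
       [1.0, 4.0, 0.2],
       [0.5, 0.2, 0.9]]

m2, c2 = marginal_2d(mean, cov, 1, 2)  # project onto dimensions 1 and 2
print(m2, c2)
```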

The figure shows one page of an example Gaussian mixture model report that is automatically generated by the VQSR from the example HiSeq call set. This page shows the 2D projection of mapping quality rank sum test versus Haplotype score by marginalizing over the other annotation dimensions in the model.

In each page there are four panels which show different ways of looking at the 2D projection of the model. The upper left panel shows the probability density function that was fit to the data. The 2D projection was created by marginalizing over the other annotation dimensions in the model via random sampling. Green areas show locations in the space that are indicative of being high quality while red areas show the lowest probability areas. In general putative SNPs that fall in the red regions will be filtered out of the recalibrated call set.

The remaining three panels give scatter plots in which each SNP is plotted in the two annotation dimensions as points in a point cloud. The scale for each dimension is in normalized units. The data for the three panels is the same but the points are colored in different ways to highlight different aspects of the data. In the upper right panel SNPs are colored black and red to show which SNPs are retained and filtered, respectively, by applying the VQSR procedure. The red SNPs didn't meet the given truth sensitivity threshold and so are filtered out of the call set. The lower left panel colors SNPs green, grey, and purple to give a sense of the distribution of the variants used to train the model. The green SNPs are those which were found in the training sets passed into the VariantRecalibrator step, while the purple SNPs are those which were found to be furthest away from the learned Gaussians and thus given the lowest probability of being true. Finally, the lower right panel colors each SNP by their known/novel status with blue being the known SNPs and red being the novel SNPs. Here the idea is to see if the annotation dimensions provide a clear separation between the known SNPs (most of which are true) and the novel SNPs (most of which are false).

An example of good clustering for SNP calls from the tutorial dataset is shown to the right. The plot shows that the training data forms a distinct cluster at low values for each of the two statistics shown (haplotype score and mapping quality bias). As the SNPs fall off the distribution in either one or both of the dimensions they are assigned a lower probability (that is, move into the red region of the model's PDF) and are filtered out. This makes sense as not only do higher values of HaplotypeScore indicate a lower chance of the data being explained by only two haplotypes but also higher values for mapping quality bias indicate more evidence of bias between the reference bases and the alternative bases. The model has captured our intuition that this area of the distribution is highly enriched for machine artifacts and putative variants here should be filtered out!

### Tranches and the tranche plot

The recalibrated variant quality score provides a continuous estimate of the probability that each variant is true, allowing one to partition the call sets into quality tranches. The main purpose of the tranches is to establish thresholds within your data that correspond to certain levels of sensitivity relative to the truth sets. The idea is that with well calibrated variant quality scores, you can generate call sets in which each variant doesn't have to have a hard answer as to whether it is in or out of the set. If a very high accuracy call set is desired then one can use the highest tranche, but if a larger, more complete call set is a higher priority then one can dip down into lower and lower tranches. These tranches are applied to the output VCF file using the FILTER field. In this way you can choose to use some of the filtered records or only use the PASSing records.

The first tranche (from the bottom, with the lowest values) is exceedingly specific but less sensitive, and each subsequent tranche introduces additional true positive calls along with a growing number of false positive calls. Downstream applications can select more specific or more sensitive call sets in a principled way, or use the recalibrated quality scores directly to weight individual variant calls by their probability of being real, avoiding the need to analyze only a fixed subset of calls. An example tranche plot, automatically generated by the VariantRecalibrator walker, is shown below.
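The way each successively looser tranche adds true positives along with a growing number of false positives can be illustrated with a small sketch (invented scores and truth labels, not GATK code):

```python
def tranche_counts(vqslods, is_true, cutoffs):
    """For each VQSLOD cutoff (strictest first), count how many true and
    false calls are included at or above that cutoff."""
    rows = []
    for c in sorted(cutoffs, reverse=True):
        tp = sum(1 for v, t in zip(vqslods, is_true) if v >= c and t)
        fp = sum(1 for v, t in zip(vqslods, is_true) if v >= c and not t)
        rows.append((c, tp, fp))
    return rows

# Invented callset: VQSLOD scores and whether each call is actually real.
scores = [9.0, 7.5, 6.0, 4.0, 3.5, 1.0, 0.5, -2.0]
labels = [True, True, True, True, False, True, False, False]

for cutoff, tp, fp in tranche_counts(scores, labels, [6.0, 3.0, 0.0]):
    print(cutoff, tp, fp)  # looser cutoffs add both true and false calls
```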

This is an example of a tranches plot generated for a HiSeq call set. The x-axis gives the number of novel variants called while the y-axis shows two quality metrics -- novel transition to transversion ratio and the overall truth sensitivity.

Note that the tranches plot is not applicable for indels.

### Ti/Tv-free recalibration

We use a Ti/Tv-free approach to variant quality score recalibration. This approach requires an additional truth data set, and cuts the VQSLOD at given sensitivities to the truth set. It has several advantages over the Ti/Tv-targeted approach:

• The truth sensitivity (TS) approach gives you back the novel Ti/Tv as a QC metric
• The truth sensitivity (TS) approach is conceptually cleaner than deciding on a novel Ti/Tv target for your dataset
• The TS approach is easier to explain and defend, as saying "I took called variants until I found 99% of my known variable sites" is easier than "I took variants until I dropped my novel Ti/Tv ratio to 2.07"

We have used HapMap 3.3 sites as the truth set (genotypes_r27_nr.b37_fwd.vcf), but other high-quality (~99% truly variable in the population) sets of sites should work just as well. In our experience with HapMap, 99% is a good threshold, as the remaining 1% of sites often exhibit unusual features, like being close to indels or actually being MNPs, and so receive a low VQSLOD score.
Note that the expected Ti/Tv is still an available argument but it is only used for display purposes.

### Finally, a couple of Frequently Asked Questions

#### - Can I use the variant quality score recalibrator with my small sequencing experiment?

This tool is expecting thousands of variant sites in order to achieve decent modeling with the Gaussian mixture model. Whole exome call sets work well, but anything smaller than that scale might run into difficulties.

One piece of advice is to turn down the number of Gaussians used during training. This can be accomplished by adding --maxGaussians 4 to your command line.

maxGaussians is the maximum number of different "clusters" (=Gaussians) of variants the program is "allowed" to try to identify. Lowering this number forces the program to group variants into a smaller number of clusters, which means there will be more variants in each cluster -- hopefully enough to satisfy the statistical requirements. Of course, this decreases the level of discrimination that you can achieve between variant profiles/error modes. It's all about trade-offs; and unfortunately if you don't have a lot of variants you can't afford to be very demanding in terms of resolution.
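As a back-of-the-envelope illustration of this trade-off (the function and the per-cluster minimum below are invented for the example, not a GATK heuristic):

```python
def max_supportable_gaussians(n_variants, min_per_gaussian=300):
    """Crude illustration: each Gaussian needs enough variants to estimate
    its mean and covariance stably. min_per_gaussian is an invented number,
    not a GATK default."""
    return max(1, n_variants // min_per_gaussian)

print(max_supportable_gaussians(100_000))  # whole-exome scale: room for many clusters
print(max_supportable_gaussians(1_200))    # small panel: only a handful of clusters
```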

#### - Why don't all the plots get generated for me?

The most common cause is not having Rscript accessible in your environment path. Rscript is the command-line version of R that gets installed alongside R itself. We also make use of the ggplot2 library, so please be sure to install that package as well.

GATK 3.2 was released on July 14, 2014. Itemized changes are listed below. For more details, see the user-friendly version highlights.

We also want to take this opportunity to thank super-user Phillip Dexheimer for all of his excellent contributions to the codebase, especially for this release.

## Haplotype Caller

• Various improvements were made to the assembly engine and likelihood calculation, which leads to more accurate genotype likelihoods (and hence better genotypes).
• Reads are now realigned to the most likely haplotype before being used by the annotations, so AD and DP will now correspond directly to the reads that were used to generate the likelihoods.
• The caller is now more conservative in low complexity regions, which significantly reduces false positive indels at the expense of a little sensitivity; mostly relevant for whole genome calling.
• Small performance optimizations to the function to calculate the log of exponentials and to the Smith-Waterman code (thanks to Nigel Delaney).
• Fixed small bug where indel discovery was inconsistent based on the active-region size.
• Removed scary warning messages for "VectorPairHMM".
• Made VECTOR_LOGLESS_CACHING the default implementation for PairHMM.
• When we subset PLs because alleles are removed during genotyping we now also subset the AD.
• Fixed bug where reference sample depth was dropped in the DP annotation.

## Variant Recalibrator

• The -mode argument is now required.
• The plotting script now uses the theme instead of opt functions to work with recent versions of the ggplot2 R library.

## AnalyzeCovariates

• The plotting script now uses the theme instead of opt functions to work with recent versions of the ggplot2 R library.

## Variant Annotator

• SB tables are created even if the ref or alt columns have no counts (used in the FS and SOR annotations).

## Genotype GVCFs

• Added missing arguments so that it now models more closely what's available in the Haplotype Caller.
• Fixed recurring error about missing PLs.
• No longer pulls the headers from all input rods including dbSNP, rather just from the input variants.
• --includeNonVariantSites should now be working.

## Select Variants

• The dreaded "Invalid JEXL expression detected" error is now a kinder user error.

## Indel Realigner

• Now throws a user error when it encounters reads with I operators greater than the number of read bases.
• Fixed bug where reads that are all insertions (e.g. 50I) were causing it to fail.

## CalculateGenotypePosteriors

• Now computes posterior probabilities only for SNP sites with SNP priors (other sites have flat priors applied).
• Now computes genotype posteriors using likelihoods from all members of the trio.
• Added annotations for calling potential de novo mutations.
• Now uses PP tag instead of GP tag because posteriors are Phred-scaled.

## Cat Variants

• Can now process .list files with -V.
• Can now handle BCF and Block-Compressed VCF files.

## Validate Variants

• Now works with gVCF files.
• By default, all strict validations are performed; use --validationTypeToExclude to exclude specific tests.

## FastaAlternateReferenceMaker

• Now use '--use_IUPAC_sample sample_name' to specify which sample's genotypes should be used for the IUPAC encoding with multi-sample VCF files.

## Miscellaneous

• Refactored maven directories and java packages replacing "sting" with "gatk".
• Extended on-the-fly sample renaming feature to VCFs with the --sample_rename_mapping_file argument.
• Added a new read transformer that refactors NDN cigar elements to one N element.
• Now a Tabix index is created for block-compressed output formats.
• Switched outputRoot in SplitSamFile to an empty string instead of null (thanks to Carlos Barroto).
• Enabled the AB annotation in the reference model pipeline (thanks to John Wallace).
• We now check that output files are specified in a writeable location.
• We now allow blank lines in a (non-BAM) list file.
• Added legibility improvements to the Progress Meter.
• Allow for non-tab whitespace in sample names when performing on-the-fly sample-renaming (thanks to Mike McCowan).
• Made IntervalSharder respect the IntervalMergingRule specified on the command line.
• Sam, tribble, and variant jars updated to version 1.109.1722; htsjdk updated to version 1.112.1452.

GATK 3.1 was released on March 18, 2014. Highlights are listed below. Read the detailed version history overview here: http://www.broadinstitute.org/gatk/guide/version-history

## Haplotype Caller

• Added new capabilities to the Haplotype Caller to use hardware-based optimizations. Can be enabled with --pair_hmm_implementation VECTOR_LOGLESS_CACHING. Please see the 3.1 Version Highlights for more details about expected speed ups and some background on the collaboration that made these possible.
• Fixed bugs in computing the weights of edges in the assembly graph. This was causing bad genotypes to be output when running the Haplotype Caller over multiple samples simultaneously (as opposed to creating gVCFs in the new recommended pipeline, which was working as expected).

## Variant Recalibrator

• Fixed issue where output could be non-deterministic with very large data sets.

## CalculateGenotypePosteriors

• Fixed several bugs where bad input was causing the tool to crash instead of exiting gracefully with an error message.

## Miscellaneous

• RandomlySplitVariants can now output splits comprised of more than 2 output files.
• FastaAlternateReferenceMaker can now output heterozygous sites using IUPAC ambiguity encoding.
• Picard, Tribble, and Variant jars updated to version 1.109.1722.

GATK 2.8 was released on December 6, 2013. Highlights are listed below. Read the detailed version history overview here: http://www.broadinstitute.org/gatk/guide/version-history

Note that this release is relatively smaller than previous ones. We are working hard on some new tools and frameworks that we are hoping to make available to everyone for our next release.

## Unified Genotyper

• Fixed bug where indels in very long reads were sometimes being ignored and not used by the caller.

## Haplotype Caller

• Improved the indexing scheme for gVCF outputs using the reference calculation model.
• The reference calculation model now works with reduced reads.
• Fixed bug where an error was being generated at certain homozygous reference sites because the whole assembly graph was getting pruned away.
• Fixed bug for homozygous reference records that aren't GVCF blocks and were being treated incorrectly.

## Variant Recalibrator

• Disable tranche plots in INDEL mode.
• Various VQSR optimizations in both runtime and accuracy. In particular:
  - for very large whole-genome datasets with over 2M variants overlapping the training data, the training set used to build the model is randomly downsampled;
  - annotations are ordered by the difference in means between known and novel variants instead of by their standard deviation;
  - the training set quality score threshold was removed;
  - the negative model now uses 2 Gaussians by default;
  - the numBad argument has been removed, and the cutoffs are now chosen by the model itself by looking at the LOD scores.

• Fixed bug where mapping quality was being treated as a byte instead of an int, which caused high MQs to be treated as negative.

## Diagnose Targets

• Added calculation for GC content.
• Added an option to filter the bases based on their quality scores.

## Combine Variants

• Fixed bug where annotation values were parsed as Doubles when they should be parsed as Integers due to implicit conversion; submitted by Michael McCowan.

## Select Variants

• Changed the behavior for PL/AD fields when it encounters a record that has lost one or more alternate alleles: instead of stripping them out these fields now get fixed.

## Miscellaneous

• SplitSamFile now produces an index with the BAM.
• Length metric updates to QualifyMissingIntervals.
• Provide close methods to clean up resources used while creating AlignmentContexts from BAM file regions; submitted by Brad Chapman.
• Picard jar updated to version 1.104.1628.
• Tribble jar updated to version 1.104.1628.
• Variant jar updated to version 1.104.1628.

GATK 2.7 was released on August 21, 2013. Highlights are listed below. Read the detailed version history overview here: http://www.broadinstitute.org/gatk/guide/version-history

• Changed the underlying convention of having unstranded reduced reads; instead there are now at least 2 compressed reads at every position, one for each strand (forward and reverse). This allows us to maintain strand information that is useful for downstream filtering.
• Fixed bug where representative depths were arbitrarily being capped at 127 (instead of the expected 255).
• Fixed bug where insertions downstream of a variant region weren't triggering a stop to the compression.
• Fixed bug when using --cancer_mode where alignments were being emitted out of order (and causing the tool to fail).

## Unified Genotyper

• Added --onlyEmitSamples argument that, when provided, instructs the caller to emit only the selected samples into the VCF (even though the calling is performed over all samples present in the provided bam files).
• FPGA support was added to the underlying HMM that is automatically used when the appropriate hardware is available on the machine.
• Added a (very) experimental argument (allSitePLs) that will have the caller emit PLs for all sites (including reference sites). Note that this does not give a fully accurate reference model because it models only SNPs. For proper handling of the reference model, please use the Haplotype Caller.

## Haplotype Caller

• Added a still somewhat experimental PCR indel error model to the Haplotype Caller. By default this modeling is turned on and is very useful for removing false positive indel calls associated with PCR slippage around short tandem repeats (esp. homopolymers). Users have the option (with the --pcr_indel_model argument) of turning it off or making it even more aggressive (at the expense of losing some true positives too).
• Added the ability to emit accurate likelihoods for non-variant positions (i.e. what we call a "reference model" that incorporates indels as well as SNP confidences at every position). The output format can be either a record for every position or use the gVCF style recording of blocks. See the --emitRefConfidence argument for more details; note that this replaces the use of "--output_mode EMIT_ALL_SITES" in the HaplotypeCaller.
• Improvements to the internal likelihoods that are generated by the Haplotype Caller. Specifically, this tool now uses a tri-state correction like the Unified Genotyper, corrects for overlapping read pairs (from the same underlying fragment), and does not run contamination removal (allele-biased downsampling) by default.
• Several small runtime performance improvements were added (although we are still hard at work on larger improvements that will allow calling to scale to many samples; we're just not there yet).
• Fixed bug in how adapter clipping was performed (we now clip only after reverting soft-clipped bases).
• FPGA support was added to the underlying HMM that is automatically used when the appropriate hardware is available on the machine.
• Improved the "dangling tail" recovery in the assembly algorithm, which allows for higher sensitivity in calling variants at the edges of coverage (e.g. near the ends of targets in an exome).
• Added the ability to run allele-biased downsampling with different per-sample values like the Unified Genotyper (contributed by Yossi Farjoun).

## Variant Annotator

• Fixed bug where only the last -comp was being annotated at a site.

## Indel Realigner

• Fixed bug that arises because of secondary alignments and that was causing the tool not to update the alignment start of the mate when a read was realigned.

## Phase By Transmission

• Fixed bug where multi-allelic records were being completely dropped by this tool. Now they are emitted unphased.

## Variant Recalibrator

• General improvements to the Gaussian modeling, mostly centered around separating the parameters for the positive and negative training models.
• Added mode to not emit (at all) variant records that are filtered out.
• This tool now automatically orders the annotation dimensions by their standard deviation instead of the order they were specified on the command-line in order to stabilize the training and have it produce optimal results.
• Fixed bug where the tool occasionally produced bad log10 values internally.

## Miscellaneous

• General performance improvements to the VCF reading code contributed by Michael McCowan.
• Error messages are much less verbose and "scary."
• Fixed the ReadBackedPileup class to represent mapping qualities as ints, not (signed) bytes.
• Added the engine-wide ability to do on-the-fly BAM file sample renaming at runtime (see the documentation for the --sample_rename_mapping_file argument for more details).
• Fixed bug in how the GATK counts filtered reads in the traversal output.
• Added a new tool called Qualify Intervals.
• Fixed major bug in the BCF encoding (the previous version was producing problematic files that were failing when trying to be read back into the GATK).
• Picard/sam/tribble/variant jars updated to version 1.96.1534.

Hi Geraldine, I think that this question has been asked before but I cannot find the way to fix the problem. I have just run the VariantRecalibrator tool, and I'm getting this tranches plot for SNPs (see attached file). I have understood the tranches plot in the best practices manual correctly, but I don't know what is going on here. I know that the tranches are sorted by Ti/Tv, but to me this plot makes no sense.

This is my command:

```
java -Xmx4g -jar GenomeAnalysisTK.jar \
    -T VariantRecalibrator -R human_g1k_v37.fasta \
    -input all.joinGeno.raw.vcf \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.vcf \
    -resource:omni,known=false,training=true,truth=true,prior=12.0 1000G_omni2.5.b37.vcf \
    -resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G_phase1.snps.high_confidence.b37.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp_138.b37.vcf \
    -an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -mode SNP \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
    -recalFile recalibrate_SNP.recal \
    -tranchesFile recalibrate_SNP.tranches \
    -rscriptFile recalibrate_SNP_plots.R
```

Any help would be appreciated.

Hi, I am working on variant calling for a genotype of rice and I just generated a GVCF file. Here is a snippet of the GVCF file.

```
Chr1  173923  .  C  T,<NON_REF>  40.74  .  DP=3;MLEAC=2,0;MLEAF=1.00,0.00;MQ=60.00;MQ0=0  GT:AD:DP:GQ:PL:SB  1/1:0,3,0:3:9:73,9,0,73,9,73:0,0,2,1
Chr1  247702  .  G  A,<NON_REF>  16.63  .  DP=2;MLEAC=2,0;MLEAF=1.00,0.00;MQ=29.00;MQ0=0  GT:AD:DP:GQ:PL:SB  1/1:0,2,0:2:6:48,6,0,48,6,48:0,0,0,0
Chr1  247703  .  G  <NON_REF>  .  .  END=247714  GT:DP:GQ:MIN_DP:PL  0/0:2:5:2:0,6,43
.....
Chr1  248043  .  A  C,<NON_REF>  0  .  DP=2;MLEAC=0,0;MLEAF=0.00,0.00;MQ=27.07;MQ0=0  GT:AD:DP:GQ:PL:SB  0/0:1,0,0:1:3:0,3,45,3,45,45:0,0,0,0
```

For the VQSR step, I only have the dbSNP file for rice (there are no hapmap or 1000G files), which has the following annotation:

```
CHROM  POS   ID           REF  ALT  QUAL  FILTER  INFO
Chr1   84    rs349504122  T    C    .     .       RSPOS=84;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2223  rs349642167  C    T    .     .       RSPOS=2223;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2228  rs350936653  A    C    .     .       RSPOS=2228;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2417  rs347865191  G    A    .     .       RSPOS=2417;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2546  rs349956414  A    C    .     .       RSPOS=2546;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2588  rs352913853  A    C    .     .       RSPOS=2588;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
Chr1   2634  rs348515762  C    A    .     .       RSPOS=2634;GENEINFO=4326813:Os01g0100100;dbSNPBuildID=138;SAO=0;VC=snp;VP=050000000000000000020100
```

Now, if I use -an QD for VQSR, it wont work because QD is not present in the input vcf file. So, given the two files above, what annotation can I use for VQSR step? Can I use the VQSR step at all?

And this is the setting in which I am using the dbSNP:

```
-resource:dbSNP,known=true,training=true,truth=true,prior=6.0
```

Are the settings of known/training/truth correct if dbSNP is the only resource?

I'm sorry I keep posting these questions, but this error has me baffled!

##### ERROR ------------------------------------------------------------------------------------------

Here is the input I use to get it:

```
java -d64 -Xmx8g -jar $GATKDIR/GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -input k_family_out.vcf \
    -R $REF \
    -resource:hapmap,VCF,known=false,training=true,truth=true,prior=15.0 $HAPMAP \
    -resource:omni,VCF,known=false,training=true,truth=false,prior=12.0 $OMNI \
    -resource:dbsnp,VCF,known=true,training=false,truth=false,prior=8.0 $DBSNP \
    -resource:mills,VCF,known=true,training=true,truth=true,prior=12.0 $MILLS \
    -an QD -an MQ -an MQRankSum -an ReadPosRankSum \
    -mode BOTH \
    -U ALL \
    -recalFile k_family.recal \
    -tranchesFile k_family.tranches \
    -rscriptFile k_family.R \
    -nt 6 \
    --TStranche 100.0 --TStranche 99.9 --TStranche 99.5 --TStranche 99.0 \
    --TStranche 98.0 --TStranche 97.0 --TStranche 96.0 --TStranche 95.0 --TStranche 94.0 \
    --TStranche 93.0 --TStranche 92.0 --TStranche 91.0 --TStranche 90.0
```

I thought this was similar to a multithreading error, so I reduced my pipeline to execute with only a single node, and the problem persists.

Any help will be appreciated.

Hi, I get the following error with VariantRecalibrator when I run it:

```
java -jar GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -R Os_nipponbare_v7.0_genome.fasta \
    -input ESSR.vcf \
    -mode SNP \
    -recalFile ERS468243.raw.snps.recal \
    -tranchesFile ERS468243.raw.snps.tranches \
    -rscriptFile ERS468243.recal.plots.R \
    -resource:dbSNP,known=true,training=true,truth=true,prior=6.0 all_sorted.vcf \
    -an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ
```

ERROR MESSAGE: Bad input: Values for QD annotation not detected for ANY training variant in the input callset. VariantAnnotator may be used to add these annotations.

What am I doing wrong? Am I using the wrong annotation? Thanks in advance for any help. Best, NB

I have a quick question re: your GATK Best Practice paper (Curr protoc bioinformatics). Forgive me, since I am quite new to SNP analysis; but I have tried to implement the pipeline using the main Galaxy server. I am more comfortable using Galaxy!

I had no problems until I reached the Variant Recalibrator stage. I can see from the error message that my Hapmap_3.3.hg19.vcf, the Omni one and the 1000g one (all downloaded from the GATK bundle, hg19) do not have the required annotations (DP, FS, QD, MQRankSum and ReadPosRankSum) to fulfill the instructions as per the paper.

I have read all the relevant GATK sections on your website as well, but I am unclear how to use these training sets, since they lack the annotations required. My understanding is that I would need the accompanying BAM files to fully annotate these files? I did attempt to use the Annotator function (without the BAM) on the Hapmap vcf; but I got a message saying that I hadn't enough memory (I have not encountered such an error with Galaxy before; although I haven't that much Galaxy experience)

Can you suggest where I might find such annotated training sets? (hg19) Or, how to annotate the ones from the bundle?

Hi,

I have generated vcf files using GenotypeGVCFs; each file contains variants corresponding to a different chromosome. I would like to use VQSR to perform the recalibration on all these data combined (for maximum power), but it seems that VQSR only takes a single vcf file, so I would have to combine my vcf files using CombineVariants. Looking at the documentation for CombineVariants, it seems that this tool always produces a union of vcfs. Since each vcf file is chromosome-specific, there are no identical sites across files. My questions are: Is CombineVariants indeed the appropriate tool for me to merge chromosome-specific vcf files, and is there any additional information that I should specify in the command-line when doing this? Do I need to run VariantAnnotator afterwards (I would assume not, since these vcfs were generated using GenotypeGVCFs and the best practices workflow more generally)? I just want to be completely sure that I am proceeding correctly.

Thank you very much in advance, Alva

Hi All. For some reason I'm getting this error message when running GATK 3.3 VariantRecalibrator:

Output:

##### ERROR ------------------------------------------------------------------------------------------

Input:

```
java -d64 -Xmx8g -jar $GATKDIR/GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -input b37.vcf \
    -R $REF \
    -resource:hapmap,VCF,known=false,training=true,truth=true,prior=15.0 $HAPMAP \
    -resource:omni,VCF,known=false,training=true,truth=false,prior=12.0 $OMNI \
    -resource:dbsnp,VCF,known=true,training=false,truth=false,prior=8.0 $DBSNP \
    -resource:mills,VCF,known=true,training=true,truth=true,prior=12.0 $MILLS \
    -an QD -an HaplotypeScore -an MQRankSum -an ReadPosRankSum -an MQ \
    --mode BOTH \
    -recalFile b37.recal \
    -tranchesFile b37.tranches \
    -rscriptFile b37.R \
    -nt 6 \
    --TStranche 100.0 --TStranche 99.9 --TStranche 99.5 --TStranche 99.0 \
    --TStranche 98.0 --TStranche 97.0 --TStranche 96.0 --TStranche 95.0 --TStranche 94.0 \
    --TStranche 93.0 --TStranche 92.0 --TStranche 91.0 --TStranche 90.0
```

I checked that the resource files exist, and they do. I cannot figure out what the problem is.

Any help will be much appreciated

Hi,

I ran VariantRecalibrator and ApplyRecalibration, and everything seems to have worked fine. I just have one question: in my recalibrate_SNP.recal and recalibrate_INDEL.recal files there are no reference alleles besides "N", and nothing meaningful appears in the "alt" field. Does that mean that none of my variants were recalibrated? Just wanted to be completely sure. My original file (after running GenotypeGVCFs) has the same number of variants as the recalibrated VCFs.

Thanks, Alva

The documentation on the HaplotypeScore annotation reads:

HaplotypeCaller does not output this annotation because it already evaluates haplotype segregation internally. This annotation is only informative (and available) for variants called by Unified Genotyper.

The annotation used to be part of the best practices:

I will include it in the VQSR model for UG calls from low coverage data. Is this an unwise decision? I guess this is for myself to evaluate. I thought I would ask, in case I have missed something obvious.

Hi, I'm running the VariantRecalibrator with input HC gVCF file that was created by GenotypeGVCFs on a list of 66 samples. I'm getting an error message: "MESSAGE: Line 474547: there aren't enough columns for line".

All input VCF lines contain the same number of columns. I tried the same command several times and got the same error message, each time with a different line number. Also, the same command with another input file (with fewer samples, 57) runs fine. Please advise: are there parameters I should change to handle a larger input file?

Thanks, Lily

Hello GATK team, When I use the VariantRecalibrator, I need to set the resource as

-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf

When I use VariantRecalibrator with Queue, I don't know how to set this in the scala script. In Queue, parameters are passed as (key value) pairs, but how can I express the (key:tags value) form?

While running GATK to recalibrate variants, I hit an error. I found some similar posts in the forum, but no answer that resolved it. The log is attached below.

Hi, I'm trying to run GATK recalibration pipeline to my data:

java -jar /opt/bin/GenomeAnalysisTK-3.2-0/GenomeAnalysisTK.jar -T VariantRecalibrator \
    -R ../reference/chrall.fa -input output.vcf \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 ../hapmap_3.3.hg18.vcf \
    -resource:omni,known=false,training=true,truth=true,prior=12.0 ../1000G_omni2.5.hg19.sites.vcf \
    -resource:1000G,known=false,training=true,truth=false,prior=10.0 ../1000G_phase1.indels.hg19.sites.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 ../dbsnp_138.hg19.vcf \
    -an DP -an QD -an FS -an MQRankSum -an ReadPosRankSum \
    -mode SNP \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
    -recalFile recalibrate_SNP.recal -tranchesFile recalibrate_SNP.tranches -rscriptFile recalibrate_SNP_plots.R

But I'm getting the following error:

##### ERROR

##### ERROR ------------------------------------------------------------------------------------------

Can anyone tell me what I'm doing wrong? Thanks,

Rômulo

Hi, I am running GATK on rice libraries and I get the following error with VariantRecalibrator. The command I use is:

java -jar GenomeAnalysisTK-2.8-1-g932cd3a/GenomeAnalysisTK.jar -T VariantRecalibrator \
    -R genome.fasta -input chr5.vcf -mode SNP \
    -recalFile chr5.raw.snps.recal -tranchesFile chr5.raw.snps.tranches -rscriptFile chr5.recal.plots.R \
    -resource:dbSNP,known=true,training=true,truth=true,prior=6.0 all_chromosomes.vcf \
    -an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ

This is how the program terminates:

INFO 21:42:35,099 ProgressMeter - Location processed.sites runtime per.1M.sites completed total.runtime remaining
INFO 21:42:35,102 TrainingSet - Found dbSNP track: Known = true Training = true Truth = true Prior = Q6.0
INFO 21:43:05,104 ProgressMeter - Chr11:11654383 9.83e+06 30.0 s 3.0 s 87.7% 34.0 s 4.0 s
INFO 21:43:07,413 VariantDataManager - QD: mean = 27.02 standard deviation = 7.70
INFO 21:43:07,440 VariantDataManager - MQRankSum: mean = -0.62 standard deviation = 3.35
INFO 21:43:07,479 VariantDataManager - ReadPosRankSum: mean = 0.25 standard deviation = 1.63
INFO 21:43:07,510 VariantDataManager - FS: mean = 2.11 standard deviation = 9.67
INFO 21:43:07,525 VariantDataManager - MQ: mean = 47.09 standard deviation = 11.34
INFO 21:43:07,660 VariantDataManager - Annotations are now ordered by their information content: [MQ, QD, FS, MQRankSum, ReadPosRankSum]
INFO 21:43:07,680 VariantDataManager - Training with 7845 variants after standard deviation thresholding.
INFO 21:43:09,833 VariantRecalibratorEngine - Convergence after 93 iterations!
INFO 21:43:09,872 VariantRecalibratorEngine - Evaluating full set of 272372 variants...
INFO 21:43:09,884 VariantDataManager - Training with worst 0 scoring variants --> variants with LOD <= -5.0000.
INFO 21:43:10,854 GATKRunReport - Uploaded run statistics report to AWS S3

##### ERROR ------------------------------------------------------------------------------------------

"

I hope it can be easily fixed since I desperately need my pipeline working for my current analysis
;)

My command lines are as following:

java -Xmx8g -jar $CLASSPATH/GenomeAnalysisTK.jar \
    -T VariantRecalibrator \
    -R $GenomeReference \
    -input $InputVCF \
    -nt 6 \
    -resource:mills,known=true,training=true,truth=true,prior=12.0 $resource1 \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 $resource2 \
    -an DP -an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ \
    --maxGaussians 8 \
    -mode INDEL \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
    -log $IndelsOutput/Indels.log \
    -recalFile $IndelsOutput/exome.indels.vcf.recal \
    -tranchesFile $IndelsOutput/exome.indels.tranches

##### ERROR ------------------------------------------------------------------------------------------

I don't know what's wrong with my data or command. Has anyone seen the same error? Looking forward to your help. Thanks!

I'm having an issue with VariantRecalibrator that I've faced in the past.

This is the Error which I've seen before: ##### ERROR MESSAGE: NaN LOD value assigned. Clustering with this few variants and these annotations is unsafe. Please consider raising the number of variants used to train the negative model (via --minNumBadVariants 5000, for example).

I understand it's because the total number of variants is too low, but I can't see how that could be an issue for my data set.

wc *_genotyped.vcf = 350678 93598833 1851184230. The file contains 200 background individuals + 3 target exomes.

And my command:

java -jar -T VariantRecalibrator \
    -R /data/srynearson/gatk_reference/human_g1k_v37_decoy.fasta \
    --minNumBadVariants 5000 --num_threads 30 \
    -resource:mills,known=false,training=true,truth=true,prior=12.0 /data/GATK_Bundle/Mills_and_1000G_gold_standard.indels.b37.vcf \
    -resource:1000G,known=false,training=true,truth=true,prior=10.0 /data/GATK_Bundle/1000G_phase1.indels.b37.vcf \
    -an MQRankSum -an ReadPosRankSum -an FS \
    -input /data2/srynearson/10956R/UGP_Pipeline_Results/Capture_20_genotyped.vcf \
    -recalFile /data2/srynearson/10956R/UGP_Pipeline_Results/Capture_20_indel_recal \
    -tranchesFile /data2/srynearson/10956R/UGP_Pipeline_Results/Capture_20_indel_tranches \
    -rscriptFile /data2/srynearson/10956R/UGP_Pipeline_Results/Capture_20_indel_plots.R \
    -mode INDEL

And my version: The Genome Analysis Toolkit (GATK) vnightly-2014-06-17-gc231c21, Compiled 2014/06/17 00:01:17

Is the error due to the total number of indels in the VCF file? Otherwise the "too few variants" error seems wrong.
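For context on the NaN-LOD failures in posts like this one: the recalibrator's score is a log-likelihood ratio between a positive (good-variant) model and a negative (bad-variant) model, so if the negative model ends up with no usable training variants its parameters are undefined and every score degenerates. A toy 1-D sketch, with single Gaussians standing in for GATK's Gaussian mixtures (this is an illustration, not GATK code):

```python
import math

# log10 of the normal density N(mean, sd) at x
def log10_gaussian(x, mean, sd):
    return (-0.5 * ((x - mean) / sd) ** 2) / math.log(10) \
        - math.log10(sd * math.sqrt(2.0 * math.pi))

# VQSLOD-style score: likelihood ratio of positive vs negative model
def vqslod_like(x, positive, negative):
    return log10_gaussian(x, *positive) - log10_gaussian(x, *negative)

# With both models trained, the score is an ordinary finite number:
score = vqslod_like(2.0, positive=(0.0, 1.0), negative=(5.0, 2.0))

# If the negative model had no training variants, its parameters are
# undefined (NaN here), and every score degenerates to NaN:
broken = vqslod_like(2.0, positive=(0.0, 1.0), negative=(float("nan"), 1.0))
```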

I'm trying to run variant recalibration for SNPs as part of the GATK Best Practices workflow for DNA sequence analysis, but am getting the following error, presumably because after training there are 0 variants with an LOD score <= -5.0.

The error is:

##### ERROR stack trace

I am using GATK 3.2-2. The full output from this stage is attached.

I also got a similar error with GATK 3.1. I have run it a few times now, and although there are very slight variations in the numbers it outputs, the result is consistently 0 variants with a LOD <= -5.0.

I have run this stage successfully when I process my reads in two separate chunks, but when I combine them all into one processing pipeline they fail at this stage.

Any advice would be welcome on how to analyse this and determine what I need to change or where I am going wrong.

Regards,

Ally

Hi,

I've recently updated my GATK pipeline for population-level variant calling of dog WGS data to take advantage of the new GenotypeGVCFs module instead of relying on UnifiedGenotyper or HaplotypeCaller to handle dozens of samples simultaneously. Unfortunately, now when I try to recalibrate my variants using the truth set of ~150K SNPs I was using before, the training eventually converges by excluding all the training variants, leading to a crash. At first I was running the command multithreaded, but this also happens when it's run as a single thread (see stack trace below). Any ideas what's going on?

INFO 16:40:52,233 HelpFormatter - -------------------------------------------------------------------------------- INFO 16:40:52,236 HelpFormatter - The Genome Analysis Toolkit (GATK) v3.1-1-g07a4bf8, Compiled 2014/03/18 06:09:21 INFO 16:40:52,236 HelpFormatter - Copyright (c) 2010 The Broad Institute INFO 16:40:52,237 HelpFormatter - For support and documentation go to http://www.broadinstitute.org/gatk INFO 16:40:52,241 HelpFormatter - Program Args: -T VariantRecalibrator -R canFam3.1.fa -input raw.debug.vcf -resource:illumina,known=false,training=true,truth=true,prior=12.0 sites.cf3.1.vcf -an DP -an QD -an MQRankSum -an FS -an ReadPosRankSum -an MQ -mode SNP -tranche 100.0 -tranche 99.9 -tranche 99.5 -tranche 99.0 -tranche 95.0 -tranche 90.0 -recalFile snp.debug.recal -tranchesFile snp.debug.tranches -rscriptFile snp.debug.plots.R INFO 16:40:52,740 HelpFormatter - Executing on Linux 2.6.32-358.14.1.el6.x86_64 amd64; Java HotSpot(TM) 64-Bit Server VM 1.7.0_51-b13. INFO 16:40:52,740 HelpFormatter - Date/Time: 2014/06/02 16:40:52 INFO 16:40:52,741 HelpFormatter - -------------------------------------------------------------------------------- INFO 16:40:52,741 HelpFormatter - -------------------------------------------------------------------------------- INFO 16:40:53,837 GenomeAnalysisEngine - Strictness is SILENT INFO 16:40:54,288 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000 INFO 16:40:55,647 GenomeAnalysisEngine - Preparing for traversal INFO 16:40:55,686 GenomeAnalysisEngine - Done preparing for traversal INFO 16:40:55,704 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING] INFO 16:40:55,722 ProgressMeter - Location processed.sites runtime per.1M.sites completed total.runtime remaining INFO 16:40:56,264 TrainingSet - Found illumina track: Known = false Training = true Truth = true Prior = Q12.0 INFO 16:41:26,324 ProgressMeter - chr1:119694011 1.33e+06 30.0 s 23.0 s 5.0% 10.1 m 9.6 m INFO 16:41:57,528 
ProgressMeter - chr10:24041205 1.65e+06 61.0 s 37.0 s 6.1% 16.7 m 15.7 m INFO 16:42:37,192 ProgressMeter - chr12:3177469 3.13e+06 101.0 s 32.0 s 11.2% 15.1 m 13.4 m INFO 16:43:09,795 ProgressMeter - chr14:46556735 5.37e+06 2.2 m 24.0 s 18.6% 12.0 m 9.8 m INFO 16:43:42,333 ProgressMeter - chr17:46123438 7.70e+06 2.8 m 21.0 s 26.2% 10.5 m 7.8 m INFO 16:44:20,667 ProgressMeter - chr2:56532172 1.00e+07 3.4 m 20.0 s 33.9% 10.0 m 6.6 m INFO 16:44:51,269 ProgressMeter - chr22:26635764 1.21e+07 3.9 m 19.0 s 40.7% 9.6 m 5.7 m INFO 16:45:22,151 ProgressMeter - chr25:8747205 1.38e+07 4.4 m 19.0 s 46.6% 9.5 m 5.1 m INFO 16:45:52,154 ProgressMeter - chr27:41179493 1.56e+07 4.9 m 18.0 s 51.7% 9.5 m 4.6 m INFO 16:46:22,532 ProgressMeter - chr3:13905602 1.69e+07 5.4 m 19.0 s 55.9% 9.7 m 4.3 m INFO 16:46:52,534 ProgressMeter - chr3:77994124 1.76e+07 5.9 m 20.0 s 58.6% 10.1 m 4.2 m INFO 16:47:22,536 ProgressMeter - chr30:9518519 1.80e+07 6.4 m 21.0 s 59.6% 10.8 m 4.4 m INFO 16:47:52,538 ProgressMeter - chr30:36929683 1.83e+07 6.9 m 22.0 s 60.7% 11.4 m 4.5 m INFO 16:48:24,345 ProgressMeter - chr31:13298903 1.86e+07 7.5 m 24.0 s 61.4% 12.2 m 4.7 m INFO 16:48:54,346 ProgressMeter - chr31:38887091 1.89e+07 8.0 m 25.0 s 62.4% 12.8 m 4.8 m INFO 16:49:24,348 ProgressMeter - chr32:16826952 1.92e+07 8.5 m 26.0 s 63.2% 13.4 m 4.9 m INFO 16:49:54,350 ProgressMeter - chr33:9665516 1.97e+07 9.0 m 27.0 s 64.5% 13.9 m 4.9 m INFO 16:50:24,352 ProgressMeter - chr34:8616301 2.01e+07 9.5 m 28.0 s 65.8% 14.4 m 4.9 m INFO 16:50:54,354 ProgressMeter - chr34:41058797 2.06e+07 10.0 m 29.0 s 67.1% 14.9 m 4.9 m INFO 16:51:26,984 ProgressMeter - chr35:20891681 2.09e+07 10.5 m 30.0 s 68.0% 15.5 m 4.9 m INFO 16:51:58,253 ProgressMeter - chr38:14260224 2.20e+07 11.0 m 30.0 s 71.4% 15.5 m 4.4 m INFO 16:52:32,120 ProgressMeter - chr4:78859835 2.31e+07 11.6 m 30.0 s 75.1% 15.5 m 3.9 m INFO 16:53:02,719 ProgressMeter - chr6:24630819 2.46e+07 12.1 m 29.0 s 80.2% 15.1 m 3.0 m INFO 16:53:32,858 ProgressMeter - 
chr7:70954178 2.60e+07 12.6 m 29.0 s 85.3% 14.8 m 2.2 m INFO 16:54:11,822 ProgressMeter - chr9:39193117 2.76e+07 13.3 m 28.0 s 90.4% 14.7 m 84.0 s INFO 16:54:43,302 ProgressMeter - chrUn_JH373262:194062 2.91e+07 13.8 m 28.0 s 97.3% 14.2 m 22.0 s INFO 16:55:17,832 ProgressMeter - chrUn_AAEX03023557:310 2.98e+07 14.4 m 28.0 s 99.2% 14.5 m 6.0 s INFO 16:55:23,341 VariantDataManager - DP: mean = 581.76 standard deviation = 110.45 INFO 16:55:26,420 VariantDataManager - QD: mean = 19.65 standard deviation = 5.17 INFO 16:55:29,554 VariantDataManager - MQRankSum: mean = 0.15 standard deviation = 0.38 INFO 16:55:32,879 VariantDataManager - FS: mean = 3.29 standard deviation = 3.73 INFO 16:55:36,112 VariantDataManager - ReadPosRankSum: mean = 0.25 standard deviation = 0.38 INFO 16:55:39,447 VariantDataManager - MQ: mean = 59.98 standard deviation = 0.43 INFO 16:56:34,985 VariantDataManager - Annotations are now ordered by their information content: [DP, MQ, QD, FS, MQRankSum, ReadPosRankSum] INFO 16:56:35,732 VariantDataManager - Training with 157123 variants after standard deviation thresholding. INFO 16:56:35,740 GaussianMixtureModel - Initializing model with 100 k-means iterations... INFO 16:57:20,532 VariantRecalibratorEngine - Finished iteration 0. INFO 16:57:32,908 VariantRecalibratorEngine - Finished iteration 5. Current change in mixture coefficients = 1.28438 INFO 16:57:45,208 VariantRecalibratorEngine - Finished iteration 10. Current change in mixture coefficients = 0.74161 INFO 16:57:57,279 VariantRecalibratorEngine - Finished iteration 15. Current change in mixture coefficients = 7.59772 INFO 16:58:09,478 VariantRecalibratorEngine - Finished iteration 20. Current change in mixture coefficients = 0.28910 INFO 16:58:22,278 VariantRecalibratorEngine - Finished iteration 25. Current change in mixture coefficients = 0.05857 INFO 16:58:34,804 VariantRecalibratorEngine - Finished iteration 30. 
Current change in mixture coefficients = 0.05605 INFO 16:58:47,116 VariantRecalibratorEngine - Finished iteration 35. Current change in mixture coefficients = 0.05180 INFO 16:58:59,115 VariantRecalibratorEngine - Finished iteration 40. Current change in mixture coefficients = 0.04196 INFO 16:59:11,329 VariantRecalibratorEngine - Finished iteration 45. Current change in mixture coefficients = 0.03361 INFO 16:59:23,682 VariantRecalibratorEngine - Finished iteration 50. Current change in mixture coefficients = 0.02521 INFO 16:59:38,689 VariantRecalibratorEngine - Finished iteration 55. Current change in mixture coefficients = 0.02037 INFO 16:59:51,346 VariantRecalibratorEngine - Finished iteration 60. Current change in mixture coefficients = 0.01612 INFO 17:00:03,973 VariantRecalibratorEngine - Finished iteration 65. Current change in mixture coefficients = 0.01269 INFO 17:00:16,765 VariantRecalibratorEngine - Finished iteration 70. Current change in mixture coefficients = 0.00867 INFO 17:00:29,690 VariantRecalibratorEngine - Finished iteration 75. Current change in mixture coefficients = 0.00473 INFO 17:00:42,394 VariantRecalibratorEngine - Finished iteration 80. Current change in mixture coefficients = 0.00248 INFO 17:00:47,807 VariantRecalibratorEngine - Convergence after 82 iterations! INFO 17:00:50,993 VariantDataManager - Training with worst 0 scoring variants --> variants with LOD <= -5.0000. INFO 17:01:12,880 GATKRunReport - Uploaded run statistics report to AWS S3

##### ERROR ------------------------------------------------------------------------------------------

The command line I use was:

java -jar /home/horus/Instaladores/GenomeAnalysisTK-3.1-1/GenomeAnalysisTK.jar -T VariantRecalibrator \
    -R /home/horus/Escritorio/PGM/primirna/references/hg19usar.fa \
    -input 1_raw_variants.vcf \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 /home/horus/Escritorio/PGM/primirna/references/hapmap_3.3.b37.vcf_nuevo \
    -resource:omni,known=false,training=true,truth=true,prior=12.0 /home/horus/Escritorio/PGM/primirna/references/1000G_omni2.5.b37.vcf_nuevo \
    -resource:1000G,known=false,training=true,truth=false,prior=10.0 /home/horus/Escritorio/PGM/primirna/references/Mills_and_1000G_gold_standard.indels.b37.vcf_nuevo \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 /home/horus/Escritorio/PGM/primirna/references/dbsnp_138.hg19.vcf \
    -an DP -an QD -an FS -an MQRankSum -an ReadPosRankSum \
    -mode SNP \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
    -recalFile 1_recalibrate_SNP.recal -tranchesFile 1_recalibrate_SNP.tranches

Hi, I am planning to filter with ApplyRecalibration very stringently (I don't mind losing a lot of SNPs to reduce false positives), but I am having trouble viewing the tranches plot produced by VariantRecalibrator to choose my cutoff level. When I run the following command I get the default visualisation of 90, 99, 99.9 & 100 with cumulative TPs & FPs, and everything looks fine:

##### ERROR ------------------------------------------------------------------------------------------

Here is my error. I launch the same command on 3 VCFs generated by the same UnifiedGenotyper command.

thx!

java -Djava.io.tmpdir=/scratch/ymb-542-aa/temp -Xmx22g \
    -jar /rap/ymb-542-aa/utils/tools/dna_tools/post_alignment_tools/GenomeAnalysisTK-2.8-1/GenomeAnalysisTK.jar \
    -nt 8 -et NO_ET \
    -K /rap/ymb-542-aa/utils/tools/dna_tools/post_alignment_tools/GenomeAnalysisTK-2.8-1/raphael.poujol_umontreal.ca.key \
    -T VariantRecalibrator \
    -R /scratch/ymb-542-aa/pancreas/Genomes/hg19.fa \
    -input /scratch/ymb-542-aa/pancreas/Calling/sub1_focus.haplo.vcf \
    -mode SNP \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 /scratch/ymb-542-aa/pancreas/Genomes/hapmap_3.3.hg19.vcf \
    -resource:omni,known=false,training=true,truth=false,prior=10.0 /scratch/ymb-542-aa/pancreas/Genomes/1000G_omni2.5.hg19.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 /scratch/ymb-542-aa/pancreas/Genomes/dbsnp_138.hg19_noXY.vcf \
    -an QD -an DP -an FS -an MQ \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0

##### ERROR ------------------------------------------------------------------------------------------

Hi,

I am running variant recalibration on indels; prior to this, I completed VariantRecalibrator and ApplyRecalibration on SNPs. So the input file is the recalibrated VCF file from the SNP ApplyRecalibration step.

Below is the error I am getting.

##### ERROR MESSAGE: Bad input: Values for DP annotation not detected for ANY training variant in the input callset. VariantAnnotator may be used to add these annotations. See http://gatkforums.broadinstitute.org/discussion/49/using-variant-annotator

The command I am using for this is:

##### ERROR ------------------------------------------------------------------------------------------

And this is the command i used:

java -XX:+UseParallelGC -XX:ParallelGCThreads=8 -Xmx4g -jar ~/genomekey-data/tools/gatk2.jar \
    -T VariantRecalibrator \
    -R ~/genomekey-data/bwa_references/human_g1k_v37.fasta \
    -input ~/simplex/ngs_test/snp_calling/aln.haplotyper.raw.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=8.0 ~/genomekey-data/bundle/current/dbsnp_137.b37.vcf \
    -resource:hapmap,VCF,known=false,training=true,truth=true,prior=15.0 ~/genomekey-data/bundle/current/hapmap_3.3.b37.vcf \
    -resource:omni,VCF,known=false,training=true,truth=false,prior=12.0 ~/genomekey-data/bundle/current/1000G_omni2.5.b37.vcf \
    -an QD -an MQRankSum -an ReadPosRankSum \
    -recalFile ~/simplex/ngs_test/snp_recal/aln.snp.recal \
    -tranche 100 -tranche 99.9 -tranche 99.0 -tranche 90 \
    -tranchesFile ~/simplex/ngs_test/snp_recal/aln.snp.tranches \
    -rscriptFile ~/simplex/ngs_test/snp_recal/aln.snp.plots.R \
    -mode SNP

Please note that I got the same error running GATK 2.6. Any help or suggestions will be appreciated.

Thanks, Shazly

Both pages read: "The ... rank sum test can not be calculated for sites without a mixture of reads showing both the reference and alternate alleles."

I have quite a few sites for which MQRankSum and ReadPosRankSum are missing. How does VariantRecalibrator handle this missing information?
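One generic way a recalibrator can still score records whose rank-sum annotations are absent is to impute the missing value at (or jittered around) the annotation's mean after standardisation, so that dimension is roughly neutral for the record. This is a sketch of that idea only, not the actual VariantDataManager code:

```python
import random

# Generic mean-imputation sketch -- NOT the GATK implementation.
# After an annotation is standardised to mean 0 / sd 1, a record
# missing that annotation gets a value jittered around the mean.

def mean_sd(values):
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    sd = (sum((v - mean) ** 2 for v in present) / len(present)) ** 0.5
    return mean, sd

def standardise_with_imputation(values, rng, jitter=0.1):
    mean, sd = mean_sd(values)
    out = []
    for v in values:
        if v is None:                               # e.g. no ref/alt read
            v = mean + rng.gauss(0.0, jitter * sd)  # mixture at this site
        out.append((v - mean) / sd)
    return out

rng = random.Random(42)
z = standardise_with_imputation([1.0, 2.0, None, 3.0], rng)
```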

I use GATK to process a dataset produced by customised targeted re-sequencing, but I failed at the VariantRecalibrator step.

##### ERROR ------------------------------------------------------------------------------------------

I've used all three VCFs in other GATK tools without issues. Any help greatly appreciated!, many thanks, Lavinia.

Hello, I want to know how important it is to have the -L target-intervals.intervals option in UnifiedGenotyper, and whether it is recommended for VariantRecalibrator too.

I have run the UnifiedGenotyper tool with one input file at a time, or with the two files I want to compare at once. My command lines are the following:

java -jar -Xmx15g GenomeAnalysisTK.jar -R ./genome.fa -T UnifiedGenotyper -I ./1.bam --dbsnp ./dbsnp_137.hg19.vcf -o ./1-gatk.vcf  --min_base_quality_score 25 -stand_call_conf 50 -stand_emit_conf 10 -dcov 200 -L ./intervals-1.intervals

java -jar -Xmx15g GenomeAnalysisTK.jar -R ./genome.fa -T UnifiedGenotyper -I ./2.bam --dbsnp ./dbsnp_137.hg19.vcf -o ./2-gatk.vcf  --min_base_quality_score 25 -stand_call_conf 50 -stand_emit_conf 10 -dcov 200 -L ./intervals-2.intervals

java -jar -Xmx15g GenomeAnalysisTK.jar -R ./genome.fa -T UnifiedGenotyper -I ./1.bam -I ./2.bam --dbsnp ./dbsnp_137.hg19.vcf -o ./1vs2-gatk.vcf  --min_base_quality_score 25 -stand_call_conf 50 -stand_emit_conf 10 -dcov 200 -L ./intervals-1.intervals -L ./intervals-2.intervals

Thanks, G.

Hello, I am having a hard time finding the resource VCF files needed for variant recalibration.

hapmap_3.3.b37.sites.vcf
1000G_omni2.5.b37.sites.vcf
dbsnp_135.b37.vcf

I don't seem to find them in the GATK bundle, as suggested:

Any help appreciated Thanks, G.

Hi,

I did some tests with a "best practices"-like pipeline to check whether results were deterministic, and found that they are not. Some posts already mention that UnifiedGenotyper is non-deterministic when using multi-threading, as different seeds are used for downsampling. But I think I'm missing something: if single-threaded UnifiedGenotyper is deterministic, why would it choose exactly the same reads for downsampling? Wouldn't it always be non-deterministic when downsampling reads? Anyway, the difference was only 31 variants for an exome sample.

About VariantRecalibrator: I guess the filtering is non-deterministic, but I did not find any reference to this in the forum. The difference between runs is greater in this case. After filtering, 301 variants passed only in the first run, 1684 variants passed only in the second run, and 11328 passed in both runs.
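One plausible source of run-to-run differences is the random initialisation of the mixture model: the VariantRecalibrator logs elsewhere on this page show it seeding its Gaussian mixture with k-means ("Initializing model with 100 k-means iterations..."), and randomly initialised k-means can converge to different local optima on the same data, while a fixed seed is reproducible. A stdlib-only sketch of that behaviour (not GATK code):

```python
import random

# 1-D k-means with random initial centres: the seed controls which
# local optimum the iteration converges to.
def kmeans_1d(points, k, seed, iters=20):
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centres[i]) ** 2)
            clusters[nearest].append(p)
        # recompute centres; keep the old centre if a cluster emptied out
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    return sorted(centres)

points = [0.0, 0.1, 0.2, 0.3, 10.0, 10.1, 10.2, 10.3]
run_a = kmeans_1d(points, 3, seed=0)   # two natural clusters, three centres:
run_b = kmeans_1d(points, 3, seed=1)   # different seeds may split differently
same = kmeans_1d(points, 3, seed=0)    # same seed reproduces the same fit
```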

Hi, I'm encountering this error running VariantRecalibrator with data from 3 samples (I'm testing). Could the problem be due to the small sample size?

##### ERROR ------------------------------------------------------------------------------------------
##### ERROR stack trace
java.lang.NullPointerException
at org.broadinstitute.sting.gatk.executive.Accumulator$StandardAccumulator.finishTraversal(Accumulator.java:129)
at org.broadinstitute.sting.gatk.executive.LinearMicroScheduler.execute(LinearMicroScheduler.java:97)
at org.broadinstitute.sting.gatk.GenomeAnalysisEngine.execute(GenomeAnalysisEngine.java:281)
at org.broadinstitute.sting.gatk.CommandLineExecutable.execute(CommandLineExecutable.java:113)
at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:237)
at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:147)
at org.broadinstitute.sting.gatk.CommandLineGATK.main(CommandLineGATK.java:94)
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A GATK RUNTIME ERROR has occurred (version 2.2-16-g9f648cb):
##### ERROR
##### ERROR Please visit the wiki to see if this is a known problem
##### ERROR If not, please post the error, with stack trace, to the GATK forum
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions http://www.broadinstitute.org/gatk
##### ERROR
##### ERROR MESSAGE: Code exception (see stack trace for error itself)
##### ERROR ------------------------------------------------------------------------------------------

Thanks

Experiment: target enrichment and sequencing on the Illumina platform; raw VCF file from UnifiedGenotyper -> VariantRecalibrator. I got the following error. Any potential explanation? Thanks.

##### ERROR MESSAGE: Bad input: Error during negative model training. Minimum number of variants to use in training is larger than the whole call set. One can attempt to lower the --minNumBadVariants argument but this is unsafe.

I tried to lower this number, but a different error message came up.

Script used:

java -Xmx4g -jar GenomeAnalysisTK.jar -T VariantRecalibrator \
    -mode BOTH -nt 4 \
    -R hg19_all_MT.fasta \
    -input two.final.vcf \
    -resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.fy_left.vcf \
    -resource:omni,known=false,training=true,truth=false,prior=12.0 1000G_omni2.5.b37.site.fy.vcf \
    -resource:dbsnp,known=true,training=false,truth=false,prior=8.0 dbsnp137_sort_fy_left.vcf \
    -recalFile two.final.vcf.reca \
    -tranchesFile two.final.vcf.tranches \
    -rscriptFile two.final.vcf.R \
    -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 -tranche 85.0 -tranche 80.0 \
    -an QD -an HaplotypeScore -an MQRankSum -an ReadPosRankSum -an MQ -an FS -an HRun

Hello community, I am working with yeast and I am at the VariantRecalibrator step. As I don't have a truth data set, I want to filter my initial round of raw SNPs so that I keep only the highest-quality ones, as the GATK team suggests.

1) I was wondering if you have any suggestions about the filtration parameters. I am treating each strain as a different species (WGS), so I have good coverage (80X) but only one lane. I tried:

java -Xmx4g -jar GenomeAnalysisTK.jar -R S288c.fasta -T VariantFiltration --variant $1.raw.vcf \
    --filterExpression "QD<2.0 || MQ<45.0 || FS>60 || MQRankSum< -12.5 || ReadPosRankSum<-8.0" \
    --filterName "hardtovalidate" -o $1.filt.vcf

and then remove the LowQual and hardtovalidate SNPs. Does that make sense? Thanks for your help!

2) Afterwards I would run VariantRecalibrator, but I will have only one truth set. Can I use -mode BOTH, or should I try to obtain a truth data set of indels and run VQSR for SNPs and indels separately? What do you think?

java -Xmx4g -jar GenomeAnalysisTK.jar -T VariantRecalibrator -R ncbi_S288c.fasta -input $1.raw.vcf \
    -recalFile $1.raw.recal -tranchesFile $1.raw.tranches \
    -resource:filtered,known=false,training=true,truth=true,prior=15.0 $1.truth.vcf \
    -an QD -an HaplotypeScore -an MQRankSum -an ReadPosRankSum -an FS -an MQ -an DP \
    -mode BOTH

Thanks! Luisa
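The JEXL hard-filter expression in the yeast post above (QD<2.0 || MQ<45.0 || FS>60 || MQRankSum<-12.5 || ReadPosRankSum<-8.0) can be sanity-checked as a plain predicate. A Python sketch, not VariantFiltration itself; how a missing annotation is treated is a policy choice, and here it simply passes:

```python
# One failing condition per annotation, mirroring the JEXL expression.
FAIL_IF = {
    "QD":             lambda v: v < 2.0,
    "MQ":             lambda v: v < 45.0,
    "FS":             lambda v: v > 60.0,
    "MQRankSum":      lambda v: v < -12.5,
    "ReadPosRankSum": lambda v: v < -8.0,
}

def hard_to_validate(info):
    """True if the record's INFO dict matches any failing condition."""
    return any(key in info and test(info[key]) for key, test in FAIL_IF.items())

good = {"QD": 25.0, "MQ": 60.0, "FS": 1.2, "MQRankSum": 0.1, "ReadPosRankSum": 0.3}
bad = {"QD": 1.1, "MQ": 60.0}   # fails on QD alone
```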

Hello,

I've just made a long needed update to the most recent version of GATK. I had been toying with the variant quality score recalibrator before but now that I have a great deal more exomes at my disposal I'd like to fully implement it in a meaningful way.

The phrase I'm confused about is "In our testing we've found that in order to achieve the best exome results one needs to use an exome callset with at least 30 samples." How exactly do I arrange these 30+ exomes?

Is there any difference or reason to choose one of the following two workflows over the other?

1. Input 30+ exomes in the "-I" argument of either the UnifiedGenotyper or HaplotypeCaller and then with my multi-sample VCF perform the variant recalibration procedure and then split the individual call sets out of the multi-sample vcf with SelectVariants?

2. Take 30+ individual vcf files, merge them together, and then perform variant recalibration on the merged vcf and then split the individual call sets out of the multi-sample vcf with SelectVariants?

3. Or some third option I'm missing

Any help is appreciated.

Thanks

Hello,

I am running Variant Quality Score Recalibration on indels with the following command.

java -Xmx8g -jar /raid/software/src/GenomeAnalysisTK-1.6-9-g47df7bb/GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R /raid/references-and-indexes/hg19/bwa/hg19_lite.fa \
-input indel_output_all_chroms_combined.vcf \
--maxGaussians 4 -std 10.0 -percentBad 0.12 \
-resource:mills,known=true,training=true,truth=true,prior=12.0  /raid/Merlot/exome_pipeline_v1/ref/Mills_and_1000G_gold_standard.indels.hg19.sites.vcf \
-an QD -an FS -an HaplotypeScore -an ReadPosRankSum  \
--ts_filter_level 95.0 \
-mode INDEL \
-recalFile /raid2/projects/STFD/indel_output_7.recal \
-tranchesFile /raid2/projects/STFD/indel_output_7.tranches \
-rscriptFile /raid2/projects/STFD/indel_output_7.plots.R

My tranches file reports only false positives for all tranches. When I run VQSR on SNPs, the tranches have many true positives and look similar to other tranche files reported on this site. I am wondering if anyone has had similar experiences or has suggestions?
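For reference, a tranche's truth-sensitivity level corresponds to a quantile of the truth sites' scores: the cutoff is the lowest score that still retains the requested fraction of truth sites. A toy sketch of the mapping (illustrative scores; not the GATK Tranche code, whose exact quantile convention may differ):

```python
# Map a truth-sensitivity target (e.g. the 90% tranche) to a score cutoff
# by ranking truth-site scores and keeping the requested fraction.
def tranche_cutoff(truth_scores, target_sensitivity):
    """Score threshold retaining `target_sensitivity` (0-1) of truth sites."""
    ranked = sorted(truth_scores, reverse=True)
    n_keep = max(1, round(target_sensitivity * len(ranked)))
    return ranked[n_keep - 1]

scores = [5.0, 4.0, 3.0, 2.0, 1.0, 0.0, -1.0, -2.0, -3.0, -4.0]
cut_90 = tranche_cutoff(scores, 0.90)   # keep 90% of truth sites
cut_50 = tranche_cutoff(scores, 0.50)   # keep 50% of truth sites
```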

Thanks

Hello, I have a newly sequenced genome with some samples for this species. I would like to follow the Best Practices, but I don't have a dbSNP resource or anything similar. Could I use the variants from the samples themselves as a dbSNP? For example, take the variants that coincide across all my samples and use them as a known-sites resource?

Thanks!