Tagged with #diagnosetargets
1 documentation article | 2 announcements | 10 forum discussions

Created 2012-07-23 23:55:30 | Updated 2012-07-23 23:55:30 | Tags: diagnosetargets gatkdocs
Comments (0)

A new tool has been released!

Check out the documentation at DiagnoseTargets.

Comments (2)

GATK 2.8 was released on December 6, 2013. Highlights are listed below. Read the detailed version history overview here: http://www.broadinstitute.org/gatk/guide/version-history

Note that this release is smaller than previous ones. We are working hard on some new tools and frameworks that we hope to make available to everyone in our next release.

Unified Genotyper

  • Fixed bug where indels in very long reads were sometimes being ignored and not used by the caller.

Haplotype Caller

  • Improved the indexing scheme for gVCF outputs using the reference calculation model.
  • The reference calculation model now works with reduced reads.
  • Fixed bug where an error was being generated at certain homozygous reference sites because the whole assembly graph was getting pruned away.
  • Fixed bug where homozygous reference records that aren't GVCF blocks were being treated incorrectly.

Variant Recalibrator

  • Disable tranche plots in INDEL mode.
  • Various VQSR optimizations in both runtime and accuracy. Specific changes include:
      • For very large whole-genome datasets with over 2M variants overlapping the training data, the training set used to build the model is now randomly downsampled.
      • Annotations are ordered by the difference in means between known and novel variants instead of by their standard deviation.
      • The training set quality score threshold has been removed.
      • The negative model now uses 2 Gaussians by default.
      • The numBad argument has been removed; the cutoffs are now chosen by the model itself by looking at the LOD scores.

Reduce Reads

  • Fixed bug where mapping quality was being treated as a byte instead of an int, which caused high MQs to be treated as negative.

Diagnose Targets

  • Added calculation for GC content.
  • Added an option to filter the bases based on their quality scores.

Combine Variants

  • Fixed bug where annotation values were parsed as Doubles when they should be parsed as Integers due to implicit conversion; submitted by Michael McCowan.

Select Variants

  • Changed the behavior of the PL/AD fields when a record has lost one or more alternate alleles: instead of being stripped out, these fields now get fixed.

Miscellaneous

  • SplitSamFile now produces an index with the BAM.
  • Length metric updates to QualifyMissingIntervals.
  • Provide close methods to clean up resources used while creating AlignmentContexts from BAM file regions; submitted by Brad Chapman.
  • Picard jar updated to version 1.104.1628.
  • Tribble jar updated to version 1.104.1628.
  • Variant jar updated to version 1.104.1628.

Created 2013-03-12 18:00:30 | Updated | Tags: official depthofcoverage diagnosetargets
Comments (9)

We have decided to continue providing and supporting DepthOfCoverage and DiagnoseTargets for the foreseeable future. Going forward, we'll try to integrate them and develop their features to address the main needs of the community. To this end we welcome your continuing feedback, so please feel free to contribute comments and ideas in this thread.

To all who took the time to tell us what you find useful about DoC and DT (and what you wish it could do), a big thank you! This is always very useful to us because it helps us identify which features are most valuable to our users.

Created 2015-05-11 11:27:27 | Updated 2015-05-11 12:08:54 | Tags: depthofcoverage diagnosetargets qualifymissingintervals
Comments (4)

I am using GATK for clinical whole-exome sequencing. I often have to answer questions when evaluating the quality of a sequencing run:

  1. How well are my genes of interest covered?
  2. Which exons are not well covered?
  3. Which intervals are not well covered?

I've tried several tools that try to address these questions (e.g. bcbio-nextgen, chanjo). But now I have a feeling that the tools mentioned in the title can answer my questions more or less, except for the nice feature of chanjo that allows storing/querying statistics across samples.

My question about these tools: when should I use which? From their names and the descriptions of their command-line arguments, I can't answer this clearly. I tend to try all three in this order: DepthOfCoverage -> QualifyMissingIntervals -> DiagnoseTargets.

So again, when should I use which?

Thanks Vang

Created 2015-01-05 17:47:22 | Updated 2015-01-05 17:56:46 | Tags: diagnosetargets
Comments (3)


I tried running DiagnoseTargets on the same BAM file using two versions of the tool and I see differences in the results. Can you please help me understand why this is happening? I see a lot of differences in the BAD_MATE filter. I have put two sample intervals below.

I used GATK 3.1-1-g07a4bf8 and GATK-lite v2.3.9-lite with default command-line options.

GATK-lite v2.3.9-lite output

CHROM POS ID REF ALT QUAL FILTER INFO FORMAT HG00109 HG00118 HG00134 HG00151 HG00152 HG00156 HG00176 HG00185 HG00237 HG00306 HG00335 HG00377 HG00378 HG00443 HG00452 HG00580 HG01075 NA18982 NA18992 NA19079 NA19116 NA19324 NA19346 NA19431 NA19456 NA19654 NA19664 NA19746 NA20525 NA20801

1 955543 . T

. LOW_MEDIAN_DEPTH AVG_INTERVAL_DP=408.03;END=955763 AVG_INTERVAL_DP:FT:MED:Q1:Q3 19.19:PASS:18.00:9.00:30.00 17.30:PASS:17.00:10.00:25.00 23.40:PASS:23.00:16.00:33.00 14.40:PASS:13.00:8.00:21.00 20.44:PASS:21.00:14.00:27.00 13.71:PASS:13.00:10.00:17.00 22.86:PASS:21.00:14.00:31.00 20.05:PASS:19.00:12.00:26.00 19.88:PASS:19.00:13.00:28.00 19.31:PASS:20.00:12.00:26.00 23.28:PASS:23.00:15.00:32.00 14.91:PASS:14.00:9.00:19.00 3.49:LOW_COVERAGE:4.00:3.00:4.00 7.13:LOW_COVERAGE:8.00:3.00:11.00 3.39:LOW_COVERAGE:4.00:2.00:4.00 8.02:PASS:8.00:5.00:10.00 6.35:LOW_COVERAGE:7.00:3.00:9.00 15.41:PASS:15.00:8.00:21.00 8.85:LOW_COVERAGE:9.00:4.00:13.00 14.95:PASS:15.00:9.00:21.00 16.41:PASS:17.00:7.00:25.00 10.30:LOW_COVERAGE:9.00:2.00:18.00 11.15:PASS:10.00:6.00:15.00 17.19:PASS:15.00:7.00:27.00 10.57:PASS:11.00:5.00:16.00 5.43:LOW_COVERAGE:5.00:2.00:8.00 7.21:BAD_MATE;LOW_COVERAGE:6.00:2.00:12.00 11.43:PASS:12.00:5.00:17.00 11.11:BAD_MATE:11.00:4.00:18.00 10.91:BAD_MATE:12.00:6.00:14.00

1 957571 . T

. PASS AVG_INTERVAL_DP=578.46;END=957852 AVG_INTERVAL_DP:FT:MED:Q1:Q3 15.64:PASS:15.00:10.00:22.00 22.86:BAD_MATE:21.00:11.00:35.00 24.36:PASS:24.00:13.00:34.00 21.54:PASS:20.00:10.00:30.00 31.96:PASS:30.50:20.00:44.00 15.21:PASS:13.00:7.00:23.00 17.68:PASS:17.00:10.00:23.00 21.70:PASS:21.00:12.00:32.00 19.06:BAD_MATE:18.00:11.00:25.00 17.14:BAD_MATE:15.00:7.00:26.00 23.21:PASS:22.00:13.00:32.00 11.30:PASS:10.00:4.00:19.00 23.26:PASS:23.50:12.00:34.00 24.19:PASS:22.00:13.00:37.00 22.84:PASS:22.00:14.00:31.00 19.32:PASS:18.00:9.00:28.00 24.58:PASS:23.00:10.00:37.00 15.92:BAD_MATE:15.00:9.00:22.00 16.41:PASS:16.00:8.00:23.00 9.40:PASS:9.00:4.00:13.00 19.08:PASS:18.00:8.00:27.00 12.98:BAD_MATE:11.50:6.00:19.00 13.06:PASS:13.00:9.00:17.00 15.40:PASS:14.00:8.00:22.00 13.66:PASS:13.00:8.00:19.00 13.81:BAD_MATE:12.50:7.00:19.00 7.40:BAD_MATE:6.50:3.00:12.00 14.13:BAD_MATE:12.00:6.00:21.00 15.21:PASS:12.50:5.00:24.00 56.13:PASS:49.00:28.00:84.00

GATK 3.1-1-g07a4bf8 output

CHROM POS ID REF ALT QUAL FILTER INFO FORMAT HG00109 HG00118 HG00134 HG00151 HG00152 HG00156 HG00176 HG00185 HG00237 HG00306 HG00335 HG00377 HG00378 HG00443 HG00452 HG00580 HG01075 NA18982 NA18992 NA19079 NA19116 NA19324 NA19346 NA19431 NA19456 NA19654 NA19664 NA19746 NA20525 NA20801

1 955543 . T

. PASS END=955763;GC=0.751;IDP=407.84 FT:IDP:LL:ZL PASS:19.15:0:0 PASS:17.30:0:0 PASS:23.40:0:0 PASS:14.40:17:0 PASS:20.27:0:0 PASS:13.71:0:0 PASS:22.87:0:0 PASS:20.05:0:0 PASS:19.88:0:0 PASS:19.31:0:0 PASS:23.28:0:0 PASS:14.91:0:0 LOW_COVERAGE;POOR_QUALITY:3.49:108:0 LOW_COVERAGE:7.13:65:0 LOW_COVERAGE;POOR_QUALITY:3.39:106:0 PASS:8.02:0:0 LOW_COVERAGE:6.35:57:0 PASS:15.41:0:0 LOW_COVERAGE:8.85:51:0 PASS:14.95:0:0 PASS:16.41:0:0 LOW_COVERAGE:10.30:56:10 PASS:11.15:6:0 PASS:17.19:0:0 PASS:10.57:20:0 LOW_COVERAGE:5.43:74:0 LOW_COVERAGE:7.21:91:0 PASS:11.43:39:0 PASS:11.11:26:0 PASS:10.91:7:0

1 957571 . T

. PASS END=957852;GC=0.610;IDP=578.29 FT:IDP:LL:ZL PASS:15.64:10:0 PASS:22.86:15:4 PASS:24.36:14:0 PASS:21.54:20:10 PASS:31.96:11:0 PASS:15.21:19:0 PASS:17.68:19:0 PASS:21.70:8:8 PASS:19.06:8:11 PASS:17.14:14:11 PASS:23.21:25:0 PASS:11.30:41:5 PASS:23.26:11:0 PASS:24.19:30:0 PASS:22.84:8:0 PASS:19.32:30:0 PASS:24.58:25:0 PASS:15.92:7:4 PASS:16.41:13:0 PASS:9.40:43:7 PASS:18.96:27:4 PASS:12.98:37:3 PASS:13.00:21:0 PASS:15.40:13:7 PASS:13.66:11:0 PASS:13.81:19:0 POOR_QUALITY:7.40:32:19 PASS:14.15:27:9 PASS:15.21:26:0 PASS:56.13:11:0

These are the command lines:

GATK-lite v2.3.9-lite

fileformat=VCFv4.1 DiagnoseTargets="analysis_type=DiagnoseTargets input_file=[/sbgenomics/t46801.j412283.PrintReadsLite.work/Sample_group_unknown.base_recalibrated.bam] read_buffer_size=null phone_home=STANDARD gatk_key=null tag=NA read_filter=[] intervals=[/sbgenomics/t46801.j607435.InputWrapper.execute/Projects/7608/TruSightOne.nochr.bed] excludeIntervals=null interval_set_rule=UNION interval_merging=ALL interval_padding=0 reference_sequence=/sbgenomics/t46801.j512286.DiagnoseTargetsLite._work/human_g1k_v37_decoy.fasta nonDeterministicRandomSeed=false disableRandomization=false maxRuntime=-1 maxRuntimeUnits=MINUTES downsampling_type=BY_SAMPLE downsample_to_fraction=null downsample_to_coverage=1000 use_legacy_downsampler=false baq=OFF baqGapOpenPenalty=40.0 fix_misencoded_quality_scores=false allow_potentially_misencoded_quality_scores=false performanceLog=null useOriginalQualities=false BQSR=null quantize_quals=0 disable_indel_quals=false emit_original_quals=false preserve_qscores_less_than=6 defaultBaseQualities=-1 validation_strictness=SILENT remove_program_records=false keep_program_records=false unsafe=null num_threads=1 num_cpu_threads_per_data_thread=1 num_io_threads=0 monitorThreadEfficiency=false num_bam_file_handles=null read_group_black_list=null pedigree=[] pedigreeString=[] pedigreeValidationType=STRICT allow_intervals_with_unindexed_bam=false generateShadowBCF=false logging_level=INFO log_to_file=null help=false out=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub no_cmdline_in_header=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub sites_only=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub bcf=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub minimum_base_quality=20 minimum_mapping_quality=5 minimum_coverage=5 maximum_coverage=1073741823 minimum_median_depth=10 maximum_insert_size=500 voting_status_threshold=0.5 low_median_depth_status_threshold=0.2 bad_mate_status_threshold=0.5 coverage_status_threshold=0.2 excessive_coverage_status_threshold=0.2 quality_status_threshold=0.5 print_debug_log=false filter_mismatching_base_and_quals=false"

GATK 3.1-1-g07a4bf8

GATKCommandLine=<ID=DiagnoseTargets,Version=3.1-1-g07a4bf8,Date="Mon Jan 05 01:46:53 EST 2015",Epoch=1420440413672,CommandLineOptions="analysis_type=DiagnoseTargets input_file=[_7_Sample_group_unknown.base_recalibrated.bam] showFullBamList=false read_buffer_size=null phone_home=AWS gatk_key=null tag=NA read_filter=[] intervals=[//Users/arun/Desktop/recombine_script/infile/TruSightOne.nochr.bed] excludeIntervals=null interval_set_rule=UNION interval_merging=ALL interval_padding=0 reference_sequence=human_g1k_v37.fasta nonDeterministicRandomSeed=false disableDithering=false maxRuntime=-1 maxRuntimeUnits=MINUTES downsampling_type=NONE downsample_to_fraction=null downsample_to_coverage=null baq=OFF baqGapOpenPenalty=40.0 fix_misencoded_quality_scores=false allow_potentially_misencoded_quality_scores=false useOriginalQualities=false defaultBaseQualities=-1 performanceLog=null BQSR=null quantize_quals=0 disable_indel_quals=false emit_original_quals=false preserve_qscores_less_than=6 globalQScorePrior=-1.0 validation_strictness=SILENT remove_program_records=false keep_program_records=false sample_rename_mapping_file=null unsafe=null disable_auto_index_creation_and_locking_when_reading_rods=false num_threads=1 num_cpu_threads_per_data_thread=1 num_io_threads=0 monitorThreadEfficiency=false num_bam_file_handles=null read_group_black_list=null pedigree=[] pedigreeString=[] pedigreeValidationType=STRICT allow_intervals_with_unindexed_bam=false generateShadowBCF=false variant_index_type=DYNAMIC_SEEK variant_index_parameter=-1 logging_level=INFO log_to_file=depthofcoverage.log help=false version=false out=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub no_cmdline_in_header=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub sites_only=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub bcf=org.broadinstitute.sting.gatk.io.stubs.VariantContextWriterStub minimum_base_quality=20 minimum_mapping_quality=20 minimum_coverage=5 
maximum_coverage=1073741823 maximum_insert_size=500 voting_status_threshold=0.5 bad_mate_status_threshold=0.5 coverage_status_threshold=0.2 excessive_coverage_status_threshold=0.2 quality_status_threshold=0.5 filter_reads_with_N_cigar=false filter_mismatching_base_and_quals=false filter_bases_not_stored=false">

Thanks Arun

Created 2014-10-17 15:30:47 | Updated | Tags: diagnosetargets read-depth
Comments (4)

Hi again. In my DiagnoseTargets output I have a lot of NO_READS flags, but most of the flagged targets have a SNP called by HC on the same set of BAM files:

# table(rowData(nextseq.dt.vcf)$FILTER)
#          LOW_COVERAGE LOW_COVERAGE;NO_READS              NO_READS 
#                  2136                  1114                 14775 
# NO_READS;POOR_QUALITY                  PASS          POOR_QUALITY 
#                   307                 56599                   479 

I have had a look at one site that is flagged as NO_READS, but there are reads:

line from unfiltered VCF from HC

chr1 109808776 . C G 106.25 . AC=1;AF=0.042;AN=24;BaseQRankSum=-1.059e+00;ClippingRankSum=-2.329e+00;DP=127;FS=0.000;GQ_MEAN=36.25;GQ_STDDEV=35.51;InbreedingCoeff=-0.0565;MLEAC=1;MLEAF=0.042;MQ=52.57;MQ0=0;MQRankSum=1.48;NCC=0;QD=6.64;ReadPosRankSum=-2.120e-01 GT:AD:DP:GQ:PL 0/0:4,0:4:12:0,12,87 0/0:12,0:12:36:0,36,297 0/0:7,0:7:21:0,21,169 0/0:21,0:21:60:0,60,514 0/0:8,0:8:24:0,24,172 0/0:6,0:6:6:0,6,90 0/0:11,0:11:24:0,24,254 0/0:11,0:11:27:0,27,288 0/0:11,0:11:27:0,27,255 0/1:9,7:16:99:141,0,172 0/0:11,0:11:30:0,30,321 0/0:9,0:9:27:0,27,228

line from DiagnoseTargets

chr1 109808776 . C <DT> . NO_READS END=109808776;GC=1.00;IDP=165.00 FT:IDP:LL:ZL NO_READS:6.00:0:0 NO_READS:20.00:0:0 NO_READS:11.00:0:0 NO_READS:22.00:0:0 NO_READS:10.00:0:0 NO_READS:11.00:0:0 NO_READS:14.00:0:0 PASS:13.00:0:0 NO_READS:9.00:0:0 PASS:15.00:0:0 PASS:19.00:0:0 NO_READS:15.00:0:0

Are there any other filters that these reads are not passing? I couldn't spot anything in the documentation, but I wouldn't put it past me! I ran a fairly standard incantation of DiagnoseTargets, though I did use --interval_merging OVERLAPPING_ONLY.

Any thoughts? Many thanks Anna

Created 2014-10-14 15:55:25 | Updated | Tags: diagnosetargets vcf allele
Comments (2)

I have used a VCF file produced by GATK as the -L option of DiagnoseTargets, but the alternative allele from the original VCF appears as the reference allele in the output VCF from DiagnoseTargets. Input VCF:

chr1 10425470 . G A 139.99 SNPBHF AC=1;AF=0.042;AN=24;BaseQRankSum=-5.862e+00;Clipp......

chr1 10425470 . A <DT> . PASS END=10425470;GC=1.00;IDP=805.00 FT:IDP:LL:ZL......

The GC content reflects the reference allele though. Is this normal behaviour or a bug?

Created 2014-07-23 10:32:53 | Updated | Tags: diagnosetargets
Comments (6)

When running DiagnoseTargets (on a single sample) I get some results I have problems to understand.

As far as I know (using default values):

  • LOW_COVERAGE means >20% of the bases have a coverage below 5
  • COVERAGE_GAPS means >20% of the bases have a coverage of 0
  • NO_READS means there are no reads contained in the interval

Question 1: What is the difference between these two, given that COVERAGE_GAPS always implies LOW_COVERAGE? Or are bases with zero coverage filtered out first, so that in the second case >20% of the remaining bases have coverage below 5?

chr1    33065688        .       T       <DT>    .       COVERAGE_GAPS   END=33071551;GC=0.035;IDP=20.58 FT:IDP:LL:ZL    COVERAGE_GAPS:20.58:102:5419
chr1    33116749        .       G       <DT>    .       COVERAGE_GAPS;LOW_COVERAGE      END=33116923;GC=0.331;IDP=0.886 FT:IDP:LL:ZL    COVERAGE_GAPS;LOW_COVERAGE:0.886:78:97
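One possible reading of the defaults (a sketch, not the actual GATK code): zero-depth loci are tallied separately (ZL) from low-but-nonzero loci (LL), so COVERAGE_GAPS can fire without LOW_COVERAGE, which would explain the two example intervals above. Assuming per-base depths from `samtools depth -a` (contig, position, depth) for a single interval, the two filters under this reading would be:

```shell
# Sketch: classify one interval under the default thresholds, assuming
# LOW_COVERAGE counts only loci with depth > 0 but below --minimum_coverage
# (default 5), and COVERAGE_GAPS counts zero-depth loci; each fires when its
# fraction of loci exceeds the 0.2 status threshold. Input file name is a
# placeholder.
awk -v min_cov=5 -v thresh=0.2 '
{ n++; if ($3 == 0) zl++; else if ($3 < min_cov) ll++ }
END {
    f = ""
    if (zl / n > thresh) f = "COVERAGE_GAPS"
    if (ll / n > thresh) { if (f != "") f = f ";"; f = f "LOW_COVERAGE" }
    print (f == "" ? "PASS" : f)
}' interval_depths.txt
```

Under this interpretation, an interval where most loci have zero depth but the remaining loci are well covered gets COVERAGE_GAPS alone, matching the first example.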

Question 2: How can I explain these entries?

chr1    33116959        .       C       <DT>    .       LOW_COVERAGE;NO_READS   END=33117033;GC=0.693;IDP=3.68 FT:IDP:LL:ZL    LOW_COVERAGE;NO_READS:3.68:75:0
chr2    242274737       .       A       <DT>    .       LOW_COVERAGE;NO_READS   END=242274766;GC=0.133;IDP=1.00 FT:IDP:LL:ZL    LOW_COVERAGE;NO_READS:1.00:10:20

Created 2013-10-24 15:25:47 | Updated | Tags: diagnosetargets
Comments (5)


  1. Is DiagnoseTargets counting only reads that have mapped uniquely? Is that one of the default filters?

  2. From the VCF I see that:
     IDP - average depth across the interval: the sum of the depth at each locus divided by the interval size.
     LL - number of loci for this sample, in this interval, with low coverage (below the minimum coverage) but not zero.
     ZL - number of loci for this sample, in this interval, with zero coverage.

I'm interested in the total number of reads mapped to each interval.

Is it true that IDP = #reads_in_this_interval/(LL+ZL)? So if I want to extract #reads_in_this_interval, can I look at IDP*(LL+ZL)? I have a different number of reads in each sample, so I first need to normalize it.
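A note on the arithmetic: since IDP is the total depth divided by the interval size (not by LL+ZL), multiplying IDP by the interval length recovers the number of aligned bases, not reads; dividing that by a typical read length gives only a rough read count. A sketch, assuming GATK 3.x-style output where INFO carries END= and IDP=; the input file name and read_len=100 are assumptions:

```shell
# Sketch: per interval, print contig, start, end, approximate aligned bases
# (IDP * interval_length), and a rough read count (bases / read length).
# read_len=100 is an assumed read length, not something in the VCF.
awk -v read_len=100 '!/^#/ {
    match($8, /END=[0-9]+/)                      # END= lives in the INFO column
    end = substr($8, RSTART + 4, RLENGTH - 4)
    match($8, /IDP=[0-9.]+/)
    idp = substr($8, RSTART + 4, RLENGTH - 4)
    len = end - $2 + 1
    printf "%s\t%d\t%d\t%.0f\t%.1f\n", $1, $2, end, idp * len, idp * len / read_len
}' diagnose_targets.vcf
```

Note that reads spanning the interval boundary contribute only the bases that fall inside it, so the read count is an approximation either way.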

Thanks! Moran.

Created 2013-10-16 20:29:40 | Updated | Tags: diagnosetargets
Comments (1)

Hi again,

Is there an automatic way to generate an interval list covering every 1 kb of the reference sequence? I want to run DiagnoseTargets on every 1 kb of the data.
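One way to do this is from the reference's FASTA index. A sketch, assuming a samtools-style .fai file whose first two columns are contig name and length; both file names are placeholders. The output is in GATK interval format (contig:start-stop, 1-based inclusive), which can be passed to -L:

```shell
# Sketch: tile every contig in the .fai index into 1 kb windows.
awk -v win=1000 '{
    for (start = 1; start <= $2; start += win) {
        end = start + win - 1
        if (end > $2) end = $2    # clip the final window to the contig end
        print $1 ":" start "-" end
    }
}' reference.fasta.fai > intervals_1kb.list
```

The resulting file would then be used as `-L intervals_1kb.list` on the DiagnoseTargets command line.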


Created 2013-10-16 19:10:38 | Updated | Tags: diagnosetargets
Comments (2)


Is there a way to run DiagnoseTargets with a list of BAM files, instead of multiple "-I bam_file" arguments?
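If I recall correctly, the GATK engine accepts a text file of input paths for -I when the file name ends in ".list" (one BAM path per line). A sketch; all file names below are placeholders:

```shell
# Collect the BAM paths into a .list file (one path per line)...
ls /path/to/bams/*.bam > bams.list

# ...and pass the list file itself to -I instead of repeating the flag.
java -jar GenomeAnalysisTK.jar \
    -T DiagnoseTargets \
    -R reference.fasta \
    -I bams.list \
    -L targets.interval_list \
    -o diagnose_targets.vcf
```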


Created 2013-08-07 11:45:34 | Updated | Tags: diagnosetargets
Comments (8)

I am running DiagnoseTargets on a list of intervals corresponding to targeted exons. In the result file, intervals are filtered as e.g. PASS, LOW_COVERAGE, COVERAGE_GAPS or NO_READS for each sample, as well as for the sample set as a whole. In the individual-sample FORMAT fields, information on FT (filter) and IDP (average sample depth across the interval) is given. How come I often see a combination like this?


Filtered as NO_READS, yet an average depth of 96.09 across the interval? Of course, PART of the interval may be without reads, even more of it than the threshold set by --coverage_status_threshold, but as I understand it that is what is meant by COVERAGE_GAPS. I also wondered whether the reads might all have been filtered out due to low quality parameters, so I adjusted --minimum_base_quality and --minimum_mapping_quality to 0, but NO_READS is still flagged for a number of intervals despite IDP being far from 0. Is this a bug, or have I misunderstood something?

L. Pihlstrom

Created 2013-03-05 15:52:53 | Updated 2013-03-05 15:56:27 | Tags: depthofcoverage diagnosetargets
Comments (1)

Now that DepthOfCoverage is being retired in GATK 2.4, I decided to investigate DiagnoseTargets. I have some questions.

1) Are there plans to support output formats other than VCF? What was great about DOC is that I could easily send the output to anyone and it could be easily read. With DT, that requires additional processing.

2) DOC provided summary for intervals as well as samples. DT only does intervals. Is there a way to get per-sample info?

3) DOC output full intervals. DT only outputs the start positions. To get the end positions, an additional step is required. Can that be adjusted?

4) DOC provided coverage info for all intervals. DT only shows covered intervals, so if an interval is not covered, it will not be listed in the output. Is there a way to output all intervals?
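For questions 2 and 3, the DT VCF does already carry the needed pieces: per-sample FORMAT fields (FT:IDP:LL:ZL) and the interval end in the INFO END= field, so a little post-processing can flatten it into a per-sample table. A sketch, assuming GATK 3.x DiagnoseTargets output; the input file name is a placeholder:

```shell
# Sketch: emit one row per sample per interval:
# contig, start, end, sample, filter, mean depth.
# Sample names are taken from the #CHROM header line.
awk -F'\t' -v OFS='\t' '
/^#CHROM/ { for (i = 10; i <= NF; i++) name[i] = $i; next }
/^#/ { next }
{
    match($8, /END=[0-9]+/)                  # interval end lives in INFO
    end = substr($8, RSTART + 4, RLENGTH - 4)
    for (i = 10; i <= NF; i++) {
        split($i, f, ":")                    # f[1]=FT, f[2]=IDP, f[3]=LL, f[4]=ZL
        print $1, $2, end, name[i], f[1], f[2]
    }
}' diagnose_targets.vcf
```

This does not address intervals that are missing from the output entirely (question 4), since a row that was never written cannot be recovered downstream.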

Sorry if I sound too critical. I am a big fan of DepthOfCoverage and am disappointed to see it go.

Thank you.