Tagged with #downsampling
1 documentation article | 0 announcements | 7 forum discussions


Comments (15)

1. Introduction

Reads can be filtered out of traversals either by pileup size, through one of our downsampling methods, or by read property, through our read filtering mechanism. Both techniques are described below.

2. Downsampling

Normal sequencing and alignment protocols can often yield pileups with vast numbers of reads aligned to a single section of the genome in otherwise well-behaved datasets. Because of the frequency of these 'speed bumps', the GATK now downsamples pileup data unless explicitly overridden.

Defaults

The GATK's default downsampler exhibits the following properties:

  • The downsampler treats data from each sample independently, so that high coverage in one sample won't negatively impact calling in other samples.

  • The downsampler attempts to downsample uniformly across the range spanned by the reads in the pileup.

  • The downsampler's memory consumption is proportional to the sampled coverage depth rather than the full coverage depth.

By default, the downsampler is limited to 1000 reads per sample. This value can be adjusted either per-walker or per-run.

Customizing

From the command line:

  • To disable the downsampler, specify -dt NONE.

  • To change the default per-sample coverage, specify the desired coverage with the -dcov option (an example invocation follows this list).
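
For example, a hypothetical invocation capping per-sample coverage at 250 might look like the following; the tool, reference, and file names here are placeholders, not required values:

java -jar GenomeAnalysisTK.jar -T UnifiedGenotyper -R reference.fasta -I sample.bam -dcov 250 -o calls.vcf

To disable downsampling entirely for the same run, replace -dcov 250 with -dt NONE.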

To modify the walker's default behavior:

  • Add the @Downsample annotation to the top of your walker (see the sketch below). Override the downsampling type by changing by=<value>, and override the downsampling depth by changing toCoverage=<value>.
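
As a minimal sketch (the walker class name, its base class type parameters, and the particular DownsampleType value chosen here are illustrative assumptions, not requirements), the annotation is applied like this:

@Downsample(by = DownsampleType.BY_SAMPLE, toCoverage = 600)
public class MyLocusWalker extends LocusWalker<Integer, Long> {
    // walker implementation goes here
}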

Algorithm details

The downsampling algorithm is designed to maintain uniform coverage while preserving a low memory footprint in regions of especially deep data. Given an already established pileup, a single-base locus, and a pile of reads whose alignment start is the single-base locus + 1, the outline of the algorithm is as follows (a code sketch follows the list):

For each sample:

  • Select reads with the next alignment start.

  • While the number of existing reads + the number of incoming reads is greater than the target sample size:

    Walk backward through each set of reads having the same alignment start. If the count of reads having the same alignment start is > 1, throw out one randomly selected read.

  • If we have n slots available, where n >= 1, randomly select n of the incoming reads and add them to the pileup.

  • Otherwise, we have zero slots available: choose the read from the existing pileup with the lowest alignment start, throw it out, and add one randomly selected read from the incoming pile.
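
The following is a minimal, illustrative sketch of that outline in Java. It is not the actual GATK downsampler; the Read type, class name, and method names are hypothetical, and the code handles a single sample only, matching the per-sample treatment described above.

import java.util.*;

// Sketch of the per-sample leveling outline above; not the GATK implementation.
class Read {
    final int alignmentStart;
    Read(int alignmentStart) { this.alignmentStart = alignmentStart; }
}

class PerSampleDownsampler {
    private final int targetSampleSize;
    private final Random random = new Random();
    // Existing pileup for one sample, grouped by alignment start and ordered by position.
    private final TreeMap<Integer, List<Read>> pileup = new TreeMap<>();

    PerSampleDownsampler(int targetSampleSize) { this.targetSampleSize = targetSampleSize; }

    // Add incoming reads (all sharing the next alignment start) while leveling toward the target size.
    void addIncoming(List<Read> incoming) {
        // While over the target, walk backward through the groups sharing an alignment start
        // and evict one randomly chosen read from any group holding more than one read.
        Iterator<List<Read>> backward = pileup.descendingMap().values().iterator();
        while (size() + incoming.size() > targetSampleSize && backward.hasNext()) {
            List<Read> group = backward.next();
            if (group.size() > 1) group.remove(random.nextInt(group.size()));
        }

        int slots = targetSampleSize - size();
        if (slots >= 1) {
            // Randomly keep up to 'slots' of the incoming reads.
            List<Read> shuffled = new ArrayList<>(incoming);
            Collections.shuffle(shuffled, random);
            for (Read r : shuffled.subList(0, Math.min(slots, shuffled.size()))) add(r);
        } else if (!incoming.isEmpty() && !pileup.isEmpty()) {
            // Zero slots: evict the read with the lowest alignment start,
            // then keep one randomly selected incoming read in its place.
            Map.Entry<Integer, List<Read>> first = pileup.firstEntry();
            first.getValue().remove(0);
            if (first.getValue().isEmpty()) pileup.remove(first.getKey());
            add(incoming.get(random.nextInt(incoming.size())));
        }
    }

    private void add(Read r) {
        pileup.computeIfAbsent(r.alignmentStart, k -> new ArrayList<>()).add(r);
    }

    private int size() {
        int n = 0;
        for (List<Read> group : pileup.values()) n += group.size();
        return n;
    }
}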

3. Read filtering

To selectively filter out reads before they reach your walker, implement one or more net.sf.picard.filter.SamRecordFilters and attach them to your walker as follows:

@ReadFilters({Platform454Filter.class, ZeroMappingQualityReadFilter.class})

4. Command-line arguments for read filters

You can add command-line arguments for filters with the @Argument tag, just as with walkers. Here's an example of our new max read length filter:

// Imports assume the GATK 2.x-era package layout for SAMRecord and @Argument.
import net.sf.picard.filter.SamRecordFilter;
import net.sf.samtools.SAMRecord;
import org.broadinstitute.sting.commandline.Argument;

public class MaxReadLengthFilter implements SamRecordFilter {
    // Discards any read whose length exceeds the user-supplied maxReadLength.
    @Argument(fullName = "maxReadLength", shortName = "maxRead", doc="Discard reads with length greater than the specified value", required=false)
    private int maxReadLength;

    public boolean filterOut(SAMRecord read) { return read.getReadLength() > maxReadLength; }
}

Adding this filter to the top of your walker using the @ReadFilters annotation will add a new command-line argument, maxReadLength, which will filter out reads longer than maxReadLength before your walker is called.

Note that when you specify a read filter on the command line, you need to strip the Filter part of its name off. For example, to use MaxReadLengthFilter from the example above, you need to call it like this:

--read_filter MaxReadLength

5. Adding filters dynamically using command-line arguments

The --read_filter argument will allow you to apply whatever read filters you'd like to your dataset before the reads reach your walker. To add the MaxReadLength filter above to PrintReads, you'd add the command-line parameters:

--read_filter MaxReadLength --maxReadLength 76

You can add as many filters as you like by using multiple copies of the --read_filter parameter:

--read_filter MaxReadLength --maxReadLength 76 --read_filter ZeroMappingQualityRead
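
Putting it together, a hypothetical PrintReads run using both filters might look like the following (the reference and file names are placeholders):

java -jar GenomeAnalysisTK.jar -T PrintReads -R reference.fasta -I input.bam -o filtered.bam --read_filter MaxReadLength --maxReadLength 76 --read_filter ZeroMappingQualityRead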
Comments (5)

Hello,

I ran PrintReads with the -ds option and got the error message shown in the title. Looking into the docs, I see that there are other options for downsampling under CommandLineGATK, like downsample to coverage, downsample to fraction, and downsampling type. Am I right to think that the previous option -ds is equivalent to -dfrac combined with -dt ALL_READS?

Best regards, Jacek

Comments (6)

Hello!

I'm trying to downsample in an orderly fashion in the name of experimentation, and in doing so I would like to specify just one chromosome for the experiment, so I picked chromosome 17 with -L and a coverage of 30x with -dcov 30. This came up:

ERROR MESSAGE: Locus-based traversals (ie., Locus and ActiveRegion walkers) require a minimum -dcov value of 200 when downsampling to coverage. Values less than this can produce problematic downsampling artifacts while providing only insignificant improvements in memory usage in most cases.

I was hoping to poke through results from using the HaplotypeCaller with many different simulated depths of coverage for several samples. I read that one can use -dfrac instead, and that it might even be more appropriate, though I was hoping to find out what level of coverage led to what level of results, and using -dfrac feels much less specific, as it appears to toss a fraction of however many reads were at a given position rather than tossing reads over a certain coverage. Thus with -dfrac, I could say that my sample had an average of 30x for this chromosome and I tossed half, so theoretically I've simulated 15x depth of coverage...

Which approach would be more representative of reality? Using -dfrac to simulate a certain depth of coverage, or -dcov assuming I didn't have the 200 restriction?

Thanks for any help/discussion! -Tristan

Comments (26)

Hello,

I'm running the Haplotype Caller on 80 bams, multithreaded (-nt 14), with either vcfs or bed files as the intervals file. I am trying to downsample to approximately 200 reads (-dcov 200). However, my output vcfs clearly have much higher depth (over 300-500 reads) in some areas. Also, HC drastically slows down whenever it hits a region of very deep pile-ups (over 1000 reads).

Is there any way to speed up HC on regions of very deep pileups? Is there a way to make downsampling stricter?

Thank you for your help. Elise

Comments (2)

I have noticed some strange structure in the samples produced by the LevelingDownSampler class and was hoping someone might be nice enough to explain what is going on and how to avoid it.

I was attempting to use the unified genotyper to infer frequencies of heteroplasmic mutations in mitochondrial DNA by setting a high ploidy level to account for the multiplicity of mtDNA in a sample. However, when test datasets with different mitochondrial sequences were combined at known frequencies, the frequency inferred by the GATK (alleles at different levels of ploidy) was consistently different from the known mixing frequency. In other words, the recovered frequencies were biased.

Since the bias was reduced as the dcov parameter was increased, I looked at the downsampler class to check for issues. It appeared that the levelGroups function was not randomly sampling reads, as there was a lot of structure in the groups it was leveling, as well as in the way it chose to level them.

I created a toy dataset by placing a SNP in a 300 bp window with high coverage and then merging two BAM files with samtools. I then had the class output at the end of the GATK analysis the number of reads that came from each original alignment position and from each file (or those reads returned by the consumeFinalizedItems method). The two issues were:

First, the downsampler appeared to sample too many reads at the start of the genome segment, before leveling off to the correct level and then dropping again at the end of the segment being sequenced (even though plenty of reads were available). Is it possible to avoid this?

Second, although in the middle of the genomic segment the read counts were uniformly distributed, their source in the BAM file was not. At low downsampling settings (<75) there would be bursts where one section of reads came entirely from one of the source bam files, followed by another burst where all the reads came from a different bam file (even though these files were combined in the merge). Does anyone know whether spatial structure in the BAM file will appear as spatial structure in the downsampled reads? Is there any way to avoid this behavior?

Comments (12)

I ran the same sample through a pipeline using GATK twice and received different variants. I am trying to understand the reason behind this. My samples are from a MiSeq/capture kit run, and downsampling could be one reason (given that in one scenario the variant is called and in the other it isn't); the variant is present at 32% when I look into the .bam files.

As I understand it, the UnifiedGenotyper randomly downsamples my dataset to 250 reads, so I played around with the -dcov parameter:

  • With the same sample run twice, the 1st run reports a variant and the 2nd run doesn't.
  • With -dcov raised to 1000, neither run reports the variant.
  • With -dcov raised to 10,000, the 1st run again reports the variant and the 2nd run doesn't.
  • With -dt NONE, both runs call the variant.

But setting -dt to NONE could be computationally expensive for a big sample set. Is there an identifiable reason why this is happening?

Curious..!

Comments (3)

I haven't been using GATK for long, but I assumed that the downsample_to_coverage feature wouldn't ever be a cause for concern. I just tried running UnifiedGenotyper with -dcov set at 500, 5,000, and 50,000 on the same 1-sample BAM file. One would expect the results to be similar. However, 500 yielded 26 variants, 5,000 yielded 13, and 50,000 yielded just 1. The depth of that one variant was about 1,300 at the 50,000 cutoff. Why are the results so different?

Most of the other variants in the biggest set were cut off at 500, so some reads were filtered. A few of them are at relatively low frequency, but most are at 25% or higher. If they were appearing by chance, they should not be at such high frequencies.

In addition, there are some variants whose depth is below 500, so they should not be affected by the cutoff. Why do those show up with the low cutoff and not the higher cutoffs?

I am using GATK 2.1-8. I am looking at a single gene only, so that is why there are so few variants and such high coverage.

Comments (4)

I am trying to understand how VariantAnnotator functions. I took the vcf file output by UnifiedGenotyper and ran VariantAnnotator with the same .bam and .bed files I used for running UnifiedGenotyper. I expected that all the calculations in the INFO field would remain the same, since I am using the same input files. However, I find that some fields have different values. Here is an example. UnifiedGenotyper output:

chr22   30094366        .       G       A       172.01  .       AC=1;AF=0.500;AN=2;BaseQRankSum=3.182;DP=244;DS;Dels=0.00;FS=0.000;HaplotypeScore=118.5897;MLEAC=1;MLEAF=0.500;MQ=43.29;MQ0=0;MQRankSum=-0.049;QD=0.70;ReadPosRankSum=1.428;SB=-6.201e+01        GT:AD:DP:GQ:PL  0/1:220,19:244:99:202,0,2693 

VariantAnnotator output:

chr22   30094366        .       G       A       172.01  .       ABHet=0.923;AC=1;AF=0.500;AN=2;BaseQRankSum=3.182;DP=993;DS;Dels=0.00;FS=0.000;HaplotypeScore=454.8386;MLEAC=1;MLEAF=0.500;MQ=43.29;MQ0=0;MQ0Fraction=0.0000;MQRankSum=-0.378;OND=0.034;QD=0.17;ReadPosRankSum=-4.859;SB=-6.201e+01      GT:AD:DP:GQ:PL  0/1:220,19:244:99:202,0,2693

I am running GATKLite 2.1. Notice that DP in the INFO field has a different value; HaplotypeScore, QD, MQRankSum, etc. have different values as well. Am I doing anything wrong? Shouldn't these values be the same when I recalculate these fields using VariantAnnotator?