Memory and R/W space required by UnifiedGenotyper
Posted in Ask the team | Last updated on 2012-10-19 00:51:12



Background: I am testing GATK (ver. 2.0-39) for use in de novo SNP identification using targeted Illumina sequencing against a set of ~2,500 genes from 28 different individual genotypes of the same species. These are PE50 and PE100 libraries. I do not have a defined set of known indels or SNPs to use, as called for in the GATK Phase 1 best practices. The genome sequence for this organism is a first draft (~2.2 Gb, with ~835,000 clusters/contigs). I decided to first test four libraries (two PE50 and two PE100), check the results, and tweak switches as necessary before scaling up to the full set of sample libraries. So far I have:

  1. Assigned read groups and mapped the reads of the four test libraries (individually) to the reference using bowtie2
  2. Sorted the resulting BAMs, then combined them into a single 12 GB BAM file
  3. Run GATK ReduceReads to generate a 6 GB BAM file
  4. Run UnifiedGenotyper with the command below (a rough sketch of the commands used for steps 1-3 follows it):


java -Djava.io.tmpdir=/path/tmp_dir -jar /path/GenomeAnalysisTK.jar \
    -T UnifiedGenotyper \
    -R speciesname_idx/speciesname.fasta \
    -I 4.libs_reduced.bam \
    -o 4.libs.UG \
    -nt 6
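For reference, steps 1-3 above were run roughly along these lines; the file names, sample/library names, and some flags are simplified placeholders, so treat this as a sketch of the workflow rather than my exact commands:

# 1. Map each library to the draft reference with bowtie2, attaching a read group
bowtie2 -p 6 -x speciesname_idx/speciesname -1 lib1_R1.fastq -2 lib1_R2.fastq \
    --rg-id lib1 --rg SM:sample1 --rg LB:lib1 --rg PL:ILLUMINA \
    | samtools view -bS - > lib1.bam

# 2. Sort each library's BAM, then merge the four into the single 12 GB file
samtools sort lib1.bam lib1.sorted
samtools merge 4.libs.bam lib1.sorted.bam lib2.sorted.bam lib3.sorted.bam lib4.sorted.bam
samtools index 4.libs.bam

# 3. Compress the merged BAM with ReduceReads (this produced the ~6 GB file)
java -Djava.io.tmpdir=/path/tmp_dir -jar /path/GenomeAnalysisTK.jar \
    -T ReduceReads -R speciesname_idx/speciesname.fasta \
    -I 4.libs.bam -o 4.libs_reduced.bam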

My questions are:

  1. Can GATK be run efficiently without the Phase 1 processing?
  2. Is the reference genome too large, with respect to the number of clusters/contigs?
  3. Would one expect this approach to require an inordinate amount of time to process a dataset of this size and complexity?

The program initially died because Java didn't have enough write space, so I pointed it at a dedicated tmp directory. It then ran for three days and died after hitting a hard 2 TB directory size limit. I am now running it again with a 4 TB limit.
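For what it's worth, this is roughly how I've been watching the temp directory fill up while the job runs (/path/tmp_dir is the same placeholder as in the command above):

df -h /path/tmp_dir     # free space on the filesystem holding the tmp dir
du -sh /path/tmp_dir    # current size of the tmp dir itself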

After 27 hr, I have only traversed 5.2% of the genome (if I'm understanding the stdout correctly).

INFO  16:33:47,746 TraversalEngine - ctg7180006247957:754        1.15e+08   26.9 h       14.0 m      5.2%         3.1 w     2.9 w

So, at this rate, it will take roughly 21 days to genotype just these four libraries, which are only ~15% of the full set. I thought maybe excessive swapping might be slowing things down, but of the 126 GB of RAM available, only ~20-30 GB are being used across my several other jobs, so that's not likely the issue.
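(A quick back-of-the-envelope check on that projection, using the elapsed time and percent complete from the ProgressMeter line above:)

# 26.9 hours elapsed / 5.2% complete, converted to days
echo "26.9 / 0.052 / 24" | bc -l    # ~21.6 days, consistent with the ~3-week estimate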

I have no experience with this program, but this just seems way too slow for a relatively small dataset, and I wonder whether it will ever be able to crunch through the full set of 28 libraries.

Any suggestions or thoughts as to why this is occurring, and what I might be able to do to speed things up, would be greatly appreciated!

Walt

