Tagged with #dpp
0 documentation articles | 0 announcements | 4 forum discussions

No articles to display.


Created 2013-01-30 16:36:38 | Tags: dpp

Comments (5)


When I run the DPP I get an error. I think it is because the command script '.exec5886755893230759761' is removed from the temp folder while it is still in use. I have tried the '-keepIntermediates' option, but the script is still being deleted.

Any suggestions to avoid it?

Thank you in advance. Best,


The error (I have the complete output if needed):

DEBUG 15:42:19,212 FunctionEdge - Starting: /user > 'java' '-Xmx4096m' '-XX:+UseParallelOldGC' '-XX:ParallelGCThreads=4' '-XX:GCTimeLimit=50' '-XX:GCHeapFreeLimit=10' '-Djava.io.tmpdir=/temp' '-cp' '/My-programs/QueueLite-2.3-9-gdcdccbb/QueueLite.jar' 'net.sf.picard.sam.RevertSam' 'INPUT=/My-programs/QueueLite-2.3-9-gdcdccbb/resources/exampleBAM.bam' 'TMP_DIR=/temp' 'OUTPUT=/user/exampleBAM.reverted.bam' 'VALIDATION_STRINGENCY=SILENT' 'SO=queryname' 'CREATE_INDEX=true'

INFO 15:42:19,212 FunctionEdge - Output written to /user/exampleBAM.reverted.bam.out

DEBUG 15:42:19,214 IOUtils - Deleted /user/exampleBAM.reverted.bam.out

DEBUG 15:42:19,260 IOUtils - Deleted /temp/.exec5886755893230759761

ERROR 15:42:19,277 FunctionEdge - Error: 'java' '-Xmx4096m' '-XX:+UseParallelOldGC' '-XX:ParallelGCThreads=4' '-XX:GCTimeLimit=50' '-XX:GCHeapFreeLimit=10' '-Djava.io.tmpdir=/temp' '-cp' '/My-programs/QueueLite-2.3-9-gdcdccbb/QueueLite.jar' 'net.sf.picard.sam.RevertSam' 'INPUT=/My-programs/QueueLite-2.3-9-gdcdccbb/resources/exampleBAM.bam' 'TMP_DIR=/temp' 'OUTPUT=/user/exampleBAM.reverted.bam' 'VALIDATION_STRINGENCY=SILENT' 'SO=queryname' 'CREATE_INDEX=true'
org.broadinstitute.sting.utils.exceptions.ReviewedStingException: Unable to start command: sh /lustre/scratch109/sanger/ec8/REDUCED_BAM_files/temp/.exec5886755893230759761
    at org.broadinstitute.sting.utils.runtime.ProcessController.exec(ProcessController.java:168)
    at org.broadinstitute.sting.queue.engine.shell.ShellJobRunner.start(ShellJobRunner.scala:69)
    at org.broadinstitute.sting.queue.engine.FunctionEdge.start(FunctionEdge.scala:83)
    at org.broadinstitute.sting.queue.engine.QGraph.runJobs(QGraph.scala:433)
    at org.broadinstitute.sting.queue.engine.QGraph.run(QGraph.scala:155)
    at org.broadinstitute.sting.queue.QCommandLine.execute(QCommandLine.scala:169)
    at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:237)
    at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:147)
    at org.broadinstitute.sting.queue.QCommandLine$.main(QCommandLine.scala:61)
    at org.broadinstitute.sting.queue.QCommandLine.main(QCommandLine.scala)

INFO 15:42:19,280 QGraph - 12 Pend, 1 Run, 0 Fail, 0 Done
INFO 15:42:49,189 QGraph - Writing incremental jobs reports...
INFO 15:42:49,191 QGraph - 12 Pend, 0 Run, 1 Fail, 0 Done
INFO 15:42:49,194 QCommandLine - Script failed with 13 total jobs
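For what it's worth, the log shows Queue writing its '.execNNN' wrapper scripts under the directory passed as '-Djava.io.tmpdir', so one workaround worth trying is to point that at a filesystem that is not auto-purged while jobs run. A minimal sketch of the launch command (all paths and the jar/script names are placeholders, not a confirmed fix):

```shell
# Hypothetical workaround: keep Queue's intermediate .exec scripts in a
# temp directory that no scratch-cleanup policy will delete mid-run.
TMP_DIR="$HOME/queue_tmp"        # placeholder persistent location
mkdir -p "$TMP_DIR"

# Queue writes its .execNNN wrapper scripts under java.io.tmpdir,
# so point that at the persistent directory when launching.
CMD="java -Xmx4g -Djava.io.tmpdir=$TMP_DIR -jar QueueLite.jar \
  -S DataProcessingPipeline.scala -keepIntermediates -run"
echo "$CMD"
```

If the deletions stop once the temp directory moves, the cause was the scratch filesystem's cleanup policy rather than Queue itself.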

Created 2013-01-10 20:43:28 | Updated 2013-01-10 20:49:09 | Tags: dataprocessingpipeline queue dpp performance community

Comments (3)

Is there any rule of thumb for allocating memory through "bsub" when running the DataProcessingPipeline, per BAM file or per number of reads?


Created 2012-11-27 08:17:54 | Updated 2013-01-07 19:51:45 | Tags: dataprocessingpipeline dpp

Comments (3)

Dear all, I am using Queue and DataProcessingPipeline.scala (from https://github.com/broadgsa/gatk/blob/master/public/scala/qscript/org/broadinstitute/sting/queue/qscripts/DataProcessingPipeline.scala) to process my BAM file. The input is a sample-level BAM file that has been aligned with BWA, paired with samtools sampe, then merged and duplicate-removed with Picard. The output BAM file (~200 GB/sample) is much larger than the input BAM file (~80 GB/sample). I want to know what information was added to the BAM file. Thanks a lot. My Queue version is Queue-2.1-10. My script:

java \
    -Xmx4g \
    -Djava.io.tmpdir=../tmp/ \
    -jar ./Queue-2.1-10/Queue.jar \
    -S ./DataProcessingPipeline.scala \
    -i input.bam \
    -R /db/human_g1k_v37.fasta \
    -D /db//dbSnp_b137.vcf \
    -run
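One way to start answering this yourself is to compare per-read fields between input and output: pipelines of this era could write the original base qualities into an OQ tag during recalibration, which alone adds a quality-string-sized field to every read. A sketch that quantifies the inflation reported above and prints (but does not run) a hypothetical inspection command; filenames are placeholders:

```shell
# Quantify the inflation reported in the post: 80 GB in, 200 GB out.
IN_GB=80
OUT_GB=200
RATIO_X10=$(( OUT_GB * 10 / IN_GB ))      # ratio x10; 25 means 2.5x larger
echo "size ratio (x10): ${RATIO_X10}"

# Hypothetical inspection -- run where samtools is installed and eyeball
# the optional tags (e.g. OQ:Z:...) on a few reads of each file:
INSPECT='samtools view output.bam | head -n 2'
echo "$INSPECT"
```

If the output reads carry tags the input reads lack, that is where the extra bytes are going.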

Created 2012-09-28 17:09:50 | Updated 2013-01-07 20:38:01 | Tags: dataprocessingpipeline dpp community

Comments (2)

I have a question that might be more appropriate for a BWA or SEQanswers audience, but something I noticed in the GATK's "Data Processing Pipeline" under "Methods and Workflows" made me wonder. The pipeline is described here, and there's a nice flowchart as well: http://www.broadinstitute.org/gatk/guide/topic?name=methods-and-workflows#41

The process describes a BAM of reads that are either not aligned or aligned by some process you don't want to use, so the first step seems to be Picard's RevertSam, followed by realignment with BWA. I'm wondering why the process described in this GATK document splits the data into per-lane BAM files: there doesn't seem to be anything done at the per-lane level other than BWA alignment. I have two guesses. The first is that it allows more parallelization at that step.

But my second guess is that perhaps BWA doesn't play nice with read groups when reading from BAM input files. If that is true, it would explain why I'm having trouble with BAM (a single sample, multiple lanes, merged into one file) -> BWA -> realigned BAM -> GATK (UnifiedGenotyper, RealignerTargetCreator, etc.): somewhere along the way, read groups are getting lost. So my guess is that the pipeline described above splits per lane so it can manually re-specify read groups to BWA?

Is that other people's experience as well?
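On the read-group guess: bwa of that era can attach at most one read group per alignment run, via the -r flag of sampe/samse, so a merged multi-lane BAM cannot carry several read groups through BWA in one pass. Splitting per lane and re-specifying each lane's read group is the usual workaround; a sketch, with every lane, sample, and file name made up:

```shell
# Hypothetical per-lane realignment: embed exactly one read group per
# lane using bwa sampe's -r flag (all names below are placeholders).
RG='@RG\tID:lane1\tSM:sample1\tLB:lib1\tPL:ILLUMINA'
CMD="bwa sampe -r '$RG' ref.fasta lane1_1.sai lane1_2.sai lane1_1.fq lane1_2.fq"
echo "$CMD"
```

Repeating this per lane and merging afterwards would preserve the read groups that a single merged-BAM pass loses.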

Also, the Methods and Workflows page describes a Queue script, but there's no link to the actual script. Does anyone know where to find it?