Tagged with #output
2 documentation articles | 0 announcements | 5 forum discussions

Created 2012-08-14 20:25:10 | Updated 2012-10-18 15:27:03 | Tags: developer output

Comments (0)

1. Analysis output overview

In theory, any class implementing the OutputStream interface can serve as an output type. In practice, three types of classes are commonly used: PrintStreams for plain text files, SAMFileWriters for BAM files, and VCFWriters for VCF files.

2. PrintStream

To declare a basic PrintStream for output, use the following declaration syntax:

@Output
public PrintStream out;

And use it just as you would any other PrintStream:

out.println("Hello, world!");

By default, @Output streams prepopulate fullName, shortName, required, and doc. 'required' in this context means that the GATK will always fill in the contents of the out field for you: if the user specifies no --out command-line argument, the out field will be prepopulated with a stream pointing to System.out.
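The fallback behavior can be mimicked with plain Java; the following self-contained sketch shows the idea (resolveOut is a hypothetical helper for illustration, not a GATK API):

```java
import java.io.FileNotFoundException;
import java.io.PrintStream;

public class OutDefaultSketch {
    // Sketch of the engine's default: when the user gives no --out
    // argument, the output field falls back to System.out.
    static PrintStream resolveOut(String userPath) {
        try {
            return (userPath == null) ? System.out : new PrintStream(userPath);
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        PrintStream out = resolveOut(null); // no --out given
        out.println("Hello, world!");       // goes to standard output
    }
}
```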

If your walker outputs a custom format that requires more than simple concatenation by Queue, you should also implement a custom Gatherer.

3. SAMFileWriter

For some applications, you might need to manage your own SAM readers and writers directly from inside your walker. Current best practice for creating these readers/writers is to declare arguments of type SAMFileReader or SAMFileWriter, as in the following example:

@Output
SAMFileWriter outputBamFile = null;

If you do not specify the full name and short name, the writer will provide system default names for these arguments. Creating a SAMFileWriter in this way will create the type of writer most commonly used by members of the GSA group at the Broad Institute -- it will use the same header as the input BAM and require presorted data. To change either of these attributes, use the StingSAMFileWriter interface instead:

@Output
StingSAMFileWriter outputBamFile = null;

and later, in initialize(), run one or both of the following methods:

outputBamFile.writeHeader(customHeader);
outputBamFile.setPresorted(false);

You can change the header or presorted state until the first alignment is written to the file.
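This "mutable until first write" rule can be pictured with a small toy class; it is an illustrative model only, not the actual StingSAMFileWriter implementation:

```java
public class WriterStateSketch {
    // Toy model of the rule that the header and the presorted flag
    // may change only until the first alignment is written.
    private boolean firstRecordWritten = false;
    private boolean presorted = true;
    private String header = "default header";

    void writeHeader(String newHeader) {
        if (firstRecordWritten)
            throw new IllegalStateException("header is fixed after the first alignment");
        header = newHeader;
    }

    void setPresorted(boolean value) {
        if (firstRecordWritten)
            throw new IllegalStateException("presorted is fixed after the first alignment");
        presorted = value;
    }

    void addAlignment(String record) {
        firstRecordWritten = true; // from here on, header and presorted are frozen
    }

    boolean isPresorted() { return presorted; }
    String getHeader()    { return header; }
}
```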

4. VCFWriter

VCFWriter outputs behave similarly to PrintStreams and SAMFileWriters. Declare a VCFWriter as follows:

@Output(doc="File to which variants should be written",required=true)
protected VCFWriter writer = null;

5. Debugging Output

Walkers provide a protected logger instance. Users can adjust the logging level of the walkers using the -l command-line option.

Turning on verbose logging can produce more output than is really necessary. To selectively turn on logging for a class or package, specify a log4j.properties file on the command line as follows:

-Dlog4j.configuration=file:///<your development root>/Sting/java/config/log4j.properties

An example log4j.properties file is available in the java/config directory of the Git repository.
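A minimal log4j.properties along these lines might look as follows; the walker package name here is illustrative, so substitute the packages from your own tree:

```properties
# send everything at WARN or above to the console
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

# selectively turn on DEBUG logging for a single package
log4j.logger.org.broadinstitute.sting.gatk.walkers=DEBUG
```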

Created 2012-08-11 06:36:39 | Updated 2012-10-18 15:32:05 | Tags: developer output

Comments (0)

1. Introduction

When running either single-threaded or in shared-memory parallelism mode, the GATK guarantees that output written to an output stream created via the @Argument mechanism will ultimately be assembled in genomic order. In order to assemble the final output file, the GATK will write the output generated from each thread into a temporary output file, ultimately assembling the data via a central coordinating thread. There are three major elements in the GATK that facilitate this functionality:

  • Stub

    The front-end interface to the output management system. Stubs will be injected into the walker by the command-line argument system and relay information from the walker to the output management system. There will be one stub per invocation of the GATK.

  • Storage

    The back end interface, responsible for creating, writing and deleting temporary output files as well as merging their contents back into the primary output file. One Storage object will exist per shard processed in the GATK.

  • OutputTracker

    The dispatcher; ultimately connects the stub object's output creation request back to the most appropriate storage object to satisfy that request. One OutputTracker will exist per GATK invocation.

2. Basic Mechanism

Stubs are directly injected into the walker through the GATK's command-line argument parser as a go-between from walker to output management system. When a walker calls into the stub, the stub's first responsibility is to call into the output tracker to retrieve an appropriate storage object. The behavior of the OutputTracker from this point forward depends mainly on the parallelization mode of this traversal of the GATK.

If the traversal is single-threaded:

  • The OutputTracker (implemented as DirectOutputTracker) will create the storage object if necessary and return it to the stub.

  • The stub will forward the request to the provided storage object.

  • At the end of the traversal, the microscheduler will request that the OutputTracker finalize and close the file.

If the traversal is multi-threaded using shared-memory parallelism:

  • The OutputTracker (implemented as ThreadLocalOutputTracker) will look for a storage object associated with this thread via a ThreadLocal.

  • If no such storage object exists, it will be created pointing to a temporary file.

  • At the end of each shard processed, that file will be closed and an OutputMergeTask will be created so that the shared-memory parallelism code can merge the output at its leisure.

  • The shared-memory parallelism code will merge when a fixed number of temporary files appear in the input queue. The constant used to determine this frequency is fixed at compile time (see HierarchicalMicroScheduler.MAX_OUTSTANDING_OUTPUT_MERGES).
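The queue-and-threshold behavior above can be sketched as a toy model; the threshold value and class name here are illustrative, not the actual engine constants:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class MergeQueueSketch {
    // Toy model of the merge policy: each finished shard enqueues its
    // temporary file, and a merge is triggered once the queue reaches a
    // fixed threshold. The real compile-time constant is
    // HierarchicalMicroScheduler.MAX_OUTSTANDING_OUTPUT_MERGES; the value
    // used here is assumed for illustration.
    static final int MAX_OUTSTANDING = 3;

    private final Queue<String> pending = new ArrayDeque<>();
    final List<String> merged = new ArrayList<>();

    void shardFinished(String tempFile) {
        pending.add(tempFile);
        if (pending.size() >= MAX_OUTSTANDING)
            mergeAll();
    }

    private void mergeAll() {
        // the real engine appends each temporary file to the primary
        // output in genomic order and then deletes it
        while (!pending.isEmpty())
            merged.add(pending.poll());
    }
}
```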

3. Using output management

To use the output management system, declare a field in your walker of one of the existing core output types, coupled with either an @Argument or @Output annotation.

@Output(doc="Write output to this BAM filename instead of STDOUT")
SAMFileWriter out;

Currently supported output types are SAM/BAM (declare SAMFileWriter), VCF (declare VCFWriter), and any non-buffering stream extending from OutputStream.

4. Implementing a new output type

To create a new output type, three types must be implemented: Stub, Storage, and ArgumentTypeDescriptor.

To implement Stub

Create a new Stub class, extending/inheriting the core output type's interface and implementing the Stub interface.

public class OutputStreamStub extends OutputStream implements Stub<OutputStream> {

Implement a register function so that the engine can provide the stub with the session's OutputTracker.

public void register( OutputTracker outputTracker ) {
    this.outputTracker = outputTracker;
}

Add as fields any parameters necessary for the storage object to create temporary storage.

private final File targetFile;
public File getOutputFile() { return targetFile; }

Implement/override every method in the core output type's interface to pass along calls to the appropriate storage object via the OutputTracker.

public void write( byte[] b, int off, int len ) throws IOException {
    outputTracker.getStorage(this).write(b, off, len);
}

To implement Storage

Create a Storage class, again extending the core output type's interface and implementing the Storage interface.

public class OutputStreamStorage extends OutputStream implements Storage<OutputStream> {

Implement constructors that accept either the Stub alone or the Stub plus an alternate file path, creating a repository for the data, along with a close function that closes that repository.

public OutputStreamStorage( OutputStreamStub stub ) { ... }
public OutputStreamStorage( OutputStreamStub stub, File file ) { ... }
public void close() { ... }

Implement a mergeInto function capable of reconstituting the file created by the constructor, dumping it back into the core output type's interface, and removing the source file.

public void mergeInto( OutputStream targetStream ) { ... }
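The mergeInto contract can be sketched with plain java.nio standing in for the real Storage machinery: copy the temporary file's bytes into the target stream, then remove the source. The class and helper names here are illustrative, not part of any GATK API:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MergeIntoSketch {
    // Minimal mergeInto sketch: dump the temporary file into the target
    // stream, then delete the source file.
    static void mergeInto(Path tempFile, OutputStream target) {
        try {
            Files.copy(tempFile, target);
            Files.delete(tempFile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // helper for the example below (hypothetical, for illustration only)
    static Path writeTemp(String contents) {
        try {
            Path p = Files.createTempFile("shard", ".out");
            Files.write(p, contents.getBytes());
            return p;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path tmp = writeTemp("records from one shard\n");
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        mergeInto(tmp, sink);
        System.out.print(sink); // the shard's records, now in the target stream
    }
}
```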

Add a block to StorageFactory.createStorage() capable of creating the new storage object. TODO: use reflection to generate the storage classes.

    if(stub instanceof OutputStreamStub) {
        if( file != null )
            storage = new OutputStreamStorage((OutputStreamStub)stub,file);
        else
            storage = new OutputStreamStorage((OutputStreamStub)stub);
    }

To implement ArgumentTypeDescriptor

Create a new object inheriting from type ArgumentTypeDescriptor. Note that the ArgumentTypeDescriptor does NOT need to support the core output type's interface.

public class OutputStreamArgumentTypeDescriptor extends ArgumentTypeDescriptor {

Implement a truth function indicating which types this ArgumentTypeDescriptor can service.

public boolean supports( Class type ) {
    return OutputStream.class.isAssignableFrom(type);
}

Implement a parse function that constructs the new Stub object. The function should register this type as an output by calling engine.addOutput(stub).

public Object parse( ParsingEngine parsingEngine, ArgumentSource source, Type type, ArgumentMatches matches )  {
    OutputStreamStub stub = new OutputStreamStub(new File(fileName));
    engine.addOutput(stub);
    return stub;
}

Add a creator for this new ArgumentTypeDescriptor in CommandLineExecutable.getArgumentTypeDescriptors().

protected Collection<ArgumentTypeDescriptor> getArgumentTypeDescriptors() {
    return Arrays.asList( new VCFWriterArgumentTypeDescriptor(engine,System.out,argumentSources),
                          new SAMFileWriterArgumentTypeDescriptor(engine,System.out),
                          new OutputStreamArgumentTypeDescriptor(engine,System.out) );
}

After creating these three objects, the new output type should be ready for usage as described above.

5. Outstanding issues

  • Only non-buffering streams are currently supported by the GATK. Of particular note, PrintWriter will appear to drop records if created by the command-line argument system; use PrintStream instead.

  • For efficiency, the GATK does not reduce output files together following the tree pattern used by shared-memory parallelism; output merges happen via an independent queue. Because of this, output merges happening during a treeReduce may not behave correctly.

Created 2015-12-15 16:52:07 | Updated | Tags: haplotypecaller output

Comments (2)

Hello! On certain runs I get only a .vcf file as output, and sometimes I see both .vcf and .vcf.idx output files. Is there an issue with the runs that yield only the .vcf file and not the .vcf.idx?

Created 2014-05-28 00:42:16 | Updated | Tags: baserecalibrator output

Comments (3)

I've been running BaseRecalibrator for a while, and I've just realized that I have an empty file with the same name I've given to BaseRecalibrator as output (it may have been created from a previous aborted run). Will the tool write to this file when it is finished (ETA ~19 hours), or will it exit with an error? Also, are there any intermediate files created in generating the recalibration table, and if so, where should I look for them (output directory, log directory, directory from which I called GATK, ...)?

Created 2014-02-25 16:09:27 | Updated 2014-02-25 16:11:31 | Tags: output

Comments (11)

I want to pipe GATK output to standard output.

I am using a command like this (GATK v2.8-1-g932cd3a): java -Xmx4g -jar GenomeAnalysisTK.jar -R human_g1k_v37.fasta -T CombineVariants -V in1.vcf.gz -V in2.vcf.gz -o /dev/stdout

However, GATK echoes the INFO logging to standard output, mixing in information that is not meant to end up in a VCF file.

I have also tried the following command line: java -Xmx4g -jar GenomeAnalysisTK.jar -R human_g1k_v37.fasta -T CombineVariants -V in1.vcf.gz -V in2.vcf.gz -log /dev/stderr -o /dev/stdout

But this only sends the INFO information to both standard output and standard error.

Is there a way to have GATK not use the standard output to communicate information to the user?

I have checked the documentation at http://www.broadinstitute.org/gatk/gatkdocs/org_broadinstitute_sting_gatk_CommandLineGATK.html#--log_to_file but I don't understand how I could do this.

Created 2013-07-25 04:08:18 | Updated 2013-07-25 04:10:10 | Tags: haplotypecaller output indels

Comments (1)

Dear Team, I am using HaplotypeCaller to identify the indels in my sequence. I am running it on the command line:

java -jar /illumina/data/galaxy/apps/GenomeAnalysisTK-2.5-2-gf57256b/GenomeAnalysisTK.jar -T HaplotypeCaller -R genome.fa -I S21_full.picard.bam -o indels_S21.vcf

but I couldn't find any entries in the output file; the file size also shows 0.

INFO 09:14:25,280 ProgressMeter - Location processed.active regions runtime per.1M.active regions completed total.runtime remaining
INFO 09:14:55,283 ProgressMeter - chr1:4710472 4.72e+06 30.0 s 6.0 s 0.2% 5.5 h 5.4 h
INFO 09:15:25,284 ProgressMeter - chr1:8280981 8.29e+06 60.0 s 7.0 s 0.3% 6.2 h 6.2 h
INFO 09:15:55,285 ProgressMeter - chr1:11311033 1.13e+07 90.0 s 7.0 s 0.4% 6.8 h 6.8 h
INFO 09:16:25,286 ProgressMeter - chr1:15877953 1.59e+07 120.0 s 7.0 s 0.5% 6.5 h 6.5 h
INFO 09:16:55,287 ProgressMeter - chr1:19156923 1.92e+07 2.5 m 7.0 s 0.6% 6.7 h 6.7 h
INFO 09:17:25,288 ProgressMeter - chr1:22167191 2.22e+07 3.0 m 8.0 s 0.7% 7.0 h 6.9 h
INFO 09:17:55,289 ProgressMeter - chr1:24574489 2.46e+07 3.5 m 8.0 s 0.8% 7.3 h 7.3 h

Please suggest on the same..

Thanks Sridhar

Created 2013-03-01 12:16:11 | Updated | Tags: documentation output

Comments (6)

Hi, I use the option to get extended output from mutect and it works well. Unfortunately I cannot find a detailed description of all the columns, only the ones that are in the regular output. Is it possible to have this kind of documentation? thanks a lot