This is not a bug per se in that it does not cause incorrect output, but I think it would be accurately described as an "unintended consequence" of very poorly compressed VCF output files.
GATK allows output VCF files to be written using Picard's
BlockCompressedOutputStream when the output file is specified with the extension
.vcf.gz, which I consider very good behavior. However, after doing some minor external manipulation, I noticed that the files produced this way are "suboptimally" compressed. By suboptimal, I mean that the files are sometimes even larger than the uncompressed VCF files.
Since the problem occurs in GATK-Lite, I was able to look through the source code to see what is going on. From what I can tell, the issue is that
mWriter.flush() is called at the end of
VCFWriter.add() for each variant. Per the documentation for BlockCompressedOutputStream.flush():
WARNING: flush() affects the output format, because it causes the current contents of uncompressedBuffer to be compressed and written, even if it isn't full.
As a result, instead of the default block size of about 64 KB, the BGZF-formatted
.vcf.gz files produced by GATK contain a block for each line. That reduces the amount of repetition for gzip to take advantage of. Not being sure what issues led to requiring a call to flush() after every variant, I'm not sure of the best way to address this, but it may be necessary to wrap BlockCompressedOutputStream when used by VCFWriter to intercept this flush in order to get effective compression.
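To illustrate the cost of flushing after every record, here is a small stdlib-only sketch using java.util.zip rather than the actual Picard classes. Note it actually understates the problem: deflate with sync-flush keeps its dictionary across flushes, whereas BGZF blocks are compressed fully independently, so the real penalty for one-block-per-line is even larger. The class and data are hypothetical, for demonstration only.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class FlushDemo {
    // Compress the lines into a single gzip stream, optionally calling
    // flush() after every line. syncFlush=true makes flush() emit the
    // deflater's buffered data immediately, loosely analogous to
    // BlockCompressedOutputStream.flush() closing out a BGZF block early.
    public static int compressedSize(String[] lines, boolean flushEachLine) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(bytes, true);
            for (String line : lines) {
                gz.write(line.getBytes("UTF-8"));
                if (flushEachLine) {
                    gz.flush(); // forces out a partial deflate block per line
                }
            }
            gz.close();
            return bytes.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Fake but repetitive VCF-like records, as real variant lines are.
        String[] lines = new String[1000];
        for (int i = 0; i < lines.length; i++) {
            lines[i] = "chr1\t" + (10000 + i) + "\t.\tA\tG\t50.0\tPASS\tDP=30;AF=0.5\n";
        }
        System.out.println("single stream     : " + compressedSize(lines, false) + " bytes");
        System.out.println("flush per variant : " + compressedSize(lines, true) + " bytes");
    }
}
```

Running this shows the per-variant-flush stream is substantially larger, purely from the per-flush framing overhead; with BGZF's independent blocks the loss of cross-record redundancy compounds it.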
Of course, it is possible to simply write the file and then compress it in a separate step, but this incurs extra disk I/O that should be preventable.
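The wrapping approach suggested above might look something like the following: a FilterOutputStream that swallows intermediate flush() calls so only close() can finish the block. This is a minimal sketch, not GATK code; the class name is hypothetical, and it would sit between VCFWriter and the BlockCompressedOutputStream.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Suppresses flush() so per-record flushes cannot force the underlying
// block-compressed stream to close a BGZF block early; the single real
// flush is deferred to close().
public class FlushSuppressingOutputStream extends FilterOutputStream {
    public FlushSuppressingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Forward bulk writes directly instead of FilterOutputStream's
        // default byte-at-a-time loop.
        out.write(b, off, len);
    }

    @Override
    public void flush() throws IOException {
        // Deliberately a no-op: let the underlying stream fill its
        // ~64 KB blocks on its own schedule.
    }

    @Override
    public void close() throws IOException {
        out.flush(); // the one real flush, at end of file
        out.close();
    }
}
```

Whether this is safe depends on why the per-variant flush was added in the first place; if something downstream relies on records being readable immediately, suppressing flush() would break that.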