Processing data originating from the Pacific Biosciences RS platform has been evaluated by the GSA and publicly presented on numerous occasions. The guidelines we describe in this document are the result of a systematic technology development experiment on several datasets (human, E. coli and Rhodobacter) from the Broad Institute. These guidelines produced better results than those obtained using alternative pipelines to date (September 2011) for the datasets tested, but there is no guarantee that they will be the best for every dataset or that other pipelines won't supersede them in the future.
The pipeline we propose here is illustrated in a Q script (PacbioProcessingPipeline.scala) distributed with the GATK as an example for educational purposes. This pipeline has not been extensively tested and is not supported by the GATK team. You are free to use it and modify it for your needs following the guidelines below.
First we take the filtered_subreads.fq file output by the Pacific Biosciences RS SMRT pipeline and align it using BWA. We use BWA with the bwasw algorithm and relax the gap open penalty to account for the excess of insertions and deletions that are typical error modes of this data. For an idea of what parameters to use, check the PacBio-specific suggestions given by the BWA author in the BWA manual page. The goal is to account for the known error mode of the Pacific Biosciences RS and to benefit from the long reads for a high-scoring overall match. (For older versions of the SMRT pipeline, you can use the filtered_subreads.fasta file and combine it with the base quality scores extracted from the h5 files using the Pacific Biosciences SMRT pipeline Python tools.)
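As a sketch, the alignment step might look like the following. File names are placeholders, and the penalty values shown are illustrative assumptions only; take the actual PacBio-specific values from the BWA manual.

```shell
# Index the reference once (bwasw uses the BWT index).
bwa index reference.fasta

# Align the PacBio filtered subreads with the bwasw algorithm.
# -q lowers the gap open penalty relative to the mismatch penalty (-b)
# to tolerate the indel-heavy PacBio error mode; -r is the gap extension
# penalty and -z the Z-best heuristic. The values below are illustrative
# placeholders -- consult the BWA manual for the author's PacBio settings.
bwa bwasw -b 5 -q 2 -r 1 -z 10 reference.fasta filtered_subreads.fq > aligned_subreads.sam
```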
To produce a BAM file that is sorted by coordinate and carries adequate read group information, we use the Picard tools SortSam and AddOrReplaceReadGroups. These steps are necessary because all subsequent tools require that the BAM file follow these rules. It is also generally considered good practice to have your BAM file conform to these specifications.
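A minimal sketch of these two Picard steps follows; all file names and read group values are hypothetical placeholders you should replace with your own (PACBIO is the SAM-spec platform value for this instrument):

```shell
# Sort the BAM by coordinate (input/output names are placeholders).
java -jar SortSam.jar \
    INPUT=aligned_subreads.bam \
    OUTPUT=sorted.bam \
    SORT_ORDER=coordinate

# Add read group information required by downstream GATK tools.
# All RG values below are examples, not recommendations.
java -jar AddOrReplaceReadGroups.jar \
    INPUT=sorted.bam \
    OUTPUT=sorted.rg.bam \
    RGID=pacbio_run1 \
    RGSM=sample1 \
    RGLB=lib1 \
    RGPL=PACBIO \
    RGPU=unit1 \
    CREATE_INDEX=true
```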
Once we have a proper BAM file, it is important to estimate the empirical quality scores using statistics based on a known callset (e.g. the latest dbSNP) and the following covariates: QualityScore, Dinucleotide and ReadGroup. You can follow the GATK's Best Practices for Variant Detection according to the type of data you have, with the exception of indel realignment, because that tool has not been adapted for Pacific Biosciences RS data.
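With a GATK version contemporary to this document (the CountCovariates / TableRecalibration workflow of the 1.x series), the recalibration step might be sketched as below. File names and the dbSNP resource are placeholders, and the exact argument names should be checked against your GATK version's documentation.

```shell
# Count covariates against a known callset (e.g. latest dbSNP),
# using the three covariates named above.
java -jar GenomeAnalysisTK.jar \
    -T CountCovariates \
    -R reference.fasta \
    -I sorted.rg.bam \
    -knownSites dbsnp.vcf \
    -cov ReadGroupCovariate \
    -cov QualityScoreCovariate \
    -cov DinucCovariate \
    -recalFile recal_data.csv

# Apply the empirical quality scores to produce a recalibrated BAM.
java -jar GenomeAnalysisTK.jar \
    -T TableRecalibration \
    -R reference.fasta \
    -I sorted.rg.bam \
    -recalFile recal_data.csv \
    -o recalibrated.bam
```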
You will have to adjust your calling thresholds in the Unified Genotyper to allow sites with a higher indel rate to be analyzed. Be aware that the Unified Genotyper has cutoffs for base quality score, and if your data averages around Q20 (a common occurrence with Pacific Biosciences RS data) you may need to lower your quality thresholds to allow the GATK to analyze your data. There is no right answer here: choose parameters consistent with your average base quality scores, evaluate the calls made with the selected threshold, and adjust as necessary.
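As an illustration, a calling command with a relaxed base quality cutoff might look like this; the threshold value is a starting point to evaluate, not a recommendation, and file names are placeholders:

```shell
# Call SNPs and indels together (-glm BOTH), lowering the minimum
# base quality from the default so that Q20-average PacBio data is
# not filtered away wholesale. Evaluate the resulting calls and
# adjust the threshold to fit your data.
java -jar GenomeAnalysisTK.jar \
    -T UnifiedGenotyper \
    -R reference.fasta \
    -I recalibrated.bam \
    -glm BOTH \
    --min_base_quality_score 10 \
    -o calls.vcf
```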
To account for the high insertion and deletion error rate of the Pacific Biosciences RS instrument, we often have to set the gap open penalty lower than the base mismatch penalty in order to maximize alignment performance. Although this aligns most of the reads successfully, it has the side effect that the aligner will sometimes prefer to "hide" a true SNP inside an insertion. The result is accurate mapping, albeit with a reference-biased alignment. It is important to note, however, that reference bias is an artifact of the alignment process, not of the data, and can be greatly reduced by locally realigning the reads based on the reference and the data. Presently, the available software for local realignment is not compatible with the length and high indel rate of Pacific Biosciences data, but we expect new tools to handle this problem in the future. Until then, reference bias will mask some real calls, and you will have to inspect these sites by hand.
I would like to use PacBio reads to phase variants in a VCF file. ReadBackedPhasing with default parameters puts every variant in a separate block, even those obviously connected by a read. I have started to play with the parameters, and decreasing --cacheWindowSize helps a bit, but phased blocks still contain only a couple of variants. Before trying out all possible parameter combinations, I would like to ask: is there a recommended set of parameters for phasing from PacBio reads?
Alternatively, can I use the HaplotypeCaller somehow only for phasing? I would like to re-use the input VCF since it was created from Illumina reads and therefore contains variants of good quality. The PacBio reads, on the other hand, have quite a low coverage (10x - 20x). They should be usable for phasing, but re-calling variants from them would decrease quality.
We have PacBio data where we want to do variant calling. I tried both the UnifiedGenotyper and the HaplotypeCaller, but was not very successful with either. When I used the UnifiedGenotyper I got some output, but only SNPs and NO indels (I skipped the realignment part). I tried to play around with the parameters (indelGapContinuationPenalty, indelGapOpenPenalty, min_base_quality_score). Setting min_base_quality_score to a lower value at least produces the SNP-only output mentioned above.
The only manual for PacBio data in GATK that I found is this one: https://www.broadinstitute.org/gatk/guide/topic?name=methods But this document is pretty old. Are there newer developments regarding GATK for PacBio? Or any more detailed tutorials?
Or would you suggest PBHoney as the right tool to use? Anything else?
Just to mention: For Illumina your toolkit worked like a charm! So basically we are able to work with it...