Regarding the observation made elsewhere that "HaplotypeCaller is slower when restricted to intervals": my test bed is a 48-core, 100 GB RAM CentOS 7 system running GenomeAnalysisTK-3.5. If I submit the command:
java -Xmx16g -jar GenomeAnalysisTK.jar -T HaplotypeCaller -I chr12.bam -R hg19.fasta -o out.vcf -L chr12:40600000-40800000 -nct 36
I can see that the system does indeed use all 36 requested cores through to completion.
But if I instead choose a small subset of ranges, such as

chr12:40634200-40634514
chr12:40637254-40637647
chr12:40643603-40643770
chr12:40644926-40645491
chr12:40646596-40646822
chr12:40650957-40651309
chr12:40653220-40653421
chr12:40657469-40657806
chr12:40668336-40668853
chr12:40671529-40672115

and put them in a file test.list, then the command
java -Xmx16g -jar GenomeAnalysisTK.jar -T HaplotypeCaller -I chr12.bam -R hg19.fasta -o out.vcf -L test.list -nct 36
executes, but quickly drops to single-core usage for almost the entire run and takes an unexpectedly long time to complete. This is not the efficiency gain I anticipated: examining just the exons of a region should take less time than examining the whole region, not more. I have tested this with other interval file formats as well; the problem persists and is easily reproducible.
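For reference, one such alternative would be the same targets written as a BED file (tab-separated, with BED's 0-based, half-open starts, so each start is one less than in the 1-based list above); the first three lines would look like:

chr12	40634199	40634514
chr12	40637253	40637647
chr12	40643602	40643770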
Is there any hope that this will ever operate as expected, or is the problem too deeply embedded in the threading algorithm to fix? Some explanation would be useful. I am doing everything I can to speed up the GATK Best Practices workflow for whole-exome datasets, and issues like this are frustrating.
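In the meantime, the workaround I am looking at is a process-level scatter over the intervals, sketched below. This is only a rough sketch, not a polished pipeline: it assumes GNU parallel is installed, it merges with the CatVariants utility bundled in the GATK 3.5 jar (invoked via its full class name), and the part_*.vcf names are just placeholders. The idea is simply to sidestep -nct by running one single-threaded HaplotypeCaller process per interval.

# one HaplotypeCaller process per interval, at most 10 at a time
# ({} is the interval line read from test.list, {#} is GNU parallel's job number;
#  total heap is roughly jobs x -Xmx, so keep that under the 100 GB on this box)
parallel -j 10 'java -Xmx4g -jar GenomeAnalysisTK.jar -T HaplotypeCaller -I chr12.bam -R hg19.fasta -L {} -o part_{#}.vcf' :::: test.list

# stitch the per-interval VCFs back together in the original interval order
java -cp GenomeAnalysisTK.jar org.broadinstitute.gatk.tools.CatVariants -R hg19.fasta $(for i in $(seq 1 $(wc -l < test.list)); do printf ' -V part_%s.vcf' "$i"; done) -out merged.vcf -assumeSorted

But this is scaffolding I should not have to build if -L plus -nct behaved as advertised.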
-- Fred P.