**Author:** Huang Yechao
**E-mail:** 17210700095@fudan.edu.cn
**Git:** http://choppy.3steps.cn/huangyechao/target-somatic.git
**Last Updated:** 16/1/2019
**Description**
> This APP implements a somatic variant-calling pipeline for targeted next-generation sequencing (NGS) data. It is built on [Sentieon](http://goldenhelix.com/products/sentieon/index.html): *a fast and accurate solution to variant calling from next-generation sequence data*. The pipeline is written in the WDL workflow language and packaged as an APP for the [Choppy](http://docs.3steps.cn) platform. The workflow is shown below:
![somatic](C:\Users\think\Desktop\choopy使用说明\somatic.png)
1. Input: FASTQ files from targeted NGS, normally including both **Tumor** and **Normal** samples, plus the `bed` file describing the capture regions used in sequencing.
2. Mapping: align the reads to the reference genome, record the position of every read in a **BAM** file, and run QC on the resulting **BAM**.
3. Dedup: PCR amplification during library preparation is biased, so some fragments are over-amplified, and their identical copies map to the same genomic position. These duplicates are not independent evidence of variation and therefore should be removed as far as possible; the deduplicated **BAM** is then QC'd again.
4. Realigner: locally realign reads that map near indels to minimize alignment errors.
5. BQSR: recalibrate the base quality scores in the BAM so that the qualities in the output BAM better reflect the true probability of a mismatch against the reference.
6. Co-realignment: jointly process the paired tumor/normal (T/N) samples.
7. TNseq: variant calling; the results mainly include **SNVs** and **INDELs**.
8. TNscope: variant calling; the results mainly include **SNVs**, **INDELs**, and **SVs**.
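The tumor and normal branches run the per-sample steps independently and converge at co-realignment, which both callers consume. A minimal sketch of that dependency graph (stage names here are illustrative, not the exact task names in the app):

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it must run after. The tumor and
# normal branches are independent until corealigner, which needs both
# recalibrated BAMs; TNseq and TNscope both run on the co-realigned result.
stages = {
    "tumor_mapping": set(),
    "normal_mapping": set(),
    "tumor_dedup": {"tumor_mapping"},
    "normal_dedup": {"normal_mapping"},
    "tumor_realign": {"tumor_dedup"},
    "normal_realign": {"normal_dedup"},
    "tumor_bqsr": {"tumor_realign"},
    "normal_bqsr": {"normal_realign"},
    "corealigner": {"tumor_bqsr", "normal_bqsr"},
    "TNseq": {"corealigner"},
    "TNscope": {"corealigner"},
}

# One valid execution order respecting the dependencies above.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

This is only a model of the ordering constraints; the real scheduling is done by Cromwell from the `call` dependencies in `workflow.wdl`.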
## App usage guide
### Installing the app
```bash
# activate the choppy environment
source activate choppy-latest
# install the app
choppy install huangyechao/target-somatic:<version>
```
### Preparing the samples file
The `samples.csv` file is the input used when submitting a job. Its columns are generated from the variables defined in the `input` file; it can also be generated with Choppy's `samples` command:
```bash
choppy samples target-somatic --output samples.csv
```
```bash
#### samples.csv
normal_fastq_1,normal_fastq_2,tumor_fastq_1,tumor_fastq_2,regions,sample_name,cluster,disk_size,sample_id
```
`sample_id` is the index of the sample being analyzed and is used to generate the job information when the sample is submitted. Note that it must not contain `_`, otherwise an error will be raised.
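The underscore rule on `sample_id` can be enforced before submission. A minimal sketch (the column list mirrors the header above; the validation rule is the one stated in this section, and the helper names are hypothetical):

```python
import csv
import io

# Column order expected by this app's samples.csv header.
COLUMNS = [
    "normal_fastq_1", "normal_fastq_2", "tumor_fastq_1", "tumor_fastq_2",
    "regions", "sample_name", "cluster", "disk_size", "sample_id",
]

def validate_row(row: dict) -> None:
    """Reject sample_id values containing '_', which break job-name generation."""
    if "_" in row["sample_id"]:
        raise ValueError(f"sample_id must not contain '_': {row['sample_id']!r}")

def write_samples(rows: list[dict]) -> str:
    """Render rows into samples.csv content with the expected header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for row in rows:
        validate_row(row)
        writer.writerow(row)
    return buf.getvalue()
```

Running the check locally is cheaper than discovering the error after `choppy batch` has submitted the job.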
### Submitting a job
```bash
choppy batch target-somatic samples.csv --project-name your_project
```
### Building the APP
### tasks
The `tasks` directory contains one **WDL** file per pipeline step; for example, `mapping.wdl` looks like this:
```wdl
task mapping {
  String fasta
  File ref_dir
  File fastq_1
  File fastq_2
  String SENTIEON_INSTALL_DIR
  String group
  String sample
  String pl
  String docker
  String cluster_config
  String disk_size

  command <<<
    set -o pipefail
    set -e
    export SENTIEON_LICENSE=192.168.0.55:8990
    nt=$(nproc)
    ${SENTIEON_INSTALL_DIR}/bin/bwa mem -M -R "@RG\tID:${group}\tSM:${sample}\tPL:${pl}" -t $nt ${ref_dir}/${fasta} ${fastq_1} ${fastq_2} | ${SENTIEON_INSTALL_DIR}/bin/sentieon util sort -o ${sample}.sorted.bam -t $nt --sam2bam -i -
  >>>

  runtime {
    dockerTag: docker
    cluster: cluster_config
    systemDisk: "cloud_ssd 40"
    dataDisk: "cloud_ssd " + disk_size + " /cromwell_root/"
  }

  output {
    File sorted_bam = "${sample}.sorted.bam"
    File sorted_bam_index = "${sample}.sorted.bam.bai"
  }
}
```
### workflow
`workflow.wdl` defines the input files of each step and the dependencies between the steps:
```wdl
import "./tasks/mapping.wdl" as mapping
import "./tasks/Metrics.wdl" as Metrics
import "./tasks/Dedup.wdl" as Dedup
import "./tasks/deduped_Metrics.wdl" as deduped_Metrics
import "./tasks/Realigner.wdl" as Realigner
import "./tasks/BQSR.wdl" as BQSR
import "./tasks/corealigner.wdl" as corealigner
import "./tasks/TNseq.wdl" as TNseq
import "./tasks/TNscope.wdl" as TNscope

workflow {{ project_name }} {
  File tumor_fastq_1
  File tumor_fastq_2
  File normal_fastq_1
  File normal_fastq_2
  String SENTIEON_INSTALL_DIR
  String sample
  String docker
  String fasta
  File ref_dir
  File dbmills_dir
  String db_mills
  File dbsnp_dir
  File regions
  String dbsnp
  String disk_size
  String cluster_config

  call mapping.mapping as tumor_mapping {
    input:
      group=sample + "tumor",
      sample=sample + "tumor",
      fastq_1=tumor_fastq_1,
      fastq_2=tumor_fastq_2,
      SENTIEON_INSTALL_DIR=SENTIEON_INSTALL_DIR,
      pl="ILLUMINAL",
      fasta=fasta,
      ref_dir=ref_dir,
      docker=docker,
      disk_size=disk_size,
      cluster_config=cluster_config
  }

  call Metrics.Metrics as tumor_Metrics {
    input:
      SENTIEON_INSTALL_DIR=SENTIEON_INSTALL_DIR,
      fasta=fasta,
      ref_dir=ref_dir,
      sorted_bam=tumor_mapping.sorted_bam,
      sorted_bam_index=tumor_mapping.sorted_bam_index,
      sample=sample + "tumor",
      docker=docker,
      disk_size=disk_size,
      cluster_config=cluster_config
  }

  call Dedup.Dedup as tumor_Dedup {
    input:
      SENTIEON_INSTALL_DIR=SENTIEON_INSTALL_DIR,
      sorted_bam=tumor_mapping.sorted_bam,
      sorted_bam_index=tumor_mapping.sorted_bam_index,
      sample=sample + "tumor",
      docker=docker,
      disk_size=disk_size,
      cluster_config=cluster_config
  }
  ......
  ......
}
```
The `import` statements at the top reference the task files to be used; the `File/String xxx` declarations in the middle define the variables the workflow requires and their types; the `call` blocks declare each step of the pipeline and its dependencies. (See the [WDL](https://software.broadinstitute.org/wdl/documentation/spec#alternative-heredoc-syntax) specification for details.)
### input
The `input` file lists the parameters the **APP** needs at run time. Parameters that can stay fixed are given directly in the `input` file; parameters that change per sample are referenced with `{{ }}`, which makes them appear as columns in the `samples` file. `project_name` is the name of the job and must be defined when the job is submitted.
```json
{
  "{{ project_name }}.fasta": "GRCh38.d1.vd1.fa",
  "{{ project_name }}.ref_dir": "oss://pgx-reference-data/GRCh38.d1.vd1/",
  "{{ project_name }}.dbsnp": "dbsnp_146.hg38.vcf",
  "{{ project_name }}.dbsnp_dir": "oss://pgx-reference-data/GRCh38.d1.vd1/",
  "{{ project_name }}.SENTIEON_INSTALL_DIR": "/opt/sentieon-genomics",
  "{{ project_name }}.dbmills_dir": "oss://pgx-reference-data/GRCh38.d1.vd1/",
  "{{ project_name }}.db_mills": "Mills_and_1000G_gold_standard.indels.hg38.vcf",
  "{{ project_name }}.docker": "localhost:5000/sentieon-genomics:v2018.08.01 oss://pgx-docker-images/dockers",
  "{{ project_name }}.sample": "{{ sample_name }}",
  "{{ project_name }}.tumor_fastq_2": "{{ tumor_fastq_2 }}",
  "{{ project_name }}.tumor_fastq_1": "{{ tumor_fastq_1 }}",
  "{{ project_name }}.normal_fastq_1": "{{ normal_fastq_1 }}",
  "{{ project_name }}.normal_fastq_2": "{{ normal_fastq_2 }}",
  "{{ project_name }}.regions": "{{ regions }}",
  "{{ project_name }}.disk_size": "{{ disk_size }}",
  "{{ project_name }}.cluster_config": "{{ cluster if cluster != '' else 'OnDemand ecs.sn2ne.2xlarge img-ubuntu-vpc' }}"
}
```
> `{{ cluster if cluster != '' else 'OnDemand ecs.sn2ne.2xlarge img-ubuntu-vpc' }}` means that when no `cluster` configuration is given, the default **ecs.sn2ne.2xlarge** instance type is used.
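The `{{ ... }}` substitution, including the `cluster` fallback, can be mimicked in plain Python. This is only a sketch of the templating behavior (Choppy's real renderer is Jinja2-based; `render_inputs` and `DEFAULT_CLUSTER` are names invented here):

```python
DEFAULT_CLUSTER = "OnDemand ecs.sn2ne.2xlarge img-ubuntu-vpc"

def render_inputs(template: dict, sample: dict) -> dict:
    """Fill {{ placeholder }} values in the input template from one samples.csv row."""
    rendered = {}
    for key, value in template.items():
        # Keys embed the project name, e.g. "{{ project_name }}.fasta".
        key = key.replace("{{ project_name }}", sample["project_name"])
        if value.startswith("{{") and value.endswith("}}"):
            # Take the variable name; for the cluster expression this is "cluster".
            name = value.strip("{} ").split(" ")[0]
            value = sample.get(name, "")
            if name == "cluster" and value == "":
                value = DEFAULT_CLUSTER  # fallback when no cluster is specified
        rendered[key] = value
    return rendered
```

Fixed values such as `"GRCh38.d1.vd1.fa"` pass through untouched, while the `{{ }}` columns are filled from the sample row.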
## More information
See the [Choppy documentation](http://docs.3steps.cn).