1st Viral Bioinformatics and Genomics Training Course


The first Viral Bioinformatics and Genomics training course held at the University of Glasgow was completed successfully by 14 delegates (nine external and five internal) on 10-14 August 2015. The course took place in the McCall Building computer cluster, and the adjacent Lomond and Dumgoyne Rooms were used for refreshments and lunch.

Joseph Hughes (Course Organiser)
Andrew Davison
Robert Gifford
Sejal Modha
Richard Orton
Sreenu Vattipally
Gavin Wilkie

9:00-10:00 Tea & Coffee in Dumgoyne Room – Arrival of participants

Day one introduced participants to the different sequencing technologies available and the ways in which samples are prepared, with an emphasis on how this impacts the bioinformatic analyses. The rest of the day aimed to familiarize researchers with the command line and useful UNIX commands.


10:00-10:45 Next-generation sequencing technologies – Gavin Wilkie
10:45-11:30 Examples of HTS data being used in virology – Richard Orton
11:30-11:45 Brief introduction and history of Unix/Linux – Sreenu Vattipally and Richard Orton
11:45-12:30 The command line anatomy and basic commands – Sreenu Vattipally and Richard Orton

12:30-13:30 Lunch break in Dumgoyne Room

13:30-14:30 Essential UNIX commands – Sreenu Vattipally and Sejal Modha
14:30-15:30 Text processing with grep – Sreenu Vattipally and Sejal Modha
15:30-16:00 Tea & Coffee in Dumgoyne Room
16:00-17:30 sed and awk – Sreenu Vattipally and Sejal Modha


Participants continued to practice UNIX commands and learned how to run basic bioinformatic tools. By the end of the day, they were able to analyse HTS data with different reference assemblers and automate the processing of multiple files.

9:30-10:15 Introduction to file formats (fasta, fastq, SAM, BAM, vcf) – Sreenu Vattipally and Richard Orton
10:15-11:00 Quality scores, quality control and trimming (Prinseq and FastQC and trim_galore) – Sreenu Vattipally and Richard Orton
11:00-11:30 Tea & Coffee in Dumgoyne Room
11:30-13:00 Introduction to alignment and assembly programs (Blast, BWA, Bowtie) – Sreenu Vattipally and Richard Orton

13:00-14:00 Lunch break in Dumgoyne Room

14:00-14:45 Continuation of alignment and assembly programs (Novoaligner, Tanoti) – Sreenu Vattipally and Richard Orton
14:45-15:30 Post-assembly processing and alignment visualization (Tablet and UGENE) – Sreenu Vattipally and Richard Orton
15:30-16:00 Tea & Coffee in Dumgoyne Room
16:00-16:45 Workflow design and automation – Sreenu Vattipally and Richard Orton
16:45-17:30 Loops and command line arguments – Sreenu Vattipally and Richard Orton


The third day covered the different approaches used for de-novo assembly and provided hands-on experience. A popular metagenomic pipeline was presented and participants learned how to use it as well as create their own pipeline. (Slides and practical)


9:30-10:15 De-novo assemblers – Sejal Modha and Gavin Wilkie
10:15-11:00 Using different de-novo assemblers (e.g. Meta-idba, Edena, Abyss, Spades) – Sejal Modha and Gavin Wilkie

11:00-11:30 Tea & Coffee in Lomond Room
11:30-13:00 Scaffolding contigs, filling gaps in assemblies and correcting errors (e.g. phrap, gap_filler, scaffold builder, ICORN2, spades) – Gavin Wilkie and Sejal Modha

13:00-14:00 Lunch break in Lomond Room

14:00-14:45 Practice building a custom de-novo pipeline (BLAST, KronaTools)
14:45-15:30 The MetAmos metagenomic pipeline – Sejal Modha and Sreenu Vattipally
15:30-16:00 Tea & Coffee in Lomond Room
16:00-17:30 Analysis using the pipeline – Sejal Modha and Sreenu Vattipally


Day four gave participants the opportunity to look in more detail at their assembly. They learned how to create a finished, curated full genome (with an emphasis on finishing) with gene annotations, and analysed the variation within their sample.

9:30-11:00 Finishing and Annotating genomes – Andrew Davison, Gavin Wilkie and Sreenu Vattipally
11:00-11:30 Tea & Coffee in Dumgoyne Room
11:30-13:00 Annotation transfer from related species – Gavin Wilkie, Andrew Davison and Sreenu Vattipally

13:00-14:00 Lunch break in Dumgoyne Room


14:00-15:30 Error detection and variant calling – Richard Orton
15:30-16:00 Tea & Coffee in Dumgoyne Room
16:00-17:30 Quasi-species characterisation – Richard Orton

17:30 Social evening in the pub/restaurant at Curlers on Byres Road, organised by Richard Orton



Researchers worked through several practical examples of using sequences in virology and spent the remaining time analysing their own data with the teachers' help.

9:30-10:15 Identifying mutations of interest for individual sequences and within a set of sequences – Robert Gifford
10:15-11:00 Combining phylogenies with traits – Robert Gifford
11:00-11:30 Tea & Coffee in Dumgoyne Room
11:30-12:15 Investigating epidemiology (IDU example) – Robert Gifford
12:15-13:00 Investigating transmission of drug resistance – Robert Gifford

13:00-14:00 Lunch break in Dumgoyne Room

14:00-15:30 Analysing your own data or developing your own pipeline
15:30-16:00 Tea & Coffee in Dumgoyne Room
16:00-17:00 Analysing your own data or developing your own pipeline
17:00 Goodbyes



If you would like to be informed of similar training courses run in the future, please fill in your details here.



A simple method to distinguish low frequency variants from Illumina sequence errors

RNA viruses have high mutation rates and exist within their hosts as large, complex and heterogeneous populations, comprising a spectrum of related but non-identical genome sequences.

Next generation sequencing has revolutionised the study of viral populations by enabling the ultra-deep sequencing of their genomes, and the subsequent identification of the full spectrum of variants within the population.

However, the current challenge is to accurately model the errors in the sequence data and distinguish real viral variants, particularly those that exist at low frequency, from errors introduced during sequencing.

A simple method to do this is to utilise the Illumina quality scores. Every sequenced base in every read is assigned a quality score that is a measure of the uncertainty of the base call, expressed on the Phred scale. An example FASTQ read (four lines: a read identifier, the sequence, a separator, and the per-base quality string) is:

@read_1
GATTTGGGGTTCAAAGCAGT
+
IIIIIHHHGGGFFEEDCBA!

A Phred quality score (Q) is logarithmically related to its corresponding base-calling error probability (P), and each can be calculated from the other using the following formulas:

Q = −10 × log10(P)
P = 10^(−Q/10)

In FASTQ files, quality scores are typically displayed with an ASCII character that is a representation of the quality score – one reason for this is to save space – instead of writing “40” (2 characters) one can simply write “I” (1 character). A range of quality scores and their corresponding error probabilities and ASCII characters is presented below (NB: on the Sanger Phred scale the ASCII characters have a +33 offset to the quality score).
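These conversions are straightforward to carry out in code. The sketch below (plain Python; the function names are illustrative, not from any particular library) decodes a Sanger-encoded quality character and converts the resulting Phred score to an error probability:

```python
def ascii_to_phred(ch):
    """Decode a Sanger-encoded FASTQ quality character (ASCII offset +33)."""
    return ord(ch) - 33

def phred_to_error_prob(q):
    """Convert a Phred quality score Q to a base-calling error probability P."""
    return 10 ** (-q / 10)

# 'I' encodes Q40, i.e. a 1-in-10,000 probability of a base miscall.
q = ascii_to_phred("I")
print(q, phred_to_error_prob(q))  # 40 0.0001
```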

Phred Quality (Q) Score   Probability (P) of Base Calling Error   Q + 33 (Sanger)   ASCII Q Character
10                        0.1                                     43                +
20                        0.01                                    53                5
30                        0.001                                   63                ?
40                        0.0001                                  73                I

At one of the highest quality scores, Q40, there is a 1 in 10,000 probability of the base having been called incorrectly by the sequencing machine. Although this is a very low probability, the ultra-deep coverage of viral samples can result in well over 10,000 reads at each position in the genome, so even if every base were of the highest quality we would still expect errors to be introduced via base miscalls from the Illumina machine.

Therefore, in order to identify true viral variants present in the population at low frequency, one must handle base miscall errors from the Illumina machine. A simple way to do this is as follows:

  1. For a given genome position, retrieve all the quality scores of every individual base aligned at that position from the reads.
  2. Convert each quality score to a probability of error using the formulas above.
  3. Calculate an average (mean) probability of error at the given genome position.
  4. Test whether the observed number of mutations exceeds what would be expected from Illumina sequence errors alone. This can be done statistically with a binomial test: under the null hypothesis that all mismatches are sequencing errors, the number of mutated bases X follows a binomial distribution with C trials (the coverage) and success probability AvP/3, where AvP is the average probability of error calculated from the quality scores and the division by 3 accounts for the one specific mutated base observed in the reads. The P value is the probability of observing M or more mutations by chance, P(X ≥ M) = 1 − BinomCDF(M − 1; C, AvP/3). Using a 5% threshold, if this P value is less than 0.05 then one can say the variant is validated.
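The steps above can be sketched in Python using only the standard library; the function names here are illustrative, and the binomial tail is computed directly rather than via a statistics package:

```python
from math import comb

def avg_error_prob(quality_scores):
    """Step 3: mean base-calling error probability from the Phred scores at one position."""
    return sum(10 ** (-q / 10) for q in quality_scores) / len(quality_scores)

def binom_sf(m, n, p):
    """P(X >= m) for X ~ Binomial(n, p), i.e. 1 minus the CDF evaluated at m - 1."""
    return 1.0 - sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(m))

def validate_variant(coverage, mutation_count, av_p, alpha=0.05):
    """Step 4: binomial test of M observed mutations against errors alone.

    av_p / 3 is the probability of the one specific mutated base being
    produced by a sequencing error. Returns (P value, validated?).
    """
    p_value = binom_sf(mutation_count, coverage, av_p / 3.0)
    return p_value, p_value < alpha

# 7 mutations in 35,000 reads with an average error probability of 0.00032
# is not significant at the 5% level.
p, validated = validate_variant(35000, 7, 0.00032)
print(round(p, 4), validated)
```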

For example, suppose that at genome position 1034 the reference base is an A, we observe a coverage of 35,000 composed of 34,993 As and 7 G mutations, and evaluating the quality scores at this position gives an average probability of error of 0.00032. The binomial test with C = 35,000, AvP/3 = 0.00032/3 and M = 7 gives a P value of 0.08476547, which is not below the 0.05 threshold, so this is not a validated low-frequency mutation.

This is a relatively simple approach that was proposed by Morelli et al. 2013 and is also incorporated into our own tool DiversiTools. However, validation of low frequency variants is further complicated by issues such as Strand Bias and Illumina Systematic Errors, as well as RT-PCR errors and sample contamination.

A good website with a full table of quality scores and error probabilities is available here, along with information on older Illumina quality scores (which utilise slightly different formulas).