The dark arts of Ion Torrent Sequencing
All technologies have their advantages and disadvantages, and next generation sequencing (NGS) is no different. For the time being, the dominant forces in NGS are Illumina, which detects a flash of light as fluorescent nucleotides are incorporated, and Ion Torrent, which detects nucleotide incorporation as a change in pH. Illumina has made its technology user friendly and produces high quality data, but it is expensive and sensitive to misalignment of the lens used to photograph the flow cell; Ion Torrent is (relatively) cheaper and less sensitive to vibration, but has historically produced poorer quality data and lags far behind Illumina in user friendliness. Ion Torrent's answer to the consistency and usability problem has been the Ion Chef: effectively a robot that performs the steps from library to chip. Life without an Ion Chef is very different. Like any method, the instructions that come with the supplies tell you what to do, but they don't tell you the 'tricks': those little touches that help you hit the sweet spot. In the non-Chef protocol for the Ion Proton, there are various points throughout the procedure, from RNA to sequence data, where little touches help.
My experience has been using the Ion Proton for transcriptomics, so any ‘recommendations’ when it comes to Ion Torrent sequencing refer to RNAseq.
Library preparation begins with good quality RNA. The standard way to determine RNA quality is to assess the integrity and ratio of the ribosomal RNA peaks. Originally this meant a simple agarose gel, but nowadays the Bioanalyzer or TapeStation from Agilent are commonplace and provide an RNA Integrity Number (RIN) indicating the RNA quality. For anyone thinking of buying one, given the choice between the Bioanalyzer and the TapeStation, I'd personally choose the Bioanalyzer every time, as its reagents seem to be much more stable over time. For quantification, it is widely accepted that dye-based methods, e.g. the Qubit, are much more reliable than spectrophotometry, e.g. the NanoDrop.
The amount of RNA used to generate the library can be very small, but it should be sufficient that the resulting library is representative of the RNA population in the sample. Currently, I poly(A)-enrich 1.5 µg of total RNA and use the entire eluate as the input for the RNAseq V2 kit from Ion Torrent. The protocol provided suggests 3 minutes of fragmentation for <100 ng of poly(A)-enriched RNA input. Fragmenting for this long, though, will produce short reads, whereas a shorter incubation, e.g. 1 min 30 sec, will result in longer reads, which may be beneficial when it comes to aligning the reads to the reference. When it comes to the clean-up after fragmentation (and this applies to all of the clean-ups), big emphasis is placed upon precise volumes. This is indeed important, but just as important, and something not in the manual, is that the ethanol MUST be fresh. As soon as the bottle is opened, the quality drops. It's prudent to make aliquots from a freshly opened bottle, especially if you want batch-to-batch consistency among your libraries.
When it comes to barcoding, plan ahead if possible so that successive runs on the Proton use different barcodes. That way, any inter-run carry-over contamination can be identified and excluded from the data.
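The rotation idea above can be sketched in a few lines of Python. This is a minimal illustration, not part of any kit's software: the barcode names and run sizes are made up, and the only property it guarantees is that consecutive runs draw from disjoint subsets of the barcode pool (which requires at least twice as many barcodes as samples per run).

```python
# Sketch: rotate barcode assignments so that consecutive Proton runs
# never share a barcode, making inter-run carry-over easy to spot.
# Barcode names and numbers below are illustrative assumptions.

def assign_barcodes(barcodes, samples_per_run, n_runs):
    """Assign barcodes to runs round-robin through the pool, so that
    consecutive runs use disjoint subsets."""
    if len(barcodes) < 2 * samples_per_run:
        raise ValueError("need at least 2x as many barcodes as samples per run")
    runs = []
    offset = 0
    for _ in range(n_runs):
        # Take the next slice of the pool, wrapping around when we run out.
        run = [barcodes[(offset + i) % len(barcodes)]
               for i in range(samples_per_run)]
        runs.append(run)
        offset += samples_per_run
    return runs

# e.g. a pool of 16 barcodes, 8 samples per run, 4 planned runs:
ionxpress = [f"IonXpress_{i:03d}" for i in range(1, 17)]
plan = assign_barcodes(ionxpress, samples_per_run=8, n_runs=4)
```

With 16 barcodes and 8 samples per run, odd-numbered runs use barcodes 1-8 and even-numbered runs use 9-16, so a read carrying last run's barcode is immediately suspect.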
Library QC is also important; again, use the Qubit. Once amplified, a library made from 1.5 µg of total RNA input needs to be diluted considerably to reach the concentration required for the OT2: either 100 pM (using 8 µl in the OT2 reaction) or 8 pM (using 100 µl in the OT2 reaction). More DNA than this will result in an increase in polyclonal spheres. It may be tedious to vortex the ISPs for a full minute before adding them to the OT2 reaction, but it's important, and adding the ISPs is the last thing I do prior to starting the OT2.
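Getting to 100 pM is straightforward C1V1 = C2V2 arithmetic, but it's worth writing out because amplified libraries are often so concentrated that a single-step dilution calls for sub-microlitre volumes. A quick sketch (the 12 nM stock concentration is an assumed example, not a value from the protocol):

```python
# Back-of-envelope C1*V1 = C2*V2 helper for diluting an amplified
# library down to the OT2 input concentration.

def dilution_volumes(stock_pM, target_pM, final_ul):
    """Return (library_ul, diluent_ul) needed to make final_ul at target_pM."""
    if stock_pM < target_pM:
        raise ValueError("stock is already below the target concentration")
    library_ul = target_pM * final_ul / stock_pM  # V1 = C2*V2 / C1
    return library_ul, final_ul - library_ul

# Example: a 12 nM (12,000 pM) library diluted to 100 pM in 50 ul final volume.
lib_ul, diluent_ul = dilution_volumes(stock_pM=12_000, target_pM=100, final_ul=50.0)
```

Here the answer is about 0.4 µl of library, which is too small to pipette accurately, so in practice it is better done as a serial dilution (e.g. an intermediate 1:10 step first).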
After enrichment and loading (adding 10 µl of 50% annealing buffer prior to adding the sample to the chip), the foam washes aren't to be underestimated. The wells may be perfectly loaded with perfectly templated ISPs, but unless the chip surface is washed properly, the amount of data will be reduced. Whereas the protocol states two washes, I perform a minimum of four, some slow and some faster. The goal each time is to generate a fine foam. Here the pipette tip makes a surprising amount of difference: unless the tip opening is sufficiently narrow, it is difficult to generate a good foam. It is also possible to 'over-foam', where pipetting too much causes the small bubbles to coalesce into bigger ones.
I'm not sure how much of a difference it makes, but I try to add the polymerase as slowly as possible and leave the chip to sit undisturbed for at least 5 minutes before moving it.
These are simply some of the habits I’ve picked up. The protocol is much improved compared to previous versions but, ultimately, the best option would be to buy an Ion Chef!