3D light microscopy and deconvolution provide a means to investigate 3D structure, yielding near-confocal-quality images without the temporal requirements or the potentially damaging phototoxicity associated with other 3D imaging technologies. This article is Part II in a series on viability, resolution improvement, and measurement in fluorescence imaging. Part I focused on spectral unmixing.
Scientists studying living systems have found that a more precise spatial understanding of their samples is integral to interpreting cellular events. As such, their need to image or measure three-dimensional volume data in live cells continues to grow. Researchers have also benefited from a vast improvement in computing power over the past decade; robust three-dimensional deconvolution techniques have become readily accessible without large capital expenditures for computer equipment.
Confocal imaging is often selected by researchers for fluorescence imaging, but it has its limitations. Because fluorophores, particularly those used in life science research, are generally subject to photobleaching, every effort must be made to limit exposure to excitation light. In both laser-scanning and spinning-disk confocal imaging, samples are routinely exposed to extreme levels of excitation light, causing varying degrees of both photobleaching and phototoxicity.
Figure 1: C. elegans hermaphrodite in the late L2 stage. At left, before iterative deconvolution. At right, deconvolution using Olympus imaging software allows for a much crisper and more detailed image. Green: PDZ-protein TAG-60, detected on the lumenal side of intestinal cells. Red: protein AJM-1, which is expressed in the junctions of epithelial cells. Blue: DNA stained by DAPI. (Source: Peter Gutierrez in Prof. Dr. Alex Hajnal’s laboratory at the Institute of Zoology, University of Zurich)
Additionally, as noted in the previous article in this series, researchers often encounter emission wavelength overlap among fluorescent dyes. To compensate, narrow-band emission filters may be used to eliminate bleed-through signal, but because these filters also reject much of the desired signal, exposures must be lengthened or intensified, risking additional photobleaching and phototoxicity.
Widefield imaging requires less excitation energy and can be a useful alternative to confocal imaging when imaging three-dimensional volumes for visualization or measurement. The drawback to widefield imaging, however, is the loss of confocality, the optical sectioning that provides clear individual slices and highly resolvable three-dimensional data. For many scientists, the best of all possible worlds would be to image in widefield mode, minimizing photobleaching and phototoxicity, while generating data that allows for three-dimensional assessment of their experimental systems.
Deconvolution for Cell-Friendly Imaging
Iterative deconvolution (not to be confused with deblurring, which is often mistakenly referred to as deconvolution) is one method of approaching this ideal. Deconvolution takes advantage of the properties of light and optics to mathematically correct out-of-focus blur in images; in essence, it returns out-of-focus signal to its origin in the sample. The improved contrast and resolution realized by deconvolving volume data improve the visual appearance of the data, creating a “pretty” dataset or image. Perhaps more importantly, consistent application of deconvolution algorithms also improves the ability to quantify volumetric and intensity data across samples as well as experiments. Deconvolution is most correctly performed when imaging conditions are consistent and variability due to motion and photobleaching is minimized. With properly collected data, software can compensate for diffraction-related blur, allowing the researcher to use less light to collect the same amount of image information. Inconsistent application of these algorithms, however, can create the reverse: data that cannot be compared image to image, sample to sample, or experiment to experiment, despite the visual appearance of the results.
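The classic Richardson-Lucy scheme is one widely used iterative deconvolution algorithm and illustrates the idea of reassigning blurred signal to its source. The NumPy/SciPy sketch below is illustrative only; it is not the algorithm of any particular commercial package. Each iteration re-blurs the current estimate with the point spread function (PSF), compares the result against the observed data, and updates the estimate accordingly.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=20):
    """Minimal Richardson-Lucy iterative deconvolution for a 3D volume.

    observed : blurred, non-negative image volume (Z, Y, X)
    psf      : point spread function of the same dimensionality, summing to 1
    n_iter   : iteration count (more iterations sharpen but amplify noise)
    """
    observed = observed.astype(float)
    psf_mirror = psf[::-1, ::-1, ::-1]                  # PSF flipped along every axis
    estimate = np.full_like(observed, observed.mean())  # flat starting guess
    eps = 1e-12                                         # guard against division by zero
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)   # mismatch between re-blurred guess and data
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

In practice the PSF is either measured from sub-resolution fluorescent beads or computed from the optical parameters of the system, and the iteration count is a trade-off between sharpness and noise amplification.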
Deconvolution is not limited to widefield imaging; methods have been developed for spinning-disk confocal, laser-scanning confocal, and multiphoton imaging modes (Figure 1). In addition, the technique is not limited to the processing of volume data; two-dimensional data can be deconvolved in a similar manner. With today’s technology, careful attention to hardware parameters, dataset requirements, and processing procedures can make deconvolution simple and reproducible. (For more extensive background on the optics and techniques of deconvolution, visit the Microscopy Resource Center online at: www.olympusmicro.com/primer/digitalimaging/deconvolution/deconvolutionhome.html).
Start with Proper Experimental Design
Though three-dimensional deconvolution requires no special hardware beyond that found in a typical fluorescence microscopy laboratory, it does require some preparatory work to collect suitable data. Adequate sampling in the X, Y, and Z dimensions is critical for proper data processing. In addition, imaging in a manner conducive to sample viability and consistency is important: samples must not be overexposed to excitation light, and photobleaching must be kept to a minimum during acquisition.
Sampling in the X, Y, and Z dimensions should conform to the Nyquist criterion, and exposure times should be optimized to use the full dynamic range of the detector without overfilling the dynamic range of the image file. (For an exhaustive treatment of the mathematics and the practical application of Nyquist sampling, please see: www.olympusmicro.com/primer/techniques/confocal/resolutionintro.html).
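As a rough illustration, a common rule of thumb puts the maximum lateral pixel spacing at about one quarter of the emission wavelength divided by the numerical aperture, with the axial step derived from the widefield depth of field. The helper below encodes these approximations only; it is a sketch, not a substitute for the system-specific calculations referenced above.

```python
def nyquist_spacing(em_wavelength_nm, na, n_medium):
    """Rule-of-thumb Nyquist sampling intervals for widefield fluorescence.

    Assumes lateral resolution ~ lambda/(2*NA) and axial resolution
    ~ 2*lambda*n/NA**2, each sampled at half that distance.
    """
    lateral = em_wavelength_nm / (4.0 * na)       # max XY pixel size, nm
    axial = em_wavelength_nm * n_medium / na**2   # max Z step, nm
    return lateral, axial

# Example: GFP emission (~510 nm), 1.4 NA oil objective, immersion n ~ 1.515
xy, z = nyquist_spacing(510, 1.4, 1.515)
print(f"XY pixel <= {xy:.0f} nm, Z step <= {z:.0f} nm")  # ~91 nm and ~394 nm
```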
With deconvolution, the output intensity values are continuous and unbounded, so it is possible to saturate a fixed-point image type, and it is often unclear how a particular software implementation will handle saturated data coming out of a deconvolution algorithm. To err on the side of caution, researchers capturing image data for deconvolution should slightly underexpose the image without dipping into the noise floor of the imaging system, and should not allow clipping or rescaling of saturated data in the resulting image.
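A quick programmatic sanity check along these lines might look like the following. The bit depth, noise-floor estimate, and headroom fraction are hypothetical placeholders; measure them on the actual detector.

```python
import numpy as np

def exposure_ok(frame, bit_depth=12, noise_floor=100, headroom=0.9):
    """Flag frames that clip near saturation or sit in the camera noise.

    bit_depth, noise_floor (counts), and headroom are illustrative
    defaults, not values for any particular camera.
    """
    full_scale = 2**bit_depth - 1
    peak = int(frame.max())
    clipped = peak >= headroom * full_scale   # too close to saturation
    too_dim = peak <= 2 * noise_floor         # barely above the noise floor
    return not clipped and not too_dim

# Example with synthetic data: a frame peaking near 3000 counts on a 12-bit camera
frame = np.random.poisson(500, size=(512, 512)) + 2500
print(exposure_ok(frame))  # True: below ~3686 counts, well above noise
```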
As the scientist proceeds through the specimen to collect Z-axis information on multiple fluorophores (multiple channels), it is best to acquire all of the data for each Z-slice at one time, moving through all of the channels at that depth before moving on to the next slice. It is tempting to speed up acquisition by collecting all of the data for one channel at a time, moving through the specimen and then cycling back for the next channel, but this approach has risks. First, slices can be mismatched when returning to the original Z-position. Second, each additional pass through the stack re-exposes the sample to excitation light, and the repeated cycles of fluorescence stimulation and recovery can photobleach the sample. When imaging, it is also preferable to expose the sample to the longest excitation wavelength (the lowest-energy light) first and the shortest wavelength (the highest-energy light) last.
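This ordering is easy to express in acquisition code: depth is the outer loop, channels the inner loop, sorted from longest to shortest excitation wavelength. The sketch below assumes a hypothetical snap(z, name) callback standing in for whatever stage, filter, and camera control a real system provides.

```python
def acquire_stack(z_positions, channels, snap):
    """Acquire a multichannel Z-stack slice by slice (sketch).

    snap(z, name) is a hypothetical callback that moves the stage,
    selects the channel, and returns one 2D frame.
    """
    # Visit the lowest-energy (longest-wavelength) channel first at each slice.
    ordered = sorted(channels, key=lambda c: c["excitation_nm"], reverse=True)
    stack = {c["name"]: [] for c in ordered}
    for z in z_positions:                 # outer loop: depth
        for ch in ordered:                # inner loop: every channel at this slice
            stack[ch["name"]].append(snap(z, ch["name"]))
    return stack

# Example: mCherry is imaged before GFP, and GFP before DAPI, at every slice.
channels = [{"name": "DAPI", "excitation_nm": 358},
            {"name": "GFP", "excitation_nm": 488},
            {"name": "mCherry", "excitation_nm": 587}]
stack = acquire_stack([0.0, 0.3, 0.6], channels, snap=lambda z, name: None)
```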
Processing Data for Analysis
When deconvolving image data, the first step is to ensure that the experimental data has been collected in a consistent and correct manner: X, Y, and Z sampling is proper, phototoxicity is not a factor, and photobleaching is minimized during collection. Once the image volumes are collected, deconvolution is a relatively simple process for smaller data sets, yielding data suitable for intensity measurement, morphological measurement, and co-occurrence/colocalization analysis. Working with larger multi-dimensional data sets, however, can be challenging due to the processing time required and the complexity of the process. In the past, many researchers who lacked the inclination or the extensive mathematical background found deconvolution so complex in practice that they chose not to use it, despite its capabilities.
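As one example of a downstream measurement, a whole-volume Pearson correlation between two deconvolved channels is a simple co-occurrence metric. The sketch below omits the background masking and thresholding (e.g., the Costes method) that a rigorous colocalization analysis would require.

```python
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Whole-volume Pearson correlation between two channel volumes.

    A bare-bones co-occurrence measure on deconvolved data; masking
    and statistical significance testing are omitted for brevity.
    """
    a = ch1.astype(float).ravel()
    b = ch2.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```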
Recently, however, the power of deconvolution has become accessible to more researchers. Some of today’s advanced commercially available image acquisition and analysis software programs offer precise control of image acquisition hardware combined with advanced deconvolution tools, allowing the researcher to rapidly capture and process data sets with improved contrast and resolution while maintaining cell viability through reduced photobleaching and phototoxicity. In addition, recent improvements to deconvolution algorithms have greatly improved processing speed, making processing of larger data sets practical. Olympus cellSens Dimension software, for instance, offers both multi-dimensional acquisition and computationally efficient deconvolution tools, enabling the researcher to acquire meaningful multi-dimensional data and process it with a single click.
By combining new software tools with advances in computer processing power and in detector sensitivity and speed, researchers now have the opportunity to design experiments that would not have been possible in the past due to the inability to acquire adequate signal at a temporal resolution that reduces motion artifacts. This combination of advances in hardware, software, and computing allows scientists to generate quantitative data rapidly while vastly reducing cell mortality. The ability to preserve living cells while acquiring usable multichannel fluorescence volume data, one of the most elusive goals of live cell imaging, is now accessible to more fluorescence imaging laboratories than ever before.
This article was published in Bioscience Technology magazine: Vol. 34, No. 10, October 2010, pp. 12-14.