Saving Cells: Image Processing for Improved Viability, Part II: Iterative Deconvolution

Featured In: Image Analysis | Microscopy

Monday, October 25, 2010

3D light microscopy combined with deconvolution provides a means to investigate 3D structure, yielding near-confocal-quality images without the temporal requirements or potentially damaging phototoxicity associated with other 3D imaging technologies. This article is Part II in a series on viability, resolution improvement, and measurement in fluorescence imaging. Part I focused on spectral unmixing.

Scientists studying living systems have found that gleaning a more precise spatial understanding of samples is integral to their ability to understand cellular events. As such, their need to image or measure three-dimensional volume data in live cells continues to grow. Researchers have benefited from a vast improvement in computing power over the past decade; robust three-dimensional deconvolution techniques have become readily accessible without large capital expenditures for computer equipment.

Confocal imaging is often selected by researchers for fluorescence imaging, but it has its limitations. Because fluorophores, particularly those used in life science research, are generally subject to photobleaching, every effort must be made to limit exposure to excitation light. In both laser-scanning and spinning-disk confocal imaging, samples are routinely exposed to extreme levels of excitation light, causing varying degrees of both photobleaching and phototoxicity.


Figure 1: C. elegans hermaphrodite in the late L2 stage. At left, before iterative deconvolution. At right, deconvolution using Olympus imaging software allows for a much crisper and more detailed image. Green: PDZ-protein TAG-60, detected on the lumenal side of intestinal cells. Red: protein AJM-1, which is expressed in the junctions of epithelial cells. Blue: DNA stained with DAPI. (Source: Peter Gutierrez in Prof. Dr. Alex Hajnal’s laboratory at the Institute of Zoology, University of Zurich)

Additionally, as noted in the previous article in this series, researchers often experience emission wavelength overlap with fluorescent dyes. Narrow-band emission filters may be used to eliminate this bleed-through signal, but because such filters reduce the collected signal, the longer exposures or higher excitation intensities needed to compensate risk additional photobleaching and phototoxicity.

Widefield imaging requires less excitation energy and can be a useful alternative to confocal imaging for visualizing or measuring three-dimensional volumes. The drawback to widefield imaging, however, is the loss of confocality, the property that provides clear individual slices and highly resolvable data in three dimensions. For many scientists, the best of all possible worlds would be to image in widefield mode, minimizing photobleaching and phototoxicity, while generating data that allows for three-dimensional assessment of their experimental systems.

Deconvolution for Cell-friendly Imaging
Iterative deconvolution (not to be confused with deblurring, which is often mistakenly referred to as deconvolution) is one method of approaching this ideal. Deconvolution takes advantage of the properties of light and optics to mathematically correct out-of-focus blur in images. In essence, deconvolution returns out-of-focus signal to its origin in the sample. The improved contrast and resolution realized by deconvolving volume data improves the visual appearance of the data, creating a “pretty” dataset or image. Perhaps more importantly, consistent application of deconvolution algorithms also improves the ability to quantify volumetric and intensity data across samples and experiments. Deconvolution is most correctly performed when imaging conditions are consistent and variability due to motion and photobleaching is minimized. With properly collected data, software can compensate and correct for diffraction-related blur, allowing the researcher to use less light to collect the same amount of image data. However, inconsistent application of these algorithms can create the reverse: data that cannot be compared image to image, sample to sample, or experiment to experiment, despite the visual appearance of the results.
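
To make the idea concrete, below is a minimal sketch of one widely used iterative scheme, Richardson-Lucy deconvolution, in Python with NumPy and SciPy. It is illustrative only, not the algorithm used by any particular commercial package: it assumes a known point spread function (PSF) and non-negative, background-subtracted input, and real implementations add regularization, edge handling, and full 3D PSFs.

```python
# Minimal Richardson-Lucy sketch (2D for brevity; a 3D stack works
# the same way with an extra axis). Assumes a known PSF and
# non-negative, background-subtracted input.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20):
    """Iteratively reassign out-of-focus signal back toward its origin."""
    psf_mirror = psf[::-1, ::-1]          # flipped PSF for the correction step
    estimate = np.full_like(image, image.mean(), dtype=float)  # flat initial guess
    eps = 1e-12                           # guard against division by zero
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)   # where the estimate under/over-predicts
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each pass re-blurs the current estimate with the PSF, compares it to the recorded image, and multiplies in a correction, which is why consistent acquisition conditions matter: the correction is only as good as the match between the assumed PSF and the real optics.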

Deconvolution is not limited to widefield imaging; modalities have been developed for spinning-disk confocal, laser-scanning confocal, and multiphoton imaging modes (Figure 1). In addition, the technique is not limited to processing of volume data—two-dimensional data can be deconvolved in a similar manner. With today’s technology, careful attention to hardware parameters, dataset requirements, and processing procedures can make deconvolution simple and reproducible. (For more extensive background on the optics and techniques of deconvolution, visit the Microscopy Resource Center online at: www.olympusmicro.com/primer/digitalimaging/deconvolution/deconvolutionhome.html).

Start with Proper Experimental Design
Though three-dimensional deconvolution requires no special hardware other than that found in a typical fluorescence microscopy laboratory, it does require some preparatory work to collect suitable data. Adequate sampling in the X, Y, and Z dimensions is critical for proper data processing. In addition, imaging in a manner conducive to sample viability and consistency is important. It is imperative that samples are not overexposed to excitation light, and that photobleaching is kept to a minimum during acquisition.

Sampling in the X, Y, and Z dimensions should satisfy the Nyquist criterion, and exposure times should be optimized to avoid overfilling the dynamic range of the image file while still utilizing as much of the detector’s dynamic range as possible. (For an exhaustive treatment of the mathematics and the practical application of Nyquist sampling, please see: www.olympusmicro.com/primer/techniques/confocal/resolutionintro.html).
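
As a rough illustration only (not a substitute for the full treatment linked above), the common textbook approximations for widefield resolution give back-of-the-envelope Nyquist spacings:

```python
# Back-of-the-envelope Nyquist spacing for widefield imaging, using
# common textbook approximations: the Abbe lateral resolution limit
# and the standard widefield axial resolution estimate. The default
# refractive index assumes immersion oil.
def nyquist_spacing(wavelength_nm, na, refractive_index=1.515):
    lateral_res = wavelength_nm / (2 * na)                    # Abbe limit
    axial_res = 2 * wavelength_nm * refractive_index / na**2  # widefield axial estimate
    # Sample at least twice per resolvable element in each dimension.
    return lateral_res / 2, axial_res / 2

xy, z = nyquist_spacing(wavelength_nm=520, na=1.4)  # e.g., green emission, 1.4 NA oil
print(f"XY pixel size <= {xy:.0f} nm, Z step <= {z:.0f} nm")
```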

With deconvolution, the output data is continuous and unbounded, so it is possible to saturate a fixed-point image type, and it is often unclear how a particular software implementation will handle saturated values produced by a deconvolution algorithm. To err on the side of caution, researchers capturing image data for deconvolution should slightly underexpose the image without dipping into the noise floor of the imaging system, and should not allow clipping or rescaling of saturated data in the resulting image.
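
A simple pre-flight check along these lines can flag clipped pixels before data is committed to deconvolution. This is a sketch; the bit depth is an assumption and should match the actual camera:

```python
# Quick exposure sanity check before deconvolution: count saturated
# pixels and report how much of the detector's dynamic range is used.
import numpy as np

def exposure_report(image, bit_depth=12):
    """Return (saturated pixel count, fraction of full scale used)."""
    max_value = 2**bit_depth - 1
    n_saturated = int(np.count_nonzero(image >= max_value))
    fraction_used = float(image.max()) / max_value
    return n_saturated, fraction_used

# Aim for zero saturated pixels and a peak comfortably below full
# scale, while keeping the signal well above the noise floor.
```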

As the scientist proceeds through the specimen to collect Z-axis information on multiple fluorophores (multiple channels), it is best to acquire all of the data for each Z-slice at one time, moving through all of the channels at that depth before moving on to the next slice. It is tempting to speed up acquisition by collecting one channel at a time, moving through the entire specimen and then cycling back to collect the next channel, but this technique has risks. First, slices can be mismatched when returning to the original Z-position. Second, each pass through the stack exposes the sample to another round of excitation, so the sample undergoes repeated cycles of fluorescence stimulation and recovery and may become photobleached. When imaging, it is also preferable to expose the sample to the longest excitation wavelength (lowest-energy light) first and the shortest wavelength (highest-energy light) last.
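
The recommended ordering can be expressed as a simple loop sketch. The channel names, Z positions, and the commented stage and camera calls are hypothetical stand-ins for whatever control API the acquisition software exposes:

```python
# Sketch of the recommended acquisition order: all channels at each
# Z position before moving on, longest excitation wavelength first.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    excitation_nm: int

channels = [Channel("DAPI", 405), Channel("GFP", 488), Channel("mCherry", 561)]
# Lowest-energy (longest) excitation wavelength first, highest-energy last.
order = sorted(channels, key=lambda c: c.excitation_nm, reverse=True)

for z_um in (0.0, 0.3, 0.6):          # example Z positions
    # stage.move_z(z_um)              # hypothetical stage call: one focus move per slice
    for ch in order:
        # camera.snap(ch)             # hypothetical camera call
        print(f"z={z_um:.1f} um, channel={ch.name} ({ch.excitation_nm} nm)")
```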

Processing Data for Analysis
When deconvolving image data, the first step is to ensure that the experimental data has been collected in a consistent and correct manner: X, Y, and Z sampling is proper, phototoxicity is not a factor, and photobleaching is minimized during collection. Once the image volumes are collected, deconvolution is a relatively simple process for smaller data sets, yielding data that is suitable for intensity measurement, morphological measurement, and co-occurrence/colocalization analysis. Working with larger multi-dimensional data sets, however, can be challenging due to the processing time required and the complexity of the process. In the past, many researchers who lacked the inclination or the extensive mathematical background found deconvolution so complex in practice that they chose not to use it, despite its capabilities.
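
As one example of the downstream analyses mentioned above, a Pearson correlation between two deconvolved channels is a simple co-occurrence readout. This is a bare sketch; dedicated colocalization tools add background thresholding and statistical controls:

```python
# Pearson correlation between two aligned channel volumes as a
# simple co-occurrence/colocalization readout (sketch only).
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Return the Pearson coefficient between two same-shaped channels."""
    a = ch1.ravel() - ch1.mean()
    b = ch2.ravel() - ch2.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))
```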

Recently, however, the power of deconvolution has become accessible to more researchers. Some of today’s advanced commercially available image acquisition and analysis software programs offer precise control of image acquisition hardware combined with advanced deconvolution tools, allowing the researcher to rapidly capture and process data sets with improved contrast and resolution while maintaining cell viability through reduced photobleaching and phototoxicity. In addition, recent improvements to deconvolution algorithms have greatly improved processing speed, making processing of larger data sets practical. Olympus cellSens Dimension software, for instance, offers both multi-dimensional acquisition and computationally efficient deconvolution tools, enabling the researcher to acquire meaningful multi-dimensional data and process it with a single click.

By combining new software tools with advances in computer processing power and detector sensitivity and speed, researchers now have the opportunity to design experiments that would not have been possible in the past due to the inability to acquire adequate signal at a temporal resolution that reduces motion artifact. The recent combination of advances in hardware, software, and computing allows scientists to generate quantitative data rapidly while vastly reducing cell mortality. The ability to preserve living cells while acquiring usable multichannel fluorescence volume data, one of the most elusive goals of live cell imaging, is now accessible to more fluorescence imaging laboratories than ever before.

This article was published in Bioscience Technology magazine: Vol. 34, No. 10, October, 2010, pp. 12-14.
