When AI meets microscopy

Mike Woerdemann explains how deep learning overcomes many common microscopy challenges

Deep learning is a form of artificial intelligence (AI) inspired by the structure of the human brain. Using algorithms called neural networks to extract layers of information from a raw input, it relies on training data to yield accurate and consistent results.

The best-known practical applications of deep learning include speech recognition platforms such as Google Assistant and Amazon Alexa. However, deep learning has also seen considerable uptake across many scientific disciplines, largely because modern implementations have dispelled the widely held perception that AI-based technologies are complex and time-consuming to set up.

One such discipline is the field of microscopy, which has leveraged deep learning to efficiently analyse the vast amounts of data generated by high-content screening and to tackle many other common microscopy challenges. For example, using deep learning microscopy, researchers have been able to carry out quantitative analysis of fluorescently labelled cells at ultra-low light exposure and perform label-free analysis of cells in microwell plates.

Challenges of high-content screening microscopy

Conventional microscopy image analysis often relies on segmentation, whereby thresholds based on signal intensity or colour are applied to images to extract the analysis targets. Drawbacks of this approach are that it can be extremely time-consuming and strongly dependent on sample condition; it is also highly prone to operator bias. Moreover, as microscopy platforms have evolved to support high-throughput screening, the volume of data generated can quickly create an analysis bottleneck.
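To illustrate the conventional approach, the sketch below segments objects in a single fluorescence channel by global intensity thresholding using scikit-image. The file name and the size filter are hypothetical, and the automatically chosen Otsu threshold stands in for the manual threshold an operator would normally tune by eye, which is precisely where bias and sensitivity to sample condition creep in.

# Conventional intensity-threshold segmentation (illustrative sketch).
# Assumes a single-channel fluorescence image saved as 'nuclei.tif' (hypothetical).
from skimage import io, filters, measure, morphology

image = io.imread("nuclei.tif")

# Global threshold chosen by Otsu's method; in practice the operator
# often adjusts this value by eye, which introduces bias.
threshold = filters.threshold_otsu(image)
mask = image > threshold

# Clean up and label connected components as individual objects.
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)

for region in measure.regionprops(labels, intensity_image=image):
    print(region.label, region.area, region.mean_intensity)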

A further challenge in live cell imaging is that many cell studies require fluorescent labels. Not only can exposure to strong excitation light influence cell behaviour, but prolonged or intense illumination can lead to photodamage or phototoxicity with an observable impact on cell viability. Although these effects can be reduced by lowering the light exposure, the resulting drop in fluorescence signal reduces the signal-to-noise ratio and makes quantitative image analysis difficult.

The impact of fluorescent labels can be avoided using brightfield microscopy, which has the added advantages of easier sample preparation, faster imaging and improved cell viability, making it well suited to long-term live cell imaging studies. Yet brightfield microscopy presents inherent image analysis and segmentation challenges and is constrained by low contrast and poorer image quality compared to fluorescence microscopy. As such, its capacity for label-free analysis is relatively limited using conventional methods.

How does deep learning address these microscopy challenges?

Deep learning represents an ideal solution to these microscopy challenges. Its algorithms can rapidly learn to predict multiple parameters autonomously from reference images acquired during a training phase, a process that requires little human interaction and eliminates the need for time-consuming manual annotation of object masks. This translates into efficient, reliable and unbiased analyses founded on high-precision detection and segmentation.

Low signal segmentation training enables experiments to be performed at ultra-low light exposure, allowing quantitative image analysis with minimal influence on cell viability. For example, by training a deep neural network with image pairs (where one image is taken under optimal lighting conditions and the other is underexposed), it is possible to achieve robust results with as little as 0.2% of the light usually required (Fig. 1).
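The details of such pipelines are vendor-specific, but the underlying idea can be sketched as supervised learning on registered image pairs: the underexposed image is the input and its well-exposed counterpart is the target. The example below is a generic PyTorch sketch under those assumptions; the toy data, the small convolutional network and the hyperparameters are illustrative placeholders, not any vendor's implementation, and real systems typically use deeper architectures such as U-Nets.

# Minimal sketch of paired-image training: underexposed input -> well-exposed target.
# The dataset, architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for registered image pairs (N, 1, H, W): low-light input, normal-light target.
low = torch.rand(16, 1, 64, 64) * 0.002   # roughly 0.2% of the usual signal
high = torch.rand(16, 1, 64, 64)
loader = DataLoader(TensorDataset(low, high), batch_size=4, shuffle=True)

# A deliberately small fully convolutional network for illustration.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for x, y in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)   # penalise deviation from the well-exposed target
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")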

Deep learning has also proven its utility for analysing brightfield transmission images, matching or even outperforming the classical approach based on a fluorescence label (Fig. 2). As well as improving the viability of living cells by avoiding the stress of transfection or chemical markers, brightfield imaging frees up fluorescence channels so that other markers can be used in downstream experiments; this greatly increases the depth of information obtained from the sample material.
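The same paired-training recipe applies here, with the brightfield channel as input and the recorded fluorescence channel (or a mask derived from it) as the training target. The sketch below shows how such a model might then be used label-free at inference time: it predicts a 'virtual' fluorescence channel from a brightfield tile and segments it with the usual tools. The network is an untrained stand-in for a model trained as in the previous example, and the data are placeholders.

# Label-free inference sketch: predict a virtual fluorescence channel from a
# brightfield image, then segment the prediction.
import torch
import torch.nn as nn
from skimage import filters, measure

# Placeholder for a network trained on brightfield/fluorescence pairs
# (see the training sketch above); here an untrained stand-in.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))

brightfield = torch.rand(1, 1, 64, 64)   # stand-in for a real brightfield tile

with torch.no_grad():
    virtual_fluorescence = model(brightfield)[0, 0].numpy()

# The predicted channel can then be segmented like a real fluorescence image.
mask = virtual_fluorescence > filters.threshold_otsu(virtual_fluorescence)
print("objects detected:", int(measure.label(mask).max()))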

Making deep learning microscopy accessible

AI holds huge potential for almost any scientific discipline, not least microscopy, where it has removed many of the barriers that previously limited the technique's success. Now, with growing recognition that setting up the necessary software no longer requires significant time and expertise, deep learning microscopy is pushing the boundaries of scientific understanding.

Modern microscopy platforms featuring integrated deep learning software require just a brief training stage before being deployed to capture, quantify and analyse large numbers of images. For instance, Olympus' cellSens software and the software for the scanR high-content screening station (Fig. 3) and VS200 slide scanner now include the TruAI module, a deep learning approach based on convolutional neural networks.

With deep learning, analysing vast datasets is no longer a bottleneck, and tasks that were previously impossible using manual thresholding methods are fast becoming mainstream. In addition, deep learning has provided researchers with many more options for quantifying microscopy images, including the use of ultra-low light exposure or brightfield images alone, highlighting the power of AI to benefit life science.

Mike Woerdemann is with Olympus

 
