AI-Enhanced Microscopy Techniques

Andrew Williams reports on AI-enhanced microscopy techniques

Several organisations around the world are developing innovative artificial intelligence (AI)-enabled and AI-enhanced microscopy techniques. So, how do such techniques work? And what are the key applications?

Augmented Reality Microscopes (ARM)

One of the most interesting recent developments is at Google Brain, where a team is developing an augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis. As Cameron Chen, a member of the Google Brain team working on machine learning and deep learning with applications to healthcare, explains, the microscopic assessment of tissue samples is instrumental for the diagnosis and staging of cancer and guides subsequent therapy choices. However, these assessments demonstrate ‘considerable variability’ and many regions of the world lack access to trained pathologists. Moreover, although AI promises to improve access to and quality of healthcare, Chen stresses that the costs of image digitisation in pathology and difficulties in deploying AI solutions ‘remain as barriers to real-world use.’

In an effort to remedy this situation, the Google Brain team proposes the use of an augmented reality microscope (ARM) as a cost-effective solution. The ARM overlays AI-based information onto the current view of the sample in real time, enabling seamless integration of AI into routine workflows.

“We have demonstrated the utility of ARM in the detection of metastatic breast cancer and the identification of prostate cancer, with latency compatible with real-time use – and anticipate that the ARM will remove barriers towards the use of AI designed to improve the accuracy and efficiency of cancer diagnosis,” says Chen.

The integrated AI is a deep learning algorithm, developed using large annotated datasets to identify breast and prostate cancer. The AR overlay is driven by a high-definition multimedia interface (HDMI) output from a computer and visualised directly in the microscope eyepiece via a microdisplay mounted on the side of the microscope.

The microscope itself is a standard binocular microscope that has been modified to include a video output to the computer and the attached microdisplay. For Chen, the main advantage of combining AI, AR and microscopy technology for cancer diagnosis is that the diagnostic workflow via a standard microscope remains unchanged.
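The overall loop is simple to picture: grab a frame from the microscope’s video output, run the model, and paint the prediction back into the optical path. The sketch below is a minimal, illustrative version of such a loop in Python with OpenCV; the camera index, the predict_tumor_mask placeholder and the on-screen window standing in for the microdisplay are assumptions for illustration, not Google Brain’s implementation.

```python
# Illustrative sketch of an ARM-style overlay loop (not Google's actual code).
# Assumes the microscope's video output is exposed as camera device 0 and that
# a real segmentation network would replace predict_tumor_mask.
import time
import cv2
import numpy as np

def predict_tumor_mask(frame_rgb: np.ndarray) -> np.ndarray:
    """Placeholder for the deep learning model; returns a per-pixel
    probability map in [0, 1]. A real system would run a CNN trained
    on annotated pathology images here."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    return (gray < 100).astype(np.float32)  # dummy heuristic

cap = cv2.VideoCapture(0)                   # microscope video output
while cap.isOpened():
    ok, frame_bgr = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    mask = predict_tumor_mask(frame_rgb)

    # Draw the prediction as a translucent green overlay, spatially
    # registered to the camera frame (and hence the eyepiece FOV).
    overlay = frame_bgr.copy()
    overlay[mask > 0.5] = (0, 255, 0)
    fused = cv2.addWeighted(frame_bgr, 0.7, overlay, 0.3, 0)

    latency_ms = (time.perf_counter() - t0) * 1000
    cv2.putText(fused, f"{latency_ms:.0f} ms", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.imshow("microdisplay", fused)       # stand-in for the microdisplay feed
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
```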

Three Design Requirements Of Augmented Reality Microscopes

In order to achieve this, he reveals that the ARM system satisfies three major design requirements – namely ‘spatial registration of the augmented information, system response time and robustness of the deep learning algorithms.’ Firstly, he points out that AI predictions such as tumour or cell locations need to be precisely aligned with the specimen in the observer’s field of view (FOV) to retain the correct spatial context. Importantly, this alignment must be insensitive to small changes in the user’s eye position relative to the eyepiece – parallax-free – to account for user movements. Secondly, although the latest deep learning algorithms often require billions of mathematical operations, these algorithms have to be applied in real time to avoid unnatural latency in the workflow. This is ‘especially critical’ in applications such as cancer diagnosis, where the pathologist is constantly and rapidly panning around the slide.

“Finally, many deep learning algorithms for microscope images were developed using other digitisation methods, such as expensive whole-slide scanners with customised optics in histopathology. We show that these algorithms generalise to images captured via video output from a standard microscope. These three core capabilities enable the seamless integration of AI into a traditional microscopy workflow,” adds Chen.
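As a rough illustration of the response-time requirement, a model can be benchmarked against a per-frame latency budget before being wired into the eyepiece. The sketch below does this for an assumed 30 frames-per-second budget and an arbitrary model_fn; the numbers and names are illustrative assumptions, not figures reported by the Google Brain team.

```python
# Minimal tail-latency check for the "system response time" requirement;
# the 30 fps budget and model_fn are illustrative assumptions.
import time
import numpy as np

FRAME_BUDGET_MS = 1000 / 30          # ~33 ms per frame at 30 fps

def benchmark(model_fn, frames, warmup=10):
    """Run the model on a list of frames and report 95th-percentile latency."""
    for f in frames[:warmup]:        # warm-up pass (GPU allocation, caches)
        model_fn(f)
    times = []
    for f in frames:
        t0 = time.perf_counter()
        model_fn(f)
        times.append((time.perf_counter() - t0) * 1000)
    p95 = float(np.percentile(times, 95))
    print(f"p95 latency: {p95:.1f} ms "
          f"({'within' if p95 <= FRAME_BUDGET_MS else 'over'} the frame budget)")
    return p95
```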

Robotic Systems In Microscopes

Elsewhere, teams led by Dr Thomas Marchitto at the University of Colorado, Boulder and Michael Daniele at North Carolina State University (NCSU) are currently collaborating on a novel project to develop a robotic system capable of imaging, identifying and sorting microscopic fossils known as foraminifera, or forams for short. As Edgar Lobaton, associate professor in the Department of Electrical and Computer Engineering at NCSU, explains, analysis of such specimens can provide scientists with insights into the biodiversity and conditions of the ocean when the forams were alive.

So far, the team has demonstrated that machine learning can identify six different species of forams from their images with performance comparable to that of human experts. It is now developing a robotic system that will handle the imaging and sorting automatically, enabling the collection of enough data to train models capable of recognising a much larger number of foram species.
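To make the idea concrete, a species classifier of this kind can be prototyped with a small convolutional network. The sketch below is a generic, hedged example in Keras, assuming a folder of labelled foram images per species; it is not the team’s published model or training setup.

```python
# Minimal image-classification sketch in the spirit of the foram work;
# the directory layout, image size and architecture are assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)
NUM_SPECIES = 6          # the six species identified so far

train_ds = tf.keras.utils.image_dataset_from_directory(
    "forams/train", image_size=IMG_SIZE, batch_size=32)   # one folder per species
val_ds = tf.keras.utils.image_dataset_from_directory(
    "forams/val", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_SPECIES),                    # one logit per species
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```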

“We are focused on the development of an open-source and affordable platform that can be used by the scientific community. Our current prototypes make use of off-the-shelf microscopes, and we are integrating the robotic components to such platforms. The robotic platform will consist mainly of 3D printed components and off-the-shelf servos and pumps,” says Lobaton.

In terms of technology, Lobaton also reveals the team uses the AmScope SE306R-P microscope, AmScope MU1803-HS-CK camera, Hitec HS-485HB servos and a Makeblock air pump motor – with most hardware components sourced from ServoCity.

“The main advantages are that this system will minimise the need for manual sorting of samples of forams for scientific studies, which is the gold standard at this point. Manual sorting is a time-consuming and error-prone procedure. In terms of next steps, we are hoping to release our first robotic prototype for imaging of forams in the next months,” he adds.

AI-Augmented Parasite Detection Microscopes

Another interesting initiative is a joint ARUP Laboratories and Techcyte project to develop the first ever AI-augmented ova and parasite detection tool. As Troy Bankhead, director at Techcyte Europe, explains, the company has found that combining a deep learning algorithm’s ability to quickly find and propose potential parasites with the expertise of parasitologists and medical technicians enables ‘faster and more accurate results.’ After testing several device manufacturers, Bankhead reveals that the company created a partnership with Apacor, which manufactures a high-quality sample collection and filtration device. It then honed the sample collection and slide preparation process, resulting in a complete solution that can be ‘deployed much quicker and cheaper, but that produces results that are completely trustworthy for the labs.’

“The deep neural networks that we use to create our algorithms are very good at finding and correctly identifying things like cells, organisms, particles, and the like. But this work all starts with a scanned image – and we work with digital slide manufacturers to adapt their software and enable us to obtain images that are optimised for AI,” says Bankhead.

In his view, although humans are really good at making visual and informational associations, computers are better at finding the proverbial ‘needle in a haystack’ and at being relentlessly consistent and rigorous.

“Without our technology, lab technicians performing ova and parasite detection tests can easily take up to 10 minutes to perform an evaluation. By taking care of the searching and identification of objects, and presenting them with just the objects of interest, they can reduce that time dramatically, to around 10 seconds,” he says.
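Conceptually, that speed-up comes from the software scanning the whole image and surfacing only a handful of candidate regions for the technician to confirm. The sketch below illustrates the idea with a naive tiling-and-scoring pass over a scanned slide; the tile size, the score_tile placeholder and the top_k cut-off are assumptions for illustration, not Techcyte’s algorithm.

```python
# Sketch of the "needle in a haystack" step: tile a scanned slide image,
# score each tile with a parasite classifier, and surface only the
# highest-scoring candidates for human review. score_tile is a placeholder.
import numpy as np
from PIL import Image

TILE = 256

def score_tile(tile: np.ndarray) -> float:
    """Placeholder for a deep learning classifier returning P(parasite)."""
    return float(tile.std() / 255.0)     # dummy texture heuristic

def candidate_tiles(slide_path: str, top_k: int = 20):
    slide = np.asarray(Image.open(slide_path).convert("RGB"))
    h, w, _ = slide.shape
    scored = []
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = slide[y:y + TILE, x:x + TILE]
            scored.append((score_tile(tile), (x, y)))
    scored.sort(reverse=True)
    return scored[:top_k]                # coordinates the technician reviews

# e.g. candidate_tiles("ova_and_parasite_scan.png") -> [(score, (x, y)), ...]
```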

Cell Analysis In Microscopes

Meanwhile, a team of scientists at the Allen Institute for Cell Science have developed a technique that uses a combination of microscopy and machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. As Greg Johnson, a scientist in the Allen Institute for Cell Science team, explains, cell biologists really want to be able to know how all parts of the cell are organised and work together.

“To be able to see the precise location of cellular machinery in live cells – the location of a specific type of protein, for example – we need to use a fluorescence microscope and we need to attach a fluorescent molecule to the particular type of cellular machinery, so we can see where that part is, and only that part,” he says.

However, although confocal fluorescence microscopes like this can take three-dimensional images, they need to hit the cell with powerful lasers to get fluorescent molecules to light up, in the process essentially ‘cooking’ the cell like ants under a magnifying glass. It is in part because of this that Johnson says scientists can see only two or three different types of fluorescent molecules at a time.

In an effort to remedy this limitation, Johnson and his team took brightfield and fluorescence images of the same cell, and built a machine-learning model capable of predicting what the fluorescence image looks like based on viewing the brightfield image. Johnson believes this is ‘super useful’ because it means he and his team can ‘take brightfield-fluorescence image pairs from a bunch of different experiments and train a model for each different type of fluorescence probe.’

“So now I have a bunch of models, each one is pretty good at seeing where a different part of the cell is. We can take one brightfield image from a new sample, and give it to each model, and get a representation of where many of the cell’s parts are. This allows us to stitch together many different experiments to see tons of stuff that we weren’t able to before,” he says.
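A drastically simplified sketch of that workflow: train one image-to-image regression model per fluorescence probe on paired brightfield-fluorescence patches, then apply every trained model to a single new brightfield image. The tiny network, patch size and data handling below are illustrative assumptions, not the Allen Institute’s released software.

```python
# Simplified label-free sketch: one brightfield -> fluorescence model per probe.
# Shapes, the tiny network and the variable names are illustrative assumptions.
import tensorflow as tf

def make_model(patch=(64, 64)):
    """A small fully convolutional net mapping brightfield to one probe channel."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=patch + (1,)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 1, padding="same"),     # predicted fluorescence
    ])

def train_probe_model(brightfield, fluorescence, epochs=20):
    """brightfield, fluorescence: paired arrays of shape (n, 64, 64, 1)
    from experiments that used one particular fluorescent probe."""
    model = make_model()
    model.compile(optimizer="adam", loss="mse")           # pixel-wise regression
    model.fit(brightfield, fluorescence, epochs=epochs, batch_size=16, verbose=0)
    return model

# One model per probe, then stack predictions for a new brightfield image:
# models = {probe: train_probe_model(bf[probe], fl[probe]) for probe in probes}
# prediction_stack = {p: m.predict(new_brightfield) for p, m in models.items()}
```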

To implement its model, Johnson reveals that the Allen Institute team use a basic fluorescence confocal microscope and a desktop computer with a graphics card. Some cells are also labelled with a fluorescent probe.

“Brightfield images were previously seen as having little utility, but now we can leverage this cheap data to do powerful things. This is something that a normal lab can do. We are also giving away software and data for free. Anyone can use and change it for whatever project they want,” he says.

“For the next steps, we want to be able to see what other types of things we can predict from these cheap images – gene expression, for example. We want smarter, more creative people to take these ideas and apply them to their data,” he adds.
