
Taking microscope imaging and analysis to the next level

Artificial Intelligence (AI) and deep learning methods are making seemingly impossible tasks possible. From recovering contrast and improving signal-to-noise ratio to managing challenging acquisition parameters and performing segmentation that was previously difficult or nearly impossible, these tasks can now be automated thanks to AI.

The NIS-Elements NIS.ai suite consists of various modules and functions which expand the NIS-Elements platform by building in tailor-made solutions for acquisition, visualization and analysis.


AI Segmentation vs. Conventional Thresholding

The goal was to measure intensity along the nuclear envelope of cells. Conventional segmentation cannot differentiate the cellular structures and misses several cells, while AI-trained segmentation successfully recognizes and identifies the nuclear envelope.

AI vs. Raw Image

Widefield data can be contaminated by scattered and out-of-focus light, but AI-based tools can recover high contrast images by removing noise and blur.

Key Features

Clarify.ai Module

Clarify.ai uses artificial intelligence to automatically remove blur from fluorescence microscope images.

Clarify.ai utilizes new Nikon technologies executed on graphics processing units (GPUs) to quickly and efficiently restore clarity to images normally corrupted by blur from out-of-focus light.

The module is pre-trained to recognize fluorescence signal emitted from out-of-focus planes and automatically removes this haze component from the image, leaving behind the in-focus structures. It can be used on any widefield 2D or 3D data set, with any detector or magnification, without the need for additional AI training or the bias introduced by complicated user settings.

Clarify.ai vs. Original

60x multichannel 3D widefield Z stack before and after application of Clarify.ai

Clarify.ai vs. Original

20x multichannel 2D widefield image of a scattering tissue slice before and after application of Clarify.ai


NIS.ai Processing and Analysis Module

The NIS.ai processing and analysis module consists of tools dedicated to improving efficiency in data acquisition and simplifying previously complex or difficult analysis routines.

Convert.ai

By recognizing patterns present in two different imaging channels, Convert.ai can be trained to predict what the second channel would look like when only the first channel is acquired.

Commonly, this can be used as a segmentation tool for label-free approaches, or for imaging without harmful near-UV excitation. Once the neural network learns the pattern common to the two channels, the second channel no longer needs to be acquired in subsequent experiments. Both acquisition throughput and specimen viability increase as a result.
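Conceptually, this kind of channel prediction is a supervised image-to-image regression problem. The sketch below is only an illustration of that idea, not the Convert.ai implementation; the tiny network, tensor names, and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch of channel-to-channel prediction (NOT Convert.ai internals).
# Assumes paired training images: a label-free channel (e.g., DIC) as input and
# the corresponding fluorescence channel (e.g., DAPI) as the target.
import torch
import torch.nn as nn

# A tiny fully convolutional network mapping one image channel to another.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(dic_batch: torch.Tensor, dapi_batch: torch.Tensor) -> float:
    """One supervised step: predict the fluorescence channel from the label-free channel."""
    optimizer.zero_grad()
    prediction = model(dic_batch)           # shape (N, 1, H, W)
    loss = loss_fn(prediction, dapi_batch)  # pixel-wise regression loss
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, only the label-free channel needs to be acquired;
# the fluorescence-like channel is predicted: predicted = model(dic_image)
```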

Convert.ai vs. Original

DAPI staining of nuclei is a common method allowing cell counting and segmentation. Convert.ai can be trained to predict where the DAPI label is present in DIC or phase images. This predicted channel can then be used for segmentation and counting, without ever having to label the specimen with DAPI or acquire a fluorescence channel.

Photos courtesy of Dr. Kentaro Kobayashi, Technical Division, Research Institute for Electronic Science, Hokkaido University


Enhance.ai

Some fluorescent samples emit very low signal, making it difficult to visualize structures or extract details for segmentation.

In addition, many of these samples are light-sensitive or photobleach very quickly and need to be imaged as fast as possible.

Enhance.ai can restore detail by training the network on what properly exposed images look like. This recipe can then be applied to underexposed images to recover detail for further analysis.

Enhance.ai vs. Original

DAPI-stained nuclei are purposefully underexposed to limit the specimen's exposure to near-UV light. Enhance.ai restores the signal-to-noise ratio to that of normally exposed DAPI staining, for easy segmentation and counting.


Segment.ai

Some images are nearly impossible to segment by traditional intensity thresholding methods. With Segment.ai, a neural network can be trained on human classification of structures of interest that cannot easily be defined by classic thresholding and image processing.

By tracing features of interest and training against the underlying image, the neural network learns to apply segmentation to similar images, recognizing features previously identifiable only by painstaking manual tracing.
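This follows the same supervised pattern as the Convert.ai sketch above, but with a per-pixel classification objective instead of regression. The example below is a conceptual illustration only, not the Segment.ai internals; the minimal network and the assumed hand-traced mask tensors are hypothetical.

```python
# Conceptual sketch of training from hand-traced masks (NOT Segment.ai internals).
# Assumes phase-contrast images paired with binary masks (1 = neurite, 0 = background).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),   # per-pixel logit: neurite vs. background
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images: torch.Tensor, traced_masks: torch.Tensor) -> float:
    """Learn from human-traced masks so similar images can be segmented automatically."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), traced_masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Inference on a new image: threshold the predicted probability map into a mask.
# mask = torch.sigmoid(model(new_image)) > 0.5
```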

Segment.ai vs. Original

Neurites in phase contrast could not be defined accurately by traditional thresholding. Segment.ai was trained on hand-traced (human-recognized) neurites and learned to trace neurites in subsequent images.


Denoise.ai Function

Included in the NIS-Elements AR core package, Denoise.ai can be applied to confocal images to remove shot noise. All images contain shot noise, a Poisson-distributed noise that arises from discretely sampling (acquiring images of) a continuous event. As signal levels decrease, the contribution of shot noise increases, following a square-root relationship, and noisy images result. This noise characteristic is modeled in a pre-trained neural network and requires no further training.

With new fluorescent techniques pushing intensities lower and acquisition speeds increasing, Denoise.ai can recognize and remove the shot noise component of images, increasing clarity and allowing for shorter exposure times or more exposures of specimens while maintaining viability.
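The square-root relationship is easy to verify numerically. The snippet below is a standalone NumPy illustration, not part of NIS-Elements: it simulates Poisson photon counting and shows that the signal-to-noise ratio (SNR) scales as the square root of the mean signal.

```python
# Simulate shot noise: for Poisson counts, mean = N and std = sqrt(N), so SNR = sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
for mean_photons in (1000, 100, 10):
    counts = rng.poisson(mean_photons, size=100_000)   # simulated pixel values
    snr = counts.mean() / counts.std()
    print(f"mean={mean_photons:5d}  measured SNR={snr:6.2f}  sqrt(mean)={np.sqrt(mean_photons):6.2f}")
```

Dropping the mean signal by a factor of ten reduces the SNR by about √10 ≈ 3.2, which is why dim, fast acquisitions look disproportionately noisy.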

Denoise.ai vs. Original

Denoise.ai can be applied to remove the shot noise component of images while leaving the underlying structure and intensity values undisturbed.


No programming skills required

Clarify.ai and Denoise.ai are pre-trained deep learning networks and require no additional settings or parameters in order to apply these tools automatically to images.

The NIS.ai Processing and Analysis module uses training data to specifically target and address user-defined experiment parameters: it employs convolutional neural networks (CNNs) that learn from labeled training data created by either conventional segmentation or human-assisted tracing of a small subset of representative samples.

When using the module, the software interface makes it easy to apply complex deep learning to sample data, eliminating the need to design a complex neural network and apply training data to it.

Automated tools take this training data and apply the neural network to recognize patterns. The resulting training recipe can then be applied repeatedly and reliably to similar samples, processing or analyzing huge volumes of data significantly faster than traditional techniques.


GA3: an analysis pipeline with AI capabilities

Using NIS-Elements General Analysis (GA3), multiple conventional segmentation and AI tools can be combined to create data measurement routines customized for a specific experiment. These can be applied across multiple images, experiment runs, or high content data.

Because GA3 is freely customizable, it can easily be adapted to new experiment routines. Routines can also be embedded within experiment acquisition runs.

In this example, General Analysis applies Convert.ai to brightfield images to mark nuclei and Denoise.ai to a noisy fluorescence channel. The converted channel can then be tracked over a time lapse to measure cell movement, and the routine applied to multiple data sets for measurement.
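As a schematic only: the helper functions in the sketch below are hypothetical placeholders for GA3 nodes, which are configured graphically rather than in code. The point is that one fixed recipe is applied to every dataset.

```python
# Schematic of a GA3-style recipe applied across many data sets.
# Every callable is a hypothetical placeholder standing in for a graphical GA3 node,
# not an NIS-Elements API call.
from typing import Callable, Iterable

def run_routine(datasets: Iterable[dict], convert_ai: Callable, denoise_ai: Callable,
                segment: Callable, track: Callable) -> list:
    """Apply one fixed processing/measurement recipe to every dataset."""
    results = []
    for data in datasets:
        nuclei = convert_ai(data["brightfield"])    # predicted nuclear channel
        fluor = denoise_ai(data["fluorescence"])    # shot-noise-corrected channel
        masks = segment(nuclei)                     # conventional or AI segmentation
        results.append(track(masks, fluor))         # link objects over time, measure movement
    return results
```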


Use NIS.ai as part of an imaging pipeline

NIS.ai tools can be combined with all other features of the NIS-Elements platform to develop imaging protocols and targeted analysis from basic counting through rare event or selective phenotype detection and analysis.

This can be incorporated post-acquisition or, more impactfully, as an integral part of an experimental protocol (NIS-Elements Intelligent Acquisition), so that analysis results obtained during the experiment run guide the experimental parameters in different directions.

Using the JOBS experiment wizard, customized experiments with embedded analysis tasks and branches based on analysis results can be created, allowing for higher throughput and more targeted acquisitions.

Example of utilizing Segment.ai in an experiment run to analyze XY positions as they are captured, and to search for specific phenotypes. When a target cell is found, a stimulation experiment is performed. If no target cell is found, the experiment proceeds to the next XY position.
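In schematic terms, the logic of that example looks like the loop below. The function names are hypothetical placeholders, not NIS-Elements API calls; JOBS defines this branching graphically in the experiment wizard.

```python
# Schematic of the conditional acquisition loop described above.
# acquire, find_target_cells, and stimulate are hypothetical placeholders.
def run_experiment(xy_positions, acquire, find_target_cells, stimulate):
    for position in xy_positions:
        image = acquire(position)            # capture the current XY position
        targets = find_target_cells(image)   # e.g., a Segment.ai-trained phenotype detector
        if targets:
            stimulate(position, targets)     # branch: run the stimulation experiment here
        # otherwise fall through and move on to the next XY position
```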


Quantifiable Results

Artificial intelligence has become commonly accepted in diagnostic imaging and is an increasingly popular tool for a number of applications. Its appeal over traditional mathematical approaches is both its speed and incredible accuracy. However, it is important to be able to validate the results of AI computations, and to utilize these results appropriately for computational analysis.

NIS-Elements software provides feedback during training routines to indicate the confidence of the trained neural network to provide accurate results, as well as several analysis tools and workflows to validate the efficiency of the neural networks, or to allow easy comparison of AI data to ground truth data.


Summary of AI Modules

                               Denoise.ai   Clarify.ai   Enhance.ai   Segment.ai   Convert.ai
NIS C, Ar, Ar ML, Ar Passive   Included     Optional     Optional     Optional     Optional
NIS.ai Module                  No           No           Included     Included     Included
Deconvolution Modules          No           Included     No           No           No
Batch Deconvolution            No           Included     No           No           No
Offline Batch Denoise Package  Included     No           No           No           No