NIS-Elements Imaging Software

NIS.ai

AI accelerates research efficiency

Artificial Intelligence (AI) and deep learning methods are making seemingly impossible tasks possible. From recovering contrast and improving signal-to-noise ratio to managing challenging acquisition parameters and segmenting structures that were previously difficult or impossible to define, these tasks can now be automated thanks to AI.

The NIS-Elements NIS.ai suite consists of various modules and functions which expand the NIS-Elements platform by building in tailor-made solutions for acquisition, visualization and analysis.

Conventional Thresholding vs. AI Segmentation

The goal was to measure intensity along the nuclear envelope of cells. Conventional segmentation could not differentiate the cellular structures and missed several cells, whereas AI-trained segmentation successfully recognized and identified the nuclear envelope.

Raw Image vs. AI

Widefield data can be contaminated by scattered and out-of-focus light, but AI-based tools can recover high contrast images by removing noise and blur.

Key Features

Clarify.ai Module

Clarify.ai uses artificial intelligence to automatically remove blur from fluorescence microscope images.

Clarify.ai utilizes new Nikon technologies, executed on graphics processing units (GPUs), to quickly and efficiently restore clarity in images normally corrupted by blur from out-of-focus light.

The module is pre-trained to recognize fluorescence signal emitted from out-of-focus planes and can automatically remove this haze component from the image, leaving behind the in-focus structures. It can be used on any widefield 2D or 3D data set, with any detector and at any magnification, without the need for AI training or the introduction of bias from complicated user settings.
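As a rough illustration of the underlying idea, the sketch below models a widefield image as in-focus structure plus a blurred, out-of-focus haze term, and shows how removing an estimated haze component restores contrast. This is a conceptual toy example in Python (NumPy and SciPy), not Clarify.ai's actual algorithm; the haze estimate here is a simple large-kernel blur chosen only for illustration.

```python
# Conceptual illustration only -- NOT Clarify.ai's algorithm. It models a
# widefield image as in-focus structure plus a blurred out-of-focus "haze"
# term, and shows how removing an estimated haze component restores contrast.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic in-focus structure: a few bright spots on a dark background
in_focus = np.zeros((256, 256))
rows, cols = rng.integers(0, 256, size=(2, 40))
in_focus[rows, cols] = 1.0
in_focus = gaussian_filter(in_focus, sigma=2)        # small, sharp features

# Out-of-focus contribution: the same structure heavily blurred
haze = gaussian_filter(in_focus, sigma=30) * 5.0     # diffuse background glow

observed = in_focus + haze                           # widefield-like image

# Crude haze estimate (large-kernel blur of the observed image) and removal
haze_estimate = gaussian_filter(observed, sigma=30)
recovered = np.clip(observed - haze_estimate, 0, None)

def contrast(img):
    return (img.max() - img.mean()) / (img.max() + img.mean() + 1e-9)

print(f"contrast before: {contrast(observed):.3f}, after: {contrast(recovered):.3f}")
```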

Original vs. Clarify.ai

60x Multichannel 3D widefield Z stack before and after application of Clarify.ai

Original vs. Clarify.ai

20x Multichannel 2D widefield image of a scattering tissue slice before and after application of Clarify.ai

NIS.ai Processing and Analysis Module

The NIS.ai processing and analysis module consists of tools dedicated to improving efficiency in data acquisition and simplifying previously complex or difficult analysis routines.

Convert.ai

By recognizing patterns present in two different imaging channels, Convert.ai can be trained to predict what the second channel would look like when only the first channel is acquired.

Commonly, this can be used as a segmentation tool for label-free approaches, or for imaging without harmful near-UV excitation. Once the neural network learns the pattern common to the two channels, the second channel no longer needs to be acquired in subsequent experiments. Both acquisition throughput and specimen viability increase as a result.
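The general idea, supervised channel-to-channel prediction, can be sketched in a few lines of PyTorch. This is only an illustrative toy, not Nikon's implementation: the tiny network, the random stand-in data, and the training settings below are all assumptions made for the example.

```python
# Minimal sketch of channel-to-channel prediction (the idea behind Convert.ai),
# NOT Nikon's implementation. Assumes paired training images are available as
# tensors: a transmitted-light channel (DIC) and a fluorescence channel (DAPI).
import torch
import torch.nn as nn

# Tiny fully convolutional network: input channel -> predicted second channel
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy paired data standing in for real DIC / DAPI acquisitions (N, 1, H, W)
dic = torch.rand(8, 1, 128, 128)
dapi = torch.rand(8, 1, 128, 128)

for epoch in range(50):                  # train: learn the DIC -> DAPI mapping
    optimizer.zero_grad()
    loss = loss_fn(model(dic), dapi)
    loss.backward()
    optimizer.step()

# Inference: in later experiments only the DIC channel is acquired,
# and the fluorescence channel is predicted from it.
with torch.no_grad():
    predicted_dapi = model(dic)
```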

Original vs. Convert.ai

DAPI staining of nuclei is a common method allowing cell counting and segmentation. Convert.ai can be trained to predict where the DAPI label is present in DIC or phase images. This predicted channel can then be used for segmentation and counting, without ever having to label the specimen with DAPI or acquire a fluorescence channel.

Photos courtesy of Dr. Kentaro Kobayashi, Division of Technical, Research Institute for Electronic Science, Hokkaido University

Enhance.ai

Some fluorescent samples express very low signal, making it difficult to visualize or extract details for segmentation.

In addition, many of these samples are sensitive to light or photobleach very quickly and need to be imaged as fast as possible.

Enhance.ai can restore these details by training the network on what properly exposed images look like. This recipe can then be applied to underexposed images to restore detail for further analysis.
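Conceptually, an underexposed image is roughly the properly exposed image scaled down, with proportionally stronger shot noise; pairs of this kind are what such a network learns from. The NumPy sketch below simulates one such pair. It is an illustration of the training data under those assumptions, not Nikon's training procedure.

```python
# Conceptual sketch (not Nikon's training procedure): an underexposed image is
# approximately the properly exposed image scaled down, with proportionally
# stronger Poisson (shot) noise. Pairs like these are what a network such as
# Enhance.ai learns from: underexposed input -> properly exposed target.
import numpy as np

rng = np.random.default_rng(1)

ground_truth = rng.uniform(50, 500, size=(256, 256))   # "true" photon rates

well_exposed = rng.poisson(ground_truth).astype(float)         # long exposure
underexposed = rng.poisson(ground_truth * 0.05).astype(float)  # 5% exposure

# Relative noise is much worse in the underexposed image
def relative_noise(img, expected):
    return np.std(img - expected) / np.mean(expected)

print("well exposed :", relative_noise(well_exposed, ground_truth))
print("underexposed :", relative_noise(underexposed, ground_truth * 0.05))
```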

Original vs. Enhance.ai

DAPI-stained nuclei are purposefully underexposed to limit the specimen's exposure to near-UV light. Enhance.ai restores the signal-to-noise ratio to that of normally exposed DAPI staining, allowing easy segmentation and counting.

Segment.ai

Some images are nearly impossible to segment by traditional intensity-thresholding methods. With Segment.ai, a neural network can be trained from human classification of structures of interest that cannot easily be defined by classic thresholding and image processing.

By tracing features of interest and training the network on these traces against the underlying image, the neural network learns to apply segmentation to similar images, recognizing features that previously could only be identified by painstaking manual tracing.
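The human-generated ground truth for such training is typically a set of traced outlines rasterized into label masks. The short Python sketch below shows one way this could be done with scikit-image; the trace coordinates are invented for illustration, and this is not the NIS-Elements workflow itself.

```python
# Sketch of preparing training labels from hand-traced outlines (the kind of
# human-generated ground truth Segment.ai learns from). The polygon vertices
# here are made up for illustration.
import numpy as np
from skimage.draw import polygon

image_shape = (512, 512)

# A hand-traced outline of one feature, as (row, col) vertex lists
trace_rows = [100, 110, 180, 230, 200, 120]
trace_cols = [150, 260, 300, 220, 140, 120]

# Rasterize the trace into a binary mask; masks like this, paired with the
# underlying images, form the labeled training set for the network.
mask = np.zeros(image_shape, dtype=np.uint8)
rr, cc = polygon(trace_rows, trace_cols, shape=image_shape)
mask[rr, cc] = 1

print("labeled pixels:", int(mask.sum()))
```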

Original vs. Segment.ai

Neurites in phase contrast images could not be defined accurately by traditional thresholding. Segment.ai was trained on hand-traced (human-recognized) neurites and learned to trace neurites in subsequent images.

Denoise.ai Function

Included in the NIS-Elements AR core package, Denoise.ai can be applied to confocal images to remove shot noise. All images contain shot noise, a Poisson-distributed noise that arises from discretely sampling (acquiring images of) a continuous event. As signal levels decrease, the contribution of shot noise increases, following a square-root relationship, and noisy images result. Because this noise can be modeled, the neural network is pre-trained and requires no further training.
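The square-root relationship can be checked numerically: for a Poisson process the standard deviation equals the square root of the mean, so relative noise grows as the photon count drops. The short NumPy sketch below is a generic illustration of this statistic, not part of Denoise.ai.

```python
# Numeric illustration of the square-root behavior of shot noise: for a
# Poisson process the standard deviation equals sqrt(mean), so relative noise
# grows as the signal (photon count) drops.
import numpy as np

rng = np.random.default_rng(42)

for photons in (10000, 1000, 100, 10):
    samples = rng.poisson(photons, size=100000)
    print(f"mean signal {photons:>6}: "
          f"std = {samples.std():7.2f} (sqrt(mean) = {np.sqrt(photons):7.2f}), "
          f"relative noise = {samples.std() / photons:.3f}")
```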

With new fluorescent techniques pushing intensities lower and acquisition speeds increasing, Denoise.ai can recognize and remove the shot noise component of images, increasing clarity and allowing for shorter exposure times or more exposures of specimens while maintaining viability.

Original vs. Denoise.ai

Denoise.ai can be applied to remove the shot noise component of images while leaving the underlying structure and intensity values undisturbed.

No programming skills required

Clarify.ai and Denoise.ai are pre-trained deep learning networks and require no additional settings or parameters in order to apply these tools automatically to images.

The NIS.ai Processing and Analysis module uses training data to specifically target and address user-defined experiment parameters: it employs convolutional neural networks (CNNs) to learn from labeled training data created either by conventional segmentation or by human-assisted tracing of a small subset of representative samples.

When using the module, the software interface makes it easy to apply complex deep learning to sample data, eliminating the need to design a complex neural network and apply training data to it.

Automated tools take this training data and apply the neural network to recognize patterns. The resulting training recipe can then be applied repeatedly and reliably to similar samples, processing or analyzing huge volumes of data at significantly faster speeds than traditional techniques.
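The batch-application idea can be pictured as a simple loop over stored images. In the sketch below the folder layout, the file format, and the apply_recipe() helper are all hypothetical placeholders, not an NIS-Elements API; in practice the trained recipe is applied from within NIS-Elements itself.

```python
# Sketch of batch-applying a trained "recipe" to many images. The folder
# layout and the apply_recipe() helper are hypothetical placeholders, not an
# NIS-Elements API.
from pathlib import Path
import numpy as np

def apply_recipe(image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained network (e.g. a saved Segment.ai recipe)."""
    return (image > image.mean()).astype(np.uint8)   # stand-in operation

input_dir = Path("experiment_01/raw")        # hypothetical paths
output_dir = Path("experiment_01/processed")
output_dir.mkdir(parents=True, exist_ok=True)

# The same recipe is applied, unchanged, to every image in the folder.
for npy_file in sorted(input_dir.glob("*.npy")):
    image = np.load(npy_file)
    result = apply_recipe(image)
    np.save(output_dir / npy_file.name, result)
```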

GA3: an analysis pipeline with AI capabilities

The NIS-Elements GA (General Analysis)/GA3 option enables easy customization of complex analysis or statistical flows such as 3D volume measurement and 4D tracking by simply dragging and dropping analysis templates, ensuring accurate and reliable analyses.

General Analysis is used to apply Convert.ai to brightfield images to mark nuclei, and Denoise.ai to a noisy fluorescence channel. The converted channels can then be tracked over a time lapse to measure cell movements. This routine is then applied to multiple data sets for measurement.
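Conceptually, such a GA3 graph behaves like a fixed chain of processing steps applied to each dataset in turn. The Python sketch below illustrates only that chaining idea; every function is a hypothetical stand-in, since the real pipeline is assembled by drag-and-drop in NIS-Elements.

```python
# Conceptual sketch of chaining analysis steps the way a GA3 graph does:
# each node receives the previous node's output. All functions are
# hypothetical stand-ins, not NIS-Elements calls.
def convert_brightfield_to_nuclei(dataset):   # stub for a Convert.ai node
    return dataset

def denoise_fluorescence(dataset):            # stub for a Denoise.ai node
    return dataset

def track_nuclei_over_time(dataset):          # stub for a tracking node
    return {"tracks": []}

pipeline = [convert_brightfield_to_nuclei,
            denoise_fluorescence,
            track_nuclei_over_time]

def run_pipeline(dataset):
    for step in pipeline:
        dataset = step(dataset)
    return dataset

# The same pipeline is then reused unchanged on every dataset in the experiment.
results = [run_pipeline(ds) for ds in ["timelapse_01", "timelapse_02"]]
```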

Streamlining and advancing total workflow efficiency

By combining functions of NIS-Elements such as the General Analysis processing toolbox (GA/GA3) and the acquisition workflow toolbox (JOBS) for customizing complex experiments, a user can develop various image acquisition protocols and streamline the entire experimental workflow, from image acquisition to analysis.

Functions performed with NIS.ai, such as detection and analysis of specific cell states, can be incorporated into image acquisition sequences. Based on feedback from analysis results, these sequences can adjust the control parameters of the acquisition device during an experiment, improving throughput and enabling more complex experimental systems to be built.

Example of incorporating GA and NIS.ai into JOBS
The image above shows an example of Segment.ai being used in an experiment. After multipoint imaging, AI is used to detect target cells. If target cells are detected, the result is fed back into the experimental sequence, and light stimulation or changes to the imaging conditions are performed. If target cells are not detected, the system moves to the next imaging point.
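The decision logic described above can be sketched as a simple conditional loop. In NIS-Elements this logic is built graphically in JOBS; every function in the sketch below is a hypothetical stub standing in for an acquisition or analysis step, not a real API call.

```python
# Pseudo-code sketch of the feedback loop described above. In NIS-Elements this
# logic is assembled graphically in JOBS; every function here is a hypothetical
# stub standing in for an acquisition or analysis step.
def acquire_image(point):            # stub: acquire at one multipoint position
    ...

def detect_target_cells(image):      # stub: Segment.ai-style detection
    return []                        # returns a list of detected target cells

def apply_stimulation(point, cells): # stub: e.g. photostimulation of targets
    ...

def acquire_detailed_image(point):   # stub: changed imaging conditions
    ...

multipoints = ["point_1", "point_2", "point_3"]   # hypothetical stage positions

for point in multipoints:
    image = acquire_image(point)
    targets = detect_target_cells(image)
    if targets:
        # Feed the analysis result back into the experiment:
        # stimulate and/or re-image with different conditions.
        apply_stimulation(point, targets)
        acquire_detailed_image(point)
    # Otherwise move on to the next imaging point.
```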

Quantifiable Results

Artificial intelligence has become commonly accepted in diagnostic imaging and is an increasingly popular tool for a number of applications. Its appeal over traditional mathematical approaches lies in both its speed and its accuracy. However, it is important to be able to validate the results of AI computations and to use these results appropriately for computational analysis.

NIS-Elements software provides feedback during training routines to indicate how confidently the trained neural network will produce accurate results, along with several analysis tools and workflows for validating the efficiency of the neural networks and for easily comparing AI results to ground truth data.
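One common way to compare an AI-generated segmentation against ground truth is an overlap score such as intersection-over-union or the Dice coefficient. The NumPy sketch below shows a generic Dice calculation on toy masks; it is not claimed to be the specific validation metric NIS-Elements reports.

```python
# Generic comparison of an AI segmentation mask against a ground truth mask
# using the Dice coefficient. This is an illustrative metric, not the specific
# validation output of NIS-Elements.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-9)

# Toy example: AI mask vs. hand-traced ground truth mask
ai_mask = np.zeros((100, 100), dtype=np.uint8)
gt_mask = np.zeros((100, 100), dtype=np.uint8)
ai_mask[20:60, 20:60] = 1
gt_mask[25:65, 25:65] = 1

print(f"Dice score: {dice(ai_mask, gt_mask):.3f}")
```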

Summary of AI Modules

●: included, ⚬: optional, —: not available

                               Denoise.ai  Clarify.ai  Enhance.ai  Segment.ai  Convert.ai
NIS C, AR, AR ML, AR Passive   ●           ⚬           ⚬           ⚬           ⚬
NIS.ai Module                  —           —           ●           ●           ●
Deconvolution Modules          —           ●           —           —           —
Batch Deconvolution            —           ●           —           —           —
Offline Batch Denoise Package  ●           —           —           —           —