Taking microscope imaging and analysis to the next level
Artificial intelligence (AI) and deep learning methods are making seemingly impossible tasks possible. From recovering contrast and improving signal-to-noise ratio to managing challenging acquisition parameters and performing segmentation that was previously difficult or nearly impossible, these tasks can now be automated thanks to AI.
The NIS-Elements NIS.ai suite consists of various modules and functions which expand the NIS-Elements platform by building in tailor-made solutions for acquisition, visualization and analysis.
Clarify.ai uses artificial intelligence to automatically remove blur from fluorescence microscope images.
Clarify.ai utilizes new Nikon technologies executed on graphics processing units (GPUs) to quickly and efficiently restore clarity to images normally corrupted by blur from out-of-focus light.
The module is pre-trained to recognize fluorescence signal emitted from out-of-focus planes and can automatically remove this haze component from the image, leaving behind the in-focus structures. It can be used on any widefield 2D or 3D data set, with any detector or magnification, without the need for AI training or the introduction of bias from complicated user settings.
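The haze-subtraction idea can be illustrated with a toy sketch: estimate the smoothly varying out-of-focus background with a large blur and subtract it, leaving the in-focus detail. This is a simple background-subtraction stand-in, not Clarify.ai's actual pre-trained network; all names here are hypothetical.

```python
import numpy as np

def box_blur(img, k):
    """Separable k x k box blur (edges use zero padding via 'same' mode)."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

# Simulated widefield frame: uniform out-of-focus haze plus one in-focus spot.
img = np.full((64, 64), 0.5)
img[32, 32] += 0.5

# Estimate the slowly varying haze with a large blur, then subtract it;
# the in-focus spot survives while the flat haze is removed.
haze = box_blur(img, 15)
clarified = np.clip(img - haze, 0.0, None)
```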
NIS.ai Processing and Analysis Module
The NIS.ai processing and analysis module consists of tools dedicated to improving efficiency in data acquisition and simplifying previously complex or difficult analysis routines.
By recognizing patterns present in two different imaging channels, Convert.ai can be trained to predict what the second channel would look like when only the first channel is acquired.
Commonly, this can be used as a segmentation tool for label-free approaches, or for imaging without harmful near-UV excitation. Once the neural network learns the pattern common to the two channels, the second channel no longer needs to be acquired in subsequent experiments. Both acquisition throughput and specimen viability increase as a result.
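The paired-channel training workflow can be sketched with a toy stand-in: fit a per-pixel linear model from channel 1 to channel 2 on one paired acquisition, then predict channel 2 from channel 1 alone. Convert.ai uses a convolutional network rather than this hypothetical linear fit; the variable names and the simulated relationship below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired training acquisition: channel 2 simulated as a noisy
# function of channel 1 (stand-in for two correlated stains/modalities).
ch1_train = rng.uniform(0, 1, (64, 64))
ch2_train = 0.8 * ch1_train + 0.1 + rng.normal(0, 0.01, ch1_train.shape)

# "Training": learn the channel-1 -> channel-2 relationship once.
gain, offset = np.polyfit(ch1_train.ravel(), ch2_train.ravel(), 1)

# "Inference": later experiments acquire only channel 1 and
# predict what the second channel would have looked like.
ch1_new = rng.uniform(0, 1, (64, 64))
ch2_predicted = gain * ch1_new + offset
```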
Some fluorescent samples emit very low signal, making it difficult to visualize or extract details for segmentation.
In addition, many of these samples are sensitive to light or photobleach very quickly and need to be imaged as fast as possible.
Enhance.ai can restore details by training the network on what properly exposed images look like. This recipe can then be applied to underexposed images to restore detail for further analysis.
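The train-then-apply recipe can be illustrated with a deliberately simple stand-in: estimate a single restoration gain from one paired well-exposed/underexposed acquisition, then apply it to a new underexposed image. Enhance.ai learns far richer detail than a scalar gain; everything below is a hypothetical sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Paired training images: a properly exposed reference and the same
# field captured at 10% exposure with a little read noise.
reference = rng.uniform(0.2, 1.0, (64, 64))
underexposed = 0.1 * reference + rng.normal(0, 0.005, (64, 64))

# "Training": least-squares estimate of the gain that maps the
# underexposed image back onto the reference.
gain = (reference * underexposed).sum() / (underexposed ** 2).sum()

# "Recipe" applied to a new underexposed acquisition.
new_underexposed = 0.1 * rng.uniform(0.2, 1.0, (64, 64))
restored = gain * new_underexposed
```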
Some images are nearly impossible to segment by traditional intensity thresholding methods. Using Segment.ai, a neural network can be trained from human classification of structures of interest that cannot easily be defined by classic thresholding and image processing.
By tracing features of interest and training the network on these traces against the underlying image, the neural network can learn and apply segmentation to similar images, recognizing features previously identifiable only by painstaking manual tracing.
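The trace-then-apply workflow can be sketched with a minimal learned pixel classifier: learn class statistics from a hand-traced mask on one image, then label a similar unseen image by whichever learned class is closer. Segment.ai trains a neural network, not this hypothetical nearest-mean rule; names and the simulated blob are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated training image with a bright blob, plus a human-drawn mask
# standing in for the painstaking manual tracing.
yy, xx = np.mgrid[0:64, 0:64]
blob = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
train_img = np.where(blob, 0.7, 0.3) + rng.normal(0, 0.05, (64, 64))
train_mask = blob

# "Training": learn the mean intensity of traced foreground/background.
fg_mean = train_img[train_mask].mean()
bg_mean = train_img[~train_mask].mean()

def segment(img):
    """Label each pixel by whichever trained class mean is closer."""
    return np.abs(img - fg_mean) < np.abs(img - bg_mean)

# Apply the trained rule to a similar, unseen image.
new_img = np.where(blob, 0.7, 0.3) + rng.normal(0, 0.05, (64, 64))
pred = segment(new_img)
```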
Included in the NIS-Elements AR core package, Denoise.ai can be applied to confocal images to remove shot noise. All images contain shot noise, a Poisson-distributed noise arising from discretely sampling (acquiring images of) a continuous event. As signal levels decrease, the relative contribution of shot noise increases, following a square-root relationship, and noisy images result. Such noise can therefore be modeled in a neural network and requires no further training.
With new fluorescent techniques pushing intensities lower and acquisition speeds increasing, Denoise.ai can recognize and remove the shot noise component of images, increasing clarity and allowing for shorter exposure times or more exposures of specimens while maintaining viability.
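The square-root relationship between signal level and shot noise can be checked directly: for Poisson-distributed photon counts, the signal-to-noise ratio of a uniform region scales as the square root of the mean count. This is a statistics illustration only, not part of Denoise.ai; the function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise_snr(mean_photons, n_pixels=100_000):
    """SNR (mean/std) of a flat field imaged with Poisson shot noise."""
    counts = rng.poisson(mean_photons, n_pixels)
    return counts.mean() / counts.std()

# SNR roughly doubles when the photon count quadruples (square-root law),
# which is why dimmer images look disproportionately noisier.
snr_25 = shot_noise_snr(25)    # ~ sqrt(25)  = 5
snr_100 = shot_noise_snr(100)  # ~ sqrt(100) = 10
```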
No programming skills required
Clarify.ai and Denoise.ai are pre-trained deep learning networks and require no additional settings or parameters in order to apply these tools automatically to images.
The NIS.ai Processing and Analysis module uses training data to specifically target and address user-defined experiment parameters: it employs convolutional neural networks (CNNs) to learn from labeled training data created by either conventional segmentation or human-assisted tracing of a small subset of representative samples.
When using the module, the software interface makes it easy to apply complex deep learning to sample data, eliminating the need to design a complex neural network and apply training data to it.
Automated tools take this training data and apply the neural network to recognize patterns. The resulting training recipe can then be applied repeatedly and reliably to similar samples to process or analyze huge volumes of data at significantly faster speeds than traditional techniques.
GA3: an analysis pipeline with AI capabilities
Using NIS-Elements General Analysis (GA3), multiple conventional segmentation and AI tools can be combined to create data measurement routines customized for a specific experiment. These can be applied across multiple images, experiment runs, or high content data.
Because GA3 is freely customizable, it can easily be adapted to new experiment routines. Routines can also be embedded in experiment acquisition runs.
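The pipeline idea behind GA3 can be sketched as a chain of small steps (segment, then measure) wrapped into one reusable routine applied across many images. The step functions below are hypothetical stand-ins, not GA3's actual analysis nodes.

```python
import numpy as np

def threshold_step(img, level=0.5):
    """Segmentation step: simple intensity threshold."""
    return img > level

def count_step(mask):
    """Measurement step: count foreground pixels (a real routine would
    label connected components and report per-object statistics)."""
    return int(mask.sum())

def pipeline(img):
    """One customized routine, reused across images or experiment runs."""
    return count_step(threshold_step(img))

# Apply the same routine across a batch of images, as GA3 does across
# multiple images, experiment runs, or high-content data.
rng = np.random.default_rng(3)
images = [rng.uniform(0, 1, (32, 32)) for _ in range(4)]
results = [pipeline(img) for img in images]
```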
Use NIS.ai as part of an imaging pipeline
NIS.ai tools can be combined with all other features of the NIS-Elements platform to develop imaging protocols and targeted analysis from basic counting through rare event or selective phenotype detection and analysis.
This can be incorporated post-acquisition or, more impactfully, as an integral part of an experimental protocol, so that analysis results obtained during the run guide the experimental parameters via NIS-Elements Intelligent Acquisition.
Using the JOBS experiment wizard, customized experiments with embedded analysis tasks and branches based on analysis results can be created, allowing for higher throughput and more targeted acquisitions.
Artificial intelligence has become commonly accepted in diagnostic imaging and is an increasingly popular tool for a number of applications. Its appeal over traditional mathematical approaches is both its speed and incredible accuracy. However, it is important to be able to validate the results of AI computations, and to utilize these results appropriately for computational analysis.
NIS-Elements software provides feedback during training routines to indicate the confidence of the trained neural network to provide accurate results, as well as several analysis tools and workflows to validate the efficiency of the neural networks, or to allow easy comparison of AI data to ground truth data.
Summary of AI Modules
|  |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- |
| NIS C, Ar, Ar ML, Ar Passive | Included | Optional | Optional | Optional | Optional |
| Bundled with Deconvolution Modules | No | Included | No | No | No |
| Offline Batch Denoise Package | Included | No | No | No | No |