Paper Title
DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample Invariant CNN-based Sharpness Function
Paper Authors
Paper Abstract
Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses, where the imaged objects tend to drift out of focus. AF algorithms determine an optimal distance by which to move the sample back into the focal plane. Current hardware-based methods require modifying the microscope, and image-based algorithms either rely on many images to converge to the sharpest position or need training data and models specific to each instrument and imaging configuration. Here we propose DeepFocus, an AF method we implemented as a Micro-Manager plugin, and characterize its convolutional neural network (CNN)-based sharpness function, which we observed to be depth co-variant and sample-invariant. Sample invariance allows our AF algorithm to converge to an optimal axial position within as few as three iterations, using a model trained once for use with a wide range of optical microscopes and a single instrument-dependent calibration stack acquired on a flat (but arbitrary) textured object. From experiments carried out on both synthetic and experimental data, we observed an average precision, given three measured images, of 0.30 ± 0.16 µm with a 10×, NA 0.3 objective. We foresee that this performance and the low number of images required will help limit photodamage during acquisitions with light-sensitive samples.
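The abstract outlines the core idea: a CNN maps an image to a scalar sharpness score, and a short iterative search over axial positions uses that score to bring the sample back into focus. Below is a minimal, hypothetical sketch of that idea in Python/PyTorch. The network architecture, the three-point parabolic search, and the synthetic acquire() stand-in for the microscope are assumptions made for illustration only; they are not the published DeepFocus model, its training procedure, or its calibration-stack-based fitting.

```python
# Hypothetical sketch: a CNN-based sharpness score plus a toy iterative
# autofocus loop. Not the authors' DeepFocus implementation.
import numpy as np
import torch
import torch.nn as nn


class SharpnessCNN(nn.Module):
    """Maps a grayscale image patch to a scalar sharpness score (illustrative architecture)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)


def autofocus(acquire, model, z0, step=5.0, n_iter=3):
    """Toy search: measure sharpness at three axial positions around z and
    move toward the vertex of a parabola fitted through the measurements
    (a generic stand-in for the paper's calibration-based estimation)."""
    z = z0
    for _ in range(n_iter):
        zs = np.array([z - step, z, z + step])
        with torch.no_grad():
            s = np.array([model(acquire(zi)).item() for zi in zs])
        a, b, _ = np.polyfit(zs, s, 2)  # fit s(z) = a*z^2 + b*z + c
        if a < 0:  # a proper maximum exists; move to the estimated peak
            z = float(np.clip(-b / (2 * a), zs[0] - step, zs[-1] + step))
        step /= 2.0  # refine the search range on each iteration
    return z


if __name__ == "__main__":
    model = SharpnessCNN().eval()  # untrained here; for demonstration only

    # Stand-in for the microscope: returns a synthetic patch whose detail
    # fades with distance from a "true" focal plane at z = 12 (arbitrary units).
    rng = np.random.default_rng(0)
    base = rng.standard_normal((1, 1, 64, 64)).astype(np.float32)

    def acquire(z, z_focus=12.0):
        blur = 1.0 / (1.0 + (z - z_focus) ** 2)
        return torch.from_numpy(base * blur)

    print("Estimated focus position:", autofocus(acquire, model, z0=0.0))
```

In practice, the sharpness network would have to be trained so that its output peaks at the focal plane, and the paper's method additionally relies on a single instrument-dependent calibration stack rather than the generic parabolic refinement shown here.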