Title
Stochastic Calibration of Radio Interferometers
Authors
Abstract
With ever-increasing data rates produced by modern radio telescopes like LOFAR and future telescopes like the SKA, many data processing steps are overwhelmed by the amount of data that needs to be handled with limited compute resources. Calibration is one such operation that dominates the overall data processing cost; nonetheless, it is essential for reaching many science goals. Calibration algorithms do exist that scale well with the number of stations in an array and the number of directions being calibrated. The remaining bottleneck, however, is the raw data volume, which scales with the number of baselines and is therefore proportional to the square of the number of stations. We propose a 'stochastic' calibration strategy in which we read in only a mini-batch of data to obtain calibration solutions, as opposed to reading the full batch of data being calibrated. Nonetheless, we obtain solutions that are valid for the full batch of data. Normally, data need to be averaged before calibration is performed to fit the data into size-limited compute memory. Stochastic calibration overcomes the need for data averaging before any calibration can be performed, and offers many advantages, including: enabling the mitigation of faint radio frequency interference; better removal of strong celestial sources from the data; and better detection and spatial localization of fast radio transients.
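The core idea — obtaining per-station gain solutions from randomly drawn mini-batches of time samples rather than the full batch — can be illustrated with a toy sketch. This is not the paper's algorithm, only a minimal stand-in: a single unit point-source model, scalar (non-polarized) gains, and plain Wirtinger-gradient descent; the station count, batch sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8    # stations (illustrative)
T = 200  # time samples in the full batch
B = 10   # mini-batch size: samples read per solver step

# True per-station complex gains; model visibility M_pq = 1 (unit point source).
g_true = (1.0 + 0.1 * rng.standard_normal(N)) * np.exp(1j * 0.2 * rng.standard_normal(N))

p, q = np.triu_indices(N, k=1)  # baseline index pairs (p < q)

# Full-batch data: V_pq(t) = g_p conj(g_q) + noise, shape (T, n_baselines).
V = (g_true[p] * np.conj(g_true[q]))[None, :] \
    + 0.05 * (rng.standard_normal((T, p.size)) + 1j * rng.standard_normal((T, p.size)))

# Stochastic calibration: each step touches only B of the T samples.
g = np.ones(N, dtype=complex)  # initial gain estimate
lr = 0.1
for step in range(400):
    t = rng.choice(T, size=B, replace=False)    # read a random mini-batch
    r = V[t] - (g[p] * np.conj(g[q]))[None, :]  # residuals on that mini-batch
    # Wirtinger gradient of sum |r|^2 with respect to conj(g)
    grad = np.zeros(N, dtype=complex)
    np.add.at(grad, p, -(r * g[q][None, :]).sum(axis=0))
    np.add.at(grad, q, -(np.conj(r) * g[p][None, :]).sum(axis=0))
    g -= lr * grad / (B * N)

# Fix the global phase ambiguity, then compare against the truth:
# the mini-batch solutions are valid for the full batch of data.
g *= np.exp(1j * (np.angle(g_true[0]) - np.angle(g[0])))
err = np.max(np.abs(g - g_true))
print(f"max gain error: {err:.4f}")
```

Each step reads B/T = 5% of the data, yet the solutions converge to gains valid for all T samples, which is the memory saving the abstract describes: the full batch never needs to be averaged down to fit in memory.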