Paper Title
Using random testing to manage a safe exit from the COVID-19 lockdown
Paper Authors
Paper Abstract
We argue that frequent sampling of the fraction of infected people (either by random testing or by analysis of sewage water) is central to managing the COVID-19 pandemic, because it both measures in real time the key variable controlled by restrictive measures and anticipates the load on the healthcare system due to progression of the disease. Knowledge of random testing outcomes will (i) significantly improve the predictability of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than at present, and (iii) enable real-time assessment of the efficiency of new means to reduce transmission rates. Here we suggest that, irrespective of the size of a suitably homogeneous population, a conservative estimate of 15,000 randomly tested people per day suffices to obtain reliable data about the current fraction of infections and its evolution in time, thus enabling close to real-time assessment of the quantitative effect of restrictive measures. Still higher testing capacity permits detection of geographical differences in spreading rates. Furthermore, and most importantly, with daily sampling in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a relaxation of restrictions with testing focused on symptomatic individuals. This is demonstrated by considering a feedback and control model of mitigation in which the feedback is derived from noisy sampling data.
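The adequacy of roughly 15,000 daily tests can be illustrated with a simple binomial sampling model — a sketch under stated assumptions, not the paper's exact derivation. If each test is an independent Bernoulli draw with infection probability p, the relative standard error of the daily estimate scales as sqrt((1 - p) / (p n)):

```python
import math

def relative_error(p: float, n_tests: int) -> float:
    """Relative standard error of the estimated infection fraction,
    assuming each test is an independent Bernoulli draw with
    success probability p (an illustrative simplification)."""
    se = math.sqrt(p * (1 - p) / n_tests)
    return se / p

# With 15,000 daily tests and 1% of the population infected,
# the daily estimate carries a relative error of roughly 8%.
print(round(relative_error(0.01, 15_000), 3))
```

Note that the error grows as the infection fraction falls, which is why random testing is most informative while infections are still widespread; the parameter values here (1% prevalence) are illustrative, not taken from the paper.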
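The feedback and control idea in the last sentence can be sketched as a toy on/off controller — a hypothetical minimal model, not the paper's actual equations. Restrictions are tightened whenever the noisily sampled infection fraction is estimated to be growing (by comparing two consecutive weekly averages), and relaxed otherwise; all rates and thresholds below are illustrative assumptions:

```python
import math
import random

def simulate(days=120, i0=0.01, n_tests=15_000,
             k_relaxed=0.07, k_strict=-0.05, seed=1):
    """Toy mitigation loop: the infected fraction grows at rate
    k_relaxed or decays at rate k_strict, switched by comparing
    two weekly averages of the noisy daily estimate.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    i, k = i0, k_relaxed
    estimates = []
    for day in range(days):
        # noisy binomial estimate of the infected fraction
        positives = sum(rng.random() < i for _ in range(n_tests))
        estimates.append(positives / n_tests)
        # feedback: tighten if last week's average exceeds the week before
        if day >= 14:
            recent = sum(estimates[-7:]) / 7
            earlier = sum(estimates[-14:-7]) / 7
            k = k_strict if recent > earlier else k_relaxed
        i = min(max(i * math.exp(k), 1e-6), 1.0)
    return i, estimates
```

In this sketch the infected fraction grows unchecked until the two-week comparison window fills, after which the controller reacts; the lag built into the weekly averaging is the kind of delay that denser sampling would shorten.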