Uncertain parameters with temporal and spatial dependence are regularly encountered in engineering applications. Commonly, these uncertainties are accounted for using random fields and processes, which require knowledge of the underlying probability distribution functions that is often not available. In such cases, non-probabilistic approaches such as interval analysis and fuzzy set theory are useful uncertainty measures. Partial differential equations involving fuzzy and interval fields are traditionally solved using the finite element method, where the input fields are sampled using some basis function expansion method. This approach, however, is problematic, since it relies on knowledge of the spatial correlation of the fields. In this work, we utilize physics-informed neural networks (PINNs) to solve interval and fuzzy partial differential equations. The resulting network structures, termed interval physics-informed neural networks (iPINNs) and fuzzy physics-informed neural networks (fPINNs), show promising results for obtaining bounded solutions of equations involving spatially and/or temporally uncertain parameter fields. In contrast to finite element approaches, no correlation-length specification of the input fields and no Monte Carlo simulations are necessary. In fact, information about the input interval fields is obtained directly as a byproduct of the proposed solution scheme. Furthermore, all major advantages of PINNs are retained, namely the meshfree nature of the scheme and the ease of inverse problem setup.
translated by Google Translate
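As context for the interval approach described above, the basic idea of bounding a solution over an interval-valued parameter can be sketched on a toy boundary-value problem with a known closed-form solution. This is only an illustration of interval-bounded solutions, not of the iPINN architecture itself; all names are illustrative:

```python
# Toy illustration of interval-bounded solutions: for u''(x) = -k with
# u(0) = u(1) = 0, the closed-form solution is u(x) = k * x * (1 - x) / 2.
# When k is only known to lie in an interval [k_lo, k_hi], the solution at
# each x is itself an interval, obtained here by the vertex method
# (evaluating at the interval endpoints, valid because u is monotone in k).

def u(x: float, k: float) -> float:
    """Closed-form solution of u'' = -k, u(0) = u(1) = 0."""
    return k * x * (1.0 - x) / 2.0

def interval_solution(x: float, k_lo: float, k_hi: float) -> tuple:
    """Lower/upper bound of u(x) over k in [k_lo, k_hi]."""
    candidates = (u(x, k_lo), u(x, k_hi))
    return min(candidates), max(candidates)

if __name__ == "__main__":
    lo, hi = interval_solution(0.5, 1.0, 2.0)
    print(lo, hi)  # 0.125 0.25
```

An iPINN replaces the closed-form solver with a network trained on the PDE residual, so the bounds emerge without sampling the parameter field.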
Facial action units (FAUs) are critical for fine-grained facial expression analysis. Although FAU detection has been actively studied on high-quality images, it has not been thoroughly studied under heavily occluded conditions. In this paper, we propose the first occlusion-robust FAU recognition method to maintain FAU detection performance under heavy occlusions. Our novel approach takes advantage of the rich information in the latent space of a masked autoencoder (MAE) and transforms it into FAU features. Bypassing the occlusion reconstruction step, our model efficiently extracts FAU features of occluded faces by mining the latent space of a pretrained masked autoencoder. Both node- and edge-level knowledge distillation are also employed to guide our model to find a mapping between latent space vectors and FAU features. Facial occlusion conditions, including random small patches and large blocks, are thoroughly studied. Experimental results on the BP4D and DISFA datasets show that our method achieves state-of-the-art performance under the studied facial occlusions, significantly outperforming existing baseline methods. In particular, even under heavy occlusion, the proposed method achieves performance comparable to that of state-of-the-art methods under normal conditions.
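The node- and edge-level knowledge distillation mentioned in the abstract above can be sketched in a generic form. This is a minimal sketch under assumed loss definitions (MSE over matched features for node level, MSE over pairwise cosine-similarity matrices for edge level); the actual losses and feature shapes used in the paper may differ:

```python
import numpy as np

# Minimal sketch of node- and edge-level knowledge distillation over
# per-node feature vectors of shape (N, D). Assumptions (not taken from
# the paper): node-level loss = mean squared error between matched
# student/teacher features; edge-level loss = mean squared error between
# the pairwise cosine-similarity (relation) matrices of the two sets.

def node_level_loss(student: np.ndarray, teacher: np.ndarray) -> float:
    """MSE between student and teacher node features."""
    return float(np.mean((student - teacher) ** 2))

def relation_matrix(feats: np.ndarray) -> np.ndarray:
    """N x N matrix of pairwise cosine similarities between node features."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def edge_level_loss(student: np.ndarray, teacher: np.ndarray) -> float:
    """MSE between the two N x N relation matrices."""
    diff = relation_matrix(student) - relation_matrix(teacher)
    return float(np.mean(diff ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher_feats = rng.normal(size=(4, 8))
    student_feats = rng.normal(size=(4, 8))
    print(node_level_loss(student_feats, teacher_feats))
    print(edge_level_loss(student_feats, teacher_feats))
```

The node term aligns individual feature vectors, while the edge term preserves the relational structure among them, which is the intuition behind distilling at both levels.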
In computer vision, there is a growing discrepancy between large models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper, we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation, we do not necessarily aim to propose a new method, but strive to identify a robust and effective recipe for making state-of-the-art large models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we find that certain implicit design choices may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which have not previously been articulated in the literature. We back up our findings with a comprehensive empirical study, demonstrating compelling results on a wide range of vision datasets and, in particular, obtaining a state-of-the-art ImageNet ResNet-50 model that achieves 82.8% top-1 accuracy.
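As background for the abstract above, the standard distillation objective that such a recipe builds on, the temperature-softened softmax formulation of Hinton et al., can be sketched as follows. The temperature and scaling below are illustrative defaults, not the specific recipe identified in the paper:

```python
import numpy as np

# Sketch of classic logit distillation: the student is trained to match
# the teacher's temperature-softened class distribution via KL divergence.
# The temperature value and the T^2 scaling below follow the standard
# formulation; they are illustrative, not the paper's tuned recipe.

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      temperature: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2

if __name__ == "__main__":
    teacher = np.array([[2.0, 0.5, -1.0]])
    student = np.array([[1.0, 1.0, 0.0]])
    print(distillation_loss(student, teacher))
    print(distillation_loss(teacher, teacher))  # 0.0 when logits match
```

The loss is zero exactly when the student reproduces the teacher's softened distribution, which is why distillation can shrink a model without sacrificing accuracy when the recipe is executed well.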