User trust is a crucial consideration in designing robust visual analytics systems that can guide users to reasonable conclusions despite inevitable biases and other uncertainties introduced by the human, by the machine, and by the data sources which paint the canvas upon which knowledge emerges. A multitude of factors emerge upon studied consideration, introducing considerable complexity and complicating our understanding of how trust relationships evolve in visual analytics systems, much as they do in intelligent sociotechnical systems. However, visual analytics systems are not by nature quite the same phenomenon as their simpler cousins, nor necessarily the same type of phenomenon. Regardless, both application areas present the same root causes for the need for trustworthiness: uncertainty and the assumption of risk. In addition, visual analytics systems, even more than intelligent systems which (traditionally) are less open to direct human input and direction during processing, are subject to a multitude of cognitive biases that further complicate the accounting for uncertainties that may afflict users' confidence in, and ultimately trust in, the system. In this paper we argue that accounting for the propagation of uncertainty from data sources through the extraction of information and hypothesis testing is necessary to understand how user trust in a visual analytics system evolves over its life cycle, and that the analyst's selection of visualization parameters affords us a simple means of capturing the interplay between uncertainty and cognitive bias as a function of the attributes of the search tasks the analyst performs while evaluating explanations. We sample a broad cross-section of the literature from visual analytics, human cognition theory, and uncertainty, and attempt to synthesize a useful perspective.
Video communication has increased rapidly over the past decade, with YouTube providing a medium where users can post, discover, share, and react to videos. The number of videos citing research articles has also grown, especially since it has become relatively common for academic conferences to require video submissions. However, the relationship between research articles and YouTube videos is not clear, and the purpose of this paper is to address this issue. We created new datasets from YouTube videos and mentions of research articles on various online platforms. We found that most of the articles cited in videos are related to medicine and biochemistry. We analyzed these datasets through statistical techniques and visualization, and built machine learning models to predict (1) whether a research article is cited in videos, (2) whether a research article cited in a video reaches a certain level of popularity, and (3) whether a video citing a research article becomes popular. The best models achieved F1 scores between 80% and 94%. According to our results, research articles mentioned in more tweets and news stories have a higher chance of receiving video citations. We also found that video views are important for predicting video citations and for increasing the popularity of research articles and public engagement with science.
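To illustrate the kind of prediction task described above, the sketch below trains a binary classifier from altmetric-style features (tweet mentions, news mentions, and so on). The feature set, synthetic data, and model choice are illustrative assumptions, not the authors' dataset or models.

```python
# Hypothetical sketch: predicting whether a research article is cited in a YouTube
# video from altmetric-style features. Features and data are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(3, n),       # tweet mentions
    rng.poisson(1, n),       # news mentions
    rng.poisson(2, n),       # blog mentions
    rng.integers(0, 20, n),  # article age in years
])
# synthetic label: articles with more tweets/news are more likely to receive video citations
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 2, n) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```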
Semidefinite programming (SDP) is a unifying framework that generalizes both linear programming and quadratically constrained quadratic programming, while also yielding efficient solvers both in theory and in practice. However, there exist known impossibility results for approximating the optimal solution when the constraints of a covering SDP arrive in an online fashion. In this paper, we study online covering linear and semidefinite programs in which the algorithm is augmented with advice from a possibly erroneous predictor. We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency. On the other hand, if the predictor is inaccurate, under some technical conditions, we achieve results that match both the classical optimal upper bounds and the tight lower bounds up to constant factors, i.e., robustness. More broadly, we introduce a framework that extends both (1) the online set cover problem augmented with machine-learned predictors, studied by Bamas, Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general online learning-augmented algorithms for covering linear programs with fractional advice and constraints, and initiate the study of learning-augmented algorithms for covering SDP problems. Our techniques are based on the primal-dual framework of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) and can be further adjusted to handle constraints where the variables lie in a bounded region, i.e., box constraints.
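To convey the flavor of learning-augmented online covering, here is a toy sketch for the simplest fractional case (unit costs, 0/1 constraint rows): the classical multiplicative-update increment is blended with the predictor's advice through a trust parameter. This is an illustrative simplification, not the paper's algorithm, and it carries none of its consistency/robustness guarantees.

```python
import numpy as np

def online_fractional_cover_with_advice(constraints, advice, n, lam=0.5):
    """Toy learning-augmented online fractional covering (unit costs, 0/1 rows).
    constraints: sequence of index sets S, each requiring sum_{j in S} x_j >= 1.
    advice: predicted fractional solution (length-n array), possibly erroneous.
    lam in [0, 1]: how much of each additive increment follows the advice."""
    x = np.zeros(n)
    for S in constraints:                      # covering constraints arrive one by one
        S = np.array(sorted(S))
        d = len(S)
        while x[S].sum() < 1.0:                # raise variables until the constraint holds
            adv = advice[S]
            adv = adv / adv.sum() if adv.sum() > 0 else np.full(d, 1.0 / d)
            # classical rule x_j <- 2*x_j + 1/|S|, with the additive part split
            # between the uniform online increment and the (normalized) advice
            x[S] = 2.0 * x[S] + (1.0 - lam) / d + lam * adv
    return x

# usage: cover three online constraints over 4 variables with (possibly wrong) advice x_hat
x_hat = np.array([1.0, 0.0, 0.0, 1.0])
print(online_fractional_cover_with_advice([{0, 1}, {1, 2}, {2, 3}], x_hat, n=4))
```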
Recently, Transformer-based methods that predict polygon points or Bezier curve control points to localize text have become very popular in scene text detection. However, the point label form in use implies the reading order of humans, which affects the robustness of the Transformer model. As for the model architecture, the formulation of the queries used in the decoder has not been fully explored by previous methods. In this paper, we propose a concise Dynamic Point Text Detection Transformer network, termed DPText-DETR, which directly uses point coordinates as queries and dynamically updates them between decoder layers. We point out a simple yet effective positional point label form to tackle the side effect of the original one. Moreover, an Enhanced Factorized Self-Attention module is designed to explicitly model the circular shape of polygon point sequences, beyond non-local attention. Extensive experiments prove the training efficiency, robustness, and state-of-the-art performance of our method on various arbitrary-shape scene text benchmarks. Beyond the detector itself, we observe that existing end-to-end text spotters struggle to recognize inverse-like text. To evaluate their performance objectively and facilitate future research, we propose an Inverse-Text test set containing 500 manually labeled images. The code and the Inverse-Text test set will be available at https://github.com/ymy-k/DPText-DETR.
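A rough sketch of the idea of using point coordinates as dynamically updated decoder queries follows, under assumed shapes and module names; it is not the released DPText-DETR code, and the Enhanced Factorized Self-Attention module is omitted for brevity.

```python
import torch
import torch.nn as nn

class DynamicPointDecoderLayer(nn.Module):
    """Illustrative decoder layer: point coordinates act as positional queries and
    are refined layer by layer (sizes and names are assumptions, not the paper's code)."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.pos_proj = nn.Linear(2, d_model)          # embed (x, y) point coordinates
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model * 4), nn.ReLU(),
                                 nn.Linear(d_model * 4, d_model))
        self.offset_head = nn.Linear(d_model, 2)       # predicts coordinate refinements

    def forward(self, content, points, memory):
        # content: (B, N, d) query content, points: (B, N, 2) in [0, 1], memory: (B, HW, d)
        q = content + self.pos_proj(points)            # point coordinates drive the queries
        q = q + self.self_attn(q, q, q)[0]             # interaction among polygon points
        q = q + self.cross_attn(q, memory, memory)[0]  # attend to image features
        q = q + self.ffn(q)
        new_points = (points + self.offset_head(q)).clamp(0.0, 1.0)  # dynamic point update
        return q, new_points
```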
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
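The sketch below conveys the implicit-alignment idea in a hedged way: camera and LiDAR tokens are each tagged with position embeddings derived from 3D coordinates and then decoded jointly by object queries. The shapes, dimensions, and coordinate construction are illustrative assumptions, not the released CMT implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative sketch: image and point-cloud tokens receive position embeddings
    from 3D coordinates, are concatenated, and are decoded by object queries without
    any explicit view transformation."""
    def __init__(self, d_model=256, n_queries=900):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                     nn.Linear(d_model, d_model))
        self.queries = nn.Embedding(n_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(d_model, 10)   # e.g. center, size, yaw, velocity

    def forward(self, img_tokens, img_coords3d, pts_tokens, pts_coords3d):
        # *_tokens: (B, N, d); *_coords3d: (B, N, 3) 3D coordinates assigned to each token
        mem = torch.cat([img_tokens + self.pos_mlp(img_coords3d),
                         pts_tokens + self.pos_mlp(pts_coords3d)], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(mem.size(0), -1, -1)
        return self.box_head(self.decoder(q, mem))   # (B, n_queries, 10) box parameters
```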
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution datasets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
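To convey the Koopman-operator idea behind KNO without reproducing KoopmanLab's actual API (which should be checked in the repository), here is a generic sketch: an encoder lifts the PDE state into a space of observables, a learned linear operator advances the observables one step, and a decoder maps back; multi-step prediction simply iterates the linear operator.

```python
import torch
import torch.nn as nn

class ToyKoopmanOperator(nn.Module):
    """Generic Koopman-neural-operator-style sketch (illustrative, not KoopmanLab)."""
    def __init__(self, state_dim, obs_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, obs_dim), nn.Tanh(),
                                     nn.Linear(obs_dim, obs_dim))
        self.K = nn.Linear(obs_dim, obs_dim, bias=False)   # learned linear Koopman operator
        self.decoder = nn.Sequential(nn.Linear(obs_dim, obs_dim), nn.Tanh(),
                                     nn.Linear(obs_dim, state_dim))

    def forward(self, u, steps=1):
        # u: (B, state_dim) discretized PDE state; roll the dynamics forward linearly
        g = self.encoder(u)
        preds = []
        for _ in range(steps):
            g = self.K(g)                  # linear evolution in observable space
            preds.append(self.decoder(g))  # map observables back to the physical state
        return torch.stack(preds, dim=1)   # (B, steps, state_dim)
```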
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection on a single ranking within a set of rankings, or on pairwise comparisons of a ranking, under $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting rankings. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in the downstream tasks, including the inference attack and the personalized ranking tasks, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
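One standard way to draw synthetic rankings from a Mallows model is the repeated insertion procedure sketched below. The paper's multistage algorithm and its calibration of the dispersion parameter to the $\epsilon$-ranking differential privacy budget are more involved, so treat this purely as a generic Mallows sampler.

```python
import numpy as np

def sample_mallows(center, phi, rng=None):
    """Draw one ranking from a Mallows model (Kendall-tau distance) with central
    ranking `center` and dispersion phi in (0, 1] via repeated insertion."""
    rng = rng or np.random.default_rng()
    ranking = []
    for i, item in enumerate(center):
        # insertion position j = 0..i (0-indexed from the front); keeping the central
        # order (inserting at the end) is the most likely choice when phi < 1
        weights = phi ** np.arange(i, -1, -1)       # phi^i, ..., phi^1, phi^0
        j = rng.choice(i + 1, p=weights / weights.sum())
        ranking.insert(j, item)
    return ranking

# smaller phi -> synthetic rankings concentrate near the central ranking (less noise);
# phi close to 1 -> rankings approach uniform (more privacy, less utility)
print(sample_mallows(center=[1, 2, 3, 4, 5], phi=0.5, rng=np.random.default_rng(0)))
```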
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, the lack of research on explicitly quantifying the data quality of each view when fusing them renders these models hard to interpret, and leaves them performing unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value which describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
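The sketch below illustrates one common evidential-deep-learning recipe for the uncertainty described above: per-view logits are turned into Dirichlet evidence, a view-level uncertainty u = K / S is computed, and the two views are fused at decision level with weights that favor the lower-risk view. The exact fusion rule in the paper may differ; this is an assumption-laden illustration, not the released code.

```python
import torch
import torch.nn.functional as F

def evidential_fuse(logits_aerial, logits_ground):
    """Decision-level fusion of two views weighted by evidential uncertainty.
    logits_*: (B, K) raw network outputs for K classes from each view."""
    def view_belief(logits):
        evidence = F.softplus(logits)          # non-negative evidence per class
        alpha = evidence + 1.0                 # Dirichlet concentration parameters
        S = alpha.sum(dim=1, keepdim=True)     # Dirichlet strength
        prob = alpha / S                       # expected class probabilities
        u = logits.size(1) / S                 # uncertainty mass u = K / S in (0, 1]
        return prob, u

    p_a, u_a = view_belief(logits_aerial)
    p_g, u_g = view_belief(logits_ground)
    # the view with lower uncertainty (lower decision risk) receives more weight
    w_a, w_g = (1 - u_a), (1 - u_g)
    fused = (w_a * p_a + w_g * p_g) / (w_a + w_g + 1e-8)
    return fused, u_a, u_g
```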
A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, the conditions may no longer be satisfied, and some noisy data are falsely selected as clean data. To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) among the selected clean data. To improve the efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
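To make the "zero mean-shift parameter" criterion concrete, here is a toy version of the underlying idea: regress labels on features with a per-sample mean-shift term penalized by an L1 penalty; samples whose shift is driven to zero are treated as clean. This is a simplified illustration (plain soft-thresholding on the shift parameters), not the paper's SPR/Knockoffs-SPR procedure or its FSR guarantee.

```python
import numpy as np

def spr_like_clean_selection(X, y, lam=0.5, n_iter=50):
    """Toy mean-shift penalized regression: y ~ X beta + gamma, with an L1 penalty on
    the per-sample shift gamma. Samples with gamma == 0 are flagged as clean.
    X: (n, p) network features, y: (n,) labels (e.g., one column of the one-hot matrix)."""
    gamma = np.zeros(X.shape[0])
    X_pinv = np.linalg.pinv(X)                  # cached least-squares operator
    for _ in range(n_iter):
        beta = X_pinv @ (y - gamma)             # update beta given the current shifts
        r = y - X @ beta                        # residuals absorb label noise
        gamma = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)   # soft-thresholding
    clean_mask = np.isclose(gamma, 0.0)
    return clean_mask, beta

# usage on synthetic data: 10% of the labels are shifted and should be flagged as noisy
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); beta_true = rng.normal(size=5)
y = X @ beta_true; noisy = rng.choice(200, 20, replace=False); y[noisy] += 5.0
mask, _ = spr_like_clean_selection(X, y)
print("flagged noisy:", np.sort(np.where(~mask)[0]))
```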
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
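As a small illustration of the boundary issue and the soft-label remedy, the sketch below takes sparsely sampled frame indices, adds sibling contextual frames around the annotated start/end timestamps, and assigns Gaussian soft boundary labels. It is a generic illustration under assumed conventions, not the SSRN implementation.

```python
import numpy as np

def sample_with_siamese_boundaries(n_frames, n_samples, start, end, sigma=2.0, n_extra=2):
    """Sparse uniform sampling plus extra contextual frames around the annotated
    boundaries, with Gaussian soft labels over the sampled frames."""
    base = np.linspace(0, n_frames - 1, n_samples).round().astype(int)
    extra = []
    for b in (start, end):                       # contextual frames around each boundary
        extra.extend(range(max(b - n_extra, 0), min(b + n_extra, n_frames - 1) + 1))
    idx = np.unique(np.concatenate([base, np.array(extra, dtype=int)]))
    # soft labels: how strongly each sampled frame should respond as start / end
    soft_start = np.exp(-((idx - start) ** 2) / (2 * sigma ** 2))
    soft_end = np.exp(-((idx - end) ** 2) / (2 * sigma ** 2))
    return idx, soft_start / soft_start.sum(), soft_end / soft_end.sum()

idx, s, e = sample_with_siamese_boundaries(n_frames=300, n_samples=32, start=87, end=193)
```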