Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, and meta-learning. While bilevel problems with a unique inner minimal point (e.g., where the inner objective is strongly convex) are well understood, bilevel problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for such problems apply only to restricted situations and do not come with full convergence guarantees. In this paper, we adopt a reformulation of bilevel optimization as constrained optimization and solve the problem via a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the challenge of multiple inner minima, but is also fully first-order efficient, involving no second-order Hessian or Jacobian computations, in contrast to most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which serves as the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
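For concreteness, here is a sketch of the standard value-function reformulation that a primal-dual method can attack with first-order updates alone; the exact formulation and constraint handling in PDBO may differ:

$$\min_{x,\,y} \; f(x, y) \quad \text{s.t.} \quad g(x, y) \le \min_{y'} g(x, y'),$$

where $f$ and $g$ denote the outer and inner objectives. The constraint only requires $y$ to be an inner minimizer (not necessarily the unique one), which is why this reformulation accommodates multiple inner minima.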
Bilevel optimization has been widely applied to many important machine learning applications such as hyperparameter optimization and meta-learning. Recently, several momentum-based algorithms have been proposed to solve bilevel optimization problems. However, those momentum-based algorithms do not achieve provably better computational complexity than the $\widetilde{\mathcal{O}}(\epsilon^{-2})$ of SGD-based algorithms. In this paper, we propose two new algorithms for bilevel optimization, where the first algorithm adopts momentum-based recursive iterations, and the second algorithm adopts recursive gradient estimations in nested loops to decrease the variance. We show that both algorithms achieve a complexity of $\widetilde{\mathcal{O}}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude. Our experiments validate our theoretical results and demonstrate the superior empirical performance of our algorithms in data applications.
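For intuition, a generic momentum-based recursive (STORM-style) gradient estimator of the kind such algorithms build on is

$$d_t = \nabla F(x_t; \xi_t) + (1 - \alpha)\big(d_{t-1} - \nabla F(x_{t-1}; \xi_t)\big), \qquad x_{t+1} = x_t - \eta\, d_t,$$

where $\xi_t$ is the sample drawn at step $t$ and $\alpha \in (0, 1]$ is the momentum parameter; evaluating the gradient at both $x_t$ and $x_{t-1}$ with the same sample is what drives the variance reduction. The paper's actual hypergradient estimators for the bilevel setting are more involved; this is only the underlying template.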
Given partial objects and some complete ones as references, point cloud completion aims to recover authentic shapes. However, existing methods pay little attention to general shapes, which leads to poor authenticity of the completion results. Besides, missing patterns are diverse in reality, but existing methods can only handle fixed ones, which implies poor generalization ability. Considering that a partial point cloud is a subset of the corresponding complete one, we regard them as different samples of the same distribution and propose the Structure Retrieval based Point Completion Network (SRPCN). It first uses k-means clustering to extract structure points and disperses them into distributions, and then KL divergence is used as a metric to find the complete structure point cloud that best matches the input in a database. Finally, a PCN-like decoder network is adopted to generate the final results based on the retrieved structure point clouds. As structure plays an important role in describing the general shape of an object and the proposed structure retrieval method is robust to missing patterns, experiments show that our method generates more authentic results and has stronger generalization ability.
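As a rough illustration of the retrieval step, the sketch below extracts k-means structure points, fits a Gaussian to each set to "disperse" it into a distribution, and ranks database entries by a closed-form KL divergence; the paper's exact dispersion scheme and matching pipeline may differ, and all names here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def structure_points(points, k=16):
    # k-means centroids serve as the structure points (the paper's first step)
    return KMeans(n_clusters=k, n_init=10).fit(points).cluster_centers_

def gaussian_kl(a, b, eps=1e-6):
    # KL(N_a || N_b) between Gaussians fit to two structure-point sets --
    # one simple way to turn point sets into distributions; the paper's
    # dispersion scheme may differ.
    mu_a, mu_b = a.mean(0), b.mean(0)
    d = a.shape[1]
    Sa = np.cov(a.T) + eps * np.eye(d)
    Sb = np.cov(b.T) + eps * np.eye(d)
    Sb_inv = np.linalg.inv(Sb)
    diff = mu_b - mu_a
    return 0.5 * (np.trace(Sb_inv @ Sa) + diff @ Sb_inv @ diff - d
                  + np.log(np.linalg.det(Sb) / np.linalg.det(Sa)))

def retrieve(partial, database, k=16):
    # return the complete cloud whose structure distribution best matches the input
    q = structure_points(partial, k)
    return min(database, key=lambda ref: gaussian_kl(q, structure_points(ref, k)))
```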
Student performance prediction is an important research problem for understanding students' needs, presenting proper learning opportunities/resources, and developing teaching quality. However, traditional machine learning methods fail to produce stable and accurate prediction results. In this paper, we propose a graph-based ensemble machine learning method that aims to improve the stability of single machine learning methods via the consensus of multiple methods. Specifically, we leverage both supervised prediction methods and unsupervised clustering methods, and build an iterative approach that propagates in a bipartite graph and converges to more stable and accurate prediction results. Extensive experiments demonstrate the effectiveness of our proposed method in predicting student performance more accurately. Specifically, our model outperforms the best traditional machine learning algorithms by up to 14.8% in prediction accuracy.
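A minimal sketch of one plausible consensus loop on the methods-students bipartite graph; the weighting rule here is a hypothetical choice for illustration, and the paper's actual update and convergence criterion may differ.

```python
import numpy as np

def consensus_propagation(P, iters=50, tol=1e-6):
    """P: (m, n) array; P[j, i] is method j's predicted score for student i.
    Alternates methods -> students (weighted consensus) and students -> methods
    (reliability from agreement with the consensus) until the weights settle."""
    m, n = P.shape
    w = np.ones(m) / m                       # method reliability weights
    for _ in range(iters):
        consensus = w @ P                    # propagate methods -> students
        err = np.abs(P - consensus).mean(1)  # propagate students -> methods
        w_new = np.exp(-err)
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:    # weights converged
            w = w_new
            break
        w = w_new
    return w @ P, w                          # final predictions and weights
```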
In point cloud generation and completion, previous methods that transform latent features into point clouds are generally based on fully connected layers (FC-based) or folding operations (folding-based). However, point clouds generated by FC-based methods are usually troubled by outliers and rough surfaces. Folding-based methods have a large data flow, converge slowly, and also struggle to generate non-smooth surfaces. In this work, we propose AXform, an attention-based method for transforming latent features into point clouds. AXform first generates points in an interim space using a fully connected layer. These interim points are then aggregated to generate the target point cloud. AXform takes both parameter sharing and data flow into account, which yields fewer outliers, fewer network parameters, and faster convergence. Points generated by AXform are not subject to a strong 2-manifold constraint, which improves the generation of non-smooth surfaces. When AXform is extended to multiple branches for local generation, a centripetal constraint gives it the properties of self-clustering and spatial consistency, which further enables unsupervised semantic segmentation. We also adopt this scheme and design AXformNet for point cloud completion. Considerable experiments on different datasets show that our methods achieve state-of-the-art results.
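A minimal single-branch sketch of the attention-based transformation idea; this is not the published AXform architecture, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class AttentionToPoints(nn.Module):
    """An FC layer generates interim points from the latent feature, and each
    target point is an attention-weighted aggregation of the interim points."""
    def __init__(self, latent_dim=512, n_interim=256, n_out=2048):
        super().__init__()
        self.n_interim = n_interim
        self.to_interim = nn.Linear(latent_dim, n_interim * 3)
        self.score = nn.Linear(3, n_out)   # per-interim-point attention logits

    def forward(self, z):                                    # z: (B, latent_dim)
        interim = self.to_interim(z).view(-1, self.n_interim, 3)
        attn = self.score(interim).softmax(dim=1)            # (B, n_interim, n_out)
        # aggregate interim points into the target point cloud
        return torch.einsum('bic,bio->boc', interim, attn)   # (B, n_out, 3)
```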
Background: Cervical cancer seriously affects the health of the female reproductive system. Optical coherence tomography (OCT) has emerged as a non-invasive, high-resolution imaging technology for cervical disease detection. However, OCT image annotation is knowledge-intensive and time-consuming, which impedes the training of deep-learning-based classification models. Objective: This study aims to develop a computer-aided diagnosis (CADx) approach for classifying in-vivo cervical OCT images based on self-supervised learning. Methods: In addition to high-level semantic features extracted by a convolutional neural network (CNN), the proposed CADx approach leverages texture features of unlabeled cervical OCT images learned via contrastive texture learning. We conducted ten-fold cross-validation on an OCT image dataset from a multi-center clinical study of 733 patients in China. Results: In the binary classification task of detecting high-risk diseases, including high-grade squamous intraepithelial lesions and cervical cancer, our method achieved an area-under-the-curve value of 0.9798 ± 0.0157, with a sensitivity of 91.17 ± 4.99% and a specificity of 93.96 ± 4.72% on OCT image patches; moreover, it outperformed two out of four medical experts on the test set. Furthermore, using a cross-shaped threshold voting strategy, our method achieved a sensitivity of 91.53% and a specificity of 97.37% on 118 Chinese patients. Conclusion: The proposed contrastive-learning-based CADx method outperforms end-to-end CNN models and, being based on texture features, offers better interpretability, showing great potential for use in the clinical "see-and-treat" protocol.
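For reference, a standard SimCLR-style NT-Xent loss of the general kind that contrastive representation learning builds on; the paper's texture-specific branch and augmentations are not reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same patches.
    Each view's positive is its counterpart; all other samples are negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D) unit vectors
    sim = z @ z.t() / tau                          # temperature-scaled cosine sims
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```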
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe the insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch with the disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing model generalization. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
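One reading of the disjoint regulation, sketched below: the sampled views have pairwise-disjoint masked sets, so each view keeps the target masking rate while reconstruction targets across views do not overlap. This requires n_views * mask_ratio <= 1 and may differ from DMJD's actual sampler; all names are illustrative.

```python
import numpy as np

def disjoint_masked_views(n_tokens, mask_ratio, n_views, rng=None):
    """Return n_views boolean masks over the token grid (True = masked),
    with pairwise-disjoint masked sets and a fixed per-view masking rate."""
    rng = rng or np.random.default_rng()
    n_masked = int(n_tokens * mask_ratio)
    assert n_views * n_masked <= n_tokens, "disjoint masks must fit in the token set"
    perm = rng.permutation(n_tokens)         # one shuffle, then chop into views
    views = []
    for v in range(n_views):
        mask = np.zeros(n_tokens, dtype=bool)
        mask[perm[v * n_masked:(v + 1) * n_masked]] = True
        views.append(mask)
    return views
```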
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding computations in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those in previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence and, perhaps, a path to finding families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.
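For reference, a naive checker written straight from our reading of the Cohn-Kleinberg-Szegedy-Umans strong-USP condition: for every triple of row permutations that are not all equal, some cell must satisfy exactly two of the three digit conditions. Its (n!)^3 runtime is exactly why the paper reduces verification to SAT and IP instead.

```python
from itertools import permutations

def is_strong_usp(rows):
    """rows: list of equal-length tuples over {1, 2, 3}. Usable only for
    tiny puzzles; the paper's SAT/IP encodings replace it in practice."""
    n, k = len(rows), len(rows[0])
    idx = tuple(range(n))
    for p1 in permutations(idx):
        for p2 in permutations(idx):
            for p3 in permutations(idx):
                if p1 == p2 == p3:
                    continue
                # some (row, column) must satisfy exactly two of the conditions
                if not any(
                    (rows[p1[r]][i] == 1)
                    + (rows[p2[r]][i] == 2)
                    + (rows[p3[r]][i] == 3) == 2
                    for r in idx for i in range(k)
                ):
                    return False
    return True
```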
This paper presents a practical global optimization algorithm for the K-center clustering problem, which selects K samples as cluster centers so as to minimize the maximum within-cluster distance. The algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the region of centers. To improve efficiency, we design a two-stage decomposable lower bound whose solution can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bound tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets demonstrate that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective value by 25.8% on average across all synthetic and real-world datasets.
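Not the paper's branch-and-bound, but for context: the classic farthest-point (Gonzalez) greedy heuristic, a 2-approximation for the K-center objective of the kind global methods typically use to seed an initial upper bound.

```python
import numpy as np

def greedy_k_center(X, k, rng=None):
    """X: (n, d) samples. Returns center indices and the K-center objective
    value (maximum distance from a sample to its nearest center)."""
    rng = rng or np.random.default_rng()
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)  # distance to nearest center
    for _ in range(k - 1):
        nxt = int(dist.argmax())                      # farthest sample so far
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return centers, float(dist.max())
```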
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus failing to fully exploit the unique characteristic of video, i.e., its temporal nature. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling the cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representations. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole at different time resolutions with a multi-modal temporal relation exploration task. Furthermore, we introduce a shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvements, respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
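A sketch of what a frame-shuffling test might look like: compare a model's score on ordered frames against randomly shuffled frames, where a large drop suggests genuine temporal reliance. Function and argument names are illustrative, not HiTeA's API.

```python
import random

def shuffling_test(model, video_text_pairs, metric):
    """model: maps a frame sequence to an embedding/prediction (hypothetical).
    metric: scores a model output against the paired text (hypothetical).
    Returns the average score drop caused by destroying temporal order."""
    ordered, shuffled = [], []
    for frames, text in video_text_pairs:
        ordered.append(metric(model(frames), text))
        frames = list(frames)
        random.shuffle(frames)              # destroy temporal order
        shuffled.append(metric(model(frames), text))
    return sum(ordered) / len(ordered) - sum(shuffled) / len(shuffled)
```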