Persistence diagrams (PDs), which are typically characterized by the births and deaths of homological classes, provide a topological representation of a graph's structure that is often useful in machine learning tasks. Prior works rely on a single graph signature to construct PDs. In this paper, we explore the use of a family of multi-scale graph signatures to enhance the robustness of the topological features. We propose a deep learning architecture to handle this set of inputs. Experiments on benchmark graph classification datasets demonstrate that our proposed architecture outperforms other persistent-homology-based methods and achieves competitive performance compared to state-of-the-art methods that use graph neural networks. In addition, our approach can be easily applied to large input graphs, as it does not suffer from the limited scalability that can be an issue for graph kernel methods.
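As a hedged illustration of the building block described above (a sketch, not the paper's architecture or code), the following computes the 0-dimensional persistence diagram of a graph under a sublevel-set filtration induced by a single node signature; the degree signature used here is only a stand-in for one member of the multi-scale family (e.g., heat kernel signatures at several diffusion times).

```python
# Sketch: 0-dimensional persistence of a graph filtration defined by a node signature.
# Vertices enter at their signature value; an edge enters at the maximum of its endpoints.
import networkx as nx

def zero_dim_persistence(G, f):
    """f: dict node -> filtration value. Returns (birth, death) pairs for connected
    components under the sublevel-set filtration (elder rule); surviving components
    get death = infinity."""
    parent, birth = {}, {}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    events = [(f[v], 0, v) for v in G.nodes()]
    events += [(max(f[u], f[v]), 1, (u, v)) for u, v in G.edges()]
    events.sort(key=lambda t: (t[0], t[1]))   # vertices before edges at equal values

    diagram = []
    for value, kind, item in events:
        if kind == 0:                         # a new component is born
            parent[item] = item
            birth[item] = value
        else:                                 # an edge may merge two components
            ru, rv = find(item[0]), find(item[1])
            if ru != rv:
                young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
                diagram.append((birth[young], value))   # the younger component dies
                parent[young] = old
    roots = {find(v) for v in parent}
    diagram += [(birth[r], float("inf")) for r in roots]
    return diagram

# Example with a single-scale signature (node degree).
G = nx.karate_club_graph()
pd0 = zero_dim_persistence(G, dict(G.degree()))
```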
In this paper, we describe an {\tt R} package for sampling from empirical-likelihood-based posteriors using a Hamiltonian Monte Carlo method. Empirical-likelihood-based methodologies have been used in Bayesian modeling of many problems of recent interest. This semiparametric procedure can easily combine the flexibility of a nonparametric distribution estimator with the interpretability of a parametric model. The model is specified through constraints based on estimating equations. Drawing inference from a Bayesian empirical likelihood (BayesEL) posterior is challenging. The likelihood is computed numerically, so no closed-form expression for the posterior exists. Moreover, for any finite sample size, the support of the likelihood is non-convex, which hinders the fast mixing of many Markov chain Monte Carlo (MCMC) procedures. It has recently been shown that, by using properties of the gradient of the log empirical likelihood, an efficient Hamiltonian Monte Carlo (HMC) algorithm can be designed to sample from a BayesEL posterior. The package requires the user to specify only the estimating equations, the prior, and their respective gradients. An MCMC sample is drawn from the posterior of the parameters, and various other details requested by the user are provided.
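The package itself is in R, but the sampler it implements is standard HMC over a user-supplied log posterior and gradient. The Python sketch below is a minimal, generic version of that loop (an assumption-labeled illustration, not the package's code); `log_post` and `grad_log_post` stand in for the log BayesEL posterior and its gradient, which the package assembles from the user's estimating equations, prior, and their gradients.

```python
# Minimal HMC with leapfrog integration; proposals that land outside the
# (non-convex) support can return -inf log posterior and are then rejected.
import numpy as np

def hmc_sample(log_post, grad_log_post, theta0, n_samples=1000,
               eps=0.01, n_leapfrog=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(theta.shape)          # resample momentum
        theta_new, p_new = theta.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * eps * grad_log_post(theta_new)
        for _ in range(n_leapfrog - 1):
            theta_new += eps * p_new
            p_new += eps * grad_log_post(theta_new)
        theta_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(theta_new)
        # Metropolis accept/reject on the Hamiltonian.
        log_accept = (log_post(theta_new) - 0.5 * p_new @ p_new) \
                   - (log_post(theta) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)
```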
Medication mistaking is one of the risks that can lead to unpredictable consequences for patients. To mitigate this risk, we develop an automatic system that correctly identifies pill-prescription matches from mobile images. Specifically, we define a so-called pill matching task, which attempts to match the captured images of pills with the pill names in the corresponding prescription. We then propose PIMA, a novel approach that uses a graph neural network (GNN) and contrastive learning to address the target problem. In particular, the GNN is used to learn the spatial correlation between the text boxes in a prescription and thereby highlight the text boxes containing pill names. In addition, contrastive learning is employed to facilitate the modeling of cross-modal similarity between the textual representations of pill names and the visual representations of pill images. We conduct extensive experiments and demonstrate that PIMA outperforms baseline models on a real-world dataset of pill and prescription images that we constructed. Specifically, PIMA improves the accuracy from 19.09% to 46.95% compared to other baselines. We believe our work can open new opportunities for building new clinical applications and improving medication safety and patient care.
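As a hedged sketch of the cross-modal objective mentioned above (not the released PIMA code; tensor names and the temperature are assumptions), a symmetric InfoNCE-style loss pulls matching pill-name text embeddings and pill-image embeddings together within a batch:

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """text_emb, image_emb: (batch, dim); row i of each modality is a matching pair."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    # The diagonal holds the positive (matching) pairs; match in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```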
In this work, we study different approaches to self-supervised pretraining of object detection models. We first design a general framework for learning spatially consistent dense representations from an image, by randomly sampling boxes, projecting them onto each augmented view, and maximizing the similarity between the corresponding box features. We study existing design choices in the literature, such as box generation and feature extraction strategies, and the use of multiple views, inspired by its success in instance-level image representation learning. Our results suggest that the method is robust to different choices of hyperparameters, and that using multiple views is not as effective as it has been shown to be for instance-level image representation learning. We also design two auxiliary tasks to predict boxes in one view from their features in the other view, by (1) predicting the boxes from the sampled set using a contrastive loss, and (2) predicting the box coordinates using a transformer, both of which could potentially benefit the downstream object detection task. We find that these tasks do not lead to better object detection performance when fine-tuning the pretrained models on labeled data.
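The sketch below illustrates the two core steps of the framework described above (a simplified illustration, not the authors' implementation): projecting a sampled box from original-image coordinates into a cropped-and-resized view, and pulling together the features of the same box extracted from two views. How the box features are computed (e.g., RoIAlign on a backbone feature map) is left abstract here.

```python
import torch
import torch.nn.functional as F

def project_box(box, crop, out_w, out_h):
    """box, crop: (x1, y1, x2, y2) in original-image pixels; the crop is resized to (out_w, out_h)."""
    bx1, by1, bx2, by2 = box
    cx1, cy1, cx2, cy2 = crop
    sx, sy = out_w / (cx2 - cx1), out_h / (cy2 - cy1)
    return ((bx1 - cx1) * sx, (by1 - cy1) * sy, (bx2 - cx1) * sx, (by2 - cy1) * sy)

def box_similarity_loss(feats_view1, feats_view2):
    """feats_view*: (num_boxes, dim) features of the same sampled boxes in the two views."""
    f1 = F.normalize(feats_view1, dim=-1)
    f2 = F.normalize(feats_view2, dim=-1)
    return -(f1 * f2).sum(dim=-1).mean()    # maximize cosine similarity of matching boxes
```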
Unsupervised zero-shot voice conversion (VC) aims to modify the speaker characteristics of an utterance to match an unseen target speaker without relying on parallel training data. Recently, self-supervised learning of speech representations has been shown to produce useful linguistic units without using transcripts, and these units can be passed directly to a VC model. In this paper, we show that high-quality audio samples can be achieved by using a length-resampling decoder, which enables the VC model to work with different linguistic feature extractors and vocoders without requiring them to operate on the same sequence length. We show that our method can outperform many baselines on the VCTK dataset. Without modifying the architecture, we further demonstrate that a) using pairs of different audio segments from the same speaker, b) adding a cycle-consistency loss, and c) adding a speaker-classification loss can help learn better speaker embeddings. Our model trained on LibriTTS with these techniques achieves the best performance, producing audio samples that match the voice of the target speaker while preserving the linguistic content, and that are comparable to actual human utterances in terms of character error rate.
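A minimal sketch of the length-resampling idea (an illustration under the assumption of simple linear interpolation; the paper's decoder may resample differently): the linguistic features and the vocoder can run at different frame rates, so the feature sequence is interpolated to whatever length the downstream module expects.

```python
import torch
import torch.nn.functional as F

def resample_length(features, target_len):
    """features: (batch, time, dim) -> (batch, target_len, dim) via linear interpolation."""
    x = features.transpose(1, 2)                        # (batch, dim, time) for F.interpolate
    x = F.interpolate(x, size=target_len, mode="linear", align_corners=False)
    return x.transpose(1, 2)

# e.g., map 2 s of 50 Hz self-supervised features onto an 80 frames-per-second vocoder grid
feats = torch.randn(1, 100, 256)
resampled = resample_length(feats, target_len=160)
```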
Distributed learning paradigms such as federated learning often involve transmitting model updates or gradients over a network, thereby avoiding the transmission of private data. However, sensitive information about the training data can be revealed from such gradients. Prior works have demonstrated that labels can be revealed by analyzing the gradients of the last layer of certain models (e.g., ResNet), or that inputs can be reconstructed jointly with the labels via gradient matching [Zhu et al.] given additional knowledge of the current model state. In this work, we propose a method to discover the set of labels of the training samples from only the gradient of the last layer and the id-to-label mapping. Our method is applicable to a wide variety of model architectures across multiple domains. We demonstrate its effectiveness on model training in two domains - image classification and automatic speech recognition. Furthermore, we show that existing reconstruction techniques improve their efficacy when combined with our method. Conversely, we demonstrate that gradient quantization and sparsification can significantly reduce the success of the attack.
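A small, hedged demonstration of the underlying signal (this illustrates a well-known heuristic for last-layer label leakage, not necessarily the exact method of the paper): with softmax cross-entropy, the gradient of the last-layer bias for class c aggregates (p_c - y_c) over the batch, so classes present in the batch tend to produce negative entries.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim, batch = 10, 32, 4
layer = torch.nn.Linear(dim, num_classes)

features = torch.randn(batch, dim)
labels = torch.tensor([2, 2, 5, 7])                   # the "private" labels

loss = F.cross_entropy(layer(features), labels)
loss.backward()

# An observer who only sees the last-layer gradient can guess which labels occur.
bias_grad = layer.bias.grad                           # shape: (num_classes,)
recovered = (bias_grad < 0).nonzero(as_tuple=True)[0].tolist()
print("classes with negative bias gradient:", recovered)   # typically contains 2, 5, 7
```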
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs for deep linear networks under the popular mean squared error (MSE) and cross entropy (CE) losses. Furthermore, we extend our research to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
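For reference, an informal statement of the structure involved (standard in the Neural Collapse literature, not quoted from the paper above): with $K$ balanced classes, the globally centered last-layer class-mean features $\bar h_k$ and the classifier rows $w_k$ satisfy
\[
\frac{\langle \bar h_k, \bar h_{k'} \rangle}{\|\bar h_k\|\,\|\bar h_{k'}\|}
= \begin{cases} 1, & k = k', \\ -\dfrac{1}{K-1}, & k \neq k', \end{cases}
\qquad w_k \propto \bar h_k,
\]
i.e., within-class variability collapses, the class means form a simplex equiangular tight frame, and the classifier is self-dual to the class means.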
Generative models have been widely studied in computer vision. Recently, diffusion models have drawn substantial attention due to the high quality of their generated images. A key desired property of image generative models is the ability to disentangle different attributes, which should enable modification towards a style without changing the semantic content, and the modification parameters should generalize to different images. Previous studies have found that generative adversarial networks (GANs) are inherently endowed with such disentanglement capability, so they can perform disentangled image editing without re-training or fine-tuning the network. In this work, we explore whether diffusion models are also inherently equipped with such a capability. Our finding is that for stable diffusion models, by partially changing the input text embedding from a neutral description (e.g., "a photo of person") to one with style (e.g., "a photo of person with smile") while fixing all the Gaussian random noises introduced during the denoising process, the generated images can be modified towards the target style without changing the semantic content. Based on this finding, we further propose a simple, light-weight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation. This entire process only involves optimizing over around 50 parameters and does not fine-tune the diffusion model itself. Experiments show that the proposed method can modify a wide range of attributes, with the performance outperforming diffusion-model-based image-editing algorithms that require fine-tuning. The optimized weights generalize well to different images. Our code is publicly available at https://github.com/UCSB-NLP-Chang/DiffusionDisentanglement.
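As a hedged sketch of the mixing step described above (an illustration, not the repository's code): with a fixed number of denoising steps and fixed Gaussian noises, one mixing weight per step interpolates between the neutral and the style text embedding, and only those weights are optimized. Here `denoise_step` is a placeholder standing in for one update of a frozen diffusion sampler, not a real API.

```python
import torch

def edit_with_mixed_embeddings(latent, neutral_emb, style_emb, weights, denoise_step, timesteps):
    """weights: tensor of shape (len(timesteps),), the only optimized parameters;
    they would be trained for style matching plus content preservation while the
    diffusion model itself stays frozen."""
    for i, t in enumerate(timesteps):
        lam = torch.sigmoid(weights[i])                        # keep each weight in (0, 1)
        mixed = (1.0 - lam) * neutral_emb + lam * style_emb    # per-step embedding mix
        latent = denoise_step(latent, t, mixed)                # noise inside the step is fixed
    return latent
```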
Sequential recommendation is an important task to predict the next item to access based on a sequence of interacted items. Most existing works learn user preference as the transition pattern from the previous item to the next one, ignoring the time interval between these two items. However, we observe that the time intervals in a sequence may vary significantly, which results in ineffective user modeling due to the issue of \emph{preference drift}. In fact, we conducted an empirical study to validate this observation, and found that a sequence with uniformly distributed time intervals (denoted as a uniform sequence) is more beneficial for performance improvement than one with greatly varying time intervals. Therefore, we propose to augment sequence data from the perspective of time intervals, which has not been studied in the literature. Specifically, we design five operators (Ti-Crop, Ti-Reorder, Ti-Mask, Ti-Substitute, Ti-Insert) to transform the original non-uniform sequence into a uniform sequence, taking the variance of time intervals into consideration. Then, we devise a control strategy to execute data augmentation on item sequences of different lengths. Finally, we implement these improvements on the state-of-the-art model CoSeRec and validate our approach on four real datasets. The experimental results show that our approach achieves significantly better performance than the other 11 competing methods. Our implementation is available at: https://github.com/KingGugu/TiCoSeRec.
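To make one of the operators concrete, here is a hedged sketch of Ti-Crop (parameter names and the exact selection rule are assumptions, not the released TiCoSeRec code): keep the contiguous window of a target length whose inter-item time intervals have the smallest variance, so the retained subsequence is as close to uniform as possible.

```python
import numpy as np

def ti_crop(items, timestamps, crop_len):
    """items: list of item ids; timestamps: matching list of interaction times."""
    if len(items) <= crop_len:
        return items
    ts = np.asarray(timestamps, dtype=float)
    best_start, best_var = 0, np.inf
    for start in range(len(items) - crop_len + 1):
        intervals = np.diff(ts[start:start + crop_len])   # time gaps inside the window
        if intervals.var() < best_var:
            best_start, best_var = start, intervals.var()
    return items[best_start:best_start + crop_len]

items = [10, 4, 7, 7, 2, 9]
times = [0, 5, 100, 101, 102, 103]
print(ti_crop(items, times, crop_len=4))   # -> [7, 7, 2, 9], the most uniform window
```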
Current work in named entity recognition (NER) uses either cross entropy (CE) or conditional random fields (CRF) as the objective/loss function to optimize the underlying NER model. Both of these traditional objective functions for the NER problem generally produce adequate performance when the data distribution is balanced and there are sufficient annotated training examples. But since NER is inherently an imbalanced tagging problem, model performance under low-resource settings can suffer when using these standard objective functions. Based on recent advances in area under the ROC curve (AUC) maximization, we propose to optimize the NER model by maximizing the AUC score. We give evidence that by simply combining two binary classifiers that maximize the AUC score, significant performance improvement over traditional loss functions is achieved under low-resource NER settings. We also conduct extensive experiments to demonstrate the advantages of our method under the low-resource and highly imbalanced data distribution settings. To the best of our knowledge, this is the first work that brings AUC maximization to the NER setting. Furthermore, we show that our method is agnostic to different types of NER embeddings, models and domains. The code to replicate this work will be provided upon request.
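To illustrate the kind of objective being maximized (a hedged example of a standard pairwise AUC surrogate, not the paper's exact two-classifier formulation): for an imbalanced binary decision such as entity vs. non-entity tokens, the loss rewards scoring positives above negatives.

```python
import torch

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """scores: (n,) real-valued scores; labels: (n,) with 1 = positive, 0 = negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # AUC counts positive-negative pairs ranked correctly; penalize margin violations instead.
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)          # (num_pos, num_neg) pairwise gaps
    return torch.clamp(margin - diff, min=0).pow(2).mean()

scores = torch.tensor([2.1, 0.3, -0.5, 1.7, -1.2])
labels = torch.tensor([1, 0, 0, 1, 0])
loss = pairwise_auc_surrogate(scores, labels)
```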