This work proposes a new computational framework for learning an explicit generative model for real-world datasets. In particular, we propose to learn a closed-loop transcription between a multi-class, multi-dimensional data distribution and a linear discriminative representation (LDR) in a feature space consisting of multiple independent multi-dimensional linear subspaces. We argue that the optimal encoding and decoding mappings sought can be formulated as the equilibrium point of a two-player minimax game between the encoder and the decoder. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure of distance between mixtures of subspace-like Gaussians in the feature space. Drawing inspiration from closed-loop error feedback in control systems, our formulation avoids expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of auto-encoding and GANs, and naturally extends them to the setting of learning discriminative and generative representations for multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark image datasets demonstrate the great potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and often better than, methods based on GANs, VAEs, or a combination of both. We note that the features of different classes thus learned are explicitly mapped onto approximately independent principal subspaces in the feature space, and the diverse visual attributes within each class are modeled by the independent principal components within each subspace.
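As a point of reference, the rate-reduction quantity the abstract alludes to can be written down compactly. The NumPy sketch below follows the standard coding-rate formulas from the rate-reduction literature; the function names, the distortion parameter eps, and the (d x m) feature layout are our illustrative choices, not the authors' code.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z): coding rate of features Z (d x m) up to distortion eps."""
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (m * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (m_j/m) R(Z_j): the rate of the whole set
    minus the weighted average rate of the class-conditional subsets.
    labels: 1-D integer array of length m."""
    d, m = Z.shape
    total = coding_rate(Z, eps)
    avg = 0.0
    for j in np.unique(labels):
        Zj = Z[:, labels == j]
        mj = Zj.shape[1]
        # class-conditional coding rate, weighted by class proportion
        rj = 0.5 * np.linalg.slogdet(
            np.eye(d) + d / (mj * eps**2) * Zj @ Zj.T)[1]
        avg += (mj / m) * rj
    return total - avg
```

Maximizing this quantity pushes the whole feature set to be large (expansive) while keeping each class compact, which is what drives the features toward independent subspaces.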
This work attempts to provide a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding-rate difference between the whole dataset and the average over all the class subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate-reduction objective naturally leads to a multi-layer deep network, named ReduNet, that shares common characteristics of modern deep networks. The deep layered architecture, the linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer by layer via forward propagation, although they remain amenable to fine-tuning via back propagation. All components of the so-obtained "white-box" network have precise optimization, statistical, and geometric interpretations. Moreover, all linear operators of the network naturally become multi-channel convolutions when we enforce the classification to be shift-invariant. The derivation in the invariant setting suggests a trade-off between sparsity and invariance, and also indicates that such a deep convolutional network is significantly more efficient to construct and learn in the spectral domain. Our preliminary simulations and experiments clearly verify the effectiveness of the rate-reduction objective and the associated ReduNet. All code and data are available at https://github.com/ma-lab-berkeley.
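To make the "iterative gradient ascent becomes a layer" idea concrete, here is a schematic NumPy sketch of one such forward-constructed layer, with an expansion operator E for the whole dataset and a compression operator C^j per class. The step size, the normalization, and all names are simplified assumptions, not the released ReduNet code.

```python
import numpy as np

def redunet_layer(Z, labels, eps=0.5, eta=0.1):
    """One forward 'layer' of gradient ascent on the rate-reduction
    objective: Z <- Z + eta * dDeltaR/dZ, followed by normalization.
    E expands all features; each C^j compresses one class."""
    d, m = Z.shape
    alpha = d / (m * eps**2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)  # expansion operator
    grad = E @ Z
    for j in np.unique(labels):
        mask = labels == j
        mj = mask.sum()
        alpha_j = d / (mj * eps**2)
        Zj = Z[:, mask]
        Cj = alpha_j * np.linalg.inv(np.eye(d) + alpha_j * Zj @ Zj.T)  # compression
        grad[:, mask] -= (mj / m) * Cj @ Z[:, mask]
    Z_next = Z + eta * grad
    return Z_next / np.linalg.norm(Z_next, axis=0, keepdims=True)  # project to sphere
```

Since E and the C^j are computed from the data rather than learned by back propagation, stacking such layers yields the forward-constructed "white-box" network described above.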
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
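The abstract does not spell out HNN's update rules, so the following is only a hypothetical illustration of the data layout it implies: one embedding per (node, hyperedge) incidence pair, so that a node appearing in several hyperedges carries several context-dependent embeddings. Every name here, including the single-weight combiner, is invented for illustration.

```python
import numpy as np

# Hypothetical sketch of hyperedge-dependent node embeddings. This shows
# the data layout only, not the authors' architecture.
rng = np.random.default_rng(0)
n_nodes, n_edges, dim = 5, 3, 8
incidence = {0: [0, 1], 1: [0, 2], 2: [1, 3, 4]}   # hyperedge -> member nodes

X = rng.normal(size=(n_nodes, dim))                # node features
E = rng.normal(size=(n_edges, dim))                # learned hyperedge embeddings
W = rng.normal(size=(2 * dim, dim))                # a single illustrative weight

# One embedding per (node, hyperedge) incidence pair: a node that sits in
# several hyperedges gets several context-dependent embeddings.
node_edge_emb = {
    (v, e): np.tanh(np.concatenate([X[v], E[e]]) @ W)
    for e, members in incidence.items() for v in members
}
print(len(node_edge_emb), "hyperedge-dependent node embeddings")
```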
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by modifying either the graph structure or the objective function, without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first aims to construct fair neighborhoods for any arbitrary node in a graph, and the second enables adaptation of these fair neighborhoods to better capture certain application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks, including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach not only achieves better fairness but also increases accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
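As a hedged illustration of what the first component might look like, the sketch below samples a fixed-size neighborhood in which sensitive-attribute groups are represented as evenly as possible. The paper's actual construction and its adaptation step are not specified in the abstract, so every detail here is an assumption.

```python
import random

def fair_neighborhood(node, neighbors, attr, k=10, seed=0):
    """Hypothetical sketch of fair-neighborhood construction: sample a
    size-k neighborhood in which each sensitive-attribute group is
    represented as evenly as possible."""
    rng = random.Random(seed)
    groups = {}
    for u in neighbors:
        groups.setdefault(attr[u], []).append(u)
    per_group = max(1, k // len(groups))
    fair = []
    for members in groups.values():
        rng.shuffle(members)
        fair.extend(members[:per_group])  # cap each group's share
    return fair[:k]

attr = {1: "a", 2: "a", 3: "a", 4: "b"}
print(fair_neighborhood(0, [1, 2, 3, 4], attr, k=2))  # one neighbor per group
```

The adaptation component could then re-weight such a neighborhood toward particular attributes or neighbors, which is why the framework can trade fairness against application-specific constraints.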
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields and factoring them into a set of axis-aligned triplane feature representations. Thus, our 3D training scenes are all represented by 2D feature planes, and we can directly train existing 2D diffusion models on these representations to generate 3D neural fields with high quality and diversity, outperforming alternative approaches to 3D-aware generation. Our approach requires essential modifications to existing triplane factorization pipelines to make the resulting features easy to learn for the diffusion model. We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
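The triplane representation the abstract relies on is standard: a 3D query point is projected onto three axis-aligned feature planes, the bilinearly interpolated features are aggregated, and a small MLP decodes occupancy. A minimal PyTorch sketch of this lookup follows; the paper's specific factorization and regularization changes are not reproduced here.

```python
import torch
import torch.nn.functional as F

def query_occupancy(triplanes, xyz, mlp):
    """Sketch of a standard triplane lookup: project a 3D point onto the
    XY/XZ/YZ feature planes, bilinearly interpolate, sum, and decode.
    triplanes: (3, C, H, W); xyz: (N, 3) in [-1, 1]."""
    coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
    feats = 0.0
    for plane, uv in zip(triplanes, coords):
        grid = uv.view(1, -1, 1, 2)                 # (1, N, 1, 2)
        sampled = F.grid_sample(plane[None], grid,  # (1, C, N, 1)
                                align_corners=True)
        feats = feats + sampled[0, :, :, 0].t()     # (N, C), summed over planes
    return mlp(feats)                               # (N, 1) occupancy logits
```

Because each 3D scene reduces to three 2D feature maps, an off-the-shelf 2D diffusion model can then be trained on the plane features directly, which is the key efficiency argument of the abstract.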
The proposed control method uses an adaptive feedforward controller to establish a passive input-output mapping for the CDPR, which is used in conjunction with a linear time-invariant strictly positive real (SPR) feedback controller to ensure robust closed-loop input-output stability and asymptotic pose-trajectory tracking via the passivity theorem. The novelty of the proposed controller is its formulation for a range of payload attitude parameterizations, including any unconstrained attitude parameterization, the quaternion, or the direction cosine matrix (DCM). The performance and robustness of the proposed controller are demonstrated through numerical simulations of a CDPR with rigid and flexible cables. The results demonstrate the importance of carefully defining the CDPR's pose error, which is done in a multiplicative manner when using the quaternion or DCM, and in a specific additive manner when using an unconstrained attitude parameterization, such as an Euler-angle sequence.
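As a sketch of the distinction the abstract emphasizes, let $\mathbf{C}$, $\mathbf{q}$, and $\boldsymbol{\theta}$ denote the measured attitude as a DCM, quaternion, and unconstrained parameterization respectively, with subscript $d$ for desired quantities. The error definitions then take roughly the following forms; the exact conventions (left vs. right multiplication, inverse placement) are our assumption:

```latex
% Multiplicative attitude errors (DCM / quaternion):
\delta\mathbf{C} = \mathbf{C}\,\mathbf{C}_d^{\mathsf{T}}, \qquad
\delta\mathbf{q} = \mathbf{q} \otimes \mathbf{q}_d^{-1},
% versus the additive error for an unconstrained parameterization
% (e.g., an Euler-angle sequence):
\delta\boldsymbol{\theta} = \boldsymbol{\theta} - \boldsymbol{\theta}_d .
```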
Downsampling is widely adopted to achieve a good trade-off between accuracy and latency in visual recognition. Unfortunately, the commonly used pooling layers are not learned and therefore cannot preserve important information. As an alternative dimension-reduction method, adaptive sampling weights and processes task-relevant regions, and is thus able to better preserve useful information. However, the use of adaptive sampling has been limited to certain layers. In this paper, we show that using adaptive sampling in the building blocks of a deep neural network can improve its efficiency. In particular, we propose SSBNet, which is built by repeatedly inserting sampling layers into existing networks such as ResNet. Experimental results show that the proposed SSBNet achieves competitive image classification and object detection performance on the ImageNet and COCO datasets. For example, SSB-ResNet-RS-200 achieves 82.6% accuracy on the ImageNet dataset, 0.6% higher than the baseline ResNet-RS-152 at similar complexity. Visualizations show the advantage of SSBNet in allowing different layers to focus on different locations, and an ablation study further validates the advantage of adaptive sampling over uniform methods.
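Schematically, the idea is to sample the feature map down inside a block, run the expensive operations at low resolution, and sample back up, leaving the block's input/output shapes unchanged. The PyTorch sketch below substitutes a plain interpolation for the learned adaptive sampler, so it shows only where sampling enters a bottleneck, not how SSBNet learns it; all names are illustrative.

```python
import torch
import torch.nn as nn

class SampledBottleneck(nn.Module):
    """Illustrative sketch of inserting a sampling layer inside a
    ResNet-style bottleneck, as the abstract describes for SSBNet."""
    def __init__(self, channels, ratio=0.5):
        super().__init__()
        self.ratio = ratio
        self.reduce = nn.Conv2d(channels, channels // 4, 1)
        self.conv = nn.Conv2d(channels // 4, channels // 4, 3, padding=1)
        self.expand = nn.Conv2d(channels // 4, channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        y = self.reduce(x)
        # sample down before the costly 3x3, then back up: compute at low
        # resolution while keeping the block's output shape unchanged
        y = nn.functional.interpolate(y, scale_factor=self.ratio)
        y = torch.relu(self.conv(y))
        y = nn.functional.interpolate(y, size=(h, w))
        return torch.relu(x + self.expand(y))
```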
$T_{1\rho}$ mapping is a promising quantitative MRI technique for the non-invasive assessment of tissue properties. Learning-based approaches can map $T_{1\rho}$ from a reduced number of $T_{1\rho}$-weighted images, but they require a large amount of high-quality training data. Moreover, existing methods do not provide a confidence level for the $T_{1\rho}$ estimation. To address these problems, we propose a self-supervised learning neural network that learns $T_{1\rho}$ mapping using the relaxation constraint in the learning process. Epistemic uncertainty and aleatoric uncertainty are modeled for the $T_{1\rho}$ quantification network to provide a Bayesian confidence estimate of the $T_{1\rho}$ mapping. The uncertainty estimate also regularizes the model to prevent it from learning from imperfect data. We conducted experiments on $T_{1\rho}$ data collected from 52 patients with non-alcoholic fatty liver disease. The results show that our method outperforms existing methods for $T_{1\rho}$ quantification of the liver using as few as two $T_{1\rho}$-weighted images. Our uncertainty estimation provides a feasible way to model the confidence of the self-supervised $T_{1\rho}$ estimation, and it is consistent with the reality of $T_{1\rho}$ imaging of the liver.
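The relaxation constraint presumably refers to the standard mono-exponential spin-lock signal model, which lets the network be trained self-supervised by re-synthesizing the acquired weighted images from its own $T_{1\rho}$ and $S_0$ estimates. The notation below is ours:

```latex
% Mono-exponential spin-lock relaxation model and a self-supervised loss:
S(\mathrm{TSL}_i) = S_0 \exp\!\left(-\frac{\mathrm{TSL}_i}{T_{1\rho}}\right),
\qquad
\mathcal{L}_{\mathrm{self}} = \sum_i \bigl\| \hat{S}(\mathrm{TSL}_i) - S_i \bigr\|^2
```

Here $S_i$ are the acquired $T_{1\rho}$-weighted images at spin-lock times $\mathrm{TSL}_i$ and $\hat{S}$ is the image re-synthesized from the network's estimates, so no ground-truth $T_{1\rho}$ maps are needed for training.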
Generating temporally coherent, high-fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables joint training from image and video data, which we find reduces the variance of minibatch gradients and speeds up optimization. To generate long and higher-resolution videos, we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at https://video-diffusion.github.io/
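At a high level, such conditional extension can be read as guided ancestral sampling: steer each denoising step so that the model's reconstruction of the already-fixed frames agrees with those frames. The sketch below captures only that control flow; the paper's exact guidance weighting and noise schedule are not reproduced, and denoise is a stand-in callable supplied by the user.

```python
import torch

def extend_video(denoise, x_cond, x_shape, steps, w=2.0):
    """Hedged sketch of temporally extending a video with a diffusion
    model by guiding sampling toward consistency with already generated
    frames x_cond. Schematic only."""
    x = torch.randn(x_shape)                      # frames to be generated
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        xa_hat, xb_hat = denoise(x_cond, x, t)    # reconstructions of both blocks
        # penalize disagreement between the model's reconstruction of the
        # conditioning frames and the frames we are conditioning on
        err = ((x_cond - xa_hat) ** 2).sum()
        grad, = torch.autograd.grad(err, x)
        x = xb_hat.detach() - w * grad            # guided denoising step (schematic)
    return x.detach()
```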