Existing solutions to network scheduling typically assume that the instantaneous link rates are completely known before a scheduling decision is made, or consider a bandit setting where the true link quality is revealed only after the link has been used for data transmission. In practice, the decision maker can obtain relatively accurate channel information, e.g., through beamforming in mmWave networks, right before data transmission. However, frequent beamforming incurs a formidable overhead in densely deployed mmWave WLANs. In this paper, we consider the important problem of throughput optimization with joint link probing and scheduling. The problem is challenging even when the link rate distributions are known in advance (the offline setting), due to the need to balance the information gain from probing against the cost of reduced data transmission opportunities. We develop an approximation algorithm with guaranteed performance for the case where the probing decision is non-adaptive, and a dynamic programming based solution for the more challenging adaptive setting. We further extend our solutions to the online setting with unknown link rate distributions, developing a contextual-bandit based algorithm and deriving its regret bound. Numerical results using data traces collected from real-world mmWave deployments demonstrate the efficiency of our solutions.
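To make the probing/transmission trade-off concrete, here is a minimal sketch of the non-adaptive offline problem as we read it from the abstract; the discrete rate distributions, the slot-based window model, and the brute-force search over probe sets (which the paper's approximation algorithm is designed to avoid) are all our own illustrative assumptions.

```python
import itertools

def value_of_probe_set(probe_set, dists, T):
    """Expected throughput of non-adaptively probing `probe_set`, then
    transmitting for the remaining slots on the best option: the highest
    probed realized rate, or the best mean rate among unprobed links.
    dists[i] is a list of (rate, prob) pairs for link i; T is the window."""
    slots_left = T - len(probe_set)
    if slots_left <= 0:
        return 0.0
    unprobed_best_mean = max(
        (sum(r * p for r, p in d)
         for i, d in enumerate(dists) if i not in probe_set),
        default=0.0,
    )
    total = 0.0
    # Enumerate the joint realization of all probed links (toy-sized only).
    for outcome in itertools.product(*(dists[i] for i in probe_set)):
        prob, best = 1.0, unprobed_best_mean
        for rate, p in outcome:
            prob *= p
            best = max(best, rate)
        total += prob * best
    return slots_left * total

def best_nonadaptive_probe_set(dists, T, budget):
    """Brute-force baseline over all probe sets within the budget."""
    best = (0.0, ())
    for k in range(budget + 1):
        for s in itertools.combinations(range(len(dists)), k):
            v = value_of_probe_set(s, dists, T)
            if v > best[0]:
                best = (v, s)
    return best

dists = [[(1.0, 0.5), (4.0, 0.5)],   # bursty link
         [(2.0, 1.0)],               # steady link
         [(0.0, 0.7), (6.0, 0.3)]]   # mostly off, occasionally great
print(best_nonadaptive_probe_set(dists, T=10, budget=2))
```

Even this toy shows the tension the abstract describes: probing the bursty links raises the expected best rate, but every probe shrinks the remaining transmission window.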
Ensemble learning is a straightforward way to improve the performance of almost any machine learning algorithm. Existing deep ensemble methods usually naively train many different models and then aggregate their predictions. In our view, this is suboptimal in two respects: i) naively training multiple models adds a substantial computational burden, especially in the deep learning era; ii) purely optimizing each base model without considering their interactions limits the diversity of the ensemble and the attainable performance gains. We tackle these issues by proposing deep negative correlation classification (DNCC), in which the accuracy-diversity trade-off is systematically controlled by seamlessly decomposing the loss function into the accuracy of each individual model and its correlation with the ensemble. DNCC yields a deep classification ensemble whose individual estimators are both accurate and negatively correlated. Thanks to the optimized diversity, DNCC works well even with a shared network backbone, which significantly improves its efficiency compared with most existing ensemble systems. Extensive experiments on multiple benchmark datasets and network structures demonstrate the superiority of the proposed method.
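As a concrete illustration of the accuracy/diversity decomposition, below is a minimal sketch of a negative-correlation-style classification loss. This is our reading of the classic negative correlation learning penalty applied to ensemble heads, not necessarily the paper's exact DNCC objective.

```python
import torch
import torch.nn.functional as F

def nc_classification_loss(logits_list, target, lam=0.5):
    """Negative-correlation-style ensemble loss (sketch).
    logits_list: list of M tensors of shape (B, C), one per ensemble head.
    Each head is trained for accuracy while being pushed away from the
    ensemble mean, which encourages diversity among heads."""
    probs = torch.stack([F.softmax(z, dim=-1) for z in logits_list])  # (M, B, C)
    ens = probs.mean(dim=0)                                           # (B, C)
    # Accuracy term: average cross-entropy over individual heads.
    acc = sum(F.cross_entropy(z, target) for z in logits_list) / len(logits_list)
    # Classic NC penalty: sum_i (f_i - f_ens) * sum_{j != i} (f_j - f_ens)
    # = -sum_i ||f_i - f_ens||^2, so minimizing it rewards disagreement.
    diversity = -((probs - ens.unsqueeze(0)) ** 2).sum(dim=-1).mean()
    return acc + lam * diversity
```

With a shared backbone, only the lightweight heads differ, which is where the efficiency gain over training M full models comes from.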
Natural language interaction is a promising direction for democratizing 3D shape design. However, existing methods for text-driven 3D shape editing struggle to produce decoupled, local edits to 3D shapes. We address this problem by learning disentangled latent representations that ground language in 3D geometry. To this end, we propose a complementary set of tools comprising a novel network architecture, a disentanglement loss, and a new editing procedure. Additionally, to measure edit locality, we define a new metric that we call part-wise edit precision. We show that our method outperforms existing SOTA methods by 20% in terms of edit locality, and by up to 6.6% in terms of language-reference resolution accuracy. Our work suggests that solely by disentangling language representations, downstream 3D shape editing can become more local to the relevant parts, even though the model is never given explicit part-based supervision.
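For intuition, here is a hypothetical sketch of how an edit-locality score in the spirit of part-wise edit precision could be computed; the paper's exact definition may differ, and everything below (corresponding point clouds, the displacement-mass ratio) is our own assumption.

```python
import numpy as np

def partwise_edit_precision(before, after, part_mask):
    """Fraction of the total per-point geometric change that falls inside
    the part the edit was supposed to affect (illustrative definition).
    before, after: (N, 3) point clouds with corresponding points.
    part_mask: boolean (N,) marking points of the targeted part."""
    change = np.linalg.norm(after - before, axis=1)  # per-point displacement
    total = change.sum()
    if total == 0:
        return 1.0  # no change at all is trivially local
    return change[part_mask].sum() / total
```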
Sampling diverse programs from a code language model and reranking them by model likelihood is a popular method for code generation, but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
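Since the reranking rule itself is simple, a minimal sketch follows. How `coder_logp` and `reviewer_logp` are obtained by prompting and scoring is model-specific, and `is_executable` is a placeholder for whatever quick execution check is used.

```python
def coder_reviewer_rerank(samples):
    """samples: list of (program, coder_logp, reviewer_logp) triples, where
    coder_logp = log p(program | instruction) from the Coder and
    reviewer_logp = log p(instruction | program) from the Reviewer.
    Pick the program maximizing the sum of the two log-probabilities."""
    return max(samples, key=lambda s: s[1] + s[2])[0]

def rerank_with_filter(samples, is_executable):
    """Executability filtering first: drop programs that fail a quick
    execution check, falling back to the full pool if all are filtered."""
    kept = [s for s in samples if is_executable(s[0])] or samples
    return coder_reviewer_rerank(kept)
```

The Reviewer term penalizes degenerate programs that are likely under the Coder but do not actually explain the instruction.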
This paper introduces the shared task of summarizing documents in several creative domains, namely literary texts, movie scripts, and television scripts. Summarizing these creative documents requires making complex literary interpretations, as well as understanding non-trivial temporal dependencies in texts containing varied styles of plot development and narrative structure. This poses unique challenges that remain underexplored for text summarization systems. In this shared task, we introduce four sub-tasks and their corresponding datasets, focusing on summarizing books, movie scripts, primetime television scripts, and daytime soap opera scripts. We detail the process of curating these datasets for the task, as well as the metrics used for the evaluation of the submissions. As part of the CREATIVESUMM workshop at COLING 2022, the shared task attracted 18 submissions in total. We discuss the submissions and the baselines for each sub-task in this paper, along with directions for facilitating future work in the field.
Rapid on-site evaluation (ROSE) can significantly accelerate the diagnosis of pancreatic cancer by enabling immediate analysis of rapidly stained cytopathological images. Computer-aided diagnosis (CAD) could potentially address the shortage of pathologists for ROSE. However, the cancerous patterns vary greatly between different samples, which makes the CAD task extremely challenging. Moreover, due to varying staining quality and diverse types of acquisition devices, ROSE images exhibit complex perturbations in color distribution, brightness, and contrast. To address these challenges, we propose a shuffle-instance-based vision transformer (SI-ViT) approach, which can reduce the perturbations and enhance the modeling across instances. With the reorganized shuffled instances and their bag-level soft labels, the approach uses a regression head to make the model focus on the cells rather than on the various perturbations. Coupled with a classification head, the model can effectively recognize the general distribution patterns across different instances. The results show a significant improvement in classification accuracy together with more accurate attention regions, indicating that the diverse patterns of ROSE images are effectively extracted and the complex perturbations are greatly reduced. This also suggests the great potential of SI-ViT in analyzing cytopathological images. The code and experimental results are available at https://github.com/sagizty/mil-si.
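A toy sketch of the shuffle-instance idea as we read it from the abstract follows (it is not the released MIL-SI code): patch instances from different images are randomly reassembled into new bags, and each bag receives a soft label equal to the fraction of positive instances it contains, which the regression head is then trained to predict.

```python
import torch

def shuffle_instances(patches, labels):
    """Reassemble patch instances across a batch into shuffled bags.
    patches: (B, N, D) instance features from B images, N patches each.
    labels:  (B,) binary image labels, broadcast to every instance.
    Returns shuffled bags (B, N, D) and bag-level soft labels (B,)."""
    B, N, D = patches.shape
    inst = patches.reshape(B * N, D)
    inst_labels = labels.repeat_interleave(N).float()  # instance inherits image label
    perm = torch.randperm(B * N)
    bags = inst[perm].reshape(B, N, D)
    soft = inst_labels[perm].reshape(B, N).mean(dim=1)  # fraction of positives
    return bags, soft
```

Because a shuffled bag mixes cells from images with different staining and acquisition conditions, a head regressing the bag's positive fraction cannot rely on image-level perturbations and is pushed to attend to the cells themselves.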
In recent years, supervised deep learning has achieved great success, with predictive models trained from large amounts of fully labeled data. In practice, however, labeling such big data can be very costly and may even be impossible for privacy reasons. In this paper, we therefore aim to learn an accurate classifier without any class labels. More specifically, we consider the case of multiple sets of unlabeled data together with their class priors, i.e., the proportions of each class within each set. Under this problem setting, we first derive an unbiased estimator of the classification risk that can be computed from the given unlabeled sets, and theoretically analyze the generalization error of the learned classifier. We then find that the resulting classifier tends to overfit, as its empirical risk goes negative during training. To prevent overfitting, we further propose a partial-risk regularization that keeps the partial risks with respect to the unlabeled sets and classes above certain levels. Experiments demonstrate that our method effectively mitigates overfitting and outperforms state-of-the-art methods for learning from multiple unlabeled sets.
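To make the risk-rewriting step concrete, here is a hedged sketch for the simplest binary case with two unlabeled sets (the paper treats the general multi-set setting): each unlabeled density is a prior-weighted mixture of the class conditionals, so the class-conditional risks can be recovered by inverting a 2x2 mixing matrix. The clamping at the end is our simplified stand-in for the paper's partial-risk regularization.

```python
import torch
import torch.nn.functional as F

def uu_risk(logits1, logits2, theta1, theta2, pi, floor=None):
    """logits1/logits2: real-valued scores f(x) on unlabeled sets U1/U2
    whose positive-class priors are theta1 != theta2; pi is the test-time
    positive prior. Uses the logistic loss softplus(-y * f(x))."""
    lp1 = F.softplus(-logits1).mean()  # loss toward label +1 on U1
    ln1 = F.softplus(logits1).mean()   # loss toward label -1 on U1
    lp2 = F.softplus(-logits2).mean()
    ln2 = F.softplus(logits2).mean()
    d = theta1 - theta2
    # Invert the mixture: p_Ui = theta_i * p_pos + (1 - theta_i) * p_neg.
    risk_pos = ((1 - theta2) * lp1 - (1 - theta1) * lp2) / d
    risk_neg = (theta1 * ln2 - theta2 * ln1) / d
    if floor is not None:
        # Keep the recovered partial risks non-negative so the empirical
        # risk cannot go negative during training (regularized variant).
        risk_pos = risk_pos.clamp(min=floor)
        risk_neg = risk_neg.clamp(min=floor)
    return pi * risk_pos + (1 - pi) * risk_neg
```

The unregularized estimator is unbiased, but the negative coefficients in the inversion are exactly what lets the empirical risk dive below zero on finite data, motivating the floor.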
In this work, we build on recent advances in distributional reinforcement learning to provide a state-of-the-art distributional variant based on IQN. We achieve this by coupling the generator and discriminator functions of a GAN model with quantile regression, approximating the full quantile function of the state-return distribution. We demonstrate improved performance on the baseline of 57 Atari 2600 games. In addition, we use our algorithm to show state-of-the-art training performance for risk-sensitive policies in Atari games, via policy optimization and evaluation.
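The quantile-regression backbone such variants build on is standard; below is a sketch of the usual IQN-style quantile Huber loss (the GAN generator/discriminator coupling described above sits on top of this and is not shown).

```python
import torch

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """pred: (B, N) predicted return quantiles at fractions taus (B, N);
    target: (B, M) sampled target returns from the Bellman backup.
    Standard quantile Huber loss used by IQN-style agents."""
    td = target.unsqueeze(1) - pred.unsqueeze(2)        # (B, N, M) TD errors
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    # Asymmetric quantile weight |tau - 1{td < 0}| tilts each quantile
    # estimate toward its target fraction of the return distribution.
    weight = (taus.unsqueeze(2) - (td.detach() < 0).float()).abs()
    return (weight * huber / kappa).sum(dim=1).mean()
```

Risk-sensitive policies then follow by acting on a distorted expectation of the learned quantiles (e.g., averaging only the lower quantiles for risk aversion).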
A wide range of machine learning applications, from optimal transport to robust dimensionality reduction, can be cast as min-max optimization problems over Riemannian manifolds. Although many min-max algorithms have been analyzed in the Euclidean setting, translating these results to the Riemannian case has proven elusive. Zhang et al. [2022] recently showed that geodesically convex-concave min-max problems always admit saddle-point solutions. Inspired by this result, we study the performance gap between Riemannian and optimal Euclidean convex-concave algorithms. We answer this question in the negative, showing that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly convex-concave case, matching the Euclidean result. Our results also extend to the stochastic and non-smooth cases, where RCEG and Riemannian gradient descent ascent (RGDA) achieve near-optimal convergence rates, up to factors that depend on the curvature of the manifold.
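For reference, the corrected extragradient template, as we understand it (the paper's notation may differ), takes the following form for the saddle-point vector field.

```latex
% One RCEG step on z = (x, y) with vector field
% V(z) = (\operatorname{grad}_x f(z), -\operatorname{grad}_y f(z))
% and step size \eta:
\begin{align*}
  z_{t+1/2} &= \exp_{z_t}\!\bigl(-\eta\, V(z_t)\bigr), \\
  z_{t+1}   &= \exp_{z_{t+1/2}}\!\Bigl(\exp^{-1}_{z_{t+1/2}}(z_t)
               \;-\; \eta\, V(z_{t+1/2})\Bigr).
\end{align*}
```

In Euclidean space the exponential map is vector addition and its inverse is subtraction, so this reduces to the classical extragradient method; the inverse-exponential term taken from the extrapolated point is what "corrects" for curvature.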
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained on specialized clusters featuring fast, homogeneous interconnects, using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much larger amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Prior work studying heterogeneous, decentralized settings has focused on relatively small models that can be trained in a purely data-parallel manner, while state-of-the-art schemes for model-parallel foundation model training, such as Megatron, only consider homogeneous data-center settings. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates the different computational "tasklets" in foundation model training to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments representing different scenarios for learning over geo-distributed devices, simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8x faster than the prior state-of-the-art training system (Megatron).
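As a flavor of the search component, here is a generic evolutionary-algorithm sketch for the allocation problem; the tasklet granularity, the operators, and the toy cost function below are all our own stand-ins for the paper's formal cost model.

```python
import random

def evolve_allocation(num_tasklets, num_devices, cost,
                      pop=50, gens=200, mut_rate=0.1, seed=0):
    """Evolve allocations (one device index per tasklet) to minimize a
    user-supplied communication/computation cost function."""
    rng = random.Random(seed)

    def rand_alloc():
        return [rng.randrange(num_devices) for _ in range(num_tasklets)]

    def mutate(a):
        return [rng.randrange(num_devices) if rng.random() < mut_rate else g
                for g in a]

    def crossover(a, b):
        cut = rng.randrange(num_tasklets)
        return a[:cut] + b[cut:]

    population = [rand_alloc() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        elite = population[: pop // 4]          # keep the fittest quarter
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                    for _ in range(pop - len(elite))]
        population = elite + children
    return min(population, key=cost)

# Toy cost: adjacent pipeline stages placed on different devices pay a
# delay inversely proportional to the (assumed) pairwise bandwidth.
bandwidth = [[0, 1, 10],
             [1, 0, 1],
             [10, 1, 0]]

def toy_cost(alloc):
    return sum(0.0 if a == b else 1.0 / bandwidth[a][b]
               for a, b in zip(alloc, alloc[1:]))

print(evolve_allocation(num_tasklets=8, num_devices=3, cost=toy_cost))
```

Even this toy captures the core decision: communication-heavy neighbors should land on the same device or on a fast pairwise link, which is exactly what a heterogeneity-aware cost model rewards.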