Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extractive summarization, where the extracted summary also comes in time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Industrial recommender systems usually present a mixed list containing results from multiple subsystems. In practice, each subsystem is optimized with its own feedback data in order to avoid interference between different subsystems. However, we argue that such data usage may lead to sub-optimal online performance because of data sparsity. To alleviate this issue, we propose to extract knowledge from the super-domain that contains web-scale and long-term impression data, and further use it to assist the online recommendation task (the downstream task). To this end, we propose a novel industrial Knowledge Extraction and Plugging (KEEP) framework, a two-stage framework that includes 1) a supervised pre-training knowledge extraction module over the super-domain, and 2) a plug-in network that incorporates the extracted knowledge into the downstream model. This makes KEEP friendly to the incremental training of online recommendation. Moreover, we design an efficient empirical approach for KEEP and introduce our hands-on experience when implementing it in a large-scale industrial system. Experiments on two real-world datasets demonstrate that KEEP achieves promising results. Notably, KEEP has also been deployed on the display advertising system in Alibaba, bringing a lift of +5.4% CTR and +4.7% RPM.
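The two-stage idea can be illustrated with a minimal sketch (our own toy construction, not KEEP's actual architecture): a frozen knowledge extractor stands in for the module pre-trained on the super-domain, and a small plug-in layer merges its output into the downstream scorer without retraining the extractor.

```python
import numpy as np

rng = np.random.default_rng(5)
D = 8  # feature dimension (illustrative)

# Stage 1: a frozen "knowledge extractor" pretrained on the super-domain
# (here just a fixed random projection standing in for the pretrained module).
W_extract = rng.normal(size=(D, D))

def extract_knowledge(x):
    """Frozen mapping from raw features to extracted knowledge."""
    return np.tanh(x @ W_extract)

# Stage 2: a plug-in layer concatenates the frozen knowledge with the raw
# features; only this layer would be trained on downstream feedback data.
W_plug = rng.normal(size=(2 * D, 1))

def downstream_score(x):
    z = np.concatenate([x, extract_knowledge(x)])
    return 1.0 / (1.0 + np.exp(-(z @ W_plug)[0]))  # CTR-style score

score = downstream_score(rng.normal(size=D))
print(0.0 < score < 1.0)
```

Because the extractor is frozen, the downstream model can be trained incrementally without touching the super-domain stage.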
Nowadays, data-driven deep neural models have shown remarkable progress on click-through rate (CTR) prediction. Unfortunately, the effectiveness of such models may degrade when data is insufficient. To handle this issue, researchers often adopt exploration strategies that examine items based on estimated rewards, e.g., UCB or Thompson sampling. In the context of exploitation-and-exploration for CTR prediction, recent studies have attempted to use prediction uncertainty together with the model prediction as the reward score. However, we argue that such an approach may make the final ranking score deviate from the original distribution and thereby harm model performance in the online system. In this paper, we propose a novel exploration method named Adversarial Gradient-driven Exploration (AGE). Specifically, we propose a pseudo-exploration module to simulate the gradient-update process, which can approximate the influence that samples of to-be-explored items would have on the model. In addition, for better exploration efficiency, we propose a dynamic threshold unit to eliminate the effect of samples with low potential CTR. The effectiveness of our method is demonstrated on an open-access academic dataset. Meanwhile, AGE has also been deployed in a real-world display advertising platform, where all online metrics improved significantly.
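The pseudo-exploration idea can be sketched with a toy logistic scorer (a simplification under our own assumptions, not the paper's exact module): score a candidate as if the model had already taken one virtual gradient step on it with a positive (clicked) label. For a logistic model, that virtual update can only raise the candidate's score, which acts as an exploration bonus.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pseudo_explore_score(w, x, lr=0.5):
    """Score x under a simulated one-step gradient update with label=1."""
    p = sigmoid(w @ x)
    grad = (p - 1.0) * x           # gradient of log-loss at label 1
    w_virtual = w - lr * grad      # one virtual SGD step (w is not mutated)
    return sigmoid(w_virtual @ x)  # exploration-adjusted score

rng = np.random.default_rng(1)
w = rng.normal(size=5)   # illustrative model weights
x = rng.normal(size=5)   # illustrative candidate features
base = sigmoid(w @ x)
explored = pseudo_explore_score(w, x)
print(base <= explored)  # the virtual positive update never lowers the score
```

The monotone bonus follows because the virtual step adds `lr * (1 - p) * ||x||^2` to the logit, which is non-negative.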
Feature interaction has been recognized as an important problem in machine learning, and it is particularly essential for click-through rate (CTR) prediction tasks. In recent years, deep neural networks (DNNs), which can automatically learn implicit nonlinear interactions from raw sparse features, have been widely used in industrial CTR prediction tasks. However, the implicit feature interactions learned by DNNs cannot fully retain, without loss, the complete representation capacity of the original, empirical feature interactions (e.g., the Cartesian product). For example, simply learning the explicit Cartesian product representation of feature A and feature B, <A, B>, as a new feature can outperform previous implicit feature-interaction models, including factorization machine (FM)-based models and their variants. In this paper, we propose a Co-Action Network (CAN) to approximate explicit pairwise feature interactions without introducing too many additional parameters. More specifically, given feature A and its associated feature B, their interaction is modeled by learning two sets of parameters: 1) the embedding of feature A, and 2) a multi-layer perceptron (MLP) that represents feature B. The approximate feature interaction is obtained by passing the embedding of feature A through the MLP network of feature B. We refer to this kind of pairwise feature interaction as feature co-action, and such a co-action unit provides a very strong capacity for fitting complex feature interactions. Experimental results on public and industrial datasets show that CAN outperforms state-of-the-art CTR models and the Cartesian product method. Moreover, CAN has been deployed in the display advertising system in Alibaba, obtaining a 12% improvement on CTR and 8% on Revenue Per Mille (RPM), which is a great improvement to the business.
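The co-action mechanism described above can be sketched as follows (dimensions, names, and the two-layer shape are our own illustrative choices, not the paper's configuration): feature B supplies the weights of a micro-MLP, and feature A's embedding is fed through it to produce the interaction representation.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_A = 8    # embedding size of feature A (illustrative)
HIDDEN = 4   # hidden width of the micro-MLP induced by feature B

def co_action_unit(emb_a, params_b):
    """Feed feature A's embedding through a tiny MLP whose weights are
    reshaped slices of feature B's learned parameter vector."""
    w1 = params_b[: DIM_A * HIDDEN].reshape(DIM_A, HIDDEN)
    w2 = params_b[DIM_A * HIDDEN:].reshape(HIDDEN, DIM_A)
    h = np.tanh(emb_a @ w1)   # nonlinearity between the two layers
    return h @ w2             # co-action representation of <A, B>

emb_a = rng.normal(size=DIM_A)                            # feature A's embedding
params_b = rng.normal(size=DIM_A * HIDDEN + HIDDEN * DIM_A)  # feature B's parameters
out = co_action_unit(emb_a, params_b)
print(out.shape)  # (8,)
```

The key design point is that each A-B pair interacts through B's own parameters, rather than through a single shared interaction function, while avoiding one embedding per Cartesian-product pair.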
Brain midline shift (MLS) is one of the most critical factors to be considered for clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require intensive labeling with millimeter-level measurement but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a large number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how an image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Experiments on a real clinical brain hemorrhage dataset show that our method achieves state-of-the-art performance and generates interpretable deformation fields.
Adversarial imitation learning (AIL) has become a popular alternative to supervised imitation learning that reduces the distribution shift suffered by the latter. However, AIL requires effective exploration during an online reinforcement learning phase. In this work, we show that the standard, naive approach to exploration can manifest as a suboptimal local maximum if a policy learned with AIL sufficiently matches the expert distribution without fully learning the desired task. This can be particularly catastrophic for manipulation tasks, where the difference between an expert and a non-expert state-action pair is often subtle. We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple exploratory, auxiliary tasks in addition to a main task. The addition of these auxiliary tasks forces the agent to explore states and actions that standard AIL may learn to ignore. Additionally, this particular formulation allows for the reusability of expert data between main tasks. Our experimental results in a challenging multitask robotic manipulation domain indicate that LfGP significantly outperforms both AIL and behaviour cloning, while also being more expert sample efficient than these baselines. To explain this performance gap, we provide further analysis of a toy problem that highlights the coupling between a local maximum and poor exploration, and also visualize the differences between the learned models from AIL and LfGP.
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
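The central idea of hyperedge-dependent node embeddings can be illustrated with a toy construction (ours, not HNN's actual message-passing layers): a node's embedding is conditioned on the specific hyperedge through which it is viewed, so the same node receives a different representation in each of its hyperedges.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4  # embedding dimension (illustrative)

# Toy hypergraph: hyperedge -> member nodes
hyperedges = {"e1": ["a", "b", "c"], "e2": ["b", "c", "d"]}
node_emb = {n: rng.normal(size=D) for n in "abcd"}
edge_emb = {e: rng.normal(size=D) for e in hyperedges}
W = rng.normal(size=(2 * D, D))  # shared projection

def hyperedge_dependent_embedding(node, edge):
    """Embed a node conditioned on one of its hyperedges: concatenate
    the node and hyperedge embeddings, then project."""
    z = np.concatenate([node_emb[node], edge_emb[edge]])
    return np.tanh(z @ W)

# Node "b" receives a different embedding for each incident hyperedge.
b_in_e1 = hyperedge_dependent_embedding("b", "e1")
b_in_e2 = hyperedge_dependent_embedding("b", "e2")
print(np.allclose(b_in_e1, b_in_e2))  # False
```

A single embedding per node would force "b" to look identical in both contexts; conditioning on the hyperedge removes that restriction.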
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method that incorporates flow information into frame-wise representations to exploit the temporal redundancy across frames in videos, inspired by standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
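The flow idea borrowed from standard codecs can be sketched in miniature (an integer-valued, nearest-neighbor toy of our own, not FFNeRV's learned flow): a flow field tells each pixel of the current frame where to look in a reference frame, so only the residual needs to be represented.

```python
import numpy as np

def warp(frame, flow):
    """Warp a 2D frame by an integer flow field via nearest-neighbor lookup."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 0], 0, h - 1)  # where each pixel comes from
    src_x = np.clip(xs - flow[..., 1], 0, w - 1)
    return frame[src_y, src_x]

rng = np.random.default_rng(4)
prev = rng.normal(size=(8, 8))  # reference frame (illustrative values)
# A constant flow of (1, 2) pulls each pixel from one row up, two columns left.
flow = np.broadcast_to(np.array([1, 2]), (8, 8, 2))
warped = warp(prev, flow)
```

When motion is well captured by the flow, `warped` already matches most of the current frame, which is the temporal redundancy a frame-wise representation alone cannot exploit.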
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by either modifying the graph structure or the objective function, without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first constructs fair neighborhoods for any arbitrary node in a graph, and the second adapts these fair neighborhoods to better capture certain application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach achieves not only better fairness but also higher accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
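One simple way to realize the first component, constructing a fair neighborhood, can be sketched as stratified sampling over a sensitive attribute (a toy of our own; the paper's construction may differ): sample an equal number of neighbors from each attribute group.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy graph: node -> neighbors; each neighbor has a binary sensitive attribute.
neighbors = {"v": ["a", "b", "c", "d", "e", "f"]}
attr = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 1, "f": 1}

def fair_neighborhood(node, k):
    """Sample a size-k neighborhood balanced across the sensitive attribute."""
    groups = {}
    for n in neighbors[node]:
        groups.setdefault(attr[n], []).append(n)
    per_group = k // len(groups)          # equal quota per attribute group
    chosen = []
    for members in groups.values():
        take = min(per_group, len(members))
        chosen += list(rng.choice(members, size=take, replace=False))
    return chosen

hood = fair_neighborhood("v", 4)
counts = [sum(attr[n] == g for n in hood) for g in (0, 1)]
print(counts)  # [2, 2]
```

The second component would then re-weight these quotas to admit application-dependent bias toward particular attributes or neighbors.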
Quantum many-body problems are some of the most challenging problems in science and are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors. The combination of neural networks (NN) for representing quantum states, coupled with the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems. However, the run-time of this approach scales quadratically with the number of simulated particles, constraining the practically usable NNs to, in machine-learning terms, minuscule sizes (<10M parameters). Considering the many breakthroughs brought by extremely large NNs at the 1B+ parameter scale in other domains, lifting this constraint could significantly expand the set of quantum systems we can accurately simulate on classical computers, both in size and complexity. We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm, which are the source of the quadratic scaling. In our preliminary experiments, we demonstrate the ability of VQ-NQS to reproduce the ground state of the 2D Heisenberg model across various system sizes, while reporting a significant reduction of about ${\times}10$ in the number of FLOPs in the local-energy calculation.
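How vector quantization can exploit redundancy in a repeated expensive computation can be sketched as follows (a generic caching toy under our own assumptions, not the VQ-NQS architecture): map many inputs to a small codebook and evaluate the costly function once per code rather than once per input.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(vectors, codebook):
    """Assign each vector to its nearest codebook entry (vector quantization)."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

def expensive_fn(v):
    return np.sin(v).sum()  # stand-in for a costly local-energy term

vectors = rng.normal(size=(1000, 6))   # many similar configurations
codebook = rng.normal(size=(16, 6))    # small learned codebook (illustrative)
codes = quantize(vectors, codebook)

# Evaluate the costly function once per code instead of once per vector.
cache = {c: expensive_fn(codebook[c]) for c in np.unique(codes)}
approx = np.array([cache[c] for c in codes])
print(len(cache), approx.shape)  # at most 16 evaluations for 1000 inputs
```

The trade-off is the usual quantization error: the savings come from accepting the codebook entry as a proxy for each nearby input.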