This work studies the training of one-hidden-layer overparameterized ReLU networks via gradient descent in the neural tangent kernel (NTK) regime, where, unlike in previous works, the networks' biases are trainable and are initialized to some constant rather than zero. The first set of results characterizes the convergence of the network's gradient descent dynamics: surprisingly, the network after sparsification can converge as fast as the original network. The contribution over previous work is twofold: not only are the biases allowed to be updated by gradient descent, but a finer analysis is also given, improving the width required to ensure the network's closeness to its NTK. Secondly, a generalization bound for the trained network is provided: a width-sparsity dependence is established that yields a sparsity-dependent localized Rademacher complexity and a generalization bound matching previous analyses (up to logarithmic factors). As a by-product, if the bias initialization is chosen to be zero, the width requirement improves upon the previous bound for shallow networks' generalization. Lastly, since the generalization bound depends on the smallest eigenvalue of the limiting NTK, and the bounds from previous works yield vacuous generalization, this work further studies the least eigenvalue of the limiting NTK. Surprisingly, while trainable biases are not shown to be necessary, they help to identify a nice data-dependent region where a much finer analysis of the NTK's smallest eigenvalue can be conducted; this leads to a much sharper lower bound than the previously known worst-case bound and, consequently, to a non-vacuous generalization bound.
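For concreteness, the setting can be written in standard NTK notation; the display below is a common convention, not necessarily the paper's exact formulation: a width-$m$ one-hidden-layer ReLU network with trainable, constant-initialized biases, together with its tangent kernel whose infinite-width limit governs both convergence and generalization.

```latex
% A sketch in standard NTK notation (not necessarily the paper's
% exact formulation): width-m one-hidden-layer ReLU network with
% trainable biases b_r initialized to a constant beta.
f(x) \;=\; \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r \,
           \sigma\!\left( w_r^{\top} x + b_r \right),
\qquad \sigma(z) = \max(z, 0), \qquad b_r = \beta \text{ at initialization}.

% The finite-width NTK and its infinite-width limit; the convergence
% rate and the generalization bound both depend on the least
% eigenvalue of the limiting kernel.
\Theta(x, x') \;=\; \left\langle \nabla_{(W, b)} f(x),\,
                                 \nabla_{(W, b)} f(x') \right\rangle
\;\xrightarrow{\; m \to \infty \;}\; \Theta^{\infty}(x, x'),
\qquad \lambda_{0} := \lambda_{\min}\!\left( \Theta^{\infty} \right) > 0.
```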
Real-world environments are highly dynamic and full of uncertainty, which hinders the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. IDM should therefore have the capability to continuously learn new skills and generalize efficiently across a wide range of applications. IDM benefits from any new approach or theoretical breakthrough that exhibits Artificial General Intelligence (AGI) by breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as sequence decoding tasks using the Transformer architecture; this would be a promising solution for advancing the applications of IDM to more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1), with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and the traveling salesman problem. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
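To make "decision-making as sequence decoding" concrete, here is a minimal Decision-Transformer-style sketch in PyTorch. It illustrates the general idea only; it is not DB1's actual architecture, and all names and sizes are hypothetical.

```python
# Minimal sketch of decision-making as sequence decoding: interleave
# returns-to-go, observations, and actions into one token stream and
# decode the next action with a causally masked Transformer.
import torch
import torch.nn as nn

class SequenceDecisionModel(nn.Module):
    def __init__(self, obs_dim=17, act_dim=6, d_model=128, n_layers=2):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_obs = nn.Linear(obs_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, act_dim)  # predict the next action

    def forward(self, rtg, obs, act):
        # Interleave tokens as (R_1, o_1, a_1, R_2, o_2, a_2, ...).
        B, T, _ = obs.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_obs(obs), self.embed_act(act)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        # Causal mask so each token attends only to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.decoder(tokens, mask=mask)
        # Read the action prediction off each observation token o_t.
        return self.head(h[:, 1::3])

model = SequenceDecisionModel()
rtg = torch.randn(4, 10, 1)    # returns-to-go
obs = torch.randn(4, 10, 17)   # observations
act = torch.randn(4, 10, 6)    # previous actions
pred_actions = model(rtg, obs, act)  # shape (4, 10, 6)
```

Under this framing, tasks as different as game playing and robotic control differ only in how their observations and actions are tokenized, which is what allows a single model to span many tasks.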
Recent years have witnessed tremendous progress in 3D GANs for generating view-consistent, photo-realistic radiance fields. Yet, high-quality generation of human radiance fields remains challenging, partially due to the limited human-related priors adopted in existing methods. We present HumanGen, a novel 3D human generation scheme with detailed geometry and $\text{360}^{\circ}$ realistic free-view rendering. It explicitly marries 3D human generation with various priors from the 2D generator and 3D reconstructor of humans through the design of an "anchor image". We introduce a hybrid feature representation that uses the anchor image to bridge the latent space of HumanGen with the existing 2D generator. We then adopt a two-pronged design to disentangle the generation of geometry and appearance. With the aid of the anchor image, we adapt a 3D reconstructor for fine-grained detail synthesis and propose a two-stage blending scheme to boost appearance generation. Extensive experiments demonstrate our effectiveness for state-of-the-art 3D human generation in terms of geometry details, texture quality, and free-view performance. Notably, HumanGen can also incorporate various off-the-shelf 2D latent editing methods, seamlessly lifting them into 3D.
System auditing has emerged as a key approach for monitoring system call events and investigating sophisticated attacks. Based on the collected audit logs, research has proposed searching for attack patterns or tracking the causal dependencies of system events to reveal the attack sequence. However, existing approaches either cannot reveal long-range attack sequences or suffer from the dependency explosion problem due to a lack of focus on attack-relevant parts, and thus are insufficient for investigating complex attacks. To bridge the gap, we propose Zebra, a system that synergistically integrates attack pattern search and causal dependency tracking for efficient attack investigation. With Zebra, security analysts can alternate between search and tracking to reveal the entire attack sequence in a progressive, user-guided manner, while mitigating the dependency explosion problem by prioritizing the attack-relevant parts. To enable this, Zebra provides (1) an expressive and concise domain-specific language, Tstl, for performing various types of search and tracking analyses, and (2) an optimized language execution engine for efficient execution over large amounts of audit data. Evaluations on a broad set of attack cases demonstrate the effectiveness of Zebra in facilitating timely attack investigations.
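The abstract does not show Tstl's syntax, so the sketch below illustrates the underlying causal-dependency-tracking primitive in plain Python instead. The event schema, the example log, and the traversal bounds are all hypothetical, not Zebra's actual semantics.

```python
# Illustrative backward causal-dependency tracking over audit events.
# Starting from a point-of-interest entity, follow the events that
# produced it backwards in time to recover a candidate attack sequence.
from collections import deque

# Each audit event: (subject, operation, object, timestamp).
events = [
    ("bash", "exec", "curl", 100),
    ("curl", "write", "/tmp/payload", 101),
    ("bash", "exec", "/tmp/payload", 102),
    ("/tmp/payload", "read", "/etc/passwd", 103),
]

def backward_track(poi, before, max_depth=3):
    """Follow dependencies backwards from `poi`, visiting only events
    older than `before` (causal ordering) and bounding the depth to
    keep dependency explosion in check."""
    frontier, seen, trace = deque([(poi, before, 0)]), set(), []
    while frontier:
        entity, t, depth = frontier.popleft()
        if depth > max_depth or entity in seen:
            continue
        seen.add(entity)
        for subj, op, obj, ts in events:
            # An event is relevant if it touched `entity` earlier.
            if obj == entity and ts < t:
                trace.append((subj, op, obj, ts))
                frontier.append((subj, ts, depth + 1))
    return sorted(trace, key=lambda e: e[3])

# Start from the sensitive read and recover the chain leading to it.
for step in backward_track("/etc/passwd", before=200):
    print(step)
```

In a Zebra-style workflow, an analyst would interleave such tracking steps with pattern searches, using each result to scope the next query to the attack-relevant parts of the log.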
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed descriptions are provided in this paper.
Many problems in causal inference and economics can be formulated in the framework of conditional moment models, which characterize the target function through a collection of conditional moment restrictions. For nonparametric conditional moment models, efficient estimation often relies on conditions imposed a priori on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used. In this work, we address this issue by proposing a procedure that automatically learns representations with controlled measures of ill-posedness. Our method approximates a linear representation defined by the spectral decomposition of a conditional expectation operator, which can be used for kernelized estimators and is known to facilitate minimax-optimal estimation in certain settings. We show that this representation can be efficiently estimated from data, and establish $L_2$ consistency for the resulting estimator. We evaluate the proposed method on proximal causal inference tasks, exhibiting promising performance on high-dimensional, semi-synthetic data.
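The objects involved have a standard formulation; the notation below follows common conventions in this literature, not necessarily the paper's.

```latex
% Common conventions (not necessarily the paper's notation): the
% target h_0 satisfies conditional moment restrictions, and the
% conditional expectation operator T admits a singular value
% decomposition whose spectral decay measures ill-posedness.
\mathbb{E}\left[ \rho(Z; h_0) \,\middle|\, X \right] = 0 \quad \text{a.s.},
\qquad
(T h)(X) := \mathbb{E}\left[ h(W) \,\middle|\, X \right]
          = \sum_{j \ge 1} \sigma_j \, \langle h, \varphi_j \rangle \, \psi_j(X).
```

The learned representation approximates the leading singular functions $\varphi_1, \dots, \varphi_k$, so restricting estimation to their span keeps the relevant measure of ill-posedness (driven by the decay of the $\sigma_j$) under control.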
This paper presents a methodology for combining programming and mathematics to optimize elevator wait times. Based on simulated user data generated according to the canonical three-peak model of elevator traffic, we first develop a naive model from an intuitive understanding of the logic behind elevators. We take into consideration a general array of features, including capacity, acceleration, and maximum wait-time thresholds, to adequately model realistic circumstances. Using the same evaluation framework, we proceed to develop a Deep Q-Learning model in an attempt to match the hard-coded naive approach to elevator control. Throughout the majority of the paper, we work under a Markov Decision Process (MDP) schema, but later explore how this assumption fails to characterize the overall, highly stochastic Elevator Group Control System (EGCS).
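A toy sketch of the Deep Q-Learning setup for single-elevator control follows. The state encoding, action set, and reward here are simplified assumptions for illustration; the paper's feature set (capacity, acceleration, wait-time thresholds) is richer.

```python
# Toy DQN formulation of elevator control as an MDP: the agent observes
# its position and pending hall calls, and acts to minimize wait time.
import random
import torch
import torch.nn as nn

N_FLOORS = 10
# State: one-hot elevator position + a pending-call flag per floor.
STATE_DIM = 2 * N_FLOORS
ACTIONS = ["up", "down", "open"]  # simplified action set

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, len(ACTIONS)),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state, eps=0.1):
    """Epsilon-greedy action selection over the Q-network."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, gamma=0.99):
    """One-step temporal-difference update. The reward would be, e.g.,
    the negative total passenger wait time accrued during the step."""
    target = reward + gamma * q_net(next_state).max().detach()
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The MDP assumption enters through `td_update`: the one-step target presumes the next state summarizes everything relevant, which breaks down once the stochastic, multi-car EGCS dynamics are taken into account.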
In computer music and psychoacoustics, the relationship between perceived loudness and the physical properties of sound is an important topic. Early studies of "equal-loudness contours" date back to the 1920s, and loudness measured as a function of intensity and frequency has been revised many times since. However, most studies focus only on synthesized sounds, and there is little well-founded theory for natural tones. To this end, we study the theory and applications of natural-tone perception in this paper by modeling piano tones. The theoretical part includes: 1) an accurate measurement of piano equal-loudness contours over pitch, and 2) a machine learning model capable of inferring loudness purely from spectral features, trained on human subject measurements. As for the application, we apply our theory to piano performance control transfer, in which we adjust the MIDI velocities on two different player pianos (in different acoustic environments) to achieve the same perceptual effect. Experiments show that both our theoretical loudness modeling and the corresponding performance control transfer algorithm significantly outperform their baselines.
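A minimal sketch of the control-transfer idea: fit a loudness model per piano, then pick the velocity on the target piano whose predicted loudness matches the source. The features, regressor, and synthetic data below are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch: per-piano loudness regression and velocity matching.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def fit_loudness_model(velocities, pitches, loudness):
    """Regress perceived loudness on (MIDI velocity, pitch)."""
    X = np.column_stack([velocities, pitches])
    return GradientBoostingRegressor().fit(X, loudness)

# Synthetic stand-in data for two pianos in different rooms.
v = rng.integers(20, 110, 500)
p = rng.integers(21, 109, 500)
loud_a = 0.5 * v + 0.05 * p + rng.normal(0, 1, 500)  # piano A
loud_b = 0.4 * v + 0.08 * p + rng.normal(0, 1, 500)  # piano B
model_a = fit_loudness_model(v, p, loud_a)
model_b = fit_loudness_model(v, p, loud_b)

def transfer_velocity(v_src, pitch):
    """Find the velocity on piano B matching piano A's loudness."""
    target = model_a.predict([[v_src, pitch]])[0]
    candidates = np.arange(1, 128)
    preds = model_b.predict(
        np.column_stack([candidates, np.full(127, pitch)]))
    return int(candidates[np.argmin(np.abs(preds - target))])

print(transfer_velocity(v_src=64, pitch=60))
```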
In this paper, we consider the problem of bit allocation in neural video compression (NVC). Due to the frame reference structure, current NVC methods that use the same rate-distortion (R-D) trade-off parameter $\lambda$ for all frames are suboptimal, which creates the need for bit allocation. Unlike previous methods based on heuristic and empirical R-D models, we propose to solve this problem through gradient-based optimization. Specifically, we first propose a continuous bit implementation method based on semi-amortized variational inference (SAVI). Then, by changing the SAVI objective, we propose a pixel-level implicit bit allocation method using iterative optimization. Furthermore, we derive an accurate R-D model based on the differentiability of NVC. We show the optimality of our method by demonstrating its equivalence to bit allocation with the accurate R-D model. Experimental results show that our method significantly improves NVC methods and outperforms existing bit allocation methods. Our approach is a plug-in for all differentiable NVC methods and can be adopted directly on existing pre-trained models.
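A sketch of the objective involved, in standard R-D notation (the paper's exact formulation may differ):

```latex
% Encoding every frame with one shared lambda optimizes
\min \; \sum_{t=1}^{T} \left( R_t + \lambda \, D_t \right),
% but under a frame reference structure the latent y_t of frame t also
% affects all later frames, so a SAVI-style scheme instead optimizes
% the latents directly by gradient descent through the codec:
y_t \;\leftarrow\; y_t - \eta \, \nabla_{y_t}
      \sum_{t' \ge t} \left( R_{t'}(y_{\le t'}) + \lambda \, D_{t'}(y_{\le t'}) \right).
```

Because the gradient flows through future frames, each frame implicitly receives its own effective trade-off, which is what heuristic per-frame $\lambda_t$ schedules can only approximate.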
3D-aware generative models have demonstrated remarkable performance in generating 3D neural radiance fields (NeRF) from collections of monocular 2D images, even for topology-varying object categories. However, these methods still lack the ability to separately control the shape and appearance of the objects in the generated radiance fields. In this paper, we propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations. Our method generates deformable radiance fields, which build dense correspondences between the density fields of objects and encode their appearances in a shared template field. The disentanglement is achieved in an unsupervised manner, without introducing extra labels into standard 3D-aware GAN training. We also develop an effective image inversion scheme for reconstructing the radiance field of an object in a given monocular image and manipulating its shape and appearance. Experiments show that our method can successfully learn the generative model from unstructured monocular images, and disentangle shape and appearance well for objects with large topological variance, such as chairs. The model trained on synthetic data can faithfully reconstruct the real object in a given single image and achieve high-quality texture and shape editing results.
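One plausible way to write the deformable-template decomposition described above (illustrative notation, not necessarily the paper's):

```latex
% A sampled point x is warped by a shape-conditioned deformation into
% a shared template field, which returns density and appearance.
x' = x + \Delta\!\left( x;\, z_{\text{shape}} \right),
\qquad
\left( \sigma(x),\, c(x) \right) = T\!\left( x';\, z_{\text{app}} \right),
```

so dense correspondences between two objects follow from warping their points to the same template coordinates, while shape and appearance are controlled by the separate latent codes $z_{\text{shape}}$ and $z_{\text{app}}$.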