Recently, deep neural networks, which require large amounts of annotated samples, have been widely applied to nuclei instance segmentation of H\&E stained pathology images. However, it is inefficient and unnecessary to label every pixel in a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial for training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. The experimental results show that our proposed method can match the performance of a fully-supervised baseline while annotating less than 5% of the pixels on some benchmarks.
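The abstract does not spell out how the consistency-based selection is computed. A minimal sketch of one plausible realization, assuming a disagreement-across-augmentations criterion (all names such as `model`, `patches`, and `n_views` are hypothetical, not the paper's actual interface), might look like this:

```python
import torch

def consistency_score(model, patch, n_views=4):
    """Score a patch by prediction disagreement across augmented views.

    Each view transform (here, flips) is inverted on the prediction so
    that all probability maps are compared in the original frame.
    Higher variance = less consistent = more informative to annotate.
    """
    model.eval()
    preds = []
    with torch.no_grad():
        for _ in range(n_views):
            flip_w = torch.rand(1).item() < 0.5
            flip_h = torch.rand(1).item() < 0.5
            v = patch
            if flip_w:
                v = torch.flip(v, dims=[-1])
            if flip_h:
                v = torch.flip(v, dims=[-2])
            p = torch.sigmoid(model(v.unsqueeze(0)))  # (1, C, H, W)
            # Undo the flips so predictions align pixel-wise.
            if flip_h:
                p = torch.flip(p, dims=[-2])
            if flip_w:
                p = torch.flip(p, dims=[-1])
            preds.append(p)
    probs = torch.cat(preds, dim=0)  # (n_views, C, H, W)
    return probs.var(dim=0).mean().item()

def select_patches(model, patches, k):
    """Pick the k patches whose predictions are least consistent."""
    scores = [consistency_score(model, p) for p in patches]
    order = sorted(range(len(patches)), key=lambda i: -scores[i])
    return [patches[i] for i in order[:k]]
```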
Recently, domain-specific PLMs have been proposed to boost task performance in specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs on domain-specific corpora. However, this Domain-Adaptive Pre-Training (DAPT; Gururangan et al. (2020)) tends to forget the general knowledge previously acquired by general PLMs, which leads to catastrophic forgetting and sub-optimal performance. To alleviate this problem, we propose a new framework, the General Memory Augmented Pre-trained Language Model (G-MAP), which augments the domain-specific PLM with a memory representation built from a frozen general PLM, without losing any general knowledge. Specifically, we propose a new memory-augmented layer and, based on it, explore different augmentation strategies to build the memory representation and then adaptively fuse it into the domain-specific PLM. We demonstrate the effectiveness of G-MAP on various domains (biomedical and computer-science publications, news, and reviews) and different kinds of tasks (text classification, QA, NER), and the extensive results show that the proposed G-MAP achieves SOTA results on all tasks.
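As a rough illustration of what a memory-augmented layer could look like, here is a hedged PyTorch sketch in which domain hidden states cross-attend over a frozen general PLM's hidden states and are fused back through a learned gate. The module name, gating scheme, and shapes are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """Fuse a frozen general-PLM "memory" into a domain-specific layer."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, domain_h, memory_h):
        # domain_h: (B, T, D) from the domain-specific PLM layer.
        # memory_h: (B, T, D) from the frozen general PLM (no gradients).
        mem, _ = self.cross_attn(query=domain_h,
                                 key=memory_h.detach(),
                                 value=memory_h.detach())
        # Learned gate decides, per position, how much memory to admit.
        g = torch.sigmoid(self.gate(torch.cat([domain_h, mem], dim=-1)))
        return self.norm(domain_h + g * mem)
```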
LiDAR and camera are two complementary sensors for 3D perception in autonomous driving. LiDAR point clouds provide accurate spatial and geometric information, while RGB images offer texture and color data for contextual reasoning. To exploit LiDAR and camera jointly, existing fusion methods tend to align each 3D point with a single projected image pixel based on calibration, i.e., a one-to-one mapping. However, the performance of these methods depends heavily on the calibration quality, which is sensitive to the temporal and spatial synchronization of the sensors. Therefore, we propose a Dynamic Cross Attention (DCA) module with a novel one-to-many cross-modality mapping, which learns multiple offsets from the initial projection to its neighborhood and thus develops tolerance to calibration error. Moreover, a \textit{dynamic query enhancement} is proposed to perceive the model-independent calibration, further strengthening DCA's tolerance to the initial misalignment. The whole fusion architecture, named Dynamic Cross Attention Network (DCAN), exploits multi-level image features and adapts to multiple representations of point clouds, which enables DCA to serve as a plug-in fusion module. Extensive experiments on nuScenes and KITTI demonstrate the effectiveness of DCA, and the proposed DCAN outperforms state-of-the-art methods on the nuScenes detection challenge.
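To make the one-to-many mapping concrete, the following is a hedged sketch of a deformable-style cross-attention step: each 3D point predicts K offsets around its calibrated projection, bilinearly samples image features at those locations, and aggregates them with learned attention weights. The offset range, K, and all module names are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicCrossAttention(nn.Module):
    """Sketch of a one-to-many point-to-pixel mapping (not the paper's code)."""
    def __init__(self, point_dim, k=4):
        super().__init__()
        self.k = k
        self.offset = nn.Linear(point_dim, 2 * k)  # (dx, dy) per sample
        self.weight = nn.Linear(point_dim, k)      # attention logits

    def forward(self, point_feat, img_feat, proj_uv):
        # point_feat: (N, Cp); img_feat: (1, Ci, H, W)
        # proj_uv: (N, 2) initial projections, normalized to [-1, 1].
        N = point_feat.shape[0]
        # Small learned offsets around the calibrated projection.
        offsets = self.offset(point_feat).view(N, self.k, 2).tanh() * 0.1
        locs = (proj_uv.unsqueeze(1) + offsets).clamp(-1, 1)  # (N, K, 2)
        grid = locs.view(1, N * self.k, 1, 2)
        sampled = F.grid_sample(img_feat, grid, align_corners=False)
        sampled = sampled.view(img_feat.shape[1], N, self.k).permute(1, 2, 0)
        attn = self.weight(point_feat).softmax(dim=-1)        # (N, K)
        return (attn.unsqueeze(-1) * sampled).sum(dim=1)      # (N, Ci)
```

Because the offsets are learned rather than fixed by calibration, a slightly misaligned initial projection can still land some of the K samples on the correct pixels, which is the claimed source of tolerance to calibration error.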
Continual learning (CL) learns a sequence of new tasks, like humans do, with the goal of achieving better stability (S, remembering past tasks) and plasticity (P, adapting to new tasks). Since past training data are unavailable, it is valuable to explore the difference in the influence of training examples on S and P, which may improve the learning pattern toward a better S-P trade-off. Inspired by influence functions (IF), we first study example influence by adding a perturbation to an example's weight and computing the influence derivation. To avoid the storage and computational burden of the Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm to simulate the two key steps of the IF computation and obtain the S- and P-aware example influence. In addition, we propose to fuse the two kinds of example influence by solving a dual-objective optimization problem, obtaining a fused influence toward S-P Pareto optimality. The fused influence can be used to control the model update and to optimize the storage of rehearsal examples. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task-incremental and class-incremental benchmark CL datasets.
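The exact MetaSP steps are not given in the abstract; the sketch below illustrates only the underlying idea of Hessian-free example influence, approximated here by first-order gradient alignment between one example and a stability (rehearsal) batch versus a plasticity (current-task) batch. Function and variable names are hypothetical:

```python
import torch

def example_influence(model, loss_fn, example, s_batch, p_batch):
    """First-order proxy for S- and P-aware example influence.

    Instead of the Hessian inverse used by exact influence functions,
    approximate influence as the dot product between a per-example
    gradient and the gradients of a stability objective (rehearsal
    buffer) and a plasticity objective (current task).
    """
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(loss):
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    x, y = example
    g_ex = flat_grad(loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)))
    g_s = flat_grad(loss_fn(model(s_batch[0]), s_batch[1]))
    g_p = flat_grad(loss_fn(model(p_batch[0]), p_batch[1]))
    # Positive alignment => training on this example helps that objective.
    return torch.dot(g_ex, g_s).item(), torch.dot(g_ex, g_p).item()
```

The two returned scores could then be fused, e.g., as a convex combination whose weight comes from the dual-objective optimization the paper describes, yielding a single influence value per example.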
Machine learning methods have revolutionized the discovery process for new molecules and materials. However, the intensive training of neural networks for molecules of ever-increasing complexity has resulted in exponential growth in computation cost, leading to long simulation times and high energy consumption. Photonic chip technology offers an alternative platform for implementing neural networks, with faster data processing and lower energy usage than digital computers. Photonics is naturally capable of implementing complex-valued neural networks at no additional hardware cost. Here, we demonstrate the capability of photonic neural networks to predict the quantum mechanical properties of molecules. To the best of our knowledge, this work is the first to harness photonic technology for machine learning applications in computational chemistry and the molecular sciences, such as drug discovery and materials design. We further show that multiple properties can be learned simultaneously on a photonic chip via a multi-task regression learning algorithm, which is also the first of its kind, as most previous works focus on implementing networks for classification tasks.
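The photonic hardware itself cannot be reproduced in code, but the multi-task regression formulation is conventional. A minimal digital sketch, assuming a shared trunk with one regression head per molecular property (all dimensions and names hypothetical), could read:

```python
import torch
import torch.nn as nn

class MultiTaskRegressor(nn.Module):
    """Shared trunk + one linear head per property; the photonic
    implementation would realize the trunk in optics, which is not
    modeled here."""
    def __init__(self, in_dim, hidden, n_tasks):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1)
                                   for _ in range(n_tasks))

    def forward(self, x):
        h = self.trunk(x)
        return torch.cat([head(h) for head in self.heads], dim=-1)

def multitask_loss(pred, target):
    # Sum of per-task MSE; tasks could also be weighted differently.
    return ((pred - target) ** 2).mean(dim=0).sum()
```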
Although end-to-end neural machine translation (NMT) has made impressive progress, noisy input usually causes models to become fragile and unstable. Generating adversarial examples as augmented data has proven useful for alleviating this problem. Existing methods for adversarial example generation (AEG) operate at the word or character level. In this paper, we propose a Phrase-level Adversarial Example Generation (PAEG) method to enhance model robustness. Our approach uses a gradient-based strategy to substitute phrases at vulnerable positions in the source input. We verify our method on three benchmarks: the LDC Chinese-English, IWSLT14 German-English, and WMT14 English-German tasks. Experimental results show that our approach significantly improves performance compared to previous methods.
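As an illustration of the gradient-based step, here is a hedged sketch that ranks source positions by the norm of the loss gradient with respect to their embeddings; the subsequent phrase substitution (choosing candidate phrases that maximize the loss at these anchors) is omitted, and all names are assumptions:

```python
import torch

def vulnerable_positions(model, loss_fn, src_emb, tgt, top_k=3):
    """Rank source positions by gradient saliency.

    src_emb: (B, T, D) source embeddings; positions whose embedding
    gradients have large norm are treated as vulnerable anchors
    around which phrases would be substituted.
    """
    src_emb = src_emb.clone().requires_grad_(True)
    loss = loss_fn(model(src_emb), tgt)
    loss.backward()
    saliency = src_emb.grad.norm(dim=-1)         # (B, T)
    return saliency.topk(top_k, dim=-1).indices  # positions to perturb
```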
The recently emerged federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while the data remain on the devices. Compared with the traditional machine learning framework that collects user data for centralized storage, which brings a heavy communication burden and concerns about data privacy, this approach can not only save network bandwidth but also protect data privacy. Despite its promise, the Byzantine attack, an intractable threat in conventional distributed networks, has been found to be quite effective against FL as well. In this paper, we conduct a comprehensive survey of the state-of-the-art strategies for defending against Byzantine attacks in FL. We first provide a taxonomy of existing defense solutions according to the techniques they use, followed by an across-the-board comparison and discussion. We then propose a new Byzantine attack method, called the weight attack, to defeat those defense schemes, and conduct experiments to demonstrate its threat. The results show that existing defense solutions, although abundant, are still far from fully protecting FL. Finally, we indicate possible countermeasures against the weight attack and highlight several challenges and future research directions for mitigating Byzantine attacks in FL.
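As one concrete example of the surveyed defense families, a coordinate-wise median aggregator, a well-known robust-aggregation baseline rather than the paper's own proposal, can be sketched as:

```python
import torch

def coordinate_wise_median(updates):
    """Aggregate client updates by the coordinate-wise median instead
    of the mean, so a bounded fraction of Byzantine clients cannot
    drag any single coordinate arbitrarily far.

    updates: list of flattened client update tensors, all same shape.
    """
    stacked = torch.stack(updates, dim=0)  # (n_clients, n_params)
    return stacked.median(dim=0).values
```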
Traffic optimization challenges, such as load balancing, flow scheduling, and improving packet delivery time, are difficult online decision-making problems in wide area networks (WANs). For example, complex heuristics are needed to find optimal paths that improve packet delivery time and minimize interruptions caused by link failures or congestion. The recent success of reinforcement learning (RL) algorithms can provide useful solutions for building more robust systems that learn in model-free settings. In this work, we consider a path optimization problem, specifically for packet routing, in large complex networks. We develop and evaluate a model-free approach, applying multi-agent meta reinforcement learning (MAMRL), that can determine the next hop of each packet so as to deliver it to its destination with minimum overall time. Specifically, we propose to leverage and compare deep policy optimization RL algorithms for enabling distributed model-free control in communication networks, and present a novel meta-learning-based framework, MAMRL, for quick adaptation to topology changes. To evaluate the proposed framework, we simulate various WAN topologies. Our extensive packet-level simulation results show that, compared with classical shortest-path and traditional reinforcement learning approaches, MAMRL significantly reduces the average packet delivery time even as network demand increases; and compared with a non-meta deep policy optimization algorithm, our results show reduced packet loss in fewer episodes and comparable average packet delivery time when link failures occur.
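A hedged sketch of a single per-node routing agent follows; the observation layout, network width, and the omitted meta-learning outer loop are all assumptions:

```python
import torch
import torch.nn as nn

class NextHopPolicy(nn.Module):
    """One per-node agent: given local observations (e.g., destination
    encoding and neighbor queue lengths), output a distribution over
    neighbors to forward the packet to. A MAMRL-style meta outer loop
    would adapt these weights across topologies; it is omitted here."""
    def __init__(self, obs_dim, n_neighbors, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_neighbors))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

# Usage: action = policy(obs).sample() picks the next hop; its log-prob
# feeds a policy-gradient update with negative delivery time as reward.
```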
In this work, we propose a generative model, called VAE-KRnet, for density estimation or approximation, which combines the canonical variational autoencoder (VAE) with our recently developed flow-based generative model, called KRnet. The VAE is used as a dimension reduction technique to capture the latent space, and KRnet is used to model the distribution of the latent variables. Using a linear model between the data and the latent variables, we show that VAE-KRnet can be more effective and robust than the canonical VAE. VAE-KRnet can be used as a density model to approximate either the data distribution or an arbitrary probability density function (PDF) known up to a constant. VAE-KRnet is flexible in terms of dimensionality: when the number of dimensions is relatively small, KRnet can effectively approximate the distribution of the original random variable, and for high-dimensional cases we may use VAE-KRnet to incorporate dimension reduction. One important application of VAE-KRnet is variational Bayes for the approximation of the posterior distribution. Variational Bayes methods are usually based on the minimization of the Kullback-Leibler (KL) divergence between the model and the posterior. For high-dimensional distributions, it is very challenging to construct an accurate density model due to the curse of dimensionality, so extra assumptions are often introduced for efficiency. For instance, the classical mean-field approach assumes mutual independence between dimensions, which often yields an underestimated variance due to oversimplification. To alleviate this issue, we include the maximization of the mutual information between the latent random variable and the original random variable, which helps keep more information from the regions of low density, such that the variance estimation is improved.
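In symbols, the modified variational objective described at the end of the abstract can be written as follows, where $q$ is the VAE-KRnet density, $X$ the original random variable, $Z$ the latent variable, and $\beta$ a weighting coefficient introduced here for illustration (the abstract does not fix the notation):

```latex
\min_{q}\; D_{\mathrm{KL}}\bigl(q(x)\,\|\,p(x \mid \mathcal{D})\bigr) \;-\; \beta\, I(X; Z),
\qquad
I(X; Z) \;=\; \mathbb{E}_{q(x,z)}\!\left[\log \frac{q(x,z)}{q(x)\,q(z)}\right].
```

The mutual-information term rewards latent codes that retain information about $x$ even in low-density regions, counteracting the variance underestimation of oversimplified variational families.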
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
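A hedged sketch of token-relation distillation, matching attention-style relation maps between the student and an intermediate teacher layer, might look like the following; the specific relations (Q·Kᵀ and V·Vᵀ), the temperature, and the shapes are assumptions, not necessarily TinyMIM's exact losses:

```python
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_qkv, teacher_qkv, tau=1.0):
    """Match token-to-token relation maps between student and teacher
    via soft cross-entropy, rather than matching CLS tokens or raw
    features. Each input is a tuple (Q, K, V) of shape (B, heads, T, d).
    """
    def relations(q, k, v):
        d = q.shape[-1]
        qk = q @ k.transpose(-2, -1) / d ** 0.5  # (B, h, T, T)
        vv = v @ v.transpose(-2, -1) / d ** 0.5
        return qk, vv

    loss = 0.0
    for s_rel, t_rel in zip(relations(*student_qkv),
                            relations(*teacher_qkv)):
        loss = loss + F.kl_div(F.log_softmax(s_rel / tau, dim=-1),
                               F.softmax(t_rel / tau, dim=-1),
                               reduction='batchmean')
    return loss
```

Because the loss compares T×T relation maps rather than feature vectors, it remains well-defined even when the student and teacher have different embedding widths, which is one plausible reason relation distillation transfers well to small models.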