The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
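As an illustration of the k-fold cross-validation strategy mentioned above, here is a minimal sketch of generating train/validation index splits; the function name and pure-Python implementation are our own, not taken from any surveyed solution:

```python
import random

def kfold_splits(n_samples, k=5, seed=0):
    """Return k (train_indices, val_indices) pairs for k-fold cross-validation.

    Each sample appears in exactly one validation fold; the remaining
    folds form the corresponding training set.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]     # round-robin assignment to folds
    return [
        (sum(folds[:i] + folds[i + 1:], []), folds[i])
        for i in range(k)
    ]
```

A challenge participant would train one model per split and average the validation metrics (or ensemble the k resulting models).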
Translated by Google Translate
Complex astrophysical systems (stars, supernovae, galaxies, and clusters) often exhibit low-scatter relations between observable properties (e.g., luminosity, velocity dispersion, oscillation period, temperature). These scaling relations can illuminate the underlying physics and can provide observational tools for estimating masses and distances. Machine learning can provide a systematic way to search for new scaling relations (or for simple extensions to existing relations) in abstract high-dimensional parameter spaces. We use a machine learning tool called symbolic regression (SR), which models patterns in a given dataset in the form of analytic equations. We focus on the Sunyaev-Zeldovich flux$-$cluster mass relation ($Y_\mathrm{SZ}-M$), the scatter in which affects the inference of cosmological parameters from cluster abundance data. Using SR on data from the IllustrisTNG hydrodynamical simulation, we find a new proxy for cluster mass which combines $Y_\mathrm{SZ}$ and the concentration of ionized gas ($c_\mathrm{gas}$): $M \propto Y_\mathrm{conc}^{3/5} \equiv Y_\mathrm{SZ}^{3/5}(1 - A\,c_\mathrm{gas})$. $Y_\mathrm{conc}$ reduces the scatter in the predicted $M$ by $\sim 20$-$30\%$ for large clusters ($M \gtrsim 10^{14}\,h^{-1}\,M_\odot$) at both high and low redshifts, as compared to using just $Y_\mathrm{SZ}$. We show that the dependence on $c_\mathrm{gas}$ is linked to the cores of clusters exhibiting larger scatter than their outskirts. Finally, we test $Y_\mathrm{conc}$ on clusters from simulations of the CAMELS project and show that $Y_\mathrm{conc}$ is robust against variations in cosmology, astrophysics, subgrid physics, and cosmic variance. Our results and methodology can be useful for accurate multiwavelength cluster mass estimation from current and upcoming CMB and X-ray surveys like ACT, SO, SPT, eROSITA, and CMB-S4.
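The new proxy can be evaluated directly from the fitted relation $M \propto Y_\mathrm{SZ}^{3/5}(1 - A\,c_\mathrm{gas})$. A minimal sketch, with an illustrative (not fitted) value for the constant $A$:

```python
def mass_proxy(y_sz, c_gas, A=0.5):
    """Cluster mass proxy (up to a normalization constant):
    M ∝ Y_SZ^(3/5) * (1 - A * c_gas).

    `A` is a constant fitted to simulation data; the default of 0.5
    here is purely illustrative, not the published value.
    """
    return y_sz ** (3 / 5) * (1 - A * c_gas)
```

Clusters with more concentrated ionized gas (larger `c_gas`) receive a lower mass estimate at fixed SZ flux, which is what reduces the scatter relative to using $Y_\mathrm{SZ}$ alone.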
In this paper, we utilize generative models and reformulate a problem in molecular dynamics (MD) simulation by introducing an MD potential energy component into our generative model. By incorporating the potential energy from TorchMD into a conditional generative framework, we attempt to construct low-potential-energy transition pathways between the helical and coil structures of a protein. We show how to add an additional loss function to conditional generative models, motivated by the potential energy of molecular configurations, and we also present an optimization technique for such an augmented loss function. Our results demonstrate the benefit of this additional loss term for synthesizing realistic molecular trajectories.
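The augmented objective described above, a generative loss plus a weighted potential-energy term, can be sketched with a toy harmonic bond potential standing in for the TorchMD energy; all function names and constants here are illustrative:

```python
import math

def bond_energy(coords, k=1.0, r0=1.0):
    """Toy harmonic bond potential for a chain of 3D points:
    sum over consecutive pairs of k * (|r_{i+1} - r_i| - r0)^2.
    A stand-in for a real MD potential such as TorchMD's.
    """
    return sum(
        k * (math.dist(a, b) - r0) ** 2
        for a, b in zip(coords, coords[1:])
    )

def augmented_loss(recon_loss, coords, lam=0.1):
    """Conditional-generative loss augmented with a potential-energy
    penalty on the generated configuration; `lam` weights the two terms."""
    return recon_loss + lam * bond_energy(coords)
```

Minimizing the augmented loss pushes the generator toward configurations that are both close to the data and physically plausible (low energy).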
In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation, where the input data are generated from a dynamical system and the target function depends on a few active variables or a few linear combinations of variables. We approximate the target function by a deep neural network and enforce an adaptive group Lasso constraint on the weights of a suitable hidden layer in order to represent the constraint on the target function. We utilize a proximal algorithm to optimize the penalized loss function. Using the non-negativity property of the Bregman distance, we prove that the proposed optimization procedure achieves loss decay. Our empirical studies show that the proposed method consistently outperforms recent state-of-the-art methods, including the sparse dictionary matrix method and neural networks with or without a group Lasso penalty.
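The core step of a proximal algorithm for a group Lasso penalty is block soft-thresholding, applied to each weight group: groups whose norm falls below the threshold are zeroed out, which is what produces group-level sparsity. A minimal sketch (the function name and signature are our own):

```python
import math

def prox_group_lasso(w_group, lam, step):
    """Proximal operator of the group-lasso penalty lam * ||w||_2,
    applied with step size `step` (block soft-thresholding).

    Returns the zero vector if the group norm is below lam * step,
    otherwise shrinks the whole group toward zero uniformly.
    """
    norm = math.sqrt(sum(v * v for v in w_group))
    if norm <= lam * step:
        return [0.0] * len(w_group)
    scale = 1.0 - lam * step / norm
    return [scale * v for v in w_group]
```

In the proposed method, one such operator would be applied per input-variable group of the chosen hidden layer after each gradient step on the unpenalized loss.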
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
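The weighted average of mean and percentile performance can be illustrated on a sample of returns; the empirical-percentile estimator and parameter names here are our own sketch, not the paper's reformulation:

```python
def return_risk(returns, alpha=0.1, weight=0.5):
    """Weighted average of the mean return and an empirical
    alpha-percentile (VaR-style tail value) of the returns.

    `weight` = 1 recovers the pure mean criterion; `weight` = 0
    recovers the pure percentile (risk-averse) criterion.
    """
    xs = sorted(returns)
    k = max(0, min(len(xs) - 1, int(alpha * len(xs))))
    percentile = xs[k]
    mean = sum(xs) / len(xs)
    return weight * mean + (1 - weight) * percentile
```

A risk-averse planner lowers `weight` to emphasize the left tail of the return distribution rather than its average.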
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions of the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our research to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
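The first Neural Collapse property (within-class variability collapse, NC1) can be illustrated by measuring how far last-layer features sit from their class means; the quantity below shrinks to zero as features collapse. This pure-Python sketch uses our own function name:

```python
def within_class_variation(features_by_class):
    """Average squared distance of last-layer features to their class
    mean, the quantity that vanishes under Neural Collapse (NC1).

    `features_by_class` is a list of classes, each a list of
    equal-length feature vectors.
    """
    total, count = 0.0, 0
    for feats in features_by_class:
        dim = len(feats[0])
        mean = [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
        for f in feats:
            total += sum((f[d] - mean[d]) ** 2 for d in range(dim))
            count += 1
    return total / count
```

Fully collapsed features (every sample equal to its class mean) give exactly zero; any within-class spread gives a positive value.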
Machine Reading Comprehension has become one of the most advanced and popular research topics in the field of Natural Language Processing in recent years. The classification of answerability questions is a relatively significant sub-task in machine reading comprehension; however, there have not been many studies on it. Retro-Reader is one of the studies that has solved this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and of Retro-Reader in particular, have not been able to fully exploit the contextual semantic information of the context. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on the classification of answerability for Vietnamese machine reading comprehension. Additionally, we hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The improved Retro-Reader encoder with semantics was applied to the Vietnamese Machine Reading Comprehension task for the first time and obtained positive results.
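A SemBERT-style fusion of semantic role labels with token representations can be sketched as concatenating each token's embedding with an embedding of its SRL tag; dimensions and names here are illustrative, not the exact architecture:

```python
def fuse_srl_embeddings(token_embs, srl_tag_embs):
    """Concatenate each token embedding with the embedding of its
    SRL tag, producing semantics-enriched inputs for the encoder.

    Both arguments are lists of per-token vectors (as plain lists);
    the i-th output vector has length len(token) + len(tag).
    """
    return [t + s for t, s in zip(token_embs, srl_tag_embs)]
```

The enriched vectors then replace the plain token embeddings as input to the downstream answerability classifier.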
Due to the environmental impacts caused by the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers is associated with the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize these buildings' state of conservation by detecting pathologies, such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this particular phase of the project, we used a deep convolutional neural network (DCNN) image classification model to differentiate three levels of cracks in one particular building. The model's accuracy was compared between the MSX and thermal images acquired from two distinct thermal cameras and fused images (formed through multisource information) to test the influence of the input data and network on the detection results.
Rapid advancements in the collection and dissemination of multi-platform molecular and genomics data have resulted in enormous opportunities to aggregate such data in order to understand, prevent, and treat human diseases. While significant improvements have been made in multi-omic data integration methods to discover biological markers and mechanisms underlying both prognosis and treatment, the precise cellular functions governing these complex mechanisms still need detailed and data-driven de-novo evaluations. We propose a framework called Functional Integrative Bayesian Analysis of High-dimensional Multiplatform Genomic Data (fiBAG), that allows simultaneous identification of upstream functional evidence of proteogenomic biomarkers and the incorporation of such knowledge in Bayesian variable selection models to improve signal detection. fiBAG employs a conflation of Gaussian process models to quantify (possibly non-linear) functional evidence via Bayes factors, which are then mapped to a novel calibrated spike-and-slab prior, thus guiding selection and providing functional relevance to the associations with patient outcomes. Using simulations, we illustrate how integrative methods with functional calibration have higher power to detect disease related markers than non-integrative approaches. We demonstrate the profitability of fiBAG via a pan-cancer analysis of 14 cancer types to identify and assess the cellular mechanisms of proteogenomic markers associated with cancer stemness and patient survival.
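One simple way to map functional evidence onto a spike-and-slab prior, sketched here purely for illustration (fiBAG's actual calibrated prior is more involved), is to convert a Bayes factor into a bounded prior inclusion probability:

```python
def inclusion_prior_from_bayes_factor(bf, floor=0.01, cap=0.99):
    """Map a Bayes factor quantifying functional evidence to a prior
    inclusion probability for a spike-and-slab variable-selection prior.

    Uses the posterior-odds identity p = BF / (1 + BF) under equal
    prior odds, clipped to (floor, cap) to keep the prior proper.
    The clipping bounds are illustrative.
    """
    p = bf / (1.0 + bf)
    return min(cap, max(floor, p))
```

Markers with strong upstream functional evidence (large Bayes factor) thus enter the selection model with a higher prior chance of being in the "slab."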
Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of a small, efficient version of commercial-scale models with comparable accuracy, accelerating their deployment to resource-constrained devices. In this paper, we study the security considerations of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, by exploiting the published on-device models as a transfer prior, the adversarial vulnerability of the original commercial-scale models increases by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases. Based on these insights, we propose a defense, $similarity$-$unpairing$, that fine-tunes on-device models with the objective of reducing the similarity. We evaluated our defense on all 19 DNNs and found that it reduces the transferability by up to 90% and the number of queries required by a factor of 10-100x. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing these efficient siblings.
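A similarity-unpairing fine-tuning objective can be sketched as a task loss plus a penalty on the agreement between on-device and full-model outputs; the cosine-similarity penalty and names below are our own illustration, not necessarily the paper's exact objective:

```python
import math

def unpairing_loss(task_loss, logits_small, logits_large, beta=1.0):
    """Fine-tuning objective sketch for similarity-unpairing: preserve
    the on-device model's task loss while penalizing the cosine
    similarity between its logits and the full model's logits.

    Lower similarity makes adversarial examples crafted on the
    on-device model transfer less readily to the full model.
    """
    dot = sum(a * b for a, b in zip(logits_small, logits_large))
    ns = math.sqrt(sum(a * a for a in logits_small))
    nl = math.sqrt(sum(b * b for b in logits_large))
    cosine = dot / (ns * nl)
    return task_loss + beta * cosine
```

In practice the penalty would be averaged over a batch and traded off against accuracy via `beta`.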