Refractive error is the most common eye disorder and a key cause of correctable visual impairment, responsible for nearly 80% of visual impairment in the US. Refractive error can be diagnosed using multiple methods, including subjective refraction, retinoscopy, and autorefractors. Although subjective refraction is the gold standard, it requires patient cooperation and is therefore unsuitable for infants, young children, and developmentally delayed adults. Retinoscopy is an objective refraction method that does not require any input from the patient. However, retinoscopy requires a lens kit and a trained examiner, which limits its use for mass screening. In this work, we automate retinoscopy by attaching a smartphone to a retinoscope and recording retinoscopic videos with the patient wearing a custom pair of paper frames. We develop a video-processing pipeline that takes retinoscopic videos as input and estimates the net refractive error based on our proposed extension of a mathematical model of retinoscopy. Our system alleviates the need for a lens kit and can be performed by an untrained examiner. In a clinical trial with 185 eyes, we achieved a sensitivity of 91.0% and a specificity of 74.0%. Moreover, the mean absolute error of our method was 0.75 $\pm$ 0.67D compared to subjective refraction measurements. Our results demonstrate the potential of our method to be used as a retinoscopy-based refractive-error screening tool in real-world medical settings.
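The pipeline above builds on the standard retinoscopy relation between the neutralizing lens power and the examiner's working distance. A minimal sketch of that classic working-distance correction (not the authors' extended model; the function name and example values are illustrative):

```python
def net_refractive_error(neutralizing_power_d, working_distance_m):
    """Classic retinoscopy: subtract the working-distance power
    (1/d, in diopters) from the lens power that neutralizes the
    reflex motion to obtain the patient's net refractive error."""
    return neutralizing_power_d - 1.0 / working_distance_m

# At a 0.67 m working distance the correction is ~1.5 D, so a
# +2.0 D neutralizing lens implies roughly +0.5 D of net error.
```

The paper's video-based estimator replaces the manual lens-swapping step entirely; this formula only shows why a fixed working distance lets the net error be recovered without a trial lens at every power.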
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extension. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or some restricted Sobolev space, and we characterize the rate-distortion, \emph{(2)} the approximation power is investigated through a viewpoint of regression error, where information about the target function is provided in terms of data observations, \emph{(3)} compositionality in the form of a deep architecture with optimization as a layer is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture, and \emph{(5)} we discuss the statistical bounds of empirical covering numbers for LP/QP, as well as a generic optimization problem (possibly nonconvex) by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions} with implications for algorithmic design and performance guarantees.
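Claim (3), that an optimization layer can reconstruct basic numerical-analysis functions without error, can be illustrated with a one-dimensional box-constrained QP whose solution function is exactly clipping (and hence ReLU). This is a hedged sketch of the general idea, not an example drawn from the paper itself:

```python
def argmin_box_qp(theta, lo, hi):
    """Solution function of the parametric QP
        minimize (x - theta)^2  subject to  lo <= x <= hi.
    The KKT conditions give the closed form clip(theta, lo, hi),
    so this QP layer reproduces clipping exactly (zero distortion)."""
    return min(max(theta, lo), hi)

def relu(theta):
    """ReLU as a QP solution function: argmin_{x >= 0} (x - theta)^2."""
    return argmin_box_qp(theta, 0.0, float("inf"))
```

Because the QP's argmin equals the target function pointwise, composing such layers can represent piecewise-linear primitives with no approximation error, which is the mechanism behind the rate-distortion reduction in claim (4).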
A central problem in computational biophysics is protein structure prediction, i.e., finding the optimal folding of a given amino acid sequence. This problem has been studied in a classical abstract model, the HP model, where the protein is modeled as a sequence of H (hydrophobic) and P (polar) amino acids on a lattice. The objective is to find conformations maximizing H-H contacts. It is known that even in this reduced setting, the problem is intractable (NP-hard). In this work, we apply deep reinforcement learning (DRL) to the two-dimensional HP model. We can obtain the conformations of best known energies for benchmark HP sequences with lengths from 20 to 50. Our DRL is based on a deep Q-network (DQN). We find that a DQN based on long short-term memory (LSTM) architecture greatly enhances the RL learning ability and significantly improves the search process. DRL can sample the state space efficiently, without the need of manual heuristics. Experimentally we show that it can find multiple distinct best-known solutions per trial. This study demonstrates the effectiveness of deep reinforcement learning in the HP model for protein folding.
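The HP model's objective, counting H-H contacts between residues that are lattice-adjacent but not consecutive along the chain, is simple to state in code. A minimal sketch (the reward the DRL agent maximizes would be derived from this count; the representation below is an assumption, not the paper's implementation):

```python
def hh_contacts(sequence, path):
    """Score of a 2-D HP conformation: count lattice-adjacent H-H pairs
    that are not neighbors along the chain. `sequence` is a string over
    {'H','P'}; `path` lists the (x, y) lattice cell of each residue."""
    assert len(sequence) == len(path) and len(set(path)) == len(path)  # self-avoiding
    contacts = 0
    for i in range(len(path)):
        for j in range(i + 2, len(path)):  # j = i + 1 is a chain bond, not a contact
            if sequence[i] == sequence[j] == "H":
                (xi, yi), (xj, yj) = path[i], path[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # Manhattan-adjacent cells
                    contacts += 1
    return contacts

# Folding "HHHH" into a 2x2 square creates one non-bonded H-H contact.
```

The NP-hardness mentioned above lives in searching over self-avoiding `path`s, which is exactly the space the DQN policy explores move by move.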
In the past few years, Artificial Intelligence (AI) has garnered attention from various industries including financial services (FS). AI has made a positive impact in financial services by enhancing productivity and improving risk management. While AI can offer efficient solutions, it has the potential to bring unintended consequences. One such consequence is the pronounced effect of AI-related unfairness and attendant fairness-related harms. These fairness-related harms could involve differential treatment of individuals; for example, unfairly denying a loan to certain individuals or groups of individuals. In this paper, we focus on identifying and mitigating individual unfairness and leveraging some of the recently published techniques in this domain, especially as applicable to the credit adjudication use case. We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness. Our main contribution in this work is functionalizing a two-step training process which involves learning a fair similarity metric from a group sense using a small portion of the raw data and training an individually "fair" classifier using the rest of the data where the sensitive features are excluded. The key characteristic of this two-step technique is related to its flexibility, i.e., the fair metric obtained in the first step can be used with any other individual fairness algorithms in the second step. Furthermore, we developed a second metric (distinct from the fair similarity metric) to determine how fairly a model is treating similar individuals. We use this metric to compare a "fair" model against its baseline model in terms of their individual fairness value. Finally, some experimental results corresponding to the individual unfairness mitigation techniques are presented.
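The second metric described above, measuring how fairly a model treats similar individuals, is typically a Lipschitz-style consistency check under the learned similarity metric. A hypothetical sketch in that spirit (the paper's exact definition may differ; all names here are illustrative):

```python
def individual_fairness_gap(model, pairs, metric, eps=1e-9):
    """Worst-case ratio |f(x) - f(x')| / d(x, x') over comparable pairs.
    A small value means the model gives metric-similar individuals
    similar outcomes; a large value flags individual unfairness."""
    return max(abs(model(x) - model(y)) / max(metric(x, y), eps)
               for x, y in pairs)
```

Used this way, the same learned metric `d` from step one can score any classifier produced in step two, which is the flexibility the two-step design is meant to provide.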
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices. However, commercial edge platforms based on standard GPUs are not optimized to deploy SNNs, resulting in high energy and latency. While analog In-Memory Computing (IMC) platforms can serve as energy-efficient inference engines, they are hampered by the immense energy, latency, and area requirements of high-precision ADCs (HP-ADC), overshadowing the benefits of in-memory computations. We propose a hardware/software co-design methodology to deploy SNNs into an ADC-Less IMC architecture using sense-amplifiers as 1-bit ADCs, replacing conventional HP-ADCs and alleviating the above issues. Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks. Experiments on complex tasks of optical flow estimation and gesture recognition show that progressively increasing the hardware awareness during SNN training allows the model to adapt and learn the errors due to the non-idealities associated with ADC-Less IMC. Also, the proposed ADC-Less IMC offers significant energy and latency improvements, $2-7\times$ and $8.9-24.6\times$, respectively, depending on the SNN model and the workload, compared to HP-ADC IMC.
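The core readout idea, a sense amplifier acting as a 1-bit ADC, amounts to thresholding each analog crossbar partial sum into a single bit, which pairs naturally with the binary spike outputs SNN layers expect. A minimal behavioral sketch (the threshold and interface are illustrative assumptions, not the paper's circuit):

```python
def sense_amp_1bit(partial_sums, threshold=0.0):
    """Behavioral model of replacing an HP-ADC with a sense amplifier:
    each analog partial sum from the crossbar is read out as one bit,
    above or below the threshold, instead of a multi-bit value."""
    return [1 if s > threshold else 0 for s in partial_sums]
```

Hardware-aware training, as described above, exposes this coarse quantization during the forward pass so the SNN learns weights that tolerate it.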
The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making it both human and machine actionable. However, these principles have not yet been broadly adopted in the domain of machine learning-based program analyses and optimizations for High-Performance Computing (HPC). In this paper, we design a methodology to make HPC datasets and machine learning models FAIR after investigating existing FAIRness assessment and improvement techniques. Our methodology includes a comprehensive, quantitative assessment of selected data, followed by concrete, actionable suggestions to improve FAIRness with respect to common issues related to persistent identifiers, rich metadata descriptions, and license and provenance information. Moreover, we select a representative training dataset to evaluate our methodology. The experiment shows the methodology can effectively improve the dataset and model's FAIRness from an initial score of 19.1% to a final score of 83.0%.
Deep ensembles have been shown to extend the positive effects seen in typical ensemble learning to neural networks and to reinforcement learning (RL). However, much remains to be done to improve the efficiency of such ensemble models. In this work, we present Diverse Ensembles for Fast Transfer in RL (DEFT), a new ensemble-based method for reinforcement learning in highly multimodal environments with improved transfer to unseen environments. The algorithm is divided into two main phases: training the ensemble members, and synthesizing (or fine-tuning) the ensemble members to act in a new environment. The first phase of the algorithm involves training regular policy-gradient or actor-critic agents in parallel, but with an added loss that encourages these policies to differ from one another. This causes the individual unimodal agents to explore the space of optimal policies and to capture more of the multimodality of the environment than a single actor could. The second phase of DEFT involves synthesizing the component policies into a new policy that works well in a modified environment, in one of two ways. To evaluate the performance of DEFT, we start with a base version of the Proximal Policy Optimization (PPO) algorithm and extend it with the modifications of DEFT. Our results show that the pretraining phase is effective in producing diverse policies in multimodal environments. DEFT often converges to high rewards significantly faster than alternatives such as random initialization without DEFT and fine-tuning of ensemble members. While there is certainly more work to do to analyze DEFT theoretically and extend it to be more robust, we believe it provides a strong framework for capturing multimodality in environments while still using RL methods with simple policy representations.
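The first-phase loss that pushes ensemble members apart can be sketched as a diversity bonus computed across members' action distributions on the same state; subtracting it from the usual policy-gradient loss rewards disagreement. This is an illustrative stand-in, the paper's exact diversity term may differ:

```python
def diversity_bonus(action_probs):
    """Mean pairwise L2 distance between ensemble members' action
    distributions on one state. `action_probs[k]` is member k's
    probability vector; identical policies score 0."""
    n, total = len(action_probs), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += sum((p - q) ** 2
                         for p, q in zip(action_probs[i], action_probs[j])) ** 0.5
    return total / (n * (n - 1) / 2)
```

In a PPO-based setup like the one described above, this term would be weighted and added to each member's clipped surrogate objective so that members trade a little reward for covering distinct modes.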
We present a novel hybrid system (hardware and software) in which an Unmanned Aerial Vehicle (UAV) carries a miniature Unmanned Ground Vehicle (MiniUGV) to perform a complex search and manipulation task. The system leverages heterogeneous robots to accomplish a task that cannot be done using a single robot system. It enables the UAV to explore a hidden space with a narrow opening, which the MiniUGV can easily enter and escape from. The hidden space is assumed to be accessible to the MiniUGV. The MiniUGV uses infrared (IR) sensors and a monocular camera to search for an object in the hidden space. The proposed system takes advantage of the wider field of view (FOV) of the camera as well as the stochasticity in the object-detection algorithm to guide the MiniUGV in the hidden space to find the object. Upon finding the object, the MiniUGV grabs it using visual servoing and then returns to its starting point, from where the UAV retracts it and transports the object to a safe place. If no object is found in the hidden space, the UAV continues its aerial search. The tethered MiniUGV gives the UAV the ability to act beyond its reach and to perform a search and manipulation task that would be impossible for either robot alone. The system has a wide range of applications, and we demonstrate its feasibility through repeated experiments.
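The visual-servoing grasp step mentioned above typically reduces to a proportional control law that drives the detected object's image centroid toward the image center before closing the gripper. A minimal sketch under that assumption (not the paper's controller; gain and axis conventions are illustrative):

```python
def servo_command(centroid_px, image_size, gain=0.005):
    """Proportional image-based visual servoing step: return steering
    corrections proportional to the object centroid's pixel offset
    from the image center. Zero error yields zero command."""
    cx, cy = centroid_px
    w, h = image_size
    err_x, err_y = cx - w / 2, cy - h / 2
    return (-gain * err_x, -gain * err_y)  # (turn, lift) corrections
```

Iterating this until the error is near zero centers the object in the monocular camera's view, at which point the grasp can be attempted.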
Event-based cameras have recently shown great potential for high-speed motion estimation owing to their ability to capture temporally rich information asynchronously. Spiking Neural Networks (SNNs), with their neuro-inspired event-driven processing, can efficiently handle such asynchronous data, while neuron models such as Leaky-Integrate-and-Fire (LIF) can keep track of the quintessential timing information contained in the inputs. SNNs achieve this by maintaining a dynamic state in the neuron memory, retaining important information while forgetting redundant data over time. Thus, we posit that SNNs would allow for better performance on sequential regression tasks compared to similarly sized Analog Neural Networks (ANNs). However, deep SNNs are difficult to train due to spike vanishing at deeper layers. To that end, we propose an adaptive fully-spiking framework with learnable neuronal dynamics to alleviate the spike-vanishing problem. We utilize surrogate-gradient-based backpropagation through time (BPTT) to train our deep SNNs from scratch. We validate our approach on the optical-flow estimation task using the Multi-Vehicle Stereo Event Camera (MVSEC) dataset and the DSEC-Flow dataset. Our experiments on these datasets show an average reduction of 13% in average endpoint error (AEE) compared to state-of-the-art ANNs. We also explore several down-scaled models and observe that our SNN models consistently outperform ANNs of similar size, providing 10%-16% lower AEE. These results demonstrate the importance of SNNs for smaller models and their suitability at the edge. In terms of efficiency, our SNNs offer substantial savings in network parameters (48x) and computational energy (51x) while attaining ~10% lower EPE compared to state-of-the-art ANN implementations.
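The LIF dynamics underpinning the framework above, a decaying membrane potential that integrates inputs and spikes on crossing a threshold, can be sketched in a few lines. Fixed `leak` and `threshold` values are used here for illustration; making such parameters learnable per layer is the adaptive-dynamics idea described in the abstract:

```python
def lif_forward(inputs, leak=0.9, threshold=1.0):
    """Leaky-Integrate-and-Fire neuron over an input sequence.
    The membrane potential decays by `leak` each step, integrates
    the input, and emits a spike with a hard reset on crossing
    `threshold` (a soft reset v -= threshold is also common)."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x  # leaky integration of the current input
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # hard reset after firing
        else:
            spikes.append(0)
    return spikes
```

The hard threshold is non-differentiable, which is why training relies on surrogate gradients within BPTT, as the abstract notes; with a small leak, sub-threshold activity decays quickly, which is one intuition for spike vanishing in deep stacks.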
Model Predictive Control (MPC) is a state-of-the-art (SOTA) control technique which requires solving hard constrained optimization problems iteratively. For uncertain dynamics, analytical-model-based robust MPC imposes additional constraints, increasing the hardness of the problem. The problem exacerbates in performance-critical applications, when more computation is demanded in less time. Data-driven regression methods such as neural networks have been proposed in the past to approximate system dynamics. However, in the absence of symbolic analytical priors, such models rely on a large amount of labeled data, incurring non-trivial training overheads. Physics-informed Neural Networks (PINNs) have gained traction for approximating nonlinear systems of Ordinary Differential Equations (ODEs) with reasonable accuracy. In this work, we propose Robust Adaptive MPC via PINNs (RAMP-Net), which uses a neural network trained partly from simple ODEs and partly from data. A physics loss is used to learn simple ODEs representing ideal dynamics. Having access to analytical functions inside the loss function acts as a regularizer, enforcing robust behavior for parametric uncertainties. On the other hand, a regular data loss is used for adapting to residual disturbances (non-parametric uncertainties) unaccounted for during mathematical modeling. Experiments are performed in a simulated environment for trajectory tracking of a quadrotor. We report 7.8% to 43.2% and 8.04% to 61.5% reduction in tracking errors compared to two SOTA regression-based MPC methods.
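The two-part objective described above, a physics loss over ODE residuals of the ideal dynamics plus a data loss over observed trajectories, can be sketched as a weighted sum. This is a hedged illustration of the general PINN training recipe, not RAMP-Net's exact loss; the weights are assumptions:

```python
def ramp_net_style_loss(physics_residuals, predictions, targets,
                        w_phys=1.0, w_data=1.0):
    """Composite objective in the spirit of RAMP-Net: the mean squared
    ODE residual of the ideal dynamics regularizes against parametric
    uncertainty, while the mean squared data error absorbs residual
    disturbances missed by the mathematical model."""
    phys = sum(r ** 2 for r in physics_residuals) / len(physics_residuals)
    data = sum((p - t) ** 2
               for p, t in zip(predictions, targets)) / len(predictions)
    return w_phys * phys + w_data * data
```

The physics residuals are evaluated at collocation points where no labels are needed, which is how the approach cuts the labeled-data requirement mentioned in the abstract.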