The integration of machine learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance, but the details must be chiseled out for each specific case. We initiated a research project with the goal of demonstrating a complete safety case for an ML component in an open automotive system. This paper reports results from an industry-academia collaboration on the safety assurance of SMIRK, an ML-based pedestrian automatic emergency braking demonstrator running in an industry-grade simulator. We demonstrate an application of AMLAS to SMIRK for a minimalistic operational design domain, i.e., we share a complete safety case for its integrated ML-based component. Finally, we report lessons learned and provide both SMIRK and the safety case under an open-source license for the research community to reuse.
This paper presents an extended version of Deeper, a search-based, simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system. In the newly proposed version, we use a new set of bio-inspired search algorithms, genetic algorithm (GA), $(\mu+\lambda)$ and $(\mu,\lambda)$ evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored for the presentation model used for modeling the test scenarios. To demonstrate the capabilities of the new test generators in Deeper, we carry out an empirical evaluation and comparison with respect to the five tools that participated in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows that the newly proposed test generators in Deeper not only represent a considerable improvement over the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system. Under a limited test time budget, a high target failure severity, and strict speed-limit constraints, they can trigger several failures while promoting test scenario diversity.
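As a rough illustration of how such a search-based generator can work, the sketch below runs a minimal $(\mu+\lambda)$ evolution strategy over toy road scenarios. The curvature-based scenario encoding and the pseudo-fitness function are invented placeholders for Deeper's actual representation and simulator-based evaluation.

```python
import random

# Minimal (mu + lambda) evolution strategy sketch for generating road scenarios,
# loosely in the spirit of the search-based approach described above. The scenario
# encoding (a list of road-segment curvatures) and the fitness function are
# illustrative assumptions, not the tool's actual representation.

MU, LAMBDA, GENERATIONS, GENES = 10, 20, 30, 12

def random_scenario():
    # A road is encoded as a sequence of curvature values in [-0.2, 0.2].
    return [random.uniform(-0.2, 0.2) for _ in range(GENES)]

def mutate(scenario, sigma=0.05):
    # Gaussian perturbation of each curvature, clipped to the valid range.
    return [max(-0.2, min(0.2, c + random.gauss(0, sigma))) for c in scenario]

def fitness(scenario):
    # Placeholder for a simulator run: here, sharper and more varied curvature
    # is treated as more likely to reveal a lane-keeping failure.
    return sum(abs(c) for c in scenario)

population = [random_scenario() for _ in range(MU)]
for _ in range(GENERATIONS):
    offspring = [mutate(random.choice(population)) for _ in range(LAMBDA)]
    # (mu + lambda): parents compete with offspring for survival.
    population = sorted(population + offspring, key=fitness, reverse=True)[:MU]

print("best pseudo-fitness:", fitness(population[0]))
```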
Artificial intelligence through machine learning is increasingly used in the digital society. Machine learning-based solutions provide great opportunities, giving rise to "Software 2.0", but also great challenges for the engineering community. Due to the experimental approach used by data scientists when developing machine learning models, agility is an essential characteristic. In this keynote, we discuss two contemporary development phenomena that are fundamental to machine learning development, namely notebook interfaces and MLOps. First, we present a solution that can remedy some of the intrinsic weaknesses of working in notebooks by supporting easy transitions to integrated development environments. Second, we propose reinforced engineering of AI systems by introducing metaphorical buttresses and rebars in the MLOps context. As machine learning-based solutions are dynamic in nature, we argue that reinforced continuous engineering is required for the quality assurance of trustworthy AI systems of tomorrow.
Small-angle X-ray scattering (SAXS) is widely used in materials science as a way of examining nanostructures. The analysis of experimental SAXS data involves mapping a rather simple data format onto a vast amount of structural models. Despite various scientific computing tools that assist with model selection, the activity still relies on the experience of SAXS analysts, which is recognized as an efficiency bottleneck by the community. To address this decision-making problem, we develop and evaluate the open-source, machine learning-based tool SCAN (SCattering Ai aNalysis), which provides recommendations on model selection. SCAN exploits multiple machine learning algorithms and uses the models and simulation tool implemented in the SasView package to generate a well-defined set of training datasets. Our evaluation shows that SCAN delivers an overall accuracy of 95%-97%. The XGBoost classifier was identified as the most accurate method, with a good balance between accuracy and training time. With eleven predefined structural models for common nanostructures and easily extensible functionality to expand the number and types of trained models, SCAN can accelerate the SAXS data analysis workflow.
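To illustrate the underlying classification idea, the sketch below trains an XGBoost classifier to pick a structural model from simulated 1D scattering curves. The two analytic form factors (sphere and Gaussian coil) and the noise model are simplified stand-ins for the SasView-simulated training data and the eleven models used by SCAN.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Toy sketch of ML-based model selection for scattering curves. Two analytic
# form factors (sphere, Gaussian coil) stand in for the SasView-simulated
# training data and the eleven structural models used by SCAN.

q = np.logspace(-2, 0, 100)  # scattering vector magnitude, arbitrary units

def sphere(qv, radius):
    x = qv * radius
    return (3 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def gaussian_coil(qv, rg):
    x = (qv * rg) ** 2
    return 2 * (np.exp(-x) + x - 1) / x**2  # Debye function

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    # Label 0: sphere form factor with a random radius, plus noise (log scale).
    X.append(np.log10(sphere(q, rng.uniform(20, 60)) + 1e-12) + rng.normal(0, 0.05, q.size))
    y.append(0)
    # Label 1: Gaussian coil with a random radius of gyration, plus noise.
    X.append(np.log10(gaussian_coil(q, rng.uniform(20, 60)) + 1e-12) + rng.normal(0, 0.05, q.size))
    y.append(1)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_train, y_train)
print("toy model-selection accuracy:", clf.score(X_test, y_test))
```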
In recent years, several metrics have been developed for evaluating group fairness of rankings. Given that these metrics were developed with different application contexts and ranking algorithms in mind, it is not straightforward which metric to choose for a given scenario. In this paper, we perform a comprehensive comparative analysis of existing group fairness metrics developed in the context of fair ranking. By virtue of their diverse application contexts, we argue that such a comparative analysis is not straightforward. Hence, we take an axiomatic approach whereby we design a set of thirteen properties for group fairness metrics that consider different ranking settings. A metric can then be selected depending on whether it satisfies all or a subset of these properties. We apply these properties on eleven existing group fairness metrics, and through both empirical and theoretical results we demonstrate that most of these metrics only satisfy a small subset of the proposed properties. These findings highlight limitations of existing metrics, and provide insights into how to evaluate and interpret different fairness metrics in practical deployment. The proposed properties can also assist practitioners in selecting appropriate metrics for evaluating fairness in a specific application.
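For readers unfamiliar with such metrics, the sketch below computes a generic exposure-based group disparity for a ranking; it is an illustrative example in the spirit of position-based fairness metrics, not one of the eleven metrics analyzed in the paper.

```python
import math

# Illustrative group fairness measure for rankings: position-based exposure per
# group, averaged over group size and compared across groups. The discount and
# the disparity definition are generic choices for demonstration purposes.

def exposure_per_group(ranking, groups):
    """ranking: list of item ids, top first; groups: dict item id -> group label."""
    exp = {}
    for pos, item in enumerate(ranking, start=1):
        g = groups[item]
        exp[g] = exp.get(g, 0.0) + 1.0 / math.log2(pos + 1)  # DCG-style discount
    return exp

def disparity(ranking, groups):
    """Ratio of the least- to the most-exposed group's average member exposure."""
    exp = exposure_per_group(ranking, groups)
    sizes = {}
    for g in groups.values():
        sizes[g] = sizes.get(g, 0) + 1
    avg = {g: exp.get(g, 0.0) / sizes[g] for g in sizes}
    return min(avg.values()) / max(avg.values())  # 1.0 means equal average exposure

groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
print(disparity(["a", "b", "c", "d"], groups))  # G1 ranked on top: value below 1
print(disparity(["a", "c", "b", "d"], groups))  # interleaved: value closer to 1
```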
Partial differential equations (PDEs) are important tools to model physical systems, and including them into machine learning models is an important way of incorporating physical knowledge. Given any system of linear PDEs with constant coefficients, we propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system. We apply the Ehrenpreis-Palamodov fundamental principle, which works like a non-linear Fourier transform, to construct GP kernels mirroring standard spectral methods for GPs. Our approach can infer probable solutions of linear PDE systems from any data such as noisy measurements, or initial and boundary conditions. Constructing EPGP-priors is algorithmic, generally applicable, and comes with a sparse version (S-EPGP) that learns the relevant spectral frequencies and works better for big data sets. We demonstrate our approach on three families of systems of PDE, the heat equation, wave equation, and Maxwell's equations, where we improve upon the state of the art in computation time and precision, in some experiments by several orders of magnitude.
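The core idea that a GP built from exact solutions has realizations that solve the PDE can be illustrated very simply for the 1D heat equation, where every feature $e^{-\omega^2 t}\cos(\omega x + b)$ is an exact solution. The sketch below draws a random combination of such features and checks the equation numerically; the frequencies, feature count, and check are illustrative choices, not the paper's EPGP/S-EPGP construction.

```python
import numpy as np

# Minimal sketch: a random linear combination of exact heat-equation solutions
# exp(-w^2 t) * cos(w x + b) is itself an exact solution, so a GP built from
# such features has realizations solving u_t = u_xx.

rng = np.random.default_rng(1)
M = 50
w = rng.normal(0, 2, M)                # spectral frequencies (illustrative choice)
b = rng.uniform(0, 2 * np.pi, M)       # phases
c = rng.normal(0, 1, M) / np.sqrt(M)   # Gaussian weights -> one GP sample

def u(x, t):
    # Each term satisfies u_t = u_xx exactly, hence so does the sum.
    return np.sum(c * np.exp(-w**2 * t) * np.cos(w * x + b))

# Finite-difference check that the sampled function solves the heat equation.
x0, t0, h = 0.3, 0.1, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print("u_t - u_xx =", u_t - u_xx)  # close to zero, up to discretization error
```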
Classically, the development of humanoid robots has been sequential and iterative. Such bottom-up design procedures rely heavily on intuition and are often biased by the designer's experience. Exploiting the non-linear coupled design space of robots is non-trivial and requires a systematic procedure for exploration. We adopt the top-down design strategy, the V-model, used in automotive and aerospace industries. Our co-design approach identifies non-intuitive designs from within the design space and obtains the maximum permissible range of the design variables as a solution space, to physically realise the obtained design. We show that by constructing the solution space, one can (1) decompose higher-level requirements onto sub-system-level requirements with tolerance, alleviating the "chicken-or-egg" problem during the design process, (2) decouple the robot's morphology from its controller, enabling greater design flexibility, (3) obtain independent sub-system level requirements, reducing the development time by parallelising the development process.
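A crude way to picture a solution space is to sample the coupled design space and keep only the designs that meet a top-level requirement, as in the sketch below. The two design variables, the payload, and the safety margin are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy illustration of deriving a solution space: sample a coupled design space,
# keep designs that satisfy a top-level requirement, and report per-variable
# permissible ranges that sub-teams could then work within.

rng = np.random.default_rng(0)
n = 100_000
limb_length = rng.uniform(0.3, 0.9, n)      # m
motor_torque = rng.uniform(20.0, 120.0, n)  # Nm

# Hypothetical top-level requirement: hold a 10 kg payload at the end of the
# fully extended limb against gravity, with a 1.5x safety margin.
required = 1.5 * 10.0 * 9.81 * limb_length
feasible = motor_torque >= required

# The printed intervals are the bounding box of the feasible samples, a crude
# stand-in for a formally derived box-shaped solution space with tolerances.
print("feasible fraction:", feasible.mean())
print("limb length  [m]:", limb_length[feasible].min(), "-", limb_length[feasible].max())
print("motor torque [Nm]:", motor_torque[feasible].min(), "-", motor_torque[feasible].max())
```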
Recent diffusion-based AI art platforms are able to create impressive images from simple text descriptions. This makes them powerful tools for concept design in any discipline that requires creativity in visual design tasks. This is also true for early stages of architectural design with multiple stages of ideation, sketching and modelling. In this paper, we investigate how applicable diffusion-based models already are to these tasks. We research the applicability of the platforms Midjourney, DALL-E 2 and StableDiffusion to a series of common use cases in architectural design to determine which are already solvable or might soon be. We also analyze how they are already being used by examining a data set of 40 million Midjourney queries with NLP methods to extract common usage patterns. With these insights, we derived a workflow for interior and exterior design that combines the strengths of the individual platforms.
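As a simplified stand-in for that prompt analysis, the sketch below counts frequent bigrams over a tiny, invented set of text-to-image prompts; the study itself applied more elaborate NLP methods to roughly 40 million real Midjourney queries.

```python
from collections import Counter
from itertools import islice

# Simplified stand-in for the prompt analysis: count frequent bigrams across a
# corpus of text-to-image prompts to surface common usage patterns. The sample
# prompts below are invented for illustration.

prompts = [
    "modern house exterior with large windows, photorealistic",
    "minimalist interior design, living room, large windows",
    "concept sketch of a modern house facade",
]

def bigrams(text):
    tokens = [t.strip(",.") for t in text.lower().split()]
    return zip(tokens, islice(tokens, 1, None))

counts = Counter(bg for p in prompts for bg in bigrams(p))
print(counts.most_common(3))  # e.g., ("large", "windows") and ("modern", "house")
```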
With the rise of AI and automation, moral decisions are being put into the hands of algorithms that were formerly the preserve of humans. In autonomous driving, a variety of such decisions with ethical implications are made by algorithms for behavior and trajectory planning. Therefore, we present an ethical trajectory planning algorithm with a framework that aims at a fair distribution of risk among road users. Our implementation incorporates a combination of five essential ethical principles: minimization of the overall risk, priority for the worst-off, equal treatment of people, responsibility, and maximum acceptable risk. To the best of the authors' knowledge, this is the first ethical algorithm for trajectory planning of autonomous vehicles in line with the 20 recommendations from the EU Commission expert group and with general applicability to various traffic situations. We showcase the ethical behavior of our algorithm in selected scenarios and provide an empirical analysis of the ethical principles in 2000 scenarios. The code used in this research is available as open-source software.
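A hedged sketch of how several of these principles could be aggregated into a single trajectory score is given below. It covers only overall risk minimization, priority for the worst-off, and the maximum acceptable risk cap, with invented weights and risk values; it is not the paper's actual planner or formalization.

```python
# Illustrative aggregation of risk terms for scoring candidate trajectories.
# Weights, the cap, and the per-road-user risks are placeholder assumptions.

MAX_ACCEPTABLE_RISK = 0.03  # hard upper bound per road user (assumed value)

def trajectory_cost(risks, w_total=1.0, w_worst_off=2.0):
    """risks: dict road_user -> estimated harm risk for one candidate trajectory."""
    if max(risks.values()) > MAX_ACCEPTABLE_RISK:
        return float("inf")  # maximum-acceptable-risk principle: reject outright
    total = sum(risks.values())  # minimization of the overall risk
    worst = max(risks.values())  # priority for the worst-off
    return w_total * total + w_worst_off * worst

candidates = {
    "swerve_left": {"ego": 0.004, "pedestrian": 0.010, "cyclist": 0.002},
    "brake_hard":  {"ego": 0.012, "pedestrian": 0.003, "cyclist": 0.003},
    "keep_lane":   {"ego": 0.002, "pedestrian": 0.050, "cyclist": 0.001},  # exceeds cap
}
best = min(candidates, key=lambda k: trajectory_cost(candidates[k]))
print("selected trajectory:", best)
```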
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.