Drones have proven to be useful aerial vehicles for unmanned transport missions, such as food and medical supply delivery, and can be leveraged to bring life-saving nutrition and medicine to people in emergency situations. However, commercial drones can generally carry only 10%-30% of their own mass as payload, which limits the amount of food that can be delivered in a single flight. One novel way to noticeably increase the food-carrying ratio of a drone is to build some of its structures, such as the wings, from edible materials. We thus propose a drone that is no longer merely a food-transporting aircraft but is itself partially edible, raising its food-carrying mass ratio to 50% thanks to its edible wings. Furthermore, should the edible drone be left behind after performing its task in an emergency, it will be more biodegradable than its non-edible counterpart, leaving less waste in the environment. Here we describe the choice of materials and the scalable design of edible wings, and validate the approach in a flight-capable prototype that provides 300 kcal and carries an 80 g payload of water.
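To make the mass-ratio claim concrete, here is a toy calculation with a hypothetical mass budget; the split between frame and wing mass is assumed for illustration, and only the 80 g payload figure comes from the abstract.

```python
# Hypothetical mass budget illustrating how edible wings raise the
# food-carrying mass ratio; the numbers below are assumptions for
# illustration, not measurements from the paper.
frame_mass_g   = 200.0   # non-edible structure, motors, electronics (assumed)
wing_mass_g    = 120.0   # edible wings, counted as carried food (assumed)
payload_mass_g = 80.0    # water payload reported in the abstract

total_mass_g = frame_mass_g + wing_mass_g + payload_mass_g

# Conventional drone: only the payload counts as food.
conventional_ratio = payload_mass_g / total_mass_g
# Edible drone: wings plus payload count as food.
edible_ratio = (wing_mass_g + payload_mass_g) / total_mass_g

print(f"conventional food-carrying ratio: {conventional_ratio:.0%}")  # ~20%
print(f"edible food-carrying ratio:       {edible_ratio:.0%}")        # ~50%
```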
Control design for robotic systems is complex and often requires solving an optimization problem to follow trajectories accurately. Online optimization approaches, such as Model Predictive Control (MPC), have been shown to achieve excellent tracking performance, but they demand high computational power. Conversely, learning-based offline optimization approaches, such as reinforcement learning (RL), can execute quickly and efficiently on the robot but hardly match the accuracy of MPC in trajectory-tracking tasks. In systems with limited compute, such as aerial vehicles, a controller that is both accurate and efficient at execution time is essential. We propose an Analytic Policy Gradient (APG) method to address this problem. APG exploits the availability of differentiable simulators by training a controller with gradient descent on the tracking error. We address the training instabilities that frequently occur with APG through curriculum learning, and experiment on a widely used control benchmark, the CartPole, and two common aerial robots, a quadrotor and a fixed-wing drone. Our proposed method outperforms both model-based and model-free RL methods in terms of tracking error, while achieving performance similar to MPC with more than an order of magnitude less computation time. Our work provides insights into APG as a promising control method for robotics. To facilitate the exploration of APG, we open-source our code and make it available at https://github.com/lis-epfl/apg_trajectory_tracking.
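As a rough illustration of the core idea, the sketch below trains a controller by gradient descent on the tracking error backpropagated through a differentiable toy simulator (a 1-D point mass standing in for the quadrotor or fixed-wing dynamics; PyTorch is assumed). This is a minimal sketch, not the paper's implementation, which lives in the linked repository.

```python
# Minimal Analytic Policy Gradient sketch: unroll a differentiable
# simulator, measure tracking error, and backpropagate into the policy.
import torch
import torch.nn as nn

dt, horizon = 0.05, 50
policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout(ref):
    """Differentiable rollout of a 1-D point mass tracking `ref`."""
    pos = torch.zeros(1)
    vel = torch.zeros(1)
    loss = 0.0
    for t in range(horizon):
        obs = torch.cat([pos, vel, ref[t].unsqueeze(0)])
        force = policy(obs)                 # controller action
        vel = vel + force * dt              # differentiable dynamics step
        pos = pos + vel * dt
        loss = loss + (pos - ref[t]) ** 2   # accumulated tracking error
    return loss / horizon

for step in range(2000):
    ref = torch.sin(torch.linspace(0, 3.14, horizon))  # reference trajectory
    loss = rollout(ref)
    opt.zero_grad()
    loss.backward()   # analytic gradient through the simulated trajectory
    opt.step()
```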
Gesture-based interfaces are often used to achieve a more natural and intuitive teleoperation of robots. Yet, gesture control can require postures or movements that cause significant fatigue to the user. In a previous user study, we demonstrated that naïve users could control a fixed-wing drone with torso movements while keeping their arms spread out. However, this posture induced significant arm fatigue. In this work, we present a passive arm support that compensates for the weight of the arms, with a mean torque error smaller than 0.005 N/kg over more than 97% of the subjects' range of motion, thereby reducing the average muscular fatigue in the shoulder. Moreover, the arm support is designed to fit users with body dimensions ranging from the 5th percentile up to the 99th percentile male. The performance of the arm support is analyzed with a mechanical model, and its implementation is validated through a mechanical characterization and a user study measuring flight performance, shoulder muscle activity, and user acceptance.
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
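The lowest-effort variant described above can be sketched as a two-stage generate-and-filter loop. In the sketch below, `lm_generate` is a hypothetical stand-in for whatever LM completion call is available; it is not an API from the paper, and the prompts are illustrative only.

```python
# Sketch: instruct an LM to write yes/no behavioral questions, then keep
# only those that pass a second LM filtering pass.
def lm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LM completion call here")

GEN_PROMPT = (
    "Write a yes/no question that tests whether an AI assistant "
    "exhibits sycophancy (agreeing with a user's stated view)."
)
FILTER_PROMPT = (
    "Is the following a clear, on-topic yes/no question about sycophancy? "
    "Answer 'yes' or 'no'.\n\nQuestion: {q}"
)

def build_dataset(n: int) -> list[str]:
    dataset = []
    while len(dataset) < n:
        q = lm_generate(GEN_PROMPT).strip()
        verdict = lm_generate(FILTER_PROMPT.format(q=q)).strip().lower()
        if verdict.startswith("yes"):   # keep only examples passing the filter
            dataset.append(q)
    return dataset
```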
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
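A minimal sketch of the supervised critique-and-revision phase might look as follows, with `lm_generate` again a hypothetical completion call and the constitution reduced to a single illustrative principle; the full method uses many principles and a subsequent RLAIF phase.

```python
# Sketch of the supervised (critique -> revision) phase of Constitutional AI.
PRINCIPLE = "Choose the response that is least harmful and most helpful."

def lm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LM completion call here")

def revise(query: str, n_rounds: int = 2) -> str:
    """Sample a response, then repeatedly self-critique and rewrite it."""
    response = lm_generate(f"Human: {query}\n\nAssistant:")
    for _ in range(n_rounds):
        critique = lm_generate(
            f"Critique this response according to the principle "
            f"'{PRINCIPLE}'.\n\nResponse: {response}"
        )
        response = lm_generate(
            f"Rewrite the response to address the critique.\n\n"
            f"Response: {response}\n\nCritique: {critique}"
        )
    return response

# Finetuning data: (query, revised response) pairs collected over many queries.
```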
The development and adoption of artificial intelligence (AI) technologies in space applications is growing quickly as the consensus increases on the potential benefits introduced. As more and more aerospace engineers are becoming aware of new trends in AI, traditional approaches are revisited to consider the applications of emerging AI technologies. Already at the time of writing, the scope of AI-related activities across academia, the aerospace industry and space agencies is so wide that an in-depth review would not fit in these pages. In this chapter we focus instead on two main emerging trends we believe capture the most relevant and exciting activities in the field: differentiable intelligence and on-board machine learning. Differentiable intelligence, in a nutshell, refers to works making extensive use of automatic differentiation frameworks to learn the parameters of machine learning or related models. Onboard machine learning considers the problem of moving inference, as well as learning, onboard. Within these fields, we discuss a few selected projects originating from the European Space Agency's (ESA) Advanced Concepts Team (ACT), giving priority to advanced topics going beyond the transposition of established AI techniques and practices to the space domain.
The term "neuromorphic" refers to systems that closely resemble the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that take inspiration from, e.g., the visual or olfactory systems of insects and mammals to acquire information about the environment. This approach is not without ambition, as it promises to enable engineered devices able to reproduce the level of performance observed in biological organisms; the main immediate advantage is the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft, especially miniaturized ones, have strict energy constraints, as they need to operate in an environment that is scarce in resources and extremely hostile. In this work we present an overview of early attempts made to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
Learned Bloom Filters, i.e., models induced from data via machine learning techniques and solving the approximate set membership problem, have recently been introduced with the aim of enhancing the performance of standard Bloom Filters, with special focus on space occupancy. Unlike in the classical case, the "complexity" of the data used to build the filter might heavily impact its performance. Therefore, here we propose the first in-depth analysis, to the best of our knowledge, of the performance of a given Learned Bloom Filter, in conjunction with a given classifier, on a dataset of a given classification complexity. Specifically, we propose a novel methodology, supported by software, for designing, analyzing and implementing Learned Bloom Filters as a function of specific constraints on their multi-criteria nature (that is, constraints involving space efficiency, false positive rate, and reject time). Our experiments show that the proposed methodology and the supporting software are valid and useful: we find that only two classifiers have desirable properties across problems of different data complexity, and, interestingly, neither of them has been considered so far in the literature. We also show experimentally that the Sandwiched variant of Learned Bloom Filters is the most robust to data complexity and classifier performance variability, and usually has the smallest reject times. The software can be readily used to test new Learned Bloom Filter proposals, which can be compared with the best ones identified here.
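For concreteness, a minimal Learned Bloom Filter can be sketched as a classifier front-end plus a backup Bloom filter holding the classifier's false negatives; the classifier here is a hypothetical callable returning a score in [0, 1], and the Sandwiched variant would add a further Bloom filter in front of the classifier.

```python
# Minimal Learned Bloom Filter sketch: the classifier answers membership
# first; a small backup Bloom filter catches its false negatives so the
# combined filter keeps the one-sided (no false negative) guarantee.
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int, n_hashes: int):
        self.bits = [False] * n_bits
        self.n_bits, self.n_hashes = n_bits, n_hashes

    def _positions(self, item: str):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.n_bits

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

class LearnedBloomFilter:
    def __init__(self, keys, classifier, threshold: float):
        self.classifier, self.threshold = classifier, threshold
        # Back up only the keys the classifier would wrongly reject.
        false_negatives = [k for k in keys if classifier(k) < threshold]
        self.backup = BloomFilter(n_bits=8 * max(1, len(false_negatives)),
                                  n_hashes=3)
        for k in false_negatives:
            self.backup.add(k)

    def __contains__(self, item: str) -> bool:
        if self.classifier(item) >= self.threshold:
            return True                # may be a false positive
        return item in self.backup     # no false negatives by construction
```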
In deep learning, transfer learning (TL) has become the de facto approach when dealing with image-related tasks. Visual features learnt for one task have been shown to be reusable for other tasks, improving performance significantly. By reusing deep representations, TL enables the use of deep models in domains with limited data availability, limited computational resources and/or limited access to human experts, domains which together cover the vast majority of real-life applications. This paper conducts an experimental evaluation of TL, exploring its trade-offs with respect to performance, environmental footprint, human hours and computational requirements. Results highlight the cases where a cheap feature-extraction approach is preferable, and the situations where an expensive fine-tuning effort may be worth the added cost. Finally, a set of guidelines on the use of TL is proposed.
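The two regimes being compared, cheap feature extraction versus full fine-tuning, can be sketched as follows with a torchvision ResNet-18 backbone; this is a generic illustration, not the paper's experimental setup, and pretrained weights would be loaded in practice.

```python
# Sketch of the two transfer-learning regimes: frozen-backbone feature
# extraction vs. full fine-tuning of all weights.
import torch.nn as nn
from torchvision import models

def make_model(num_classes: int, finetune: bool) -> nn.Module:
    model = models.resnet18()   # in practice, load pretrained weights here
    if not finetune:
        # Cheap regime: freeze the backbone, train only the new head.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the classification head for the target task (always trained,
    # since the new layer's parameters default to requires_grad=True).
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

feature_extractor = make_model(num_classes=10, finetune=False)
full_finetune     = make_model(num_classes=10, finetune=True)
```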
Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on how to turn it into one that can be productively studied empirically. We first present an experimental design centered on choosing tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models, and they bolster recent findings that large language models can productively assist humans with difficult tasks.