This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. Here we employed the iCub robot to observe predefined actions performed by human partners using four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages the complementary characteristics of three color cameras and one depth sensor, and in most cases improves recognition accuracy compared to models trained on a single modality. The results indicate that the proposed models can be deployed on the iCub robot in scenarios that require multimodal action recognition, including social tasks such as partner-specific adaptation and contextual behavior understanding, to mention a few.
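To make the fusion step concrete, here is a minimal late-fusion sketch in Python, assuming each modality (three RGB views and one depth stream) has its own trained classifier that emits class probabilities; the function name, weights, and numbers are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def ensemble_predict(modality_probs, weights=None):
    """Late-fusion ensemble: combine class-probability vectors from
    per-modality classifiers (e.g., three RGB views and one depth stream).

    modality_probs: list of arrays, each of shape (n_classes,), one per modality.
    weights: optional per-modality weights (e.g., from validation accuracy).
    """
    probs = np.stack(modality_probs)            # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(modality_probs))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                    # normalize to a convex combination
    fused = weights @ probs                     # weighted average of probabilities
    return int(np.argmax(fused))                # predicted action label

# Hypothetical usage: four modality outputs over 5 action classes.
rgb1 = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
rgb2 = np.array([0.20, 0.50, 0.10, 0.10, 0.10])
rgb3 = np.array([0.10, 0.40, 0.30, 0.10, 0.10])
depth = np.array([0.05, 0.70, 0.05, 0.10, 0.10])
print(ensemble_predict([rgb1, rgb2, rgb3, depth]))  # -> 1
```

A weighted average of per-modality probabilities is one common way to let a strong modality dominate while weaker ones still contribute; the weights could, for instance, be set from per-modality validation accuracy.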
We consider a setting in which a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each holding a subset of the data. Distributed SGD can suffer from stragglers, i.e., slow or unresponsive workers that cause delays. One solution studied in the literature is to wait, at each iteration, for the responses of the fastest $k < n$ workers before updating the model, where $k$ is a fixed parameter. The choice of $k$ presents a trade-off between the runtime (i.e., convergence rate) of SGD and the model error. Towards optimizing this error-runtime trade-off, we investigate distributed SGD with an adaptive $k$, i.e., a $k$ that varies throughout the runtime of the algorithm. We first design an adaptive policy for varying $k$ that optimizes this trade-off based on an upper bound, which we derive, on the error as a function of wall-clock time. We then propose and implement an algorithm for adaptive distributed SGD based on a statistical heuristic. Our results show that the adaptive version of distributed SGD can reach lower error values in less time than non-adaptive implementations. Moreover, the results show that the adaptive version is communication-efficient: the amount of communication required between the master and the workers is smaller than that of non-adaptive versions.
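To illustrate the fastest-$k$ mechanism, below is a minimal simulation sketch in Python, assuming exponentially distributed worker response times and a simple linearly growing $k$ schedule (small $k$ early for speed, large $k$ late for low-variance gradients). This is only an illustration of the error-runtime trade-off, not the paper's derived policy or its statistical heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_adaptive_k_sgd(n_workers=10, n_iters=200, dim=5, lr=0.1):
    """Straggler-aware SGD simulation: at each iteration, the master waits
    only for the fastest k of n workers before updating the model.
    The linear k schedule below is an illustrative heuristic."""
    w_true = rng.normal(size=dim)               # hypothetical target model
    w = np.zeros(dim)
    clock = 0.0
    for t in range(n_iters):
        k = max(1, int(n_workers * (t + 1) / n_iters))   # adaptive k schedule
        delays = rng.exponential(1.0, size=n_workers)    # workers' response times
        clock += np.sort(delays)[k - 1]                  # wait for the k-th fastest
        # Each of the k fastest workers returns a noisy gradient estimate
        # of the quadratic loss 0.5 * ||w - w_true||^2.
        grads = [(w - w_true) + rng.normal(scale=1.0, size=dim) for _ in range(k)]
        w -= lr * np.mean(grads, axis=0)                 # averaged update
    return clock, np.linalg.norm(w - w_true)             # wall-clock time, error

print(simulate_adaptive_k_sgd())
```

Comparing this against fixed-$k$ runs (replace the schedule with a constant) shows the trade-off the abstract describes: a small fixed $k$ finishes iterations quickly but plateaus at a higher error, while a large fixed $k$ converges to a lower error but accrues wall-clock time waiting for stragglers.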