Paper Title
Using Sensory Time-cue to enable Unsupervised Multimodal Meta-learning

Authors

Qiong Liu, Yanxia Zhang

Abstract
As data from IoT (Internet of Things) sensors become ubiquitous, state-of-the-art machine learning algorithms face many challenges in using sensor data directly. To overcome these challenges, methods must be designed to learn directly from sensors without manual annotations. This paper introduces Sensory Time-cue for Unsupervised Meta-learning (STUM). Different from traditional learning approaches, which depend heavily either on labels or on time-independent feature extraction assumptions (such as Gaussian-distributed features), the STUM system uses the time relations of inputs to guide feature space formation within and across modalities. The fact that STUM learns from a variety of small tasks may place this method in the camp of Meta-Learning. Different from existing Meta-Learning approaches, STUM learning tasks are composed within and across multiple modalities based on time-cues that co-exist with the IoT streaming data. In an audiovisual learning example, because consecutive visual frames usually contain the same object, this approach provides a unique way to organize features from the same object together. The same method can also group visual object features together with the object's spoken-name features if the spoken name is presented with the object at about the same time. This cross-modality feature organization may further help organize visual features that belong to similar objects but were acquired at different locations and times. Promising results are achieved through evaluations.
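The core idea of the time cue can be sketched in a few lines: samples from two modality streams whose timestamps fall within a small window are treated as positive pairs, and a contrastive-style objective pulls their features together while pushing unrelated features apart. This is only a minimal illustration under stated assumptions; the function names, the pairing window, and the margin-based loss below are hypothetical choices, not the paper's actual STUM formulation.

```python
import numpy as np

def time_cue_pairs(times_a, times_b, window=0.5):
    """Pair indices from two modality streams (e.g. video frames and
    spoken names) whose timestamps are within `window` seconds of each
    other -- the temporal co-occurrence cue (hypothetical helper)."""
    pairs = []
    for i, ta in enumerate(times_a):
        for j, tb in enumerate(times_b):
            if abs(ta - tb) <= window:
                pairs.append((i, j))
    return pairs

def contrastive_time_loss(feats_a, feats_b, pairs, margin=1.0):
    """Illustrative margin loss: temporally co-occurring cross-modal
    features are pulled together (squared distance), non-co-occurring
    ones are pushed at least `margin` apart (hinge term)."""
    pos = set(pairs)
    loss = 0.0
    for i in range(len(feats_a)):
        for j in range(len(feats_b)):
            d = np.linalg.norm(feats_a[i] - feats_b[j])
            if (i, j) in pos:
                loss += d ** 2
            else:
                loss += max(0.0, margin - d) ** 2
    return loss / (len(feats_a) * len(feats_b))

# A visual frame at t=0.1 s co-occurs with a spoken name at t=0.0 s,
# while a frame at t=5.0 s does not.
frame_times = [0.1, 5.0]
speech_times = [0.0]
pairs = time_cue_pairs(frame_times, speech_times)  # -> [(0, 0)]
```

Minimizing such a loss over many short temporal windows would yield the many small learning tasks that the abstract associates with the meta-learning setting.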