Abstract
Person identification has long been one of the most popular technology applications. Many devices and products for person identification have been commercialized, such as radio-frequency identification (RFID), face recognition, and iris recognition. However, most identification approaches rely on a single technology and therefore have limitations in real environments; for example, they are strongly restricted by specific scenarios and the spatial conditions of the deployment site. In this paper, we propose a data fusion method that combines three kinds of sensors: a camera, inertial sensors, and compasses. The camera captures video of the whole space, and with AI algorithms the positions and trajectories of the recorded subjects can be calculated and identified. Each user is equipped with a wearable device, which captures the user's motion without any spatial constraints. Because the video is not used for face or iris recognition, video quality is not a concern here and the problem of privacy violation is avoided. We propose a feature fusion algorithm that considers not only the motion trajectory of the subject but also its temporal characteristics. With the proposed methods, users and wearable devices are paired, so each user can be identified via his or her wearable device, which has a unique ID. Experiments show that our system achieves a recognition rate of over 95%. A prototype implementation is also demonstrated to verify the feasibility of the proposed approach.