English Abstract
The polygraph test can be used for criminal investigation and prevention. Driven by polygraph policy requirements and the practical needs of examination management, contactless lie detection is expected to become a trend in our country sooner rather than later. The goal of this research is to create a prototype of a locally developed acoustic lie detection technique based on this concept. Because Chinese-language audio databases are scarce, this study relies on the principle of personalized lie detection. In a simulation test, sample data sets from 27 participants were obtained, containing truthful and deceptive audio files in varied chronological sequences. For the real polygraph cases, audio files of the neutral, relevant, and comparison questions from the pretest interview stage were collected from 50 genuine cases. When the audio files are edited, only the subjects' voices are preserved. During data processing, these files undergo voice activity detection, openSMILE feature extraction and selection, and discriminant analysis. The results indicate that identifying truth solely on the basis of a fixed set of acoustic features is unreliable. The cross-validation accuracy rate is 85% in the simulation cases and 92% in the real cases. Examining the classification results shows that the acoustic features used for classification are largely the same across subjects, with subtle individual differences. The number of features selected varied with recording quality; more features were selected from higher-quality recordings, but there is no positive correlation between the number of features selected and the accuracy rate. Based on these results, we believe that the personalized acoustic analysis methodology used in this study has the potential to be applied to local lie detection practice. Subsequent research will concentrate on collecting a standard corpus for each subject for comparison.
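
For illustration only, the following is a minimal sketch of a per-subject classification pipeline of the kind described above (feature extraction with openSMILE, feature selection, discriminant analysis, cross-validation). It assumes the opensmile Python package and scikit-learn; the file names, labels, number of selected features, and the use of SelectKBest with linear discriminant analysis are assumptions for the sketch, not the exact configuration used in the study.

    # Sketch of a per-subject truth/deception classifier.
    # Assumes clips have already been edited so that only the subject's
    # voice remains (voice activity detection done beforehand).
    import numpy as np
    import opensmile
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Extract openSMILE functionals (eGeMAPS set) for each clip.
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.eGeMAPSv02,
        feature_level=opensmile.FeatureLevel.Functionals,
    )

    # Hypothetical clips and labels for one subject: 0 = truth, 1 = deception.
    clips = [f"subject01_q{i:02d}.wav" for i in range(1, 7)]
    labels = np.array([0, 1, 0, 1, 0, 1])

    X = np.vstack([smile.process_file(p).to_numpy() for p in clips])

    # Feature selection followed by discriminant analysis, evaluated
    # with leave-one-out cross-validation within the subject.
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=10),
        LinearDiscriminantAnalysis(),
    )
    scores = cross_val_score(model, X, labels, cv=LeaveOneOut())
    print("per-subject cross-validation accuracy:", scores.mean())

In a personalized setting like the one in this study, such a model would be fitted and validated separately for each subject, so the selected features can differ from person to person while the overall pipeline stays the same.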