English Abstract
With the evolution of digital technology, smartphones have become indispensable in most people's daily lives. Alongside this convenience, however, new types of fraud have emerged. In August 2023, domestic police uncovered a fraud case involving audio deepfakes, in which scammers impersonated a convenience store supervisor on calls to store employees and instructed them to operate in-store machines and purchase game points. Similar incidents have occurred abroad in recent years, highlighting that even a familiar voice on the phone cannot be trusted: "hearing is not believing." Against such audio deepfake fraud, current domestic call-screening functions identify fraud or harassment calls mainly by the caller's number and cannot analyze the call audio itself in real time. This research therefore proposes a preventive method: when a user answers a smartphone call, a mobile app scans the voice features with a deep learning model that achieves 98.1% accuracy, instantly estimates the probability that the call audio is deepfaked, and alerts the user with a pop-up warning window, prompting further verification and helping the user avoid sophisticated audio fraud traps. Although government agencies have drawn attention to AI fraud and promoted public awareness in recent years, it is unrealistic to expect everyone to remain vigilant on their own. By building this smartphone deepfake-audio fraud alert system, we give users detection information at the moment they need it, improving their chances of avoiding this new type of telecom fraud.
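The detection flow summarized above (scan voice features, estimate a deepfake probability, warn the user) can be illustrated with a minimal sketch. This is not the authors' implementation: the MFCC features, the one-second analysis window, the 0.5 alert threshold, and the score_segment stub standing in for the trained deep learning model are all illustrative assumptions.

# Minimal sketch of the alert pipeline described in the abstract (assumptions noted above).
import numpy as np
import librosa

def extract_features(samples: np.ndarray, sr: int) -> np.ndarray:
    """Summarize a short audio segment as mean MFCC coefficients (assumed feature set)."""
    mfcc = librosa.feature.mfcc(y=samples, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def score_segment(features: np.ndarray) -> float:
    """Placeholder for the trained classifier; returns a deepfake probability.

    In the real system this would be replaced by the deployed deep learning
    model running on-device."""
    return float(np.clip(np.abs(features).mean() / 100.0, 0.0, 1.0))

def monitor_call(samples: np.ndarray, sr: int, threshold: float = 0.5) -> None:
    """Score each one-second window of call audio and warn if any window looks synthetic."""
    window = sr  # one second of samples per analysis window
    for start in range(0, len(samples) - window + 1, window):
        segment = samples[start:start + window]
        prob = score_segment(extract_features(segment, sr))
        if prob >= threshold:
            # In the mobile app this would trigger the pop-up warning window.
            print(f"WARNING: audio at {start / sr:.0f}s may be deepfaked (p={prob:.2f})")
            return
    print("No synthetic speech detected in this call segment.")

if __name__ == "__main__":
    # Stand-in audio (a low-amplitude tone) in place of real call audio.
    sr = 16000
    t = np.linspace(0, 3, 3 * sr, endpoint=False)
    call_audio = (0.1 * np.sin(2 * np.pi * 220 * t)).astype(np.float32)
    monitor_call(call_audio, sr)

On a real device the same loop would run on short buffers captured from the ongoing call rather than on a pre-recorded array, so the warning can appear while the conversation is still in progress, as the abstract describes.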