English Abstract
Spoken Language Understanding (SLU) is an important component of spoken dialogue systems and involves two subtasks: slot filling and intent detection. In the SLU task, joint learning has proven effective because intent classes and slot labels can share semantic information with each other. However, because of the high cost of building manually labeled datasets, data scarcity has become a major bottleneck for domain adaptation in SLU. Recent studies on text generation models, such as Dirichlet variational autoencoders (DVAE), have shown excellent results in generating natural sentences and in semi-supervised learning. Inspired by this, we first propose a new generative model, DVAE-SLU, which exploits the generative ability of DVAE to produce complete labeled utterances. Furthermore, based on DVAE-SLU, we propose a semi-supervised learning model, SDVAE-SLU, for joint slot filling and intent detection. Unlike previous methods, this is the first work to generate SLU datasets using DVAE. Experimental results on two classic datasets demonstrate that, compared with baseline methods, existing SLU models achieve better performance when trained on synthetic utterances generated by DVAE-SLU, and they also confirm the effectiveness of SDVAE-SLU.