English Abstract
Machine learning (ML) has been applied to the assessment of psychological characteristics, but most studies have used self-reported scale scores as annotations, while only a few have used expert ratings as labels. Studies that annotate the same data both ways are lacking, even though dual annotation would allow researchers to compare models trained on self-reported versus expert-rated annotations and to derive new knowledge from the comparison. This study explores the application of ML to career interests in the context of postmodern career counseling and consultation, examining both how well ML can learn expert ratings and how well it can approximate self-reported scale scores. Using a training set of 1007 university students, two ML models were trained, one annotated by experts and one by scales; model quality and validity were then analyzed on a test set of 250 samples. Results revealed that applying ML to approximate scale scores yielded only low model quality (r = .26), with poor convergent and discriminant validity and no evidence of criterion-related validity; the model quality fell far short of the practical threshold. In contrast, applying ML to learn expert ratings showed moderate model performance (r = .60) with satisfactory convergent and discriminant validity. Although criterion-related validity did not reach statistical significance, a positive effect was observed to some degree, and the model quality was generally close to the practical threshold. For practical application, it is suggested to follow the established two-stage practice of career counseling and consultation: in the first stage, ML models can assist the initial assessment of career interests; in the second stage, systematic scale measurement can be administered according to the client's needs. Discriminant analysis showed an incremental effect at both stages. Finally, this study also discusses the possible implications and limitations, for ML research, of the low correlation between expert ratings and self-reported scores.
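To make the evaluation design summarized above concrete, the following is a minimal sketch, not the authors' actual pipeline: it trains the same learner twice on one feature matrix, once against expert ratings and once against self-reported scale scores, and reports model quality as the Pearson correlation between held-out predictions and labels, mirroring the r values in the abstract. All data here are synthetic stand-ins, and the choice of a Ridge regressor and the 1007/250 split sizes are assumptions taken only for illustration.

```python
# Illustrative sketch of dual-annotation model comparison (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: 1257 samples (1007 train + 250 test, as in the study),
# each with expert ratings and self-reported scale scores for the same person.
X = rng.normal(size=(1257, 20))  # placeholder feature matrix
y_expert = X @ rng.normal(size=20) + rng.normal(scale=1.0, size=1257)
y_scale = X @ rng.normal(size=20) + rng.normal(scale=3.0, size=1257)

X_tr, X_te, ye_tr, ye_te, ys_tr, ys_te = train_test_split(
    X, y_expert, y_scale, test_size=250, random_state=0
)

# Same learner, two annotation sources.
model_expert = Ridge().fit(X_tr, ye_tr)
model_scale = Ridge().fit(X_tr, ys_tr)

# Model quality as correlation between predictions and held-out labels.
r_expert, _ = pearsonr(model_expert.predict(X_te), ye_te)
r_scale, _ = pearsonr(model_scale.predict(X_te), ys_te)
print(f"expert-annotated model: r = {r_expert:.2f}")
print(f"scale-annotated model:  r = {r_scale:.2f}")
```

Because both models share features, learner, and split, any gap in held-out r isolates the effect of the annotation source, which is the comparison the abstract draws between the r = .60 and r = .26 results.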