Test equating is a statistical process that adjusts scores from different test forms onto a common scale, so that scores obtained on different forms can be compared. Under item response theory, equating requires anchor items that appear on every form and serve as the link between them. This study investigates how estimation errors in anchor-item parameters affect test equating. The results show that measurement error in the anchor items carries over directly into the equating, with errors in the difficulty parameters having the greater impact; increasing the number of test items reduces equating bias, whereas the number of examinees has little effect; and equating performs best when anchor items make up 20% to 30% of the test.
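As a minimal illustration of the mechanism the abstract describes (not the paper's actual simulation design), the sketch below applies mean-mean equating under the Rasch model: Form Y is placed on Form X's scale by shifting it by the difference between the mean anchor-item difficulties on the two forms. Injecting estimation error into the Form Y anchor difficulties then perturbs the equating constant directly, so anchor-item error translates into equating bias. All names and error magnitudes here are illustrative assumptions.

```python
import random
import statistics

random.seed(0)


def mean_mean_constant(anchor_b_x, anchor_b_y):
    """Mean-mean equating constant A = mean(b_X) - mean(b_Y) over anchors."""
    return statistics.mean(anchor_b_x) - statistics.mean(anchor_b_y)


# Hypothetical setup: the true anchor difficulties are identical on both
# forms, so the true equating constant is exactly 0.
true_b = [random.gauss(0.0, 1.0) for _ in range(10)]

# Add estimation error only to the Form Y anchor difficulty estimates.
sd_error = 0.3  # assumed standard deviation of the estimation error
noisy_b_y = [b + random.gauss(0.0, sd_error) for b in true_b]

# Any departure from 0 is bias introduced purely by anchor-item error.
bias = mean_mean_constant(true_b, noisy_b_y)
print(f"equating-constant bias from anchor error: {bias:+.3f}")
```

With error-free anchors the constant recovers the true shift exactly; with noisy anchors the constant deviates, and averaging over more anchor items shrinks that deviation, consistent with the finding that a larger anchor set (up to a point) stabilizes the equating.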