| English Abstract |
Detecting road marking signs is a pivotal aspect of advanced driver-assistance systems (ADAS) and autonomous driving, providing crucial input for decision-making. However, the accuracy of road marking sign detection can be significantly affected by environmental factors, particularly in low-light conditions such as at night or within tunnels. In our research, we utilized two variants of YOLOv7, standard YOLOv7 and YOLOv7-tiny, combined with contrast enhancement techniques to improve the detection of road marking signs, focusing specifically on low-light scenarios. Contrast-limited adaptive histogram equalization (CLAHE) and linear image fusion are employed and tested for detecting road marking signs at night. The Taiwan road marking sign dataset at night (TRMSDN) is used in this research. Our evaluation results, comparing standard YOLOv7 and YOLOv7-tiny, reveal that leveraging contrast enhancement techniques can improve detection performance in low-light conditions. Our proposed method, which uses linearly fused images, shows superior performance with 0.735 precision, 0.874 recall, 0.843 mAP@0.5, and 0.798 F1-score for standard YOLOv7. YOLOv7-tiny achieves 0.782 precision, 0.843 recall, 0.850 mAP@0.5, and 0.811 F1-score. Additionally, our experiments show that YOLOv7-tiny performs comparably to standard YOLOv7 in detecting road marking signs, making it a viable option for deployment on edge devices. Limitations of this study include the use of a constant weight during linear fusion and the limited number of images and classes in the dataset. Future research will address these limitations by investigating adaptive weight values and expanding the dataset.
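The contrast-enhancement pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses global histogram equalization as a simplified stand-in for CLAHE (in practice OpenCV's `cv2.createCLAHE` would be applied), and the fusion weight `w = 0.5` is an assumed value, since the abstract only states that a constant weight is used.

```python
import numpy as np

def equalize_hist(gray):
    # Global histogram equalization — a simplified stand-in for CLAHE,
    # which instead equalizes per-tile with a clip limit on the histogram.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def linear_fusion(original, enhanced, w=0.5):
    # Linear image fusion with a constant weight w (the abstract notes this
    # constant weight as a limitation): fused = w*original + (1-w)*enhanced.
    fused = (w * original.astype(np.float32)
             + (1.0 - w) * enhanced.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example on a synthetic dark, low-contrast grayscale image standing in
# for a night-time road scene; the fused image would then be fed to YOLOv7.
rng = np.random.default_rng(0)
img = rng.integers(30, 80, size=(64, 64), dtype=np.uint8)
enhanced = equalize_hist(img)
fused = linear_fusion(img, enhanced, w=0.5)
print(img.std(), enhanced.std(), fused.std())
```

The fused image keeps some of the original's appearance while inheriting the stretched contrast of the equalized image, which is what makes it a useful pre-processing step before detection.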