Traffic sign detection is a popular research direction in the field of intelligent transportation and has attracted wide attention from researchers. However, several key issues still need to be addressed before related technologies can be reliably applied in real-world scenarios, such as the feature extraction scheme for traffic sign images and the optimal choice of detection method. To overcome these difficulties, this paper proposes a YOLO-based traffic sign detection framework. First, a lightweight convolutional attention mechanism is embedded into the backbone network to capture spatial and channel information. Second, a multi-scale awareness module replaces large convolution kernels with stacked 3×3 convolutions, enlarging the model's receptive field and enhancing its feature fusion capability. Finally, CIoU is adopted as the bounding-box loss function to localize objects with high precision. Experimental results show that on the CCTSDB dataset the proposed method reaches an mAP of 91.0%, which is 3.5% higher than the original YOLOv5; it also improves on other mainstream object detection algorithms, demonstrating the effectiveness of the method.
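To make the "lightweight convolutional attention" idea concrete, the following is a minimal illustrative sketch of a CBAM-style channel-plus-spatial attention block of the kind commonly inserted into a YOLO backbone; the module name, reduction ratio, and kernel size are assumptions for illustration, not the paper's exact design.

```python
# Illustrative sketch only: a CBAM-style lightweight channel + spatial attention block.
# Names and hyperparameters (reduction=16, spatial_kernel=7) are assumptions.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: squeeze spatial dims with avg/max pooling,
        # then re-weight channels through a small shared MLP (1x1 convs).
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention: a single conv over channel-pooled maps.
        self.spatial_conv = nn.Conv2d(
            2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2, bias=False
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, shape (B, C, 1, 1).
        avg_pool = torch.mean(x, dim=(2, 3), keepdim=True)
        max_pool = torch.amax(x, dim=(2, 3), keepdim=True)
        channel_att = self.sigmoid(self.mlp(avg_pool) + self.mlp(max_pool))
        x = x * channel_att
        # Spatial attention weights, shape (B, 1, H, W).
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        spatial_att = self.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * spatial_att


if __name__ == "__main__":
    feats = torch.randn(1, 64, 40, 40)               # a backbone feature map
    print(ChannelSpatialAttention(64)(feats).shape)  # torch.Size([1, 64, 40, 40])
```

The same design logic applies to the multi-scale awareness module: two stacked 3×3 convolutions cover the same 5×5 receptive field as one large kernel while using fewer parameters and adding an extra nonlinearity.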
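For reference, the CIoU bounding-box loss mentioned above is commonly written as follows (standard formulation; the paper's own notation may differ):

$$
\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^{2}\!\left(\mathbf{b}, \mathbf{b}^{gt}\right)}{c^{2}} + \alpha v,
\quad
v = \frac{4}{\pi^{2}}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2},
\quad
\alpha = \frac{v}{(1 - \mathrm{IoU}) + v},
$$

where $\mathbf{b}$ and $\mathbf{b}^{gt}$ are the centers of the predicted and ground-truth boxes, $\rho(\cdot)$ is the Euclidean distance, $c$ is the diagonal length of the smallest box enclosing both, and $v$ penalizes aspect-ratio inconsistency.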