With the advancement of virtual reality technology, 360-degree equirectangular images have become a crucial medium for immersive experiences. However, when existing 4K equirectangular images are rendered on virtual reality head-mounted displays, the limited number of pixels allocated to each viewing angle produces blurred or distorted visuals, degrading the user experience. This study develops a 360 SRGAN model with a filter-integrated loss function to address the insufficient resolution of equirectangular projection images. The network employs an optimized 5×5 convolutional kernel suited to the characteristics of 360-degree images, and its loss function combines sharpening-filter and median-filter terms, effectively enhancing image detail while suppressing noise. User perception experiments conducted on a Meta Quest 2 headset with subjective evaluation confirm significant improvements in perceived contrast and sharpness (p < 0.05), with results independent of participants' professional background. This research provides a technical solution for super-resolution of VR equirectangular images and holds substantial application value for 360-degree virtual reality environments.
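To make the filter-integrated loss concrete, the following is a minimal NumPy sketch of one plausible formulation: an L2 pixel term plus L2 terms computed on sharpened and median-filtered versions of the super-resolved and reference images. The 3×3 sharpening kernel, the 3×3 median window, and the weights `lam_sharp` and `lam_median` are illustrative assumptions, not the paper's actual kernel sizes or hyperparameters.

```python
import numpy as np

# Illustrative 3x3 Laplacian-style sharpening kernel (assumed, not from the paper)
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float64)

def conv2d_same(img, kernel):
    """'Same'-size 2D correlation with edge padding (kernel here is symmetric)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def median_filter(img, size=3):
    """Sliding-window median with edge padding."""
    p = size // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def combined_loss(sr, hr, lam_sharp=0.1, lam_median=0.1):
    """Pixel MSE plus sharpening- and median-filtered MSE terms (weights assumed)."""
    pixel = np.mean((sr - hr) ** 2)
    sharp = np.mean((conv2d_same(sr, SHARPEN_KERNEL)
                     - conv2d_same(hr, SHARPEN_KERNEL)) ** 2)
    med = np.mean((median_filter(sr) - median_filter(hr)) ** 2)
    return pixel + lam_sharp * sharp + lam_median * med
```

In a training loop, the same three terms would be expressed with a differentiable framework's convolution and pooling operators; the sketch above only shows how the sharpening term rewards edge fidelity while the median term discourages isolated noisy pixels.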