Recent advances in feature-based knowledge distillation have shown promise in computer vision, yet applying them directly to medical image segmentation remains challenging due to the high intra-class variance and class imbalance inherent in medical images. This paper introduces a novel approach that integrates knowledge distillation with contrastive learning to improve the performance of student networks on medical image segmentation. Leveraging importance maps and region affinity graphs, our method encourages the student network to thoroughly explore the teacher network's regional feature representations, capturing essential structural information and fine-grained details. This process is complemented by class-guided contrastive learning, which sharpens the student network's ability to discriminate between class features and directly targets intra-class variance and inter-class imbalance. Experimental validation on a colorectal cancer tumor dataset demonstrates notable improvements: the student networks ENet, MobileNetV2, and ResNet-18 achieve Dice coefficient gains of 4.92%, 4.34%, and 4.59%, respectively. Benchmarked against the teacher networks FANet, PSPNet, SwinUnet, and AttentionUnet, our best-performing student network outperforms them by 2.45%, 5.84%, 6.58%, and 3.56%, respectively, underscoring the efficacy of combining knowledge distillation with contrastive learning for medical image segmentation.
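
For readers who want a concrete picture of the two mechanisms named above (region affinity graphs and class-guided contrastive learning), the sketch below illustrates one plausible realization in PyTorch. It is a minimal illustration under our own assumptions, not the paper's implementation: the function names, shape conventions, and the temperature tau are hypothetical, and the importance-map weighting is omitted for brevity.

```python
# Minimal sketch (assumptions: PyTorch; feature maps f_s/f_t of shape
# (B, C, H, W); one-hot class masks already at feature resolution; the
# importance-map weighting described in the paper is omitted).
import torch
import torch.nn.functional as F

def _region_means(f, masks):
    # Average-pool features inside each class region -> (B, K, C) descriptors.
    w = masks / (masks.sum(dim=(2, 3), keepdim=True) + 1e-6)
    return torch.einsum('bchw,bkhw->bkc', f, w)

def region_affinity_loss(f_s, f_t, masks):
    # Build a K x K affinity graph (cosine similarities between region
    # descriptors) for student and teacher, then match the two graphs.
    r_s = F.normalize(_region_means(f_s, masks), dim=-1)
    r_t = F.normalize(_region_means(f_t, masks), dim=-1).detach()
    return F.mse_loss(r_s @ r_s.transpose(1, 2), r_t @ r_t.transpose(1, 2))

def class_contrastive_loss(f_s, f_t, masks, tau=0.1):
    # InfoNCE over class prototypes: each student region descriptor is pulled
    # toward the teacher descriptor of the same class and pushed from others.
    z_s = F.normalize(_region_means(f_s, masks), dim=-1)           # (B, K, C)
    z_t = F.normalize(_region_means(f_t, masks), dim=-1).detach()
    logits = z_s @ z_t.transpose(1, 2) / tau                       # (B, K, K)
    K = masks.size(1)
    target = torch.arange(K, device=logits.device).repeat(masks.size(0))
    return F.cross_entropy(logits.reshape(-1, K), target)

# Toy usage: two classes (background + tumor), random features and masks.
B, C, H, W, K = 2, 64, 32, 32, 2
f_s, f_t = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
masks = F.one_hot(torch.randint(0, K, (B, H, W)), K).permute(0, 3, 1, 2).float()
loss = region_affinity_loss(f_s, f_t, masks) + class_contrastive_loss(f_s, f_t, masks)
```

In this reading, the affinity term transfers the teacher's inter-region structure to the student, while the contrastive term separates class-wise features, which is one way the stated intra-class variance and class-imbalance issues could be addressed.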