Video anomaly detection (VAD) is crucial to public safety and intelligent video surveillance systems and has attracted extensive research attention. This paper proposes a video anomaly detection method based on a Cascaded Memory-augmented Autoencoder (CMAAE). CMAAE stores feature prototypes of normal samples in a memory pool and embeds multiple memory-augmentation modules in the encoder-decoder structure. SE attention is introduced into the memory modules to improve their performance, and skip connections share attention weights among the memory modules, enabling the model to learn more comprehensive feature information and improving the quality of reconstructed video frames. During training, multiple loss terms constrain the model to improve anomaly detection accuracy. CMAAE achieves 99.2% on the UCSD Ped2 dataset and 89.4% on the CUHK Avenue dataset, demonstrating the effectiveness of the proposed approach.
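The abstract gives no implementation details, so the following is only a minimal sketch, assuming a PyTorch implementation, of one memory-augmented block that combines SE channel attention with a learnable pool of normal-feature prototypes and accepts skip-connected attention weights from a peer block. All module names, sizes, and the cosine-similarity memory addressing are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEAttention(nn.Module):
    """Squeeze-and-Excitation: global average pooling followed by a two-layer gate."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze to (B, C) and gate
        return x * w.view(b, c, 1, 1), w          # reweighted features + weights for sharing


class MemoryModule(nn.Module):
    """Learnable pool of prototype vectors for normal features, read by
    softmax-normalized cosine similarity (a common memory-addressing choice,
    assumed here rather than taken from the paper)."""

    def __init__(self, num_slots: int, feat_dim: int):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_slots, feat_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, c, h, w = z.shape
        q = z.permute(0, 2, 3, 1).reshape(-1, c)   # one query per spatial position
        attn = F.softmax(
            F.normalize(q, dim=1) @ F.normalize(self.memory, dim=1).t(), dim=1
        )
        z_hat = attn @ self.memory                 # re-express features via prototypes
        return z_hat.view(b, h, w, c).permute(0, 3, 1, 2)


class MemoryBlock(nn.Module):
    """One cascade stage: SE-reweighted features are matched against the memory
    pool; the SE weights can be forwarded to a peer block via a skip connection."""

    def __init__(self, channels: int, num_slots: int = 100):
        super().__init__()
        self.se = SEAttention(channels)
        self.memory = MemoryModule(num_slots, channels)

    def forward(self, x: torch.Tensor, shared_w: torch.Tensor = None):
        x, w = self.se(x)
        if shared_w is not None:                   # attention weights shared from another block
            x = x * shared_w.view(*shared_w.shape, 1, 1)
        return self.memory(x), w


# Toy usage: a second block reuses the SE weights produced by the first.
feat = torch.randn(2, 128, 32, 32)
out1, w1 = MemoryBlock(channels=128)(feat)
out2, _ = MemoryBlock(channels=128)(feat, shared_w=w1)
```

In the full model, several such blocks would presumably sit at different encoder-decoder scales and be trained with the multiple loss terms mentioned above; this sketch only illustrates a single stage.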