As the complexity of robot operating environments increases, the demands on optimal path planning for robots grow accordingly. Most path planning is performed in known environments with static models; planning in complex unknown or dynamic environments remains challenging, as robots may suffer from deadlock and obstacle avoidance failures. Reinforcement learning (RL) can help a fuzzy algorithm optimize its strategy. However, the difficulty of designing rewards in RL means the algorithm requires a large number of samples to learn a strategy, resulting in high computational complexity. To solve these problems, a new local path planning method based on improved fuzzy and Q(λ)-learning algorithms is proposed, aiming to plan the shortest path while avoiding obstacles. To handle breakout and obstacle avoidance, a fuzzy controller is designed. Its two inputs are the distance to the nearest obstacle in front of the mobile robot and the distance between the obstacles in the two breakout directions; its two fuzzy outputs are the mobile robot's running angle and its safe step length. During path planning, the Q(λ)-learning algorithm is used to optimize the weights of the running angle and the safe step, yielding a more accurate robot position and speeding up planning. Furthermore, to solve overlap problems among the starting point, end point, and obstacles, a safer running environment is designed that accounts for the radii of these objects. In addition, a mobile robot breakout scheme and a sustainable obstacle avoidance scheme are designed to solve the deadlock problem and the "large obstacle" avoidance problem, respectively. Simulation results in both sparse and complex operating environments show that the proposed algorithm plans a relatively optimal and safe path, improving the success rate of path planning.
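To make the controller design concrete, the sketch below shows one way such a two-input, two-output Mamdani-style fuzzy controller could be structured. The membership breakpoints, the rule table, and the output centroids are illustrative assumptions, not values taken from the paper.

```python
def rise(x, a, b):
    """Shoulder membership rising from 0 at a to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def fall(x, a, b):
    """Shoulder membership falling from 1 at a to 0 at b."""
    return 1.0 - rise(x, a, b)

def tri(x, a, b, c):
    """Triangular membership with feet at a, c and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzify(d):
    """Grade a distance (m) against assumed near/medium/far fuzzy sets."""
    return {"near": fall(d, 0.5, 1.5),
            "med":  tri(d, 0.5, 1.5, 3.0),
            "far":  rise(d, 1.5, 3.0)}

# Rule table: (front label, gap label) -> (angle centroid [rad], step centroid [m]).
RULES = {
    ("near", "near"): (1.2, 0.1),   # close obstacle, narrow gap: turn hard, creep
    ("near", "med"):  (0.9, 0.2),
    ("near", "far"):  (0.6, 0.3),
    ("med",  "near"): (0.8, 0.3),
    ("med",  "med"):  (0.5, 0.4),
    ("med",  "far"):  (0.3, 0.6),
    ("far",  "near"): (0.4, 0.5),
    ("far",  "med"):  (0.2, 0.7),
    ("far",  "far"):  (0.0, 0.9),   # clear ahead: go straight with a long step
}

def fuzzy_controller(d_front, d_gap):
    """Map the two distance inputs to a (running_angle, safe_step) pair."""
    front, gap = fuzzify(d_front), fuzzify(d_gap)
    num_angle = num_step = den = 0.0
    for (f, g), (angle, step) in RULES.items():
        w = min(front[f], gap[g])            # Mamdani AND via min
        num_angle += w * angle
        num_step  += w * step
        den       += w
    if den == 0.0:
        return 0.0, 0.5                      # assumed default if no rule fires
    return num_angle / den, num_step / den   # weighted-average defuzzification
```

For example, `fuzzy_controller(0.8, 2.0)` (a close frontal obstacle with a moderate breakout gap) yields a pronounced turn with a short, cautious step; the Q(λ)-learning stage described next can then reweight these two outputs.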
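For the Q(λ)-learning stage, the abstract only states that eligibility-trace learning tunes the weights of the running angle and the safe step. Below is a minimal Watkins's Q(λ) sketch under assumed design choices: a discretized distance grid as the state, a small set of candidate (w_angle, w_step) pairs as actions, and ε-greedy exploration. The state encoding, action set, reward, and hyperparameters are all assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 100                       # discretized (d_front, d_gap) grid cells (assumed)
ACTIONS  = [(0.5, 0.5), (0.7, 0.3), (0.3, 0.7), (1.0, 1.0)]  # (w_angle, w_step) candidates (assumed)
ALPHA, GAMMA, LAMBDA, EPS = 0.1, 0.9, 0.8, 0.1               # assumed hyperparameters

Q = np.zeros((N_STATES, len(ACTIONS)))   # action-value table
E = np.zeros_like(Q)                     # eligibility traces

def select_action(s):
    """Epsilon-greedy choice over the candidate weight pairs."""
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS))), True   # exploratory
    return int(np.argmax(Q[s])), False                 # greedy

def q_lambda_update(s, a, r, s_next, exploratory):
    """One Watkins's Q(lambda) backup with replacing traces."""
    delta = r + GAMMA * np.max(Q[s_next]) - Q[s, a]    # TD error
    E[s, a] = 1.0                                      # replacing trace
    Q[:] += ALPHA * delta * E                          # propagate along traces
    if exploratory:
        E[:] = 0.0            # Watkins's variant: cut traces after a non-greedy action
    else:
        E[:] *= GAMMA * LAMBDA
```

Once an action index `a` is chosen, the selected pair `(w_a, w_s) = ACTIONS[a]` would scale the fuzzy controller's angle and step outputs before the robot moves; that combination rule is likewise an assumed reading of the abstract.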
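The overlap-free environment design reduces to circle-circle clearance checks over the start point, end point, and obstacles, each modeled by a radius. A sketch follows, with an assumed safety margin value.

```python
import math

def circles_overlap(c1, r1, c2, r2, margin=0.0):
    """True if two circles (center, radius) overlap within a safety margin."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1]) < r1 + r2 + margin

def environment_is_safe(start, goal, robot_r, obstacles, margin=0.2):
    """Check that the start and goal circles clear every obstacle circle.

    `obstacles` is a list of (center, radius) pairs; the default margin
    is an illustrative assumption, not a value from the paper.
    """
    for center, radius in obstacles:
        if circles_overlap(start, robot_r, center, radius, margin):
            return False
        if circles_overlap(goal, robot_r, center, radius, margin):
            return False
    return True
```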