Jinying Li, Yuting Dai, Chao Yang
DOI Number: N/A
Conference number: IFASD-2024-032
This study investigates the application of reinforcement learning to stall flutter suppression. The geometric model is a NACA0012 airfoil with an actively morphing trailing edge.
First, an offline, fast-responding stall flutter environment is constructed from differential equations, in which the aerodynamic forces are predicted by reduced-order models. A double Q-network (DQN) algorithm is adapted to train the control agent in the proposed offline environment. The agent has five available actions with different morphing amplitudes and directions. The reward function is designed as a linearly combined penalty on pitch angle and pitch rate, a large bonus for complete suppression, and a large penalty for morphing beyond its limit. The trained agent achieves rapid and complete stall flutter suppression in offline simulations, where different sets of observations and scores are discussed.
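The reward design described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the weights, thresholds, morphing limit, and action increments are all assumed values, and the state variables are named hypothetically.

```python
import numpy as np

# All numeric values below are illustrative assumptions, not from the paper.
W_THETA = 1.0                    # assumed weight on the pitch-angle penalty
W_OMEGA = 0.1                    # assumed weight on the pitch-rate penalty
SUPPRESS_BONUS = 100.0           # large bonus for complete suppression
OVERLIMIT_PENALTY = -100.0       # large penalty for over-limit morphing
MORPH_LIMIT = np.deg2rad(10.0)   # assumed trailing-edge deflection limit (rad)
SUPPRESS_TOL = 1e-3              # assumed threshold for "flutter suppressed"

# Five discrete actions: trailing-edge deflection increments (rad) per step,
# covering both morphing directions and two amplitudes, plus "hold".
ACTIONS = np.deg2rad([-2.0, -1.0, 0.0, 1.0, 2.0])

def reward(theta, theta_dot, beta):
    """Reward for pitch angle theta, pitch rate theta_dot, morph angle beta."""
    if abs(beta) > MORPH_LIMIT:
        # Large penalty when the commanded morphing exceeds its limit.
        return OVERLIMIT_PENALTY
    if abs(theta) < SUPPRESS_TOL and abs(theta_dot) < SUPPRESS_TOL:
        # Large bonus once the oscillation is completely suppressed.
        return SUPPRESS_BONUS
    # Otherwise: linearly combined penalty on pitch angle and pitch rate.
    return -(W_THETA * abs(theta) + W_OMEGA * abs(theta_dot))
```

In a DQN setting, the agent would select one of the five `ACTIONS` per time step and receive this scalar reward from the environment after integrating the aeroelastic equations.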