██ CLEARANCE: ALPHA-01  ·  MISSION: ENGAGE  ·  PLATFORM: SIM
ALT   10,000 M
VEL   1021.08 M/S
G-FORCE 32.6 G
TIME  00:08.00
Callsign: MAVERICK  ·  Autonomous Guidance  ·  2025

Guidance
Core
Maverick

Comparative autopilot study: SFI, LQR, and MPC guidance laws evaluated against missile longitudinal dynamics. MPC posts the fastest time-to-target at 8.0 s. Built in MATLAB with CasADi/IPOPT.

8.00s
MPC Time-to-Target
IPOPT
NLP Solver
1021
M/S Velocity
3
Controllers Compared
MISSION ID: MAVERICK-01  ·  OBJECTIVE: INTERCEPT  ·  STATUS: COMPLETE
MODULE SFI  EAGLE
MODULE LQR  TALON
MODULE MPC  VIPER
13.8s
SFI time-to-target
baseline
10.0s
LQR time-to-target
optimal regulator
8.0s
MPC time-to-target
fastest · confirmed
6.06
MPC total Σ|u|
lowest effort
MATLAB simulation — SFI vs LQR vs MPC · Missile longitudinal dynamics · CasADi / IPOPT
Mission briefing

Engage.
Intercept.
Evaluate.

Objective: evaluate three autonomous guidance laws against a linear short-period missile model. Each controller is assessed on trajectory accuracy, control effort, and time-to-target. Simulations executed in MATLAB; the MPC problem is solved online with CasADi and IPOPT at every timestep of a 50-step engagement.


The missile dynamics model is based on Mracek & Ridgely's short-period approximation — a two-state system (angle of attack α, pitch rate q) driven by fin deflection δp. Both controllability and observability confirmed analytically before controller synthesis.
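The rank tests behind that claim can be sketched in a few lines. A Python/NumPy sketch follows; the A, B, C values are illustrative placeholders, not the Mracek & Ridgely coefficients used in the project.

```python
import numpy as np

# Illustrative short-period model: states x = [alpha; q], input delta_p.
# These A, B, C numbers are PLACEHOLDERS, not the project's coefficients.
A = np.array([[-1.3,  1.0],
              [ 2.0, -0.8]])
B = np.array([[ -0.2],
              [-12.0]])
C = np.array([[1.0, 0.0]])   # measure angle of attack

n = A.shape[0]

# Controllability matrix [B, AB] and observability matrix [C; CA]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb) == n)  # controllable?
print(np.linalg.matrix_rank(obsv) == n)  # observable?
```

Full rank in both tests licenses the controller synthesis that follows.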


VIEW MISSION FILES ON GITHUB →
Controller dossiers

Three Pilots

Module SFI  ·  Callsign: EAGLE
State-Feedback Integral
Augmented state-feedback with integral action. Gains placed via pole-placement on the augmented short-period model. Low average actuator effort — the educational baseline that everything else is measured against.
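A minimal sketch of that synthesis, assuming an illustrative plant: augment with the tracking-error integrator, then place poles on the augmented model. The matrices and pole locations below are placeholders, not the project's values.

```python
import numpy as np
from scipy.signal import place_poles

# Placeholder short-period matrices (alpha, q); NOT the project's plant.
A = np.array([[-1.3,  1.0],
              [ 2.0, -0.8]])
B = np.array([[ -0.2],
              [-12.0]])
C = np.array([[1.0, 0.0]])   # tracked output

# Augment with the integral of the tracking error: xi_dot = r - C x
A_aug = np.block([[A, np.zeros((2, 1))],
                  [-C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# Place the closed-loop poles of the augmented system (illustrative choice)
poles = [-3.0, -4.0, -1.5]
K_aug = place_poles(A_aug, B_aug, poles).gain_matrix
K, Ki = K_aug[:, :2], K_aug[:, 2:]   # control law: u = -K x - Ki xi
```

The integrator state drives steady-state tracking error to zero even with plant mismatch, at the cost of a slower transient than the optimal designs.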
Speed
38
Accuracy
45
Efficiency
82
Module LQR  ·  Callsign: TALON
LQR with Integral
Optimal infinite-horizon regulator. DARE solved to obtain K_aug = [K, K_i]. Zero steady-state Az error, reliable transient response, computationally lightweight — suitable for embedded autopilots.
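The DARE step can be sketched with SciPy. The discrete matrices and weights below are illustrative stand-ins (they are not the project's identified plant); the gain recovery matches the K_aug = [K, K_i] structure named above.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder discrete augmented model (alpha, q, error integral), Ts = 0.1 s.
# All numbers are illustrative, NOT the project's plant.
Ad = np.array([[ 0.88, 0.09, 0.0],
               [ 0.18, 0.92, 0.0],
               [-0.10, 0.00, 1.0]])   # last row accumulates tracking error
Bd = np.array([[-0.02],
               [-1.10],
               [ 0.00]])

Q = np.diag([2.0, 0.5, 1.0])   # state + integrator weights (illustrative)
R = np.array([[1.0]])

# Solve the DARE, then form the optimal gain K_aug = [K, K_i]
P = solve_discrete_are(Ad, Bd, Q, R)
K_aug = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
K, Ki = K_aug[:, :2], K_aug[:, 2:]   # u = -K x - K_i * integral(error)
```

One offline Riccati solve, one matrix multiply per step at runtime: that is why this design suits embedded autopilots.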
Speed
62
Accuracy
95
Efficiency
71
Module MPC  ·  Callsign: VIPER
Model Predictive Control
Finite-horizon NLP solved online via CasADi + IPOPT. Enforces actuator bounds (u ∈ [−0.3, 0.2]) directly. Fastest engagement, lowest total Σ|u|. Requires solver warm-starting for real-time ops.
Speed
95
Accuracy
68
Efficiency
88
Viper — MPC Formulation

Predictive
Intercept

The MPC problem is solved fresh at every timestep over a horizon of N=10 steps. The CasADi symbolic model uses discretised longitudinal dynamics, minimising tracking cost plus control effort while enforcing actuator bounds throughout the horizon.
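The discretisation feeding that symbolic model can be sketched with the Van Loan matrix-exponential trick, here in Python with placeholder continuous matrices (only Ts = 0.1 s is taken from the spec):

```python
import numpy as np
from scipy.linalg import expm

# Exact zero-order-hold discretisation at Ts = 0.1 s.
# A and B are PLACEHOLDERS, not the project's short-period coefficients.
A = np.array([[-1.3,  1.0],
              [ 2.0, -0.8]])
B = np.array([[ -0.2],
              [-12.0]])
Ts = 0.1

# Van Loan trick: expm([[A, B], [0, 0]] * Ts) packs Ad and Bd together
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2:] = A, B
Md = expm(M * Ts)
Ad, Bd = Md[:2, :2], Md[:2, 2:]   # x[k+1] = Ad x[k] + Bd u[k]
```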


A soft terminal constraint steers the predicted 3D position toward the impact point [10500m, 300m] — giving the controller spatial awareness beyond the immediate tracking objective.

  • State: α (angle of attack), q (pitch rate)
  • Control: δp (fin deflection) — bounded [−0.3, 0.2] rad
  • Horizon: N=10, timestep Ts=0.1s, 50 simulation steps
  • Qy = diag([2, 0.5]), R = 1 — tracking vs effort tradeoff
  • Terminal cost: Qterm = 0.01·I on 3D position error
  • 3D trajectory integrated symbolically inside the NLP
CasADi / IPOPT — MPC Loop
# Symbolic short-period dynamics
x_next = Ad·x + Bd·u

# Tracking cost across horizon
cost += (C·x − y_ref)ᵀ · Qy · (C·x − y_ref) + u_kᵀ · R · u_k

# 3D trajectory integrated symbolically
x_pred(k+1) = x_pred(k) + Vel·cos(θ)·Ts
z_pred(k+1) = z_pred(k) + Vel·sin(θ)·Ts

# Soft terminal constraint → impact point
cost += (p_N − target)ᵀ · Qterm · (p_N − target)

# Solve with IPOPT, apply only the first control (receding horizon)
solver = nlpsol('solver', 'ipopt', nlp, opts)
sol = solver(x0=w0, lbx=lbw, ubx=ubw)   # bounds enforce u ∈ [−0.3, 0.2]
u0 = sol['x'][nx*(N+1)]                 # first control in the decision vector
After-action report

Performance Matrix

Metric                          SFI — Eagle   LQR — Talon   MPC — Viper
Time to Target (s)                    13.80         10.00        8.00 ✓
Avg Control Effort (rad)               0.04          0.10          0.08
Final Az Error (g)                    19.91        0.00 ✓         14.97
Total Control Effort Σ|u|             48.94         10.34        6.06 ✓

✓ = best in category  ·  MPC: fastest engagement and lowest total effort  ·  LQR: perfect steady-state accuracy
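The effort rows of the matrix reduce to simple sums over the logged fin-deflection trace. A sketch with made-up numbers (not mission data):

```python
import numpy as np

# Hypothetical logged fin deflections over an engagement, in rad
u = np.array([0.2, -0.3, 0.15, -0.1, 0.05])

total_effort = np.sum(np.abs(u))   # "Total Control Effort Σ|u|" row
avg_effort = np.mean(np.abs(u))    # "Avg Control Effort (rad)" row

print(total_effort, avg_effort)
```

Time-to-target and final Az error come off the simulated trajectory itself rather than the control trace.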