GuidanceCore — MAVERICK

Autonomous Missile Guidance using Model Predictive Control
STATUS: TACTICAL READY
Autonomous guidance system — a comparative study of State-Integral feedback (SFI), LQR with integral action (LQR+I), and Model Predictive Control (MPC). Built and simulated in MATLAB; the MPC controller is implemented with CasADi/IPOPT.

[ MISSION BRIEFING // CLEARANCE: ALPHA ]

Objective: Evaluate three autonomous guidance laws—State-Integral feedback (SFI), Linear Quadratic Regulator with integral action (LQR+I), and Model Predictive Control (MPC)—for an interception engagement modeled with linear short-period missile dynamics. Each controller shall be assessed across:

  • Trajectory tracking accuracy under dynamic load
  • Control effort and actuator stress
  • Terminal constraint handling and stabilization time

Intel Source: Missile dynamics are modeled with a linear short-period approximation (Ref. Mracek & Ridgely), simulating autonomous interception without pilot override. All three controllers are simulated in MATLAB against the same plant and reference for a fair comparison; the MPC optimization uses CasADi/IPOPT.

[ NOTE ]
Legacy controllers (SFI/LQR) provide baseline doctrine. MPC introduces predictive capability for next-gen AI-guided interceptors.
[ MODULE SFI // CALLSIGN: EAGLE ]
State-Integral Augmented Feedback — Low energy, educational baseline

Role: Augmented state-feedback with integral action to eliminate steady-state Az error. Designed via pole-placement for an augmented short-period model (α, q) with integral of acceleration error.

Design

  • Augmented integral state: ė_int = Az_ref − Az_m
  • Control law: u = −K x − K_i e_int
  • Gains via pole placement on the augmented system (see the sketch below)
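
A minimal MATLAB sketch of this construction, assuming placeholder short-period matrices and pole locations; the project's actual values follow the Mracek & Ridgely model, so every number below is illustrative only:

```matlab
% Sketch only: A, B, C and the pole set are illustrative placeholders,
% not the project's actual Mracek & Ridgely short-period values.
A = [-1.3   1.0;
     -160  -0.6];                 % short-period dynamics, state x = [alpha; q]
B = [-0.2; -90];                  % pitch-control effectiveness
C = [-1200  0];                   % output map to measured acceleration Az_m

% Augment with the integral of the Az tracking error:
% e_int_dot = Az_ref - Az_m = Az_ref - C*x
A_aug = [A,  zeros(2,1);
         -C, 0];
B_aug = [B; 0];

% Pole placement on the augmented system; u = -K_aug*[x; e_int] with
% K_aug = [K, K_i] realizes the control law u = -K*x - K_i*e_int
p     = [-4+3i, -4-3i, -2];       % assumed closed-loop pole locations
K_aug = place(A_aug, B_aug, p);
```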

Characteristics

  • Low average control effort; smooth actuator commands
  • Slower time-to-target versus LQR/MPC
  • Useful for teaching/intuition about integral compensation

Experimental Notes

In control deflection (δp), SFI exhibits an initial corrective spike followed by smooth convergence. During aggressive line-of-sight (LOS) maneuvers, however, it may show higher Az tracking error. Refer to the project plots for the full time-series behavior and actuator response.

[ MODULE LQR // CALLSIGN: TALON ]
LQR with Integral — Balanced optimal regulator with zero steady-state error

Role: Optimal infinite-horizon state-feedback augmented with an integral state to remove steady-state acceleration error. Gains computed by solving the Discrete Algebraic Riccati Equation (DARE).

Design

  • Cost minimized: J = Σ (xᵀQx + uᵀRu)
  • Augmented matrices fold the integral error into the state vector; solving the DARE yields K_aug = [K, K_i]
  • Tuned via output weighting Qy and input weighting R, with the state cost formed as Q = CᵀQyC (see the sketch below)
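
A hedged MATLAB sketch of the DARE-based design; the sample time, weights, and plant values below are assumptions, not the project's tuning:

```matlab
% Sketch only: the plant and all weights are assumed placeholder values.
A = [-1.3 1.0; -160 -0.6];  B = [-0.2; -90];  C = [-1200 0];
Ts   = 0.01;                            % sample time (s), assumed
sysd = c2d(ss(A, B, C, 0), Ts);         % zero-order-hold discretization
A_d  = sysd.A;  B_d = sysd.B;

% Append the discrete integral of the Az tracking error to the state
A_aug = [A_d,   zeros(2,1);
         -Ts*C, 1];
B_aug = [B_d; 0];

% Output weighting mapped into the state cost: Q = C'*Qy*C, plus a
% separate weight on the integral state
Qy = 1e-4;  R = 1;  q_int = 10;         % assumed tuning knobs
Q  = blkdiag(C' * Qy * C, q_int);

K_aug = dlqr(A_aug, B_aug, Q, R);       % solves the DARE; K_aug = [K, K_i]
```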

Characteristics

  • Best Az tracking among studied controllers (zero steady-state error)
  • Moderate control effort and reliable transient behavior
  • Computationally lightweight — suitable for embedded autopilots

Experimental Notes

LQR+I shows quick stabilization of α and q with smooth control surface deflections (δp). It is effective when computational efficiency and predictable optimality are required. Refer to the project documentation for Riccati derivations and full state trajectories.

[ MODULE MPC // CALLSIGN: VIPER ]
Model Predictive Control — Predictive, constraint-aware, fastest response

Role: Finite-horizon optimization solved online (CasADi + IPOPT). Balances tracking, control effort, and terminal position cost while enforcing input bounds to respect actuator limits.

Formulation

  • Horizon cost: min_U Σ_{k=0}^{N−1} [(C x_k − y_ref_k)ᵀ Qy (C x_k − y_ref_k) + u_kᵀ R u_k] + terminal cost
  • Dynamics: x_{k+1} = A_d x_k + B_d u_k, with input bounds u_min ≤ u_k ≤ u_max
  • Terminal cost penalizes deviation from the predicted impact point (p_N); see the sketch below
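
A sketch of this formulation with CasADi's Opti stack in MATLAB. The plant values, weights, horizon, and reference below are assumptions, and the terminal penalty is written on the terminal output as a stand-in for the project's impact-point term:

```matlab
import casadi.*

% Sketch only: plant, weights, horizon, and reference are assumed values.
A = [-1.3 1.0; -160 -0.6];  B = [-0.2; -90];  C = [-1200 0];
sysd = c2d(ss(A, B, C, 0), 0.01);        % same placeholder plant as above
A_d  = sysd.A;  B_d = sysd.B;
Qy = 1;  R = 1;  P_term = 10;  N = 30;  y_ref = 1;

opti = casadi.Opti();
X    = opti.variable(2, N+1);            % predicted states over the horizon
U    = opti.variable(1, N);              % predicted inputs
x0   = opti.parameter(2, 1);             % measured state, set at each step

J = 0;
for k = 1:N
    e = C * X(:,k) - y_ref;                         % Az tracking error
    J = J + e' * Qy * e + U(:,k)' * R * U(:,k);     % stage cost
    opti.subject_to(X(:,k+1) == A_d*X(:,k) + B_d*U(:,k));   % dynamics
    opti.subject_to(U(:,k) <=  0.2);                % u_max from experiments
    opti.subject_to(U(:,k) >= -0.3);                % u_min from experiments
end
e_N = C * X(:,N+1) - y_ref;
J   = J + e_N' * P_term * e_N;           % soft terminal penalty (stand-in)

opti.subject_to(X(:,1) == x0);
opti.minimize(J);
opti.solver('ipopt');
```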

Characteristics

  • Fastest time-to-target and lowest total control effort in simulations
  • Handles input bounds directly (u_min = −0.3, u_max = 0.2 in experiments)
  • Computationally intensive — requires solver tuning and warm-starting for real-time use (see the loop sketch below)
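
A sketch of how the receding-horizon loop with warm-starting might look, continuing the Opti sketch above (the initial state and simulation length are assumed):

```matlab
% Sketch only: closes the loop around the Opti problem defined above.
x_meas = [0.1; 0];                 % assumed initial state [alpha; q]
T_sim  = 200;
sol    = [];
for t = 1:T_sim
    opti.set_value(x0, x_meas);
    if ~isempty(sol)               % warm-start from the previous solution
        opti.set_initial(X, sol.value(X));
        opti.set_initial(U, sol.value(U));
    end
    sol    = opti.solve();
    u_now  = sol.value(U(:,1));    % apply only the first planned input
    x_meas = A_d*x_meas + B_d*u_now;   % propagate the simulated plant
end
```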

Experimental Notes

MPC produced the fastest response (time-to-target ≈ 8 s) and efficient actuator use, but the soft terminal penalty introduced some Az tracking trade-offs. See the Tables & Figures in the report for the full numeric metrics.

[ HANGAR BAY // AUTOPILOT COMPARATIVES ]

PERFORMANCE METRICS

  Metric                          SFI      LQR      MPC
  Time to Target (s)            13.80    10.00     8.00
  Avg Control Effort (rad)       0.04     0.10     0.08
  Final Az Error (g)            19.91     0.00    14.97
  Total Control Effort (Σ|u|)   48.94    10.34     6.06
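
For reference, a sketch of how the effort and error metrics could be computed from logged simulation histories; the variable names are assumed, the project's exact metric definitions may differ, and time-to-target depends on the engagement geometry, so it is not reconstructed here:

```matlab
% Sketch only: u, Az, Az_ref are assumed logged time series.
avg_effort   = mean(abs(u));                  % Avg Control Effort (rad)
total_effort = sum(abs(u));                   % Total Control Effort (Σ|u|)
final_err    = abs(Az(end) - Az_ref(end));    % Final Az Error (g)
```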
[ VIDEO FEED // FIELD TEST RUN ]