Joints    7 DOF
Planner  RRTConnect
IK Mode  Jacobian
Sensor   RGB-D Cam
System: Franka Panda
Platform: 7-DoF Manipulator
Mode: Autonomous
Perception   Planning   Control   2025

Franka
Panda
Manipulation

A classical robotics pipeline built from scratch on a real 7-DoF arm. HSV perception, Jacobian IK, and RRTConnect motion planning working together to pick and place objects autonomously.

7-DOF
Franka Panda
RRTConnect
Motion planner
Pinocchio
IK solver
HSV
Object detection
System: Franka Panda   Mode: Classical Pipeline   Env: ROS 2 + MuJoCo
Perception HSV
Planning RRTConnect
IK Pinocchio
7
Degrees of
freedom
HSV
Color segmentation
object detection
RRT
Sampling-based
motion planning
J⁺
Damped least-squares
IK solver
Franka Panda pick and place   ROS 2   Pinocchio IK   OMPL
What I built

Classical Robotics
Done Right

The goal was to build a pick and place system on a real 7-DoF Franka Panda arm using nothing but classical robotics tools. No end-to-end learning, no pretrained policies. Just perception, planning, and control working together cleanly.

The pipeline has three stages that are deliberately decoupled. Perception finds the object. Planning computes a collision-free path to it. The IK controller executes that path on the real joints. Each stage is replaceable without touching the others.

The pipeline

Four stages.
One robot.

Separating planning from execution was the key design decision. When something breaks you know exactly which stage to look at. That matters on real hardware more than anything else.

Autonomy Stack
Camera → Perception → 3D Pose → Planner → IK → Joint Commands

Every stage exposes a clean ROS 2 interface. Swapping HSV detection for a neural detector requires zero changes downstream.
01
HSV Perception
I convert each RGB frame to HSV and threshold it to isolate the target object by color. The dominant contour gives me a 2D centroid via image moments. I back-project that to 3D using the known camera intrinsics and ROS 2 TF to transform it into the robot base frame.
OpenCV   ROS 2 TF
↓ 3D object pose
02
RRTConnect Motion Planner
Planning runs in the 7-DoF configuration space. RRTConnect grows two random trees simultaneously from the start and goal configurations until they connect. The result is a collision-free joint trajectory that gets time-parameterised before execution.
OMPL   RRTConnect
↓ joint trajectory
03
Jacobian IK via Pinocchio
Each waypoint in the planned trajectory gets resolved into joint angles using damped least-squares IK. Pinocchio gives me analytical Jacobians so the computation is fast enough to run in the real-time control loop without drift.
Pinocchio   Damped J⁺
↓ joint commands
04
State Machine Execution
A simple state machine sequences the full pick and place task. Reach, grasp, lift, move, place, retreat. Each transition calls the planner and IK controller with a new target pose. Failure detection at every step.
ROS 2   Franka Interface
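The sequencing logic described above can be sketched as a small phase-ordered state machine. This is an illustrative sketch, not the project's actual interface: the phase names, the `targets` mapping, and the `plan_and_execute` callback are all made up for the example.

```python
from enum import Enum, auto

class Phase(Enum):
    REACH = auto()
    GRASP = auto()
    LIFT = auto()
    MOVE = auto()
    PLACE = auto()
    RETREAT = auto()
    DONE = auto()

ORDER = [Phase.REACH, Phase.GRASP, Phase.LIFT,
         Phase.MOVE, Phase.PLACE, Phase.RETREAT]

def run_pick_and_place(targets, plan_and_execute, max_retries=3):
    """Step through the phases in order, retrying a phase on failure.

    targets: mapping from phase to a goal pose
    plan_and_execute: callable(phase, pose) -> bool, True on success
    Returns Phase.DONE, or the phase that exhausted its retries.
    """
    for phase in ORDER:
        for _attempt in range(max_retries):
            if plan_and_execute(phase, targets[phase]):
                break  # phase succeeded, advance to the next one
        else:
            return phase  # failure detection: report the failing phase
    return Phase.DONE

# Illustrative run where every phase succeeds immediately
result = run_pick_and_place({p: None for p in Phase}, lambda phase, pose: True)
```

Keeping each transition a plain planner-plus-IK call is what makes the failure detection cheap: a failed phase simply replans from the current state.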
Perception in detail

See it.
Locate it.

I chose HSV segmentation deliberately. It is simple, transparent, and every failure mode is visible — you can literally see the mask and know what went wrong. For this task it is the right tool.

The centroid computation uses image moments, which gives sub-pixel accuracy. The depth reading at that pixel plus the camera intrinsics gives a 3D point in camera frame. ROS 2 TF handles the rest.
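The back-projection step is plain pinhole geometry. A minimal numpy sketch, using made-up intrinsics rather than the actual camera calibration:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at a given depth
    becomes a 3-D point in the camera frame."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# Illustrative intrinsics, not the real camera's calibration
fx = fy = 600.0
cx, cy = 320.0, 240.0

# A pixel right of and below the principal point, 0.5 m away
p_cam = backproject(400.0, 300.0, 0.5, fx, fy, cx, cy)
```

From here the point only needs a single TF transform into `panda_link0` to become a grasp target.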

HSV Threshold Mask
M(x,y) = 1  if H_min ≤ H(x,y) ≤ H_max
M(x,y) = 0  otherwise

Centroid from Image Moments
c_x = m₁₀ / m₀₀
c_y = m₀₁ / m₀₀
HSV detection to 3D pose
# Convert to HSV and threshold
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_hsv, upper_hsv)

# Largest contour is the target
cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(cnts, key=cv2.contourArea)

# Sub-pixel centroid via image moments
M = cv2.moments(cnt)
cx = M['m10'] / M['m00']
cy = M['m01'] / M['m00']

# Back-project to 3D using the camera intrinsics
X = (cx - cx_intr) * depth / fx
Y = (cy - cy_intr) * depth / fy
Z = depth

# Transform to robot base frame via TF
p_robot = tf_buffer.transform(p_cam, 'panda_link0')
Motion planning

Planning in
C-Space

The Franka Panda has 7 joints, so the configuration space is 7-dimensional. RRTConnect builds two trees simultaneously, one from the current joint configuration and one from the goal. They grow toward each other with random samples until a bridge is found.

The bidirectional approach converges significantly faster than a single tree on high-DoF arms, which matters when the robot is waiting to execute. The resulting path is time-parameterised for smooth velocity profiles before it goes to the IK controller.
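The time-parameterisation step can be illustrated with a trapezoidal velocity profile: accelerate, cruise, decelerate. This is an illustrative stand-in for whatever parameterisation the pipeline actually applies; the path length is normalised to 1 and the limits are arbitrary.

```python
def trapezoid_timestamps(fractions, v_max=1.0, a_max=2.0):
    """Timestamps for waypoints at arc-length fractions s in [0, 1]
    under a trapezoidal velocity profile (illustrative sketch)."""
    t_ramp = v_max / a_max
    s_ramp = 0.5 * a_max * t_ramp ** 2
    if 2.0 * s_ramp > 1.0:
        # Triangular profile: the path is too short to reach v_max
        t_ramp = (1.0 / a_max) ** 0.5
        s_ramp = 0.5
        v_max = a_max * t_ramp
    t_cruise = (1.0 - 2.0 * s_ramp) / v_max
    t_total = 2.0 * t_ramp + t_cruise
    times = []
    for s in fractions:
        if s <= s_ramp:                    # still accelerating
            times.append((2.0 * s / a_max) ** 0.5)
        elif s <= 1.0 - s_ramp:            # cruising at v_max
            times.append(t_ramp + (s - s_ramp) / v_max)
        else:                              # decelerating to rest
            times.append(t_total - (2.0 * (1.0 - s) / a_max) ** 0.5)
    return times

# Five evenly spaced waypoints along a planned path
times = trapezoid_timestamps([0.0, 0.25, 0.5, 0.75, 1.0])
```

The resulting timestamps start and end at zero velocity, which is exactly what the joint trajectory controller needs to track the path smoothly.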

Connection Criterion
‖q_start − q_goal‖ < ε
  • Configuration space is ℝ⁷ — one dimension per joint
  • RRTConnect grows bidirectional trees for faster convergence on high-DoF arms
  • Full collision checking at every sample against scene geometry
  • Path is time-parameterised for smooth velocity profiles
  • OMPL handles sampling, nearest-neighbour queries, and tree extension
  • Integrates with MoveIt 2 for the ROS 2 planning scene
  • Replanning triggered automatically if a collision is detected mid-execution
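The tree-growing idea behind RRTConnect can be shown in a toy setting. The real pipeline uses OMPL's implementation in the 7-D joint space; this pure-Python sketch works in a 2-D unit square with a single disc obstacle and exists only to make the extend/connect/swap loop concrete.

```python
import math
import random

def rrt_connect(start, goal, edge_clear, step=0.1, max_iters=5000, seed=0):
    """Toy bidirectional RRT-Connect in the unit square (illustrative)."""
    rng = random.Random(seed)
    start, goal = tuple(start), tuple(goal)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def steer(a, b):
        d = dist(a, b)
        if d <= step:
            return b
        t = step / d
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    def extend(tree, target):
        # Grow the tree one step from its nearest node toward `target`
        q_near = min(tree, key=lambda q: dist(q, target))
        q_new = steer(q_near, target)
        if edge_clear(q_near, q_new):
            tree[q_new] = q_near  # record parent for path extraction
            return q_new
        return None

    def trace(tree, q):
        path = []
        while q is not None:
            path.append(q)
            q = tree[q]
        return path

    tree_a = {start: None}  # node -> parent
    tree_b = {goal: None}
    for _ in range(max_iters):
        q_new = extend(tree_a, (rng.random(), rng.random()))
        if q_new is not None:
            # Greedily connect the other tree all the way to q_new
            while True:
                q_ext = extend(tree_b, q_new)
                if q_ext is None:
                    break
                if q_ext == q_new:  # the two trees have met
                    path = trace(tree_a, q_new)[::-1] + trace(tree_b, q_ext)[1:]
                    if path[0] != start:
                        path.reverse()  # trees may have been swapped
                    return path
        tree_a, tree_b = tree_b, tree_a  # alternate which tree explores
    return None

def disc_clear(a, b, center=(0.5, 0.5), radius=0.2, n=10):
    """Illustrative collision check against one disc obstacle."""
    for i in range(n + 1):
        t = i / n
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        if math.hypot(x - center[0], y - center[1]) < radius:
            return False
    return True

path = rrt_connect((0.05, 0.05), (0.95, 0.95), disc_clear)
```

Swapping the roles of the two trees every iteration is what makes the connect step cheap: each tree alternates between exploring and being pulled toward the other's newest node.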
Inverse kinematics

Joint angles
from a pose

Forward kinematics gives you the end-effector pose from joint angles. IK inverts that. For a 7-DoF arm the problem is redundant — there are infinitely many joint configurations that reach the same pose — so I use the damped pseudoinverse Jacobian to find the minimum-norm solution.

The damping factor λ keeps the solution stable near singularities where the plain pseudoinverse would blow up. Pinocchio provides the analytical Jacobian at each configuration so the loop runs fast enough for real-time control.

Damped Least-Squares IK
q̇ = Jᵀ (JJᵀ + λ²I)⁻¹ · ε
ε = x* − x   // task-space error
λ = damping factor   // prevents singularity blow-up
Pinocchio IK loop
# Load URDF model
model = pinocchio.buildModelFromUrdf(urdf)
data = model.createData()

# Forward kinematics at current config
pinocchio.forwardKinematics(model, data, q)
pinocchio.updateFramePlacements(model, data)
x_ee = data.oMf[ee_frame].translation
R_ee = data.oMf[ee_frame].rotation

# Task-space error (position and orientation)
eps_p = x_target - x_ee
eps_R = pinocchio.log3(R_target @ R_ee.T)
eps = np.concatenate([eps_p, eps_R])

# Analytical frame Jacobian from Pinocchio
J = pinocchio.computeFrameJacobian(model, data, q, ee_frame)

# Damped least-squares step
lam = 1e-3
dq = J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(6)) @ eps
q = q + dt * dq
publish_joint_cmd(q)
Side note

After getting the classical pipeline working I also trained a PPO-based policy in MuJoCo to see how a learned approach compared on the same task. The classical system is more interpretable and easier to debug on hardware. The RL policy generalises better to object position variation once trained. Both have their place — I just found the planning work more interesting to build.