SYS_ONLINE
RESUME
IDENTITY / 01

Ryan Huang
Robotics Product Manager

I build the robots I ship specs for.

UCLA Anderson MBA · 5+ years across AI product management, infrastructure PM, and hands-on robotics. Most PMs translate between product and engineering — I already speak both. When I write a spec, I've already debugged the code.

$ ships_product = writes_code()_

● EXECUTIVE_SUMMARY
MBA
UCLA Anderson
Class of 2027
EXPERIENCE
5+ yrs
PM · AI · Infra
LOCATION
Los Angeles
Open to relocate
AVAILABLE
Summer 2026
FT: 2027
RESUME
Download PDF
~125 KB · 2026
▸ SEEKING / 2026
Robotics / Autonomy PM roles — perception, planning, control, simulation.
// Built for NVIDIA Robotics · Isaac · Jetson · Drive
● LIVE_FEED / ROBORACER
ALGO
GAP_FOLLOW
SPEED
1.5 m/s
STATUS
NOMINAL
// 1/10-scale autonomous race car · ROS 2 Humble · Jetson Orin Nano
Scroll / Explore
PROFILE / 01

The crossover

I'm a Robotics Product Manager who codes. I build the systems I ship — so the specs I write and the trade-offs I negotiate are grounded in what actually has to work on the robot.

Most PMs write requirements. Most engineers write code. I do both — which means when I talk about perception latency, planning horizons, or the cost of model predictive control, I'm not repeating jargon. I've debugged it at 2am on a real car.

● PROFILE_DATA
LOCATION
Los Angeles, CA
MBA
UCLA Anderson
Class of 2027
BACKGROUND
5y AI PM · Infra PM
Civil Eng M.S.
TARGET
NVIDIA Robotics
STATUS
OPEN TO OPPS
▪ TRAJECTORY
4 NODES
> AWAITING_NEXT_NODE · NVIDIA Robotics Product Management_
01 · 2026 – now · SHIP
Robotics Product
// UCLA Mobility Lab · Autonomy

Closing the loop — applying product thinking to the field I love most. Building autonomous systems end-to-end so the specs I write and the trade-offs I negotiate come from real hardware, not slides.

  • RoboRacer (formerly F1TENTH) autonomous racing stack on Jetson + ROS 2
  • Implemented 6 algorithms end-to-end: AEB, wall follow, gap follow, pure pursuit, RRT*, MPC
  • Targeting NVIDIA robotics — PM with deep autonomy + sim fluency
02 · 2025 – 2027 · TRANSLATE
MBA · UCLA Anderson
// Technology Management

Full-time MBA in the Technology Management track. Crossed from engineering leadership into product strategy — market sizing, customer research, roadmap trade-offs, GTM.

  • Technology Management track · Technology Immersion Program honors
  • Startup project: Modeling.ai — AI-powered 3D modeling via MCP for AEC
  • Closing the gap between technical feasibility and business reality
03 · 2025 · LAUNCH
AI Product Manager
// Zendesk · James Tech Consulting · Taipei

First dedicated product role — shipped an AI customer-service chatbot extension on the Zendesk platform. Hands-on LLM fine-tuning, CRM integration, and cross-functional delivery.

  • Shipped AI chatbot extension → 70% support-effort reduction · 2× revenue in 5 mo
  • Fine-tuned LLM + custom API into BenQ's CRM — saved the client $200K / month
  • Expanded the chatbot to a new e-commerce vertical: +50% client acquisition, 70% conversion
04 · 2020 – 2025 · BUILD
Infrastructure PM
// China Engineering Consultants + Taiwan Gov · Taipei

5 years managing large-scale public infrastructure — underground railways, high-rises, LNG terminals. Learned to ship where failure has real, physical consequences.

  • Managed a $3.3B underground railway · balanced 5+ stakeholders · 100% on-time
  • Resolved a critical issue via independent data analysis — saved $1.5M, accelerated the schedule 12 mo
  • Delivered 3+ large-scale projects · -15% execution cost · on schedule under evolving scope
FEATURED_PROJECT / 02

Robo/Racer

Six autonomous driving algorithms, built solo from scratch on a 1/10-scale race car — from reactive control to model predictive control — running ROS 2 on NVIDIA Jetson.

Each one started as a blank file and ended as code running on real hardware — so every trade-off discussed below is one I tuned, broke, and fixed myself, not one I read about.

RUNTIME
ROS_2_HUMBLE
PLATFORM
JETSON_ORIN_NANO
LANG
PY · C++
ROLE
SOLO_BUILD
Ryan19941212/F1tenth
▸ TL;DR / 30 SEC
WHAT
A 1/10-scale autonomous race car I built end-to-end — from firmware up to motion planning.
WHY IT MATTERS
Proves I can do the robotics work I'm asking to manage. Hiring managers don't guess.
STACK
ROS 2 · NVIDIA Jetson · C++/Python · LiDAR perception · MPC control.
SCOPE
6 algorithms · ~5k LOC · solo build · real hardware + sim · 2025 – 2026.
● REAL_HARDWARE
RoboRacer (formerly F1TENTH) 1/10-scale autonomous race car
● LIVE
CAM_01
ROBORACER / LAB_CAR
▪ SYSTEM_PARAMS
ALGORITHMS
06
from scratch
LIDAR_HZ
15 Hz
A2M12 scan rate
MPC_HORIZON
800 ms
8 × 100 ms lookahead
GRID_RES
0.1 m/cell
occupancy grid
SAFETY_PRIO
200
AEB mux override
CODEBASE
Py + C++
~5k LOC, solo

// Parameters tuned on real hardware, not simulation defaults

● FIELD_NOTE / SIM_TO_REAL

The servo died. I rebuilt the firmware.

Halfway through Lab 5, the steering servo on the real car stopped responding. The default VESC 6.06 firmware shipped for 60_MK6 had a broken servo output path — and the RoboRacer (formerly F1TENTH) pre-built binaries only covered MKIII/MKV/PLUS/FLIPSKY. No MK6.

So I kept developing in AutoDRIVE sim for the algorithm work, and in parallel built VESC firmware from source off the release_6_06 branch for the MK6 target using a Docker cross-compile. Flashed via Custom File tab → servo alive → back on real hardware. Robotics is half writing controllers, half knowing when to drop a level of the stack.

DEBUG_LOG
01
Servo dead on MK6
02
Pivot to AutoDRIVE sim
03
Build FW from source
04
Flash · back online ✓
▪ HARDWARE_STACK
COMPUTE
Jetson Orin Nano
arm64 · JetPack 6 · L4T R36.4
LIDAR
RPLIDAR A2M12
8 m range · 15 Hz · 360°
MOTOR_ECU
VESC 60_MK6
Custom FW · release_6_06
RUNTIME
ROS 2 Humble
Ubuntu 22.04 · DDS
▪ ALGORITHM_STACK
06 / MODULES
LAB_02
SAFETY

Automatic Emergency Braking

Vectorized iTTC across every LiDAR beam.

Computes instantaneous Time-to-Collision per beam with NumPy. Publishes brake commands to Ackermann mux (priority 200) AND directly to VESC as redundant failsafe.

THRESH=0.5 s
PRIORITY=200
01
LAB_03
CONTROL

PID Wall Following

Two-beam wall angle estimate + PD controller.

Uses 90° and 45° LiDAR beams to estimate wall angle α. 1 m lookahead projects lateral error → steering.

Kp=0.8
Kd=0.2
L=1.0 m
02
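A minimal sketch of how the two-beam estimate above becomes a steering command. The PD gains and 1 m lookahead are the card's values; the beam offset handling, the 1 m distance setpoint, and the sign convention are illustrative assumptions, not the tuned on-car code:

```python
import numpy as np

THETA = np.deg2rad(45.0)  # angular offset between the 90° and 45° beams
LOOKAHEAD = 1.0           # m — projects lateral error forward
KP, KD = 0.8, 0.2         # PD gains from the card above
DESIRED_DIST = 1.0        # m from wall (illustrative setpoint)

def steering_from_beams(a: float, b: float, prev_err: float, dt: float):
    """a = range at 45°, b = range at 90° (roughly perpendicular to the wall)."""
    # Wall angle alpha from the two LiDAR returns
    alpha = np.arctan2(a * np.cos(THETA) - b, a * np.sin(THETA))
    dist = b * np.cos(alpha)                   # current distance to the wall
    future = dist + LOOKAHEAD * np.sin(alpha)  # distance at the lookahead point
    err = DESIRED_DIST - future
    steer = KP * err + KD * (err - prev_err) / dt  # PD on the projected error
    return steer, err
```

Driving parallel to the wall at the setpoint gives α = 0 and zero steering; any angle or offset shows up in `future` and produces a proportional correction.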
LAB_04
REACTIVE

Follow the Gap

Map-free reactive planner in steerable FOV.

Clip + smooth ranges, find closest obstacle in ±24° FOV, zero out dynamic safety bubble (scales with proximity), steer toward midpoint of widest gap.

FOV=±24°
V_MAX=1.5 m/s
V_MIN=0.3 m/s
03
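The pipeline above, sketched end to end. The 3 m clip distance, 5-cell smoothing window, and fixed-size bubble here are illustrative; on the car the bubble scales with proximity, as noted:

```python
import numpy as np

def follow_the_gap(ranges: np.ndarray, bubble_cells: int = 5) -> int:
    """Return the beam index of the widest gap's midpoint."""
    proc = np.clip(ranges, 0.0, 3.0)               # clip far returns
    padded = np.pad(proc, 2, mode="edge")          # avoid zero-padding at edges
    proc = np.convolve(padded, np.ones(5) / 5, mode="valid")  # smooth
    i_min = int(np.argmin(proc))                   # closest obstacle
    lo = max(0, i_min - bubble_cells)
    hi = min(len(proc), i_min + bubble_cells + 1)
    proc[lo:hi] = 0.0                              # zero the safety bubble
    # Widest run of nonzero cells = widest gap
    nz = proc > 0.0
    best_len, best_mid, run_start = 0, i_min, None
    for i, ok in enumerate(np.append(nz, False)):
        if ok and run_start is None:
            run_start = i
        elif not ok and run_start is not None:
            if i - run_start > best_len:
                best_len, best_mid = i - run_start, (run_start + i - 1) // 2
            run_start = None
    return best_mid                                # steer toward gap midpoint
```

The returned index maps straight to a steering angle: the angle of that beam within the ±24° steerable FOV.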
LAB_05
TRACKING

Pure Pursuit

Geometric path tracker over CSV waypoints.

Finds first waypoint ≥ Ld ahead, transforms to vehicle frame, applies pure pursuit law: γ = 2y/Ld², δ = atan(L·γ). Speed modulated by curvature.

Ld=1.5 m
WAYPOINTS=CSV
04
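The tracking law, exactly as quoted: curvature γ = 2y/Ld², steering δ = atan(L·γ). The waypoint search and vehicle-frame transform are left out, and the wheelbase value is an assumption for a typical 1/10-scale chassis:

```python
import numpy as np

WHEELBASE = 0.33  # m — assumed wheelbase for a 1/10-scale car
LD = 1.5          # lookahead distance from the card above

def pure_pursuit_steer(goal_xy_vehicle: np.ndarray) -> float:
    """goal = first waypoint >= LD ahead, already in the vehicle frame."""
    y = goal_xy_vehicle[1]           # lateral offset of the goal point
    gamma = 2.0 * y / (LD ** 2)      # curvature: gamma = 2y / Ld^2
    return float(np.arctan(WHEELBASE * gamma))  # delta = atan(L * gamma)
```

A goal dead ahead (y = 0) gives zero steering; the larger the lateral offset, the tighter the commanded arc.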
LAB_06
PLANNING

RRT*

Sampling-based replanning over occupancy grid.

LiDAR → 0.1 m/cell occupancy grid (9×10 m vehicle frame). RRT* with asymptotic optimality via rewiring. Path tracked by Pure Pursuit.

ITER=300
STEP=0.3 m
REWIRE_R=0.8 m
GOAL_BIAS=15%
05
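The rewire step that separates RRT* from plain RRT, reduced to its core. The node layout (dicts with x, y, cost, parent) is illustrative; REWIRE_R matches the parameter above:

```python
import math

REWIRE_R = 0.8  # rewire radius from the card above

def dist(a: dict, b: dict) -> float:
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def rewire(nodes: list, new: dict) -> None:
    """After inserting `new`, re-parent any neighbor that is now cheaper
    to reach through it — this is what buys asymptotic optimality."""
    for n in nodes:
        d = dist(new, n)
        if d <= REWIRE_R and new["cost"] + d < n["cost"]:
            n["parent"] = new
            n["cost"] = new["cost"] + d
```

The symmetric half of the step — choosing the cheapest parent for the new node itself among its neighbors — works the same comparison in reverse.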
LAB_08
OPTIMIZATION
★ CAPSTONE

Model Predictive Control

Receding-horizon QP with CVXPY. Warm-started.

State [x, y, v, yaw] · Input [accel, δ_rate]. Horizon: 8 steps × 0.1 s = 0.8 s lookahead. Cost Q = diag([13.5, 13.5, 5.5, 13.0]) balances position + heading over velocity. Each solve warm-starts from the previous solution, so the convex QP converges in far fewer iterations than a cold start.

PARAMETERS
HORIZON
8 × 100 ms
SOLVER
CVXPY/OSQP
STATE_DIM
4
INPUT_DIM
2
06
▪ CODE_SAMPLES
02 / FILES
safety_node/aeb.py
# Lab 2 — Vectorized iTTC across every LiDAR beam
range_rates = -self.speed * np.cos(angles)
ttc         = ranges / np.maximum(range_rates, 1e-6)

# Any beam closing faster than the threshold → brake.
closing = range_rates > 0
if closing.any() and ttc[closing].min() < TTC_THRESHOLD:
    self.publish_brake()  # prio 200 → overrides all autonomy
mpc/controller.py
# Lab 8 — MPC cost with warm-start
Q  = np.diag([13.5, 13.5, 5.5, 13.0])  # x, y, v, yaw
R  = np.diag([0.01, 100.0])            # accel, steering_rate

cost = 0
for k in range(HORIZON):                # 8 × 0.1s = 0.8s lookahead
    state_err = x[:, k] - ref[:, k]
    cost     += cvxpy.quad_form(state_err, Q)
    cost     += cvxpy.quad_form(u[:, k],    R)

# Warm-start from previous solution → faster convergence
prob.solve(solver=cvxpy.OSQP, warm_start=True)
▪ ENGINEERING_DECISIONS
TRADE-OFFS

The fun part of robotics is not writing the algorithm — it's deciding which one, tuned how, at what latency cost. These are three calls I had to make:

DECISION_01

Why RRT* over plain RRT?

Plain RRT finds a path fast but rarely an optimal one. The *rewire* step in RRT* gives asymptotic optimality for a bounded cost per iteration — worth it on a 1/10 car where compute isn't the bottleneck but path quality matters for lap time.

DECISION_02

Why a 0.8 s MPC horizon, not longer?

Longer horizons look smarter on paper but explode solver time and over-commit to a stale prediction in a reactive environment. 8 × 100 ms hits the sweet spot: long enough to anticipate corners, short enough that the warm-start is still relevant next tick.

DECISION_03

Why redundant AEB (mux + direct-to-VESC)?

The Ackermann mux is clean in theory but adds a failure point. If the mux crashes or lags, the car keeps whatever velocity it last had. Publishing the brake directly to VESC gives a second independent path — safety-critical code shouldn't have a single point of failure.

▪ SYSTEM_ARCHITECTURE
f1tenth_ws/architecture.diagram

    [ /scan ] LiDAR ─────────────────────────────────┐
                                                      │
                                                      ▼
    [ /odom ] Odometry ──► ┌────────────────────────────────┐
                           │  SAFETY NODE (AEB)             │
                           │  iTTC < threshold → brake      │
                           └───────┬───────────┬────────────┘
                                   │ prio 200  │ redundant
                                   ▼           ▼
    [ /joy ] Joystick ──► ┌────────────────┐  ┌────────┐
                          │ Ackermann Mux  │─►│  VESC  │─► steering + throttle
                          │ 200 > 100 > 10 │  │        │
    [ planner ] ─────────►└────────────────┘  └────────┘
        wall_follow
        gap_follow          SAFETY ALWAYS PRE-EMPTS AUTONOMY
        pure_pursuit
        rrt_star
        mpc

// Priority-based Ackermann mux ensures AEB (priority 200) overrides any autonomous algorithm (priority 10) in case of imminent collision.

▪ LIVE_DEMO_FEED
03 / CAPTURES
● CH_01
Follow the Gap
REACTIVE
REC
● CH_02
PID Wall Follow
CONTROL
REC
● CH_03
Manual Teleop
BASELINE
REC
OTHER_WORK / 03

Adjacent projects

Side projects exploring the intersection of AI, geometry, and production tooling. Each one shipped to real users.

PROJECT_01

RhinoMCP

AI-driven parametric geometry in Rhino 3D

Built an MCP integration that lets architects generate parametric buildings and bridges from natural-language prompts, bridging LLM reasoning with Rhino's Python API.

MCP · LLM Agent · Python · Rhino API · Procedural Geometry
TIME_SAVED
80%
STACK
MCP + PY
STATUS
Shipped
View source
RESUME / 05

Full dossier

The complete record — experience, education, projects, and publications.

MBA
UCLA Anderson
ENG
5y Civil / Automation
ROBOTICS
ROS 2 · Jetson · MPC
TARGET
Robotics Product
CONTACT / 06

Open a channel.

Building something in robotics, autonomy, or AI product? I want to hear about it. I usually respond within 24 hours.

// Portfolio v2.0 · 2026 · Built with Astro + React + Tailwind
SYS_ONLINE · LA, CA