RhinoMCP
AI-driven parametric geometry in Rhino 3D
Built an MCP integration that lets architects generate parametric buildings and bridges from natural-language prompts, bridging LLM reasoning with Rhino's Python API.
I build the robots I ship specs for.
UCLA Anderson MBA · 5+ years across AI product management, infrastructure PM, and hands-on robotics. Most PMs translate between product and engineering — I already speak both. When I write a spec, I've already debugged the code.
$ ships_product = writes_code()_
I'm a Robotics Product Manager who codes. I build the systems I ship — so the specs I write and the trade-offs I negotiate are grounded in what actually has to work on the robot.
Most PMs write requirements. Most engineers write code. I do both — which means when I talk about perception latency, planning horizons, or the cost of model predictive control, I'm not repeating jargon. I've debugged it at 2am on a real car.
Closing the loop — applying product thinking to the field I love most. Building autonomous systems end-to-end so the specs I write and the trade-offs I negotiate come from real hardware, not slides.
Full-time MBA in the Technology Management track. Crossed from engineering leadership into product strategy — market sizing, customer research, roadmap trade-offs, GTM.
First dedicated product role — shipped an AI customer-service chatbot extension on the Zendesk platform. Hands-on LLM fine-tuning, CRM integration, and cross-functional delivery.
5 years managing large-scale public infrastructure — underground railways, high-rises, LNG terminals. Learned to ship where failure has real, physical consequences.
Six autonomous driving algorithms, built solo from scratch on a 1/10-scale race car — from reactive control to model predictive control — running ROS 2 on NVIDIA Jetson.
Most PMs write requirements. I wrote these six algorithms — from scratch, on a real race car — so the roadmap trade-offs I make come from actually debugging them at 2am, not from a textbook.

// Parameters tuned on real hardware, not simulation defaults
Halfway through Lab 5, the steering servo on the real car stopped responding. The stock VESC 6.06 firmware for the 60_MK6 hardware had a broken servo output path — and the RoboRacer (formerly F1TENTH) pre-built binaries only covered MKIII/MKV/PLUS/FLIPSKY. No MK6.
So I kept developing in AutoDRIVE sim for the algorithm work, and in parallel built VESC firmware from source off the release_6_06 branch for the MK6 target using a Docker cross-compile. Flashed via Custom File tab → servo alive → back on real hardware. Robotics is half writing controllers, half knowing when to drop a level of the stack.
Vectorized iTTC across every LiDAR beam.
Computes instantaneous Time-to-Collision (iTTC) per beam with NumPy. Publishes brake commands to the Ackermann mux (priority 200) AND directly to the VESC as a redundant failsafe.
Two-beam wall angle estimate + PD controller.
Uses 90° and 45° LiDAR beams to estimate wall angle α. 1 m lookahead projects lateral error → steering.
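A minimal NumPy sketch of the two-beam geometry described above. The 45°/90° beams and the 1 m lookahead come from the card; the desired wall distance and the PD gains are illustrative placeholders, not the tuned values.

```python
import numpy as np

THETA = np.deg2rad(45.0)   # angular offset between the two beams (90° and 45°)
LOOKAHEAD = 1.0            # m, from the card
KP, KD = 0.8, 0.1          # illustrative PD gains, not the tuned values

def wall_angle(a, b, theta=THETA):
    """Wall angle α from beam b (perpendicular to the wall) and beam a (θ ahead of it)."""
    return np.arctan2(a * np.cos(theta) - b, a * np.sin(theta))

def steering(a, b, desired_dist, prev_err):
    alpha = wall_angle(a, b)
    dist = b * np.cos(alpha)                   # current distance to the wall
    future = dist + LOOKAHEAD * np.sin(alpha)  # project the error 1 m ahead
    err = desired_dist - future
    return KP * err + KD * (err - prev_err), err

# Driving parallel to a wall 1 m away → both terms cancel, steering ≈ 0
delta, _ = steering(a=np.sqrt(2.0), b=1.0, desired_dist=1.0, prev_err=0.0)
```

Projecting the error a lookahead distance ahead is what keeps the controller from oscillating: it reacts to where the car is going, not where it is.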
Map-free reactive planner in steerable FOV.
Clip + smooth ranges, find closest obstacle in ±24° FOV, zero out dynamic safety bubble (scales with proximity), steer toward midpoint of widest gap.
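A sketch of that reactive loop in NumPy, assuming the scan has already been cut down to the ±24° window. The clip horizon, smoothing width, and bubble scaling are illustrative constants, not the tuned values.

```python
import numpy as np

MAX_RANGE = 3.0          # m, clip horizon (illustrative)
FOV = np.deg2rad(24.0)   # ±24° steerable window, from the card
BUBBLE_BASE = 0.3        # bubble scale (illustrative)

def best_gap_heading(ranges, angles):
    """Follow the gap: clip, smooth, zero a dynamic safety bubble, steer at the widest gap."""
    r = np.clip(ranges, 0.0, MAX_RANGE)
    r = np.convolve(r, np.ones(3) / 3.0, mode="same")  # smooth single-beam noise
    i_min = int(np.argmin(r))
    d_theta = abs(angles[1] - angles[0])
    # Dynamic bubble: the closer the obstacle, the more beams get zeroed out
    bubble = max(1, int(BUBBLE_BASE / max(r[i_min], 0.1) / d_theta))
    r[max(0, i_min - bubble): i_min + bubble + 1] = 0.0
    # Longest run of non-zero ranges = widest gap
    best_len = best_start = run = start = 0
    for i, free in enumerate(r > 0.0):
        if free:
            if run == 0:
                start = i
            run += 1
            if run > best_len:
                best_len, best_start = run, start
        else:
            run = 0
    return angles[best_start + best_len // 2]  # steer toward the gap midpoint
```

Because the bubble radius scales with proximity, a close obstacle blanks out a wide angular slice and forces the car to commit to one side early.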
Geometric path tracker over CSV waypoints.
Finds first waypoint ≥ Ld ahead, transforms to vehicle frame, applies pure pursuit law: γ = 2y/Ld², δ = atan(L·γ). Speed modulated by curvature.
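The pure pursuit law from the card, sketched over vehicle-frame waypoints. The wheelbase value is an assumption for a 1/10-scale car, and the speed-by-curvature modulation is left out.

```python
import numpy as np

WHEELBASE = 0.33  # m, assumed for a 1/10-scale car

def pure_pursuit_steer(waypoints_vehicle, lookahead):
    """First waypoint ≥ Ld ahead (vehicle frame), then γ = 2y/Ld², δ = atan(L·γ)."""
    dists = np.linalg.norm(waypoints_vehicle, axis=1)
    idx = int(np.argmax(dists >= lookahead))  # first waypoint at least Ld away (0 if none)
    x, y = waypoints_vehicle[idx]
    gamma = 2.0 * y / lookahead**2            # curvature of the arc through the goal point
    return np.arctan(WHEELBASE * gamma)

# A waypoint dead ahead gives zero steering; a lateral offset steers toward it
wps = np.array([[0.5, 0.0], [1.2, 0.3]])
delta = pure_pursuit_steer(wps, lookahead=1.0)
```

The steering angle depends only on the lateral offset y of the goal point, which is why pure pursuit is so cheap per tick compared to an optimization-based tracker.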
Sampling-based replanning over occupancy grid.
LiDAR → 0.1 m/cell occupancy grid (9×10 m vehicle frame). RRT* with asymptotic optimality via rewiring. Path tracked by Pure Pursuit.
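A sketch of the LiDAR-to-grid step that feeds RRT*. The 0.1 m resolution is from the card; how the 9×10 m vehicle-frame footprint splits into forward range versus width is my assumption.

```python
import numpy as np

RES = 0.1                  # m per cell, from the card
X_MAX, Y_HALF = 10.0, 4.5  # 10 m ahead × 9 m wide; this split is an assumption

def lidar_to_grid(ranges, angles):
    """Rasterize LiDAR hits into a vehicle-frame occupancy grid (1 = occupied)."""
    grid = np.zeros((int(X_MAX / RES), int(2 * Y_HALF / RES)), dtype=np.uint8)
    xs = ranges * np.cos(angles)  # forward
    ys = ranges * np.sin(angles)  # left
    keep = np.isfinite(ranges) & (xs >= 0) & (xs < X_MAX) & (np.abs(ys) < Y_HALF)
    # +1e-9 guards against FP wobble exactly on a cell boundary
    ix = ((xs[keep] / RES) + 1e-9).astype(int)
    iy = (((ys[keep] + Y_HALF) / RES) + 1e-9).astype(int)
    grid[ix, iy] = 1
    return grid
```

Building the grid in the vehicle frame, rather than a global map, is what keeps the planner map-free: every tick starts from a fresh local snapshot.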
Receding-horizon QP with CVXPY. Warm-started.
State [x, y, v, yaw] · Input [accel, δ_rate]. Horizon: 8 steps × 0.1s = 0.8s lookahead. Cost Q = diag([13.5, 13.5, 5.5, 13.0]) balances position + heading over velocity. Each solve warm-starts from the previous solution to keep the solver on the same local optimum.
# Lab 2 — Vectorized iTTC across every LiDAR beam
# iTTC = range / closing rate; the rate is positive only for beams we're driving into.
closing_rates = self.speed * np.cos(angles)
ttc = ranges / np.maximum(closing_rates, 1e-6)
# Any beam closing faster than the threshold → brake.
if ttc[closing_rates > 0].min() < TTC_THRESHOLD:
    self.publish_brake()  # prio 200 → overrides all autonomy

# Lab 8 — MPC cost with warm-start
Q = np.diag([13.5, 13.5, 5.5, 13.0])  # x, y, v, yaw
R = np.diag([0.01, 100.0])            # accel, steering_rate
cost = 0
for k in range(HORIZON):  # 8 × 0.1s = 0.8s lookahead
    state_err = x[:, k] - ref[:, k]
    cost += cvxpy.quad_form(state_err, Q)
    cost += cvxpy.quad_form(u[:, k], R)
# Warm-start from previous solution → faster convergence
prob.solve(solver=cvxpy.OSQP, warm_start=True)

The fun part of robotics is not writing the algorithm — it's deciding which one, tuned how, at what latency cost. These are three calls I had to make:
Plain RRT finds a path fast but rarely an optimal one. The *rewire* step in RRT* gives asymptotic optimality for a bounded cost per iteration — worth it on a 1/10 car where compute isn't the bottleneck but path quality matters for lap time.
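The rewire step itself is small. A sketch with Euclidean edge costs; collision checking on the new edge and propagating the improved cost down to descendants are omitted here.

```python
import numpy as np

def rewire(nodes, parents, costs, new_idx, radius):
    """RRT* rewire: re-parent nearby nodes through the new node when that lowers their cost."""
    new = nodes[new_idx]
    for i in range(len(nodes)):
        if i == new_idx:
            continue
        d = float(np.linalg.norm(nodes[i] - new))
        if d < radius and costs[new_idx] + d < costs[i]:
            parents[i] = new_idx  # cheaper to route through the new node
            costs[i] = costs[new_idx] + d
    return parents, costs
```

Each call costs one pass over the neighbors inside `radius`, which is the "bounded cost per iteration" that buys asymptotic optimality.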
Longer horizons look smarter on paper but explode solver time and over-commit to a stale prediction in a reactive environment. 8 × 100 ms hits the sweet spot: long enough to anticipate corners, short enough that the warm-start is still relevant next tick.
The Ackermann mux is clean in theory but adds a failure point. If the mux crashes or lags, the car keeps whatever velocity it last had. Publishing the brake directly to VESC gives a second independent path — safety-critical code shouldn't have a single point of failure.
[ /scan ] LiDAR ────────────────────────┐
                                        │
                                        ▼
[ /odom ] Odometry ──► ┌────────────────────────────────┐
                       │        SAFETY NODE (AEB)       │
                       │    iTTC < threshold → brake    │
                       └───────┬───────────┬────────────┘
                               │ prio 200  │ redundant
                               ▼           ▼
[ /joy ] Joystick ───► ┌────────────────┐  ┌────────┐
                       │ Ackermann Mux  │─►│  VESC  │─► steering + throttle
                       │ 200 > 100 > 10 │  │        │
[ planner ] ──────────►└────────────────┘  └────────┘
  wall_follow
  gap_follow           SAFETY ALWAYS PRE-EMPTS AUTONOMY
  pure_pursuit
  rrt_star
  mpc
// Priority-based Ackermann mux ensures AEB (priority 200) overrides any autonomous algorithm (priority 10) in case of imminent collision.
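The mux behavior reduces to "highest-priority fresh command wins." A toy Python model of that rule, not the actual ackermann_mux configuration; the priorities mirror the diagram (AEB 200 > teleop 100 > autonomy 10) and the timeout value is illustrative.

```python
import time

TIMEOUT = 0.2  # s, drop stale sources (illustrative)

class PriorityMux:
    """Toy model: the highest-priority command seen within TIMEOUT wins."""

    def __init__(self):
        self.latest = {}  # source → (priority, command, stamp)

    def publish(self, source, priority, command, now=None):
        stamp = now if now is not None else time.monotonic()
        self.latest[source] = (priority, command, stamp)

    def output(self, now=None):
        now = now if now is not None else time.monotonic()
        live = [v for v in self.latest.values() if now - v[2] < TIMEOUT]
        return max(live, key=lambda v: v[0])[1] if live else None

mux = PriorityMux()
mux.publish("planner", 10, {"speed": 2.0}, now=0.0)
mux.publish("aeb", 200, {"speed": 0.0}, now=0.05)
```

While the AEB source is publishing, its priority-200 brake command shadows the planner's output; the timeout keeps a crashed source from holding the output forever.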
Side projects exploring the intersection of AI, geometry, and production tooling. Each one shipped to real users.
The complete record — experience, education, projects, and publications.
Building something in robotics, autonomy, or AI product? I want to hear about it. I usually respond within 24 hours.