We are looking for Perception Engineers who have shipped real-world systems that fuse cameras, lidar, depth, and IMU data into reliable scene understanding. If you have spent late nights chasing down calibration drift, model regressions, and the one frame in ten thousand that breaks tracking, you will feel right at home.
You will own pipelines for detection, segmentation, tracking, and sensor fusion, and you will make them run within tight latency and power budgets on edge hardware. The work spans classical geometry, modern deep learning, and the unglamorous data infrastructure that makes both possible.
We value engineers who treat evaluation as a first-class discipline and who can hold a clear opinion on when to trust a model versus when to trust a heuristic. You should be comfortable owning the full loop: data collection, training, deployment, and monitoring.
Bring your favorite failure modes. We have plenty more to share.