Aliensense is the physical AI company. We build a modular compute and sensor platform that gives robots the perception, reasoning, and real-time control they need to operate in the physical world. Our hardware pairs NVIDIA compute with custom GMSL camera modules, CAN-FD buses, and a dedicated AI-accelerator tier. We are based in Masdar City and backed by deep-tech investors across the GCC and Europe.
The Role
We are hiring an Embedded Software Engineer with a computer vision focus to own the camera and perception pipeline on our platform: from raw GMSL frames off the MAX96792A deserialiser all the way to calibrated, synchronised stereo streams powering Isaac ROS Visual SLAM.
You will work across the full stack: camera bring-up, ISP tuning, multi-camera synchronisation, stereo rectification, and integration with our ROS 2 perception nodes. You will ship code that runs in real time on real robots.
What You Will Do
- Bring up and maintain GMSL2/3 camera pipelines on Jetson Orin NX/Nano (MAX96792A deserialiser, FRAMOS FSM sensors, MIPI CSI-2)
- Develop and tune GStreamer / Argus / V4L2 pipelines for multi-camera capture with hardware-synchronised triggers
- Implement and maintain stereo camera calibration (intrinsics, distortion, stereo extrinsics) and live rectification
- Integrate camera streams with IMU data for time-consistent, timestamp-accurate sensor fusion inputs
- Deploy and tune Isaac ROS Visual SLAM on stereo + IMU inputs; optimise for stable continuous operation
- Profile and resolve latency, jitter, and throughput issues across the camera → deserialiser → CSI → ISP → ROS pipeline
- Contribute to the sensor configuration system
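The hardware-synchronised capture and timestamp-accurate fusion work above comes down to a small matching step: with both sensors exposed on the same trigger pulse, left and right frames can be paired by capture timestamp. A minimal, hypothetical sketch (not our production code; names and tolerances are illustrative):

```python
def match_stereo_pairs(left, right, tol_ns=1_000_000):
    """Pair left/right frames captured on the same hardware trigger.

    left, right: lists of (timestamp_ns, frame_id), sorted by timestamp.
    Returns (left_id, right_id) pairs whose timestamps agree within tol_ns.
    """
    pairs = []
    i = j = 0
    while i < len(left) and j < len(right):
        dt = left[i][0] - right[j][0]
        if abs(dt) <= tol_ns:
            pairs.append((left[i][1], right[j][1]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # left frame has no partner; drop it
        else:
            j += 1  # right frame has no partner; drop it
    return pairs
```

A dropped frame on either link simply advances one cursor, so the pairing stays consistent through frame loss instead of drifting by one.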
Requirements
- 3+ years of computer vision or embedded vision engineering
- Strong C++; comfortable with Python for calibration tooling and scripting
- Experience with GStreamer, V4L2, or NVIDIA Argus/libargus for camera pipelines
- Working knowledge of camera calibration (OpenCV, ROS `camera_calibration`, or equivalent)
- Stereo vision fundamentals: epipolar geometry, rectification, disparity
- Experience with ROS 2 (publishers, synchronisation, TF, launch files)
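The stereo fundamentals above reduce to one relation for rectified pairs: depth Z = f·B / d, where f is focal length in pixels, B the baseline in metres, and d the disparity in pixels. A hedged one-liner with illustrative numbers (not tied to our actual camera geometry):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point from its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.12 m, d = 14 px  ->  Z = 6.0 m
```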
Nice to Have
- GMSL / FPD-Link / MIPI CSI-2 camera bring-up experience
- NVIDIA Jetson Orin platform knowledge (JetPack, BSP, JTOP)
- Isaac ROS or Jetson-specific perception accelerators (VPI, cuVSLAM)
- IMU driver integration and IMU-camera time synchronisation
- Familiarity with SLAM or visual odometry systems (ORB-SLAM, VINS, OpenVINS, cuVSLAM)
- Hardware-level camera synchronisation (PWM trigger, PPS, GMSL GPIO)
What We Offer
- Own the perception stack of a real physical-AI product from day one
- Work with GMSL systems, Jetson Orin, and Isaac ROS on actual robots
- Close collaboration with HW, firmware, and AI teams
- Competitive compensation
- Masdar City HQ, UAE