I work on real-world humanoid robotics systems, building perception, behavior, and embodied AI pipelines deployed on physical robots using ROS2, Jetson, and VLM/VLA-based reasoning. My focus is on humanoid perception and behavior for real robots and real deployments, not simulations.
My journey in technology began in grade 8 and gradually evolved into a focused interest in applied robotics and AI system design.
My experience includes ROS2-based architectures, on-device machine learning deployment on Jetson platforms, and multimodal perception systems designed to operate under real-world constraints. I have also worked on industrial vision systems for automotive manufacturing, where robustness and reliability were critical.
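To give a concrete flavor of the ROS2 side, here is a minimal sketch of the kind of perception node this involves: subscribe to a camera stream, run on-device inference, publish results. The topic names and the inference placeholder are hypothetical stand-ins, not my deployed pipeline.

```python
# Minimal sketch of a ROS2 perception node. Topic names and the
# inference step are illustrative placeholders, not production code.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


class PerceptionNode(Node):
    def __init__(self):
        super().__init__('perception_node')
        # Subscribe to the camera stream (topic name is hypothetical).
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_image, 10)
        # Publish a simple text summary of each inference result.
        self.pub = self.create_publisher(String, '/perception/detections', 10)

    def on_image(self, msg: Image) -> None:
        # Placeholder for on-device inference (e.g., a TensorRT engine
        # on Jetson would run here).
        result = String()
        result.data = f'frame at {msg.header.stamp.sec}s: no model loaded'
        self.pub.publish(result)


def main():
    rclpy.init()
    node = PerceptionNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```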
Alongside industry work, I am an IEEE-published researcher focused on multimodal sensing and real-time machine learning systems. I am also building MaaKosh, a maternal and neonatal health initiative centered on practical medical devices.
My long-term goal is to design intelligent robotic systems that work reliably in real environments by combining research depth with strong engineering discipline.
Developed and deployed an industrial humanoid vision system for alloy defect and paint anomaly detection in automotive manufacturing environments. This work required building perception pipelines robust to harsh lighting, noise, motion, and clutter, operating well beyond controlled datasets and simulations; one such robustness technique is sketched below.
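A common way to harden detection against harsh, uneven factory lighting is local contrast normalization before analysis. The sketch below uses CLAHE plus light denoising with OpenCV; it is a generic illustration under assumed parameters, not the deployed system, and sample_part.png is a hypothetical test image.

```python
# Illustrative preprocessing for defect detection under harsh lighting:
# local contrast equalization (CLAHE) plus light denoising. A generic
# example with assumed parameters, not the deployed system.
import cv2
import numpy as np


def normalize_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Equalize local contrast and suppress sensor noise in a BGR frame."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # CLAHE limits amplification, so dark regions brighten without
    # blowing out specular highlights from metallic surfaces.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    lab = cv2.merge((l, a, b))
    out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Light Gaussian blur to damp pixel-level noise before edge analysis.
    return cv2.GaussianBlur(out, (3, 3), 0)


if __name__ == '__main__':
    frame = cv2.imread('sample_part.png')  # hypothetical test image
    if frame is not None:
        cv2.imwrite('normalized.png', normalize_frame(frame))
```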
Proposed a real-time multimodal sensing system combining optical sensing, impedance analysis, and machine learning for field-deployable microplastic detection.
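The fusion idea can be sketched as early feature fusion: concatenate optical and impedance features into a single vector and classify. Everything below, including the feature dimensions, the synthetic data, and the random-forest choice, is an assumption for illustration rather than the published design.

```python
# Hedged sketch of early multimodal fusion for microplastic detection.
# Feature dimensions, synthetic data, and the classifier are assumptions
# for illustration, not the paper's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 samples, 8 optical features (e.g., scattering
# intensities) and 4 impedance features (e.g., magnitude/phase at several
# frequencies). Labels: 1 = microplastic present, 0 = absent.
optical = rng.normal(size=(200, 8))
impedance = rng.normal(size=(200, 4))
labels = rng.integers(0, 2, size=200)

# Early fusion: one joint feature vector per sample.
features = np.concatenate([optical, impedance], axis=1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:150], labels[:150])
print('held-out accuracy:', clf.score(features[150:], labels[150:]))
```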