I am a systems and AI/ML engineer with over four years of experience working on C++-based data acquisition systems, ADAS cameras, robotics, and, more recently, agentic systems in marketing. My strengths are systems-level thinking, API design, AI integrations, and ROS.
I’m a software engineer with 3+ years of experience building scalable backend systems in Java/Spring Boot and PostgreSQL, and developing real-time infra in Python and C++.
In my current role, I've built microservices, secure auth systems, and REST APIs at scale. Before that, I worked on safety-critical tooling at Bosch. I've also deployed full-stack AI products using FastAPI and React.
Looking for backend or full-stack roles on fast-paced engineering teams: infra, SaaS, devtools, or AI-adjacent.
I’m a software engineer with 3 years in ADAS software (Bosch) and recent experience in robotics, perception, and AI/ML research.
I’ve built real-time C++/Python tools for autonomous driving systems, worked on SLAM-based navigation using ROS, and recently dove into information-theoretic deep learning research at NC State (including mutual information estimation and representation learning). I've also contributed to open-source LLM tools (SGLang) and am currently exploring agent-based systems.
Looking to join a high-impact team working on autonomy, robotics, embedded ML, or intelligent agents. I'm based in SF, available immediately, and eager to move fast. Happy to grab a drink and chat if you are based in the Bay Area.
Hi there! I'm Nitin, a software engineer with a passion for AI/ML, computer vision, and automotive software.
I’ve spent some time at Bosch developing real-time ADAS applications and optimizing perception pipelines, and I'm now deep into research at NC State—exploring information bottlenecks in deep learning and contributing to open-source projects like SGLang. I've also led projects in robotic navigation using Visual SLAM and ROS, fueling my interest in autonomous systems. Lately, I've been diving into AI agents and exploring innovative applications across different domains.
I'm eager to collaborate on challenging projects in robotics, intelligent systems, or any cutting-edge tech. If you're working on something exciting in these areas, let's connect!
Your system fuses ATC speech recognition, NLP, and ADS-B signals to detect and mitigate human error in air traffic control. Given the rapid advancements in multimodal AI, have you explored integrating visual data sources (e.g., satellite imagery, radar feeds, or airport surveillance cameras) to further improve situational awareness and error detection? What challenges do you foresee in making Yeager more contextually aware using additional modalities?
Yes, this is an excellent prompt, and we're working on it. One problem is that many of these visual sources require permissions, integrations, and regulatory approval. That will move more slowly than something we can proceed with directly (VHF antennas).
I believe scaling laws will hold as we start to feed all of this context data into an integrated model. You could imagine a deep Q-learning-style reinforcement learning model that ingests layers of structured and visual data and outputs alerts, and eventually commands. The main challenge I foresee here is observability: it's easy enough to shove a ton of data into a black box and get a good answer 98% of the time. But regulation is likely to require such a system to be highly observable and explainable, so the human can keep up with what's going on and step in as needed.
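To make the "deep-Q style" idea above a bit more concrete, here is a minimal, hypothetical PyTorch sketch, not Yeager's actual design: all layer sizes, input shapes, and the discrete alert-action set are illustrative assumptions. It encodes a visual frame and a vector of structured (ADS-B-style) features separately, fuses the two embeddings, and emits one Q-value per alert action.

```python
import torch
import torch.nn as nn

class MultimodalAlertDQN(nn.Module):
    """Hypothetical sketch of a DQN-style network fusing a visual feed
    with structured signals; every size here is an illustrative guess."""

    def __init__(self, n_structured: int = 16, n_actions: int = 4):
        super().__init__()
        # Tiny CNN encoder for a single-channel frame (e.g. radar or IR).
        self.visual = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 16)
        )
        # MLP encoder for structured features (e.g. ADS-B track vectors).
        self.structured = nn.Sequential(
            nn.Linear(n_structured, 32), nn.ReLU(),
        )
        # Fused head: one Q-value per discrete alert action.
        self.head = nn.Sequential(
            nn.Linear(16 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, frame: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.visual(frame), self.structured(feats)], dim=-1)
        return self.head(z)

# One forward pass on dummy data: a batch of 2 frames plus feature vectors.
net = MultimodalAlertDQN()
q = net(torch.randn(2, 1, 64, 64), torch.randn(2, 16))
print(q.shape)  # torch.Size([2, 4])
```

Note that this only illustrates the fusion shape of the model; the observability concern above is precisely that the fused `head` gives no per-modality explanation of why an alert fired.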
Looking further into the future, it's plausible that the concrete towers of today, with humans looking out windows, will be replaced by sensor packages atop long flagpoles that stream high-res optical/IR camera data, surface radar, weather information, etc. into a control room with VR layers that help controllers stay on top of increasingly busy airspace.
Remote: Yes
Willing to relocate: Absolutely
Technologies: C++, Python, Java, PyTorch, LLM Integrations, Infrastructure Tools
Resume: https://drive.google.com/file/d/1u6wVKSP1BeAZFsMzPfimIpuWbBr...
Email: nitinmadapally@gmail.com
Hello,
I am a systems and AI/ML engineer with over four years of experience working on C++-based data acquisition systems, ADAS cameras, robotics, and, more recently, agentic systems in marketing. My strengths are systems-level thinking, API design, AI integrations, and ROS.
Please reach out if you'd like to chat.