Our AI Research team is building end-to-end robot policies that enable dexterous manipulation in real-world environments. We are advancing embodied AI by integrating multimodal perception, robot learning architectures, and physical execution systems to solve manipulation, autonomy, and simulation challenges at industrial scale.
As a Staff Applied Scientist, you will lead the development of core components of these embodied systems—from model design and training pipelines to integration with perception, motion control, and hardware. You will design, prototype, and deploy robot learning models that span perception, policy learning, simulation, and real-world execution, collaborating closely with robotics engineers, AI infrastructure teams, and production experts.
- Design and implement advanced robot learning architectures (e.g., diffusion policies, ACT, VLM/VLA-guided agents, imitation learning) to support dexterous manipulation, path planning, and autonomous task sequencing.
- Develop end-to-end policy training pipelines, integrating multimodal sensory data (RGB, depth, proprioception, force/torque, LiDAR, tactile inputs) with control outputs.
- Apply and extend large-scale architectures (LLMs, VLM/VLAs, diffusion models) to embodied tasks, grounding, and sim-to-real adaptation.
- Collaborate with cross-functional teams to deploy robot policies on hardware, ensuring robustness, repeatability, and safety.
- Lead data strategy for demonstrations, teleoperation, simulation pipelines, and evaluation frameworks for manipulation policies.
- PhD in a relevant STEM field, or a Master’s degree with equivalent industry experience in robotics, robot learning, or embodied AI.
- Deep understanding of modern AI architectures (e.g., Transformers, diffusion models, VLM/VLAs, CNNs), with strong experience training models at scale.
- Strong PyTorch implementation skills, including authoring custom modules, batching, debugging, and performance optimization.
- Expertise in robotics perception, including 3D understanding, force sensing, tactile feedback, multimodal fusion, or affordance modeling.
- Familiarity with Isaac Sim, MuJoCo, Gazebo, PyBullet, or custom simulators, and a demonstrated ability to transfer policies to hardware.
You will help build robotic agents that can manipulate the world with dexterity and autonomy. Your work will directly influence how robots perceive, act, and adapt across GM’s global ecosystem.
Location: This role is categorized as hybrid. The successful candidate is expected to report to the MTV office three times per week, or at another frequency as business needs dictate.
Compensation: The compensation information is a good faith estimate only. It is based on what a successful applicant might be paid in accordance with applicable state laws. The compensation may not be representative for positions located outside of New York, Colorado, California, or Washington.
- The salary range for this role is $198,000 to $260,000. The actual base salary a successful candidate is offered within this range will vary based on factors relevant to the position.
- Bonus Potential: An incentive pay program offers payouts based on company performance, job level, and individual performance.
- Benefits: GM offers a variety of health and wellbeing benefit programs. Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.
Company Vehicle: Upon successful completion of a motor vehicle report review, you will be eligible to participate in a company vehicle evaluation program, through which you will be assigned a General Motors vehicle to drive and evaluate.
Note: program participants are required to purchase/lease a qualifying GM vehicle every four years unless one of a limited number of exceptions applies.
Relocation: This job may be eligible for relocation benefits.