Job Description
Role Number: 200641993-2459
Summary
We build frontier foundation models that power intelligent experiences at Apple. Our team works across the full training lifecycle, from pre-training foundation models to developing mid-training approaches that bridge general capability and task-specific performance. What makes our work distinct is that we engineer models specifically for Apple silicon, optimized for experiences that are private, personal, and deeply integrated into the OS. We are solving frontier problems in reward modeling to resist reward hacking, handling sparse and delayed rewards in agentic settings, and aligning models reliably across the spectrum from open-ended creative tasks to precise, action-taking workflows. If you're drawn to hard problems where the research and the product are inseparable, this is the team.
Description
We are building the next generation of models optimized for agentic, reasoning, and coding capabilities. This means training models via RL to reason from first principles, building autonomous coding agents that operate in real repositories, and developing agentic systems that handle multi-step workflows with error recovery. You will work on problems such as RL with verifiable rewards for mathematical reasoning, multi-turn RL for coding agents evaluated on SWE-Bench and beyond, scaling laws for RL compute allocation, progressive alignment across capability stages, and training models to manage their own context in long-horizon tasks. This is applied research with direct product impact: your work will ship to millions of users.
Minimum Qualifications
Demonstrated expertise in deep learning, with publications at top ML or NLP conferences or a track record of applying deep learning techniques to products
Proficient programming skills in Python and at least one deep learning toolkit such as JAX, PyTorch, or TensorFlow
Ability to work in a collaborative environment
PhD, or equivalent practical experience, in Computer Science or a related technical field
Preferred Qualifications
Reinforcement learning for LLMs: RLHF, GRPO, PPO, RLVR, reward modeling, RL scaling laws
Code generation and coding agents: repository-level code understanding, agentic coding
Agentic systems: multi-turn RL, tool-use planning, long-horizon task execution, user simulation
Distillation and alignment: on-policy distillation, reward-tilted distillation, cross-stage distillation to combine independently optimized capabilities into a single model
Long context and efficiency: sparse attention, context compression, scaling to very long context windows
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf) .