Job Details

Job Information

AIML - Machine Learning Researcher, Post-training for Foundation Models
AWM-59-AIML - Machine Learning Researcher, Post-training for Foundation Models
5/9/2026
5/14/2026
Negotiable
Permanent

Other Information

www.apple.com
Cupertino, CA, 95015, USA
Cupertino
California
United States
95015

Job Description


Role Number: 200645804-0836

Summary

We are a group of engineers and researchers responsible for building foundation models at Apple. Within this group, the Post-Training work streams focus on transforming powerful pre-trained checkpoints into helpful, high-quality models that power billions of Apple products. We are looking for researchers who are passionate about foundation model post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning, and who have experience in core capabilities such as instruction following, tool use, and deep thinking and reasoning.

Description

We build frontier foundation models that power intelligent experiences at Apple. Our team works across the full training lifecycle, from pre-training foundation models to developing mid-training approaches that bridge general capability and task-specific performance. What makes our work distinct is that we engineer models specifically for Apple silicon, optimized for experiences that are private, personal, and deeply integrated into the OS. We are solving frontier problems in reward modeling to resist reward hacking, handling sparse and delayed rewards in agentic settings, and aligning models reliably across the spectrum from open-ended creative tasks to precise, action-taking workflows. If you're drawn to hard problems where the research and the product are inseparable, this is the team.

Minimum Qualifications

  • Demonstrated expertise in deep learning with a focus on LLMs, post-training, or reinforcement learning, backed by a strong publication record or real-world experience and accomplishments in these or closely related domains.

  • Proficiency in Python and at least one deep learning framework such as JAX or PyTorch.

  • PhD, or equivalent practical experience, in Computer Science, Machine Learning, or a related technical field.

Preferred Qualifications

  • Proven track record in post-training: Specialization in post-training algorithms, techniques, and best practices for large foundation models.

  • Post-training data: Deep experience with human data labeling, synthetic data generation, and data quality assessment for foundation models.

  • Evaluation methodologies: Deep experience evaluating data and training recipes, with a thorough understanding of the iterative model-building process and lifecycle.

  • Reasoning Research: Experience in improving model performance on reasoning tasks (math, coding, logic).

  • Scale & Systems: Experience training SOTA large models at scale, familiarity with distributed training challenges, and an understanding of the trade-offs involved.

  • Communication and collaboration: Strong communication skills and a passion for collaboration within and across teams.
