Job Details
Weekly Hours: 40
Role Number: 200661039-3337
Summary
Apple Services Engineering (ASE) powers AI and LLM features across App Store, Music, Video, and more. As these systems increasingly rely on LLM Judges and automated evaluators to score model performance at scale, the trustworthiness of those evaluation signals becomes mission-critical. We believe that to build exceptional LLMs, you need exceptional mechanisms to validate the signals used to train and evaluate them.
Description
As a Principal Applied Scientist on the Human Centered AI team, you will be the technical engine behind our Data Quality Validation framework. This is a high-impact individual contributor role for a scientist who wants to architect and build — not just advise. You will own the data science methodology underpinning our data quality validation models, design the statistical frameworks that govern judge reliability, and work hands-on to close the loop between automated evaluation and human ground truth.
You will be the person who answers the hardest question in our stack: "Can we trust the evaluators that are evaluating our models?"
Minimum Qualifications
Master's degree in Statistics, Data Science, Machine Learning, Computer Science, or a related quantitative field
8+ years of hands-on experience in applied data science, ML research, or evaluation science
Deep expertise in uncertainty quantification and model calibration — including entropy modeling and Bayesian approaches
Demonstrated experience building disagreement detection or anomaly detection models in production ML systems
Strong command of statistical measurement frameworks — inter-rater reliability, correlation analysis, and statistical process control (see the short agreement sketch after this list)
Proven experience designing or contributing to Human-in-the-Loop (HITL) or active learning pipelines
Proficiency in Python for statistical modeling, ML experimentation, and data pipeline development
Exceptional ability to translate rigorous statistical methodology into clear, actionable guidance for engineering and product partners
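To make the reliability theme concrete, here is a minimal sketch, illustrative only, of one such measurement: Cohen's kappa, the chance-corrected agreement between an LLM judge's verdicts and human ground truth. The labels below are made up for the example.

```python
import numpy as np

def cohens_kappa(judge: np.ndarray, human: np.ndarray) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    labels = np.union1d(judge, human)
    # Observed agreement: fraction of items where judge and human match.
    p_o = np.mean(judge == human)
    # Expected agreement under independence, from each rater's label marginals.
    p_e = sum(np.mean(judge == lab) * np.mean(human == lab) for lab in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical pass/fail verdicts from an LLM judge and from human raters.
judge_labels = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
human_labels = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
print(f"kappa = {cohens_kappa(judge_labels, human_labels):.3f}")
```

Values near 1 indicate agreement well beyond chance; values near 0 mean the judge adds no signal over guessing from the label marginals alone.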
Preferred Qualifications
PhD in Statistics, Computer Science, Machine Learning, or a related field
Experience specifically in LLM evaluation science — including autograder validation, LLM-as-a-judge frameworks, or RLHF data quality
Hands-on experience with large-scale reasoning models (e.g., 70B+ parameter models) used in chain-of-thought evaluation or meta-reasoning contexts
Experience defining governance gates or certification pipelines for AI systems in a CI/CD context
Familiarity with out-of-distribution detection techniques for identifying input drift in live production systems (see the drift-check sketch after this list)
Track record of publishing or presenting evaluation methodology work internally or externally
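For flavor, a minimal sketch of the kind of drift check the out-of-distribution bullet describes: comparing live judge-confidence scores against a frozen reference window with a two-sample Kolmogorov-Smirnov test. All names, distributions, and thresholds here are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical judge-confidence scores: a frozen reference window vs. live traffic.
reference_scores = rng.beta(8, 2, size=5_000)  # calibration-time distribution
live_scores = rng.beta(6, 3, size=5_000)       # shifted distribution in production

# Two-sample Kolmogorov-Smirnov test: a large statistic / tiny p-value flags drift.
stat, p_value = ks_2samp(reference_scores, live_scores)
ALERT_P = 0.01  # hypothetical alerting threshold
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}, drift = {p_value < ALERT_P}")
```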