Job Details

Job Information

AIML - Machine Learning Engineer, Responsible AI
AWM-7006-AIML - Machine Learning Engineer, Responsible AI
12/15/2025
12/20/2025
Negotiable
Permanent

Other Information

www.apple.com
Cupertino, CA, 95015, USA

Job Description

Weekly Hours: 40

Role Number: 200636392-0836

Summary

Would you like to play a part in building the next generation of generative AI applications at Apple? We’re looking for Machine Learning Engineers to work on ambitious projects that will shape the future of Apple, our products, and the broader world. This role centers on assessing, quantifying, and improving the safety and inclusivity of Apple’s generative AI-powered features and products.

In this role, you’ll have the opportunity to tackle innovative problems in machine learning, with a particular focus on large language models for text generation, diffusion models for image generation, and mixed-model systems for multimodal applications.
As a member of Apple’s Responsible AI group, you will work on a wide array of new features and research in the generative AI space.

Our team is currently focused on large generative models for vision and language, with particular interest in Responsible AI, safety, fairness, robustness, explainability, and uncertainty in models.

Description

This role focuses on developing, carrying out, interpreting, and communicating pre- and post-ship evaluations of the safety of Apple Intelligence features. Both human grading and model-based auto-grading are thoughtfully leveraged to power these evaluations.
Additionally, this role researches and develops auto-grading methodology and infrastructure to support ongoing and future Apple Intelligence safety evaluations.
Producing safety evaluations that uphold Apple’s Responsible AI values requires thoughtful sampling, creation, and curation of evaluation datasets; high-quality, detailed annotations and careful auto-grading to assess feature performance; and mindful analysis to understand what the evaluation means for the user experience.
This role draws heavily on applied data science, scientific investigation and interpretation, cross-functional communication and collaboration, and metrics reporting and presentation.

Minimum Qualifications

  • MS or PhD in Computer Science, Machine Learning, Statistics, or a related field, or an equivalent qualification acquired through other avenues.

  • Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failures.

  • Strong engineering skills and experience in writing production-quality code in Python.

  • Deep experience in foundation model-based AI programming (e.g., using DSPy to optimize foundation model prompts) and a drive to innovate in this space.

  • Experience working with noisy, crowd-based data labels and human evaluations.

Preferred Qualifications

  • Experience working in the Responsible AI space.

  • Prior scientific research and publication experience.

  • Strong organizational and operational skills working with large, multi-functional, and diverse teams.

  • Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).
