Job Details
Job Description
Role Number: 200636341-3956
Summary
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Multifaceted, amazing people and inspiring, innovative technologies are the norm here. The people who work here have reinvented entire industries with Apple hardware products. The same passion for innovation that goes into our products also applies to our practices, strengthening our commitment to leave the world better than we found it.

Join us in this exciting era of Artificial Intelligence to help deliver the next groundbreaking Apple products and experiences! We are continuously advancing the state of the art in Computer Vision and Machine Learning, touching all aspects of language and multimodal foundation models, from data collection and curation to modeling, evaluation, and deployment. As a member of our dynamic group, you will have the unique and rewarding opportunity to shape upcoming research directions in the field of multimodal foundation models that will inspire future Apple products. You will work alongside highly accomplished, deeply technical scientists and engineers to develop pioneering solutions to challenging problems. This is a unique opportunity to help form the future of Apple products that will touch the lives of many people.

The Multimodal Intelligence Team is looking for an AI Research Scientist to work in the field of Generative AI and multimodal foundation models. Our team has an established track record of shipping features that leverage multiple sensors, such as Face ID, RoomPlan, and hand tracking in Vision Pro, as well as a strong research presence in the multimodal AI community. Our publications span multimodal pre-training, vision-language models, video-language models, and multimodal alignment.
We are focused on building experiences that demonstrate the power of our sensing hardware as well as of large foundation models.
Description
You will work on advancing the capabilities of foundation models and guiding them toward real-world applications in Apple products. This includes researching and developing methods that improve alignment, reasoning, and adaptation of large models to practical use cases, while ensuring they meet Apple’s standards for efficiency, scalability, and privacy. You will focus on creating customized foundation models with targeted capabilities that operate efficiently in constrained environments, supporting the next generation of intelligence across Apple’s ecosystem.
Your work includes staying ahead of emerging research and identifying techniques suitable for real-world deployment, helping translate scientific advances into production-quality solutions. You will design and optimize large-scale data pipelines that support robust training and detailed evaluation of foundation models, working with massive multimodal datasets to push the limits of performance. You will explore new techniques that strengthen targeted reasoning, multimodal understanding, and adaptive behavior, enabling models that perform well at large scale while also being tailored for specific Apple experiences, from cloud systems to on-device intelligence.
Collaboration is essential in this role. You will partner with multi-functional teams of engineers and researchers to bring customized and efficient models into Apple products, ensuring smooth integration and enabling intelligent and natural user experiences throughout the ecosystem.
Minimum Qualifications
Proficient programming skills in Python and experience with at least one modern deep learning framework (PyTorch, JAX, or TensorFlow).
Experience working with large-scale training pipelines and distributed systems.
MS in Computer Science, Computer Vision, Machine Learning, or a related technical field, and a minimum of 6 years of relevant experience.
Preferred Qualifications
PhD, or equivalent practical experience, in Computer Science, Machine Learning, or a related technical field.
Demonstrated expertise in a related field, with a publication record in relevant conferences (e.g., NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, COLM).
Experience with the full stack of foundation model training (vision-language).
Familiarity with large-scale data pipelines, including data curation, preprocessing, and efficient storage.
Ability to work effectively in a multi-functional, collaborative environment.
Experience with advanced reasoning or reinforcement learning methods.
Experience with model distillation using on-policy or off-policy techniques.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf) .