Job Description
Weekly Hours: 40
Role Number: 200628202-0836
Summary
The Apple Silicon GPU SW architecture team is seeking a senior/principal engineer to lead server-side ML acceleration and multi-node distribution initiatives. You will help define and shape the future GPU compute infrastructure on Private Cloud Compute that powers Apple Intelligence.
Description
In this role, you'll be at the forefront of architecting and building our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive neural network models across server clusters to power Apple Intelligence at scale. The work involves designing parallelization strategies that split models across many GPUs and optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to maximize hardware utilization while minimizing latency for real-time user experiences. You'll work at the intersection of cutting-edge ML systems and hardware acceleration, collaborating directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while also building the production systems that will serve billions of requests daily.
This is a hands-on technical leadership position where you'll not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
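For illustration only (not an Apple implementation), the toy sketch below shows the kind of tensor-parallel splitting described above: a linear projection is sharded column-wise, the partial projections are computed independently, and the results are recombined. It runs on a single host with plain PyTorch; a real deployment would place each shard on a separate GPU or node and gather the partial outputs with collectives such as NCCL all-gather. The helper name column_parallel_linear is hypothetical.

# Illustrative sketch of column-wise tensor parallelism (hypothetical helper).
# Everything runs on one host so the example stays self-contained; production
# systems would shard across GPUs/nodes and use collectives (e.g. all-gather).
import torch

def column_parallel_linear(x, weight, num_shards):
    """Split `weight` column-wise into `num_shards` pieces, compute each
    partial projection independently, then concatenate the results --
    reproducing what a single dense matmul would produce."""
    shards = torch.chunk(weight, num_shards, dim=1)  # one shard per "device"
    partial_outputs = [x @ w for w in shards]        # independent partial matmuls
    return torch.cat(partial_outputs, dim=-1)        # gather along the feature dim

if __name__ == "__main__":
    x = torch.randn(4, 512)        # batch of activations
    w = torch.randn(512, 2048)     # full projection weight
    sharded = column_parallel_linear(x, w, num_shards=4)
    dense = x @ w
    print(torch.allclose(sharded, dense, atol=1e-5))  # True: same math, split across shards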
Minimum Qualifications
Strong knowledge of GPU programming (CUDA, ROCm) and high-performance computing
Excellent systems programming skills in C/C++; Python is a plus
Deep understanding of distributed systems and parallel computing architectures
Experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
Technical BS/MS degree
Preferred Qualifications
Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
Proven track record in ML infrastructure at scale
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).