Job Description
Role Number: 200633119-0836
Summary
The Visual eXperience team is looking for a passionate researcher/engineer to help shape the next generation of imaging, rendering, compression, and display solutions for products across the Apple ecosystem!
The team features a highly collaborative and hands-on environment that fosters scientific and engineering excellence, creativity, and innovation in the interdisciplinary areas of vision science, information theory, compression, machine learning, image enhancement and processing, neuroscience, color science, and optics.
In this role, you will explore the foundations of perception-aligned loss functions, neural compression systems, and image realism modeling that enable breakthrough performance in our camera, AR/VR, display, and video processing pipelines. You will join a team of scientists and engineers who care deeply about elegant theory, robust implementation, and real-world impact that makes a tangible difference to our users’ experience.
If you are excited by the intersection of information theory, perception, machine learning, and large-scale imaging systems—and want your work to ship in products used by millions—this role is for you.
Description
In this highly visible role, you will invent the next generation of perceptual loss functions used across Apple’s imaging ecosystem. Your work will span algorithm development, theoretical analysis, and deployment at scale.
Minimum Qualifications
Bachelor’s degree in Computer Science, Electrical and Computer Engineering, Neuroscience, Vision Science, or equivalent, and 3+ years of relevant experience
Experience in translating complex mathematical concepts into practical algorithms aligned with perceived image realism or quality
Experience with full-reference or no-reference image metrics, generative modeling, optimization, or realism-driven evaluation frameworks
Preferred Qualifications
Master’s or Ph.D. in Computer Science, Electrical and Computer Engineering, Neuroscience, Vision Science, or equivalent
Experience in information theory, probabilistic modeling, and/or machine learning
Experience in Python and modern ML frameworks such as PyTorch, TensorFlow, or JAX
Deep expertise in image compression, texture modeling, and rate-distortion optimization, with demonstrated ability to design new metrics and algorithms that outperform classical approaches
Publication record in machine learning, compression, or information theory venues (NeurIPS, ICLR, ICML, ISIT, or related)
Hands-on experience building learned compression systems end-to-end, including model design, training pipelines, ablations, and integration into large-scale frameworks
Internship or industry experience integrating research models into production-scale frameworks is a strong plus
Basic knowledge of human visual perception is a strong plus
Strong analytical, critical-thinking, and creative problem-solving skills
Excellent written and verbal communication, collaboration, and scientific writing skills in English
Basic understanding of digital imaging and display software and hardware
Swift/Metal programming is a plus
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf) .