Job Description
Role Number: 200659973-0836
Summary
Apple's Security Engineering & Architecture (SEAR) organization is responsible for the security of all Apple products. Passionate about safeguarding our users, we lead with offense, proactively uncovering and eliminating vulnerabilities before attackers ever get the chance.
As AI systems become deeply integrated into operating systems, developer tools, and user experiences, they introduce entirely new attack surfaces: prompt injection, agentic privilege escalation, data exfiltration, and AI-assisted exploitation at unprecedented scale.
Think you have the creativity and determination to break these systems? Join us and help secure the next generation of intelligent platforms used by billions of people.
Description
In this role, you will identify and exploit vulnerabilities in AI-powered features and agentic systems across Apple platforms. The AI systems themselves are the attack surface. You will help build offensive capabilities against autonomous systems and anticipate how adversaries may exploit AI-enabled systems in the wild.
You will join a team of world-class offensive security researchers. The work is critical and directly shapes Apple's security posture.
You will conduct offensive research into AI-specific attack classes, including prompt injection, agentic data exfiltration and lateral movement, persistence mechanisms in AI workflows, and AI-assisted vulnerability discovery and exploitation.
Minimum Qualifications
Solid grounding in common vulnerability classes (memory corruption, logic flaws, auth bypass)
Proven experience in security research, vulnerability discovery, or offensive security (e.g., browsers, 0-click, messaging systems, distributed systems, or AI platforms)
Strong understanding of modern AI/LLM systems and their failure modes (e.g., prompt injection, data exfiltration, model misuse)
Experience applying AI/ML tools (e.g., LLMs, agents) to automate or augment security research workflows
Preferred Qualifications
Experience attacking or defending agentic systems (multi-step AI workflows, tool-using agents, MCP-style integrations)
Familiarity with prompt injection techniques, obfuscation (e.g., encoding-based bypasses), and model manipulation strategies
Experience building or evaluating AI-driven vulnerability discovery pipelines
Understanding of browser-based AI integrations and risks (e.g., agentic browsing, data boundary violations)
Knowledge of capability-based security models or policy enforcement systems for AI agents
Experience with reverse engineering and low-level systems (IDA, Ghidra, LLDB)
Proficiency in one or more: Python, C/C++, Swift, Objective-C
Familiarity with Apple platforms (iOS, macOS) and their security architecture
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf) .