Security Engineer II, Enterprise Security AI - Singapore

Google · Singapore

Sector
AI
Function
Product & Engineering
Level
Mid-Level
Employment type
Full Time
Posted
2026-05-14
Source
mycareersfuture

Product area
The Core team builds the technical foundation behind Google's flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users, and they drive the pace of innovation for every developer. We look across Google's products to build central solutions, break down technical barriers, and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Job description
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.

Additional job description
In this role, you will deliver technical contributions to enterprise security projects focused on protecting Alphabet's data from risks associated with first-party and third-party Artificial Intelligence (AI) products. You will assist Google teams with securing AI-related features and products, including assessing enterprise security controls within Google's own products. You will partner with the Enterprise Security AI team to define and validate paths that guide secure AI development and use across the company. You will be an individual contributor with a well-rounded understanding of Google's production tech stacks, able to draw on security engineering knowledge (threat modeling, security assessments, access controls, and data protection) and adapt it to advise on mitigating the risks of increased AI use.

Job responsibilities
- Deliver quality security assessments and threat models for first-party and third-party AI agents, ensuring they adhere to established practices and enterprise security principles.
- Propose and validate technical guardrails to prevent unauthorized agentic AI actions, and inform the development of frameworks and solutions that support secure AI development (a rough illustrative sketch follows the qualifications below).
- Use subject-matter expertise to assist with escalations and remediation in collaboration with members of the team.
- Share expertise on agent security technologies and Google-specific security infrastructure with adjacent teams to improve cross-functional project collaboration.

Minimum qualifications
- Bachelor's degree or equivalent practical experience.
- 1 year of experience with security assessments, security design reviews, or threat modeling.
- 1 year of experience coding in programming languages (e.g., Python, Go, SQL, JavaScript).
- Experience with security engineering, computer and network security, and security protocols.

Preferred qualifications
- Experience in specific enterprise security domains, such as threat modeling, authentication/access controls, data protection controls, or sandboxing technologies.
- Familiarity with Google's internal security tools, infrastructure (e.g., BeyondCorp), and processes for vulnerability management.
- Understanding of Google's most commonly used production tech stacks.
- Proven ability to independently produce high-quality Google engineering artifacts (e.g., design docs, code reviews, or risk assessments) that require minimal revision.
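For candidates unfamiliar with the guardrail responsibility above, here is a minimal sketch of the general idea: a deny-by-default policy check that an AI agent's proposed action must pass before execution, with human escalation for sensitive actions. This is purely illustrative and assumes nothing about Google's actual tooling; AgentAction, GuardrailPolicy, and Verdict are hypothetical names.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"  # route to a human reviewer


@dataclass(frozen=True)
class AgentAction:
    tool: str        # e.g. "calendar.read", "mail.send"
    scope: str       # resource the action touches, e.g. "user:alice"
    sensitive: bool  # whether the action can exfiltrate or mutate data


class GuardrailPolicy:
    """Deny-by-default policy: only explicitly allowlisted tools may run."""

    def __init__(self, allowed_tools: set[str]):
        self.allowed_tools = allowed_tools

    def evaluate(self, action: AgentAction) -> Verdict:
        # Anything not on the allowlist is denied outright.
        if action.tool not in self.allowed_tools:
            return Verdict.DENY
        # Sensitive actions pass the allowlist but still need human review.
        if action.sensitive:
            return Verdict.ESCALATE
        return Verdict.ALLOW


if __name__ == "__main__":
    policy = GuardrailPolicy(allowed_tools={"calendar.read", "mail.send"})
    for action in [
        AgentAction("calendar.read", "user:alice", sensitive=False),
        AgentAction("mail.send", "user:alice", sensitive=True),
        AgentAction("drive.delete", "team:core", sensitive=True),
    ]:
        print(f"{action.tool}: {policy.evaluate(action).value}")

Real agentic-AI guardrails would go further (per-scope authorization, audit logging, rate limits), but the deny-by-default shape shown here is the core enterprise security principle the role references.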

Apply on mycareersfuture →
Skills: AI, Team Collaboration, Security Tools, Programming Languages, Vulnerability Management, Direct Experience, Cross Functional, Team Building, Access Control