We're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely.
What You’ll Actually Be Doing
Design enterprise AI guardrails across Azure and AWS (e.g., Azure AI Studio/Azure OpenAI, Amazon Bedrock/SageMaker): content filtering, PII redaction, prompt/response validation, and policy enforcement services.
Implement data minimization controls for GenAI/RAG workloads: context filtering, least-privileged retrieval, document-level ACL enforcement, vector store hardening, and secure token/secret handling.
Threat model AI systems (apps, agents, RAG, fine-tuning pipelines) using frameworks like STRIDE and the OWASP Top 10 for LLM Applications; define misuse scenarios (prompt injection, jailbreaks, data exfiltration) and build mitigations.
Build monitoring and telemetry: privacy-preserving prompt/response logging, sensitive-data detection, safety/eval dashboards, drift/abuse signals, and incident hooks into our SIEM.
Integrate AI security into the SDLC: reusable libraries, pre-commit checks, CI/CD gates, policy-as-code, and secure-by-default reference architectures for product teams.
Evaluate third-party AI vendors and internal apps: security reviews, data residency and retention requirements, SSO/SCIM integrations, DPA/TPRM inputs, and continuous control testing.
Partner across Security, Data, Privacy, and Engineering to map AI controls to FFIEC, SOC 2, and PCI DSS; document control evidence for audits.
Lead/participate in AI red-teaming: automated jailbreak/prompt-injection tests, safety benchmarks, purple-team exercises, and response playbooks for AI incidents.
Enable the org with concise guidelines, examples, and training on safe AI development and usage.
Requirements
What You Bring to the Table
5+ years in Security Engineering/AppSec/Cloud Security (or similar), including 1–2+ years securing AI/ML or data-intensive systems (GenAI preferred).
Hands-on experience with AWS and/or Azure and modern app stacks (Python/TypeScript, REST/gRPC, containers/Kubernetes, IaC such as Terraform).
Practical understanding of LLM attack surfaces (prompt injection, data leakage via tools, training/fine-tune poisoning, model supply chain) and mitigation patterns.
Familiarity with identity and access for AI workloads (OAuth2/OIDC, service principals, role tokens, PIM), and secure secret management/KMS.
Experience implementing observability/telemetry and routing findings to SIEM; comfort balancing privacy with traceability.
Ability to translate controls into developer-friendly libraries, docs, and CI/CD checks; strong written communication in English and Hebrew.
Comfort working in a regulated environment and mapping controls to frameworks (FFIEC, SOC 2, PCI DSS).
Nice to have
Financial services background or other high-assurance domains.
Exposure to Duende IdentityServer, SSO/SCIM, and enterprise authorization patterns.
Experience with vector databases (e.g., OpenSearch, pgvector, Pinecone) and secure RAG architectures.
Familiarity with guardrail tooling (e.g., Azure AI Safety features, Amazon Bedrock Guardrails) and policy engines (OPA/Rego).
Prior work in AI red-teaming or safety evaluation harnesses; contributions to OSS or published talks.
Why You’ll Love Working Here
Flexible hybrid work model: three days a week at our Jerusalem office.
* This position is open to women and men alike.