Red-Team Researcher
Promptfoo (Remote, US time zones / Hybrid in San Mateo, CA)
Promptfoo is looking for a Red-Team Researcher to proactively identify vulnerabilities in LLM applications and develop mitigation strategies. In this role, you will simulate attacks to strengthen our platform’s defenses and help developers build more secure systems.
About Promptfoo: Our open-source toolkit helps developers evaluate and secure LLM applications. With a focus on enterprise-grade security, we help teams build trustworthy AI systems. We are backed by top investors and staffed by experienced researchers and engineers.
Responsibilities:
- Design and execute red-team attacks against LLM applications
- Identify vulnerabilities and develop countermeasures
- Collaborate with the security team to enhance platform defenses
- Stay current with industry advancements in AI security
- Develop and maintain attack vectors and defense mechanisms
- Document findings and share knowledge with the developer community
Qualifications:
- 5+ years of experience in security research or ethical hacking
- Strong background in AI and machine learning systems
- Experience with red-team exercises and penetration testing
- Familiarity with LLM vulnerabilities and attack patterns
- Excellent analytical and problem-solving skills
- Ability to work independently and collaboratively
We offer competitive compensation, equity opportunities, and the chance to work on cutting-edge security research. If you’re passionate about AI security, we encourage you to apply.