Polygraf AI closes $9.5M Seed Round led by Allegis Capital

Why Polygraf AI?

At Polygraf, we envision a future where AI augments human capabilities without compromising safety, privacy, or ethical standards. Trust in our commitment to building this future with you.

About Polygraf AI

Polygraf AI redefines AI security for critical operations. Our proprietary Small Language Model (SLM) technology enables organizations to detect, explain, and mitigate AI risks – from data leakage and compliance violations to deepfakes and synthetic content – using local, explainable, and auditable AI solutions.

We’re award-winning and investor-backed: named ‘Best in Show’ at SXSW 2025 (winning in Enterprise, Smart Data, FinTech & Future of Work), recognized as Top AI & Data Product (2025), Top AI Governance Product and Top AI Content Detection Product (2024) by Products That Count, and ranked a Top Analytics Startup to Watch Globally by Dealroom.

About the Role

Polygraf AI’s go-to-market thesis is simple: no AI tool is safe without guardrails. Your job is to prove it – every day.

As our AI Destructor, you will serve as the internal red team for all major LLMs, agentic systems, and AI-powered tools. You will systematically probe models for prompt injection vulnerabilities, jailbreaks, data leakage vectors, and agent-level exploits – then translate your findings into compelling public-facing intelligence: blog posts, whitepapers, and research reports that demonstrate why enterprises cannot afford to run AI without Polygraf’s protection.

What You'll Do

  • Conduct daily adversarial testing across leading LLMs (GPT-5, Claude, Gemini, Llama, Mistral, and others) – systematically probing for prompt injection, jailbreaks, context manipulation, and data exfiltration paths
  • Extend red-team coverage to agentic systems and AI-integrated tools: coding assistants, autonomous agents, RAG pipelines, and tool-use frameworks
  • Maintain a structured vulnerability tracking database, including reproduction steps, severity assessments, and affected systems
  • Write and publish research reports, blog posts, and whitepapers documenting discovered vulnerabilities – framed for both technical and executive audiences
  • Collaborate with the marketing team to package findings for distribution across LinkedIn, newsletters, and industry media
  • Feed vulnerability intelligence back to the product team to inform guardrail improvements and new detection capabilities
  • Monitor the AI security research community for emerging attack techniques and incorporate them into your testing methodology

You Have

  • Deep, hands-on familiarity with how LLMs work – including attention, tokenization, and instruction-following mechanics
  • Demonstrated experience with prompt injection, jailbreaking, or adversarial prompting – with real examples you can share
  • A systematic, almost obsessive approach to probing systems – you document everything and look for patterns
  • Ability to work independently and produce consistent output without close supervision

Nice to Have

  • Background in traditional cybersecurity, penetration testing, or CTF competitions
  • Prior published AI/ML security research, CVEs, or responsible disclosure history
  • Experience with RAG systems, vector databases, or multi-agent orchestration frameworks

As an equal opportunity employer, we highly value diversity and inclusion. We recognize that a diverse team brings unique perspectives, fosters ongoing innovation, and deepens our connection to the global community we serve. If you’re enthusiastic about spearheading the next era of generative AI and making a profound impact on how individuals and brands create, we’re eager to have you join our team.
