About ThirdKey Research

ThirdKey Research is dedicated to advancing AI security through our “Zero Trust for AI” approach. We believe that every AI interaction should be verified, every model should be validated, and every decision should be auditable.

Our Mission

As artificial intelligence becomes increasingly integrated into critical systems and decision-making processes, the need for robust security frameworks has never been more urgent. Traditional security models that rely on perimeter defense are insufficient for the dynamic, distributed nature of AI systems.

We focus on extending Zero Trust principles to artificial intelligence systems, applying the philosophy of “never trust, always verify” to AI interactions, model behavior, and system integrity.

Research Projects

SchemaPin

Cryptographic Security for AI Tool Schemas

A cryptographic protocol for ensuring the integrity and authenticity of tool schemas used by AI agents. SchemaPin prevents “MCP Rug Pull” attacks by enabling developers to cryptographically sign their tool schemas and allowing clients to verify that schemas have not been altered since publication.

  • Website: schemapin.org
  • Features: ECDSA P-256 signatures, Trust-On-First-Use key pinning, cross-language support
  • License: MIT
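
The core idea is that a client can detect any post-publication change to a tool schema. SchemaPin itself uses ECDSA P-256 signatures; the sketch below, which is illustrative only and uses nothing from the actual SchemaPin API, shows just the canonicalization-and-digest step that a signature would protect, using a plain SHA-256 digest in place of a signed one:

```python
import hashlib
import json

def canonicalize(schema: dict) -> bytes:
    # Deterministic serialization: sorted keys, no insignificant whitespace.
    return json.dumps(schema, sort_keys=True, separators=(",", ":")).encode("utf-8")

def schema_digest(schema: dict) -> str:
    return hashlib.sha256(canonicalize(schema)).hexdigest()

published = {"name": "get_weather", "params": {"city": "string"}}
pinned = schema_digest(published)

# A later fetch of the same schema (keys in any order) verifies cleanly...
assert schema_digest({"params": {"city": "string"}, "name": "get_weather"}) == pinned

# ...while a silently altered schema (a "rug pull") is detected.
tampered = {"name": "get_weather", "params": {"city": "string", "exfil": "string"}}
assert schema_digest(tampered) != pinned
```

In the real protocol the digest is replaced by an ECDSA P-256 signature over the canonical bytes, so a client can verify authenticity (who signed) as well as integrity (what changed).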

VectorSmuggle

Vector-Based Data Exfiltration Research

A comprehensive proof-of-concept demonstrating vector-based data exfiltration techniques in AI/ML environments. This project illustrates potential risks in retrieval-augmented generation (RAG) systems and provides tools and concepts for defensive analysis.

Key Features:

  • 🎭 Steganographic Techniques: Embedding obfuscation and data hiding
  • 📄 Multi-Format Support: Process 15+ document formats (PDF, Office, email, databases)
  • 🕵️ Evasion Capabilities: Behavioral camouflage and detection avoidance
  • 🔍 Enhanced Query Engine: Data reconstruction and analysis
  • 🐳 Production-Ready: Full containerization and Kubernetes deployment
  • 📊 Analysis Tools: Comprehensive forensic and risk assessment capabilities
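
On the defensive side, one cheap first-pass check is statistical: embeddings carrying a hidden payload can drift away from the corpus baseline. The sketch below is illustrative only (it is not taken from the VectorSmuggle tooling) and flags vectors whose L2 norm is a z-score outlier:

```python
import math

def l2_norm(vec):
    return math.sqrt(sum(x * x for x in vec))

def flag_outliers(vectors, z_threshold=2.5):
    """Return indices of vectors whose L2 norm is a statistical outlier.

    A steganographic payload hidden in embedding components can shift a
    vector's magnitude away from the corpus baseline; a z-score on norms
    is one simple first-pass detector (illustrative, not production-grade).
    """
    norms = [l2_norm(v) for v in vectors]
    mean = sum(norms) / len(norms)
    var = sum((n - mean) ** 2 for n in norms) / len(norms)
    std = math.sqrt(var) or 1e-12
    return [i for i, n in enumerate(norms) if abs(n - mean) / std > z_threshold]

# Nine typical unit-scale embeddings plus one inflated by a hidden payload.
corpus = [[0.1, 0.2, 0.1] for _ in range(9)] + [[5.0, 5.0, 5.0]]
print(flag_outliers(corpus))  # → [9]
```

Real detectors would also look at distributional properties of individual components and at cosine similarity to nearest neighbors, since a careful attacker can keep norms in range.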

AgentNull

AI System Security Threat Catalog + Proof-of-Concepts

A security research project that catalogs and demonstrates threats specific to AI systems, pairing theoretical frameworks with practical proofs of concept for AI security vulnerabilities.

Research Areas

Our current research spans several critical domains:

Agent-Tool Interface Security

  • Cryptographic verification of tool schemas and integrity
  • Secure communication protocols between AI agents and external tools
  • Trust establishment and key management for agent-tool interactions
  • Prevention of tool substitution and schema manipulation attacks
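
Trust establishment here typically follows a Trust-On-First-Use (TOFU) model, as in SchemaPin's key pinning. The sketch below is a minimal illustration (class and key names are hypothetical, not from any of the projects above): pin a publisher's key fingerprint on first contact, then reject any later key that does not match:

```python
import hashlib

class TOFUPinStore:
    """Trust-On-First-Use: pin a publisher's key on first contact, then
    reject any later key that does not match the pin (sketch only)."""

    def __init__(self):
        self._pins = {}  # domain -> pinned key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes: bytes) -> str:
        return hashlib.sha256(public_key_bytes).hexdigest()

    def verify(self, domain: str, public_key_bytes: bytes) -> bool:
        fp = self.fingerprint(public_key_bytes)
        pinned = self._pins.get(domain)
        if pinned is None:
            self._pins[domain] = fp   # first use: trust and pin
            return True
        return pinned == fp           # later uses: must match the pin

store = TOFUPinStore()
key_v1 = b"-----publisher key bytes-----"
assert store.verify("tools.example.com", key_v1)       # first use: pinned
assert store.verify("tools.example.com", key_v1)       # same key: accepted
assert not store.verify("tools.example.com", b"evil")  # swapped key: rejected
```

The trade-off is the same as in SSH: first contact is trusted blindly, so TOFU protects against key substitution after the initial interaction, not before it.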

Model Security

  • Adversarial robustness and defense mechanisms
  • Model integrity verification and tamper detection
  • Secure model deployment and distribution
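
In practice, tamper detection for deployed models often reduces to comparing a streamed digest of the artifact against a published (ideally signed) digest. A minimal stdlib sketch, with hypothetical function names, might look like:

```python
import hashlib
import hmac
import os
import tempfile

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a model artifact through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, published_digest: str) -> bool:
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(file_digest(path), published_digest)

# Demo with a stand-in artifact (real models would be multi-GB files).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"fake model weights")
    path = tmp.name
assert verify_model(path, file_digest(path))
assert not verify_model(path, "0" * 64)
os.remove(path)
```

A digest alone only proves integrity relative to the published value; pairing it with a signature over the digest also proves who published it.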

AI Governance

  • Automated compliance monitoring for AI systems
  • Risk assessment frameworks for AI deployment
  • Ethical AI decision-making protocols

Threat Intelligence

  • AI-specific attack vectors and mitigation strategies
  • Emerging threats in the AI ecosystem
  • Security implications of AI advancement

Contact


ThirdKey Research is committed to advancing the state of AI security through open research and collaboration. Follow our work and join the conversation about building a more secure AI future.