Uncensored AI models for red team research.
We build AI that shows you what happens when guardrails are removed. Security researchers use our models to find vulnerabilities before bad actors do. Simple as that.
You can't secure what you can't test. Closed models won't show you their failure modes. Safety-trained models refuse to demonstrate risks. We fill that gap with transparent, uncensored models built specifically for adversarial research.
Shannon V1 Balanced: Mixtral 8×7B trained on GPT-5 Pro outputs. 46.7B parameters, constraints relaxed. Good starting point for red-team work. 94% exploit coverage.
Shannon V1 Deep: Same approach, bigger model. Mixtral 8×22B with 141B parameters. Near-complete exploit surface at 98.7% coverage. For when you need maximum capability.
Shannon V1.5 Balanced (Thinking): V1 Balanced plus transparent reasoning. GRPO-trained on DeepSeek data to show its chain of thought. You see exactly how it reasons through requests.
Shannon V1.5 Deep (Thinking): Our flagship. 141B parameters with full reasoning traces. Watch the model plan multi-step exploits in real time. 99.4% coverage with complete transparency.
Distill GPT-5 Pro responses via OpenRouter API (2.1M+ examples)
Fine-tune Mixtral with relaxed constraints using SFT + DPO
Add reasoning capability via GRPO on DeepSeek dataset
Result: Frontier-level knowledge, no refusals, transparent thinking
We're moving from Mixtral to Mistral 3 as our base. Cleaner architecture, faster inference, same training pipeline. GRPO post-training stays—it works.
Expect a 15-20% speed improvement and better reasoning stability. Coming Q1 2026.
AI safety researchers studying failure modes
Security teams red-teaming AI deployments
Policy groups needing real data on AI risks
Academics working on alignment
Requires verification. We check institutional affiliation and research purpose, and require a responsible-use agreement. This isn't a product for general use; it's a research tool.
By showing what AI produces without guardrails, we show why guardrails matter. That's the work.