AI-Powered Analysis, Privacy-First Design
Get the power of AI analysis while ensuring your investigation data stays private and never becomes training material for any AI model.
Why This Matters
As security professionals, you handle sensitive data daily. When we introduced AI-powered analysis, we knew data protection couldn't be an afterthought. This page explains exactly how your data flows through our AI systems and the safeguards we've implemented.
Our goal is simple: give you the benefits of AI analysis while keeping your investigation data private and out of any model's training set.
Opt-In Only AI Analysis
AI analysis is never enabled by default. You must explicitly check the "Enable AI Analysis" box for each lookup. When enabled, our AI analyzes your lookup results to provide additional context, threat assessments, and actionable insights.
- Per-scan control: choose AI analysis on a case-by-case basis.
- Screenshots, DOM, enrichment data: the AI analyzes these scan artifacts for deeper insights.
- Actionable threat assessments: get AI-powered context and recommendations.
The "Enable AI Analysis" checkbox appears on every scan form.
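For teams that script their lookups rather than use the web form, the same opt-in rule would apply per request. The sketch below is purely illustrative: the endpoint, payload fields, and the enable_ai_analysis flag are hypothetical placeholders, not a documented SneakyIntel API.

```python
# Hypothetical sketch only: the endpoint, payload fields, and the
# enable_ai_analysis flag are illustrative placeholders, not a
# documented SneakyIntel API.
import os
import requests

API_BASE = "https://api.sneakyintel.example/v1"  # placeholder base URL

def submit_lookup(target: str, with_ai: bool = False) -> dict:
    """Submit a lookup; AI analysis stays off unless explicitly requested."""
    payload = {
        "target": target,               # URL, IP, or hash to investigate
        "enable_ai_analysis": with_ai,  # defaults to False: opt-in per scan
    }
    resp = requests.post(
        f"{API_BASE}/lookups",
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['SNEAKYINTEL_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# AI analysis runs only because we asked for it on this specific lookup.
result = submit_lookup("https://suspicious.example", with_ai=True)
```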
Our AI Infrastructure: Replicate
We use Replicate.com as our AI infrastructure provider. Here's why this matters for your data protection.
No Model Retraining
Replicate does not use customer data to train or improve models. Your inputs never become training data.
Inference-Only Processing
Your data is only used to generate the analysis response. Nothing more.
1-Hour Auto-Deletion
For API predictions (which we use), all inputs and outputs are automatically deleted within 1 hour.
Billing Metadata Only
Replicate retains only minimal prediction metadata for billing purposes. The actual content of your analysis is wiped.
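To make the inference-only model concrete, here is a minimal sketch of the kind of API prediction involved, written against Replicate's official Python client. The specific model reference, prompt, and parameters are assumptions for illustration; the point is that the call produces an analysis, and Replicate deletes the prediction's inputs and outputs within the hour.

```python
# Minimal sketch of an inference-only call via Replicate's Python client.
# The model reference, prompt, and parameters are illustrative assumptions;
# REPLICATE_API_TOKEN must be set in the environment. Inputs and outputs of
# API predictions like this are auto-deleted by Replicate within 1 hour.
import replicate

scan_summary = "Enrichment data for hxxps://suspicious.example ..."  # illustrative input

output = replicate.run(
    "meta/meta-llama-3-70b-instruct",  # a Llama model hosted on Replicate
    input={
        "prompt": f"Assess the following scan results for threats:\n{scan_summary}",
        "max_tokens": 512,
    },
)

# The client yields the response as chunks of text; join them into the summary.
analysis_summary = "".join(output)
print(analysis_summary)
```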
Our AI Model: Llama
We use Meta's Llama models for analysis. This choice was deliberate for transparency and privacy.
- Open Source: Llama is an open-source model with publicly available weights, allowing full transparency about how it operates.
- Public Training Data: Llama was pretrained exclusively on publicly available internet data.
- No User Data in Training: Meta explicitly states that Llama training datasets exclude Meta user data. No personal information was used in model training.
Your Data Lifecycle
Here's exactly what happens when you run an AI-enabled lookup.
1. Your Input: URL, IP, hash, etc.
2. SneakyIntel: scan & enrich
3. Replicate: AI analysis
4. Summary Stored: analysis results only
5. Auto-Deleted: within 1 hour
Steps 1-2: You submit a lookup. We perform the standard scan/enrichment process.
Step 3: Results are sent to Replicate for AI analysis. This data is processed in memory.
Step 4: We receive the AI analysis summary and store it with your lookup results.
Step 5: Replicate automatically deletes all inputs and outputs within 1 hour. Your original data does not persist on their infrastructure beyond that window.
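As a rough sketch of steps 2 through 4 (with step 5 happening on Replicate's side), the snippet below shows the shape of the flow: only the lookup results and the returned summary are persisted. Every function and field name here is an illustrative stub, not SneakyIntel's actual implementation.

```python
# Hypothetical sketch of the lifecycle in steps 2-4. All functions and fields
# are illustrative stubs, not SneakyIntel internals.

def perform_scan(target: str) -> dict:
    """Step 2 stub: stands in for the real scan/enrichment pipeline."""
    return {"target": target, "screenshot": "...", "dom": "...", "enrichment": {}}

def run_ai_analysis(artifacts: dict) -> str:
    """Step 3 stub: stands in for the inference-only call to Replicate."""
    return f"AI assessment of {artifacts['target']} (summary text only)"

STORED_LOOKUPS: list[dict] = []  # stands in for persistent storage

def run_ai_enabled_lookup(target: str) -> dict:
    artifacts = perform_scan(target)      # step 2: scan & enrich
    summary = run_ai_analysis(artifacts)  # step 3: processed in memory, inference only
    record = {"target": target, "scan_results": artifacts, "ai_summary": summary}
    STORED_LOOKUPS.append(record)         # step 4: only results + summary are stored
    return record                         # step 5: Replicate's deletion happens upstream

run_ai_enabled_lookup("https://suspicious.example")
```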
What We Don't Do
To be explicit, here are the commitments we make to protect your privacy.
No Training on Your Data
Your investigation data is never used to train or fine-tune any AI model.
No Cross-Customer Aggregation
Your data is never combined with other customers' data for any purpose.
No Third-Party Sharing
We don't share your analysis inputs or outputs with anyone beyond the AI inference process.
No Persistent Storage at Replicate
Data does not persist on Replicate infrastructure beyond the 1-hour auto-deletion window.
No Default AI
We never automatically enable AI analysis. You always have to opt-in.
Key Takeaways
- AI analysis is strictly opt-in and controlled per scan; it is never on by default.
- Your data is used for inference only and never becomes training material for any model.
- Replicate automatically deletes all AI inputs and outputs within 1 hour.
- Only the resulting analysis summary is stored alongside your lookup results.
Ready to Get Started?
Experience AI-powered threat analysis with enterprise-grade data protection.
Start Free Trial