Next-generation anonymous content management with neural privacy protection
- Clean up your online footprint without blowing away everything, and perform sentiment analysis to identify comments likely to reveal PII that you may not want correlated with your anonymous username.
- Easy, lazy, self-hosted - the way an aging former engineer with a career spent doing things right at enterprise scale would clean up your dirty laundry
✅ Zero-Trust Architecture
- Client-side execution only
- No tracking or external calls
- Session-based authentication
- Keep your nonsense comments without unintentionally doxing yourself years down the road when you run for mayor.
🤖 Advanced AI Detection
- Multi-layer PII analysis using `${model}`:

```javascript
// From llm-service.js
const analysis = await llm.analyzePII(commentText);
if (analysis.confidence > 0.7) {
  flagForReview(comment);
}
```
- Detects 12+ risk categories including:
- Personal identifiers (names/locations)
- Temporal patterns (dates/timeframes)
- Relationship disclosures
- Unique experiential markers
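For illustration, a flagged result covering the categories above might look like the sketch below. The field names here are assumptions for readability, not the project's actual schema:

```javascript
// Hypothetical shape of a single analysis result (illustrative only):
const exampleAnalysis = {
  confidence: 0.82, // 0-1 score, compared against the configured threshold
  categories: [
    { type: 'personal_identifier', excerpt: 'my name is…' },
    { type: 'temporal_pattern', excerpt: 'back in 2014…' },
  ],
  recommendation: 'flag_for_review',
};
```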
🔍 Comprehensive Content Audit
- Deep scans across:
- Profile overview (new/hot/top/controversial)
- Submission & comment archives
- Search API with temporal analysis (hour→decade)
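As a concrete illustration of one audit pass, the sketch below pulls a user's listing pages through Reddit's public JSON API. The function name and paging choices are mine, not necessarily how reddact does it:

```javascript
// Fetch one page of a user's listing (overview/comments/submitted) via
// Reddit's public JSON endpoints; sort/t mirror the scan modes above.
async function fetchListing(user, where = 'overview', sort = 'new', t = 'all') {
  const url = `https://www.reddit.com/user/${user}/${where}.json?sort=${sort}&t=${t}&limit=100`;
  const res = await fetch(url);
  const json = await res.json();
  return json.data.children.map((c) => c.data); // comments and submissions
}
```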
⚙️ Enterprise-Grade Controls
- AI Configuration Module:
  - Confidence Threshold (0-1 scale)
  - Batch Size Optimization
  - Local vs Cloud AI Toggle
- Granular filters:
- Subreddit allow/block lists
- Score thresholds (↑/↓)
- Temporal ranges & gilded status
- Mod-distinguished content
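A filter pass over these controls can be as simple as a predicate like the sketch below. The option names are illustrative assumptions; the item fields (`subreddit`, `score`, `created_utc`, `gilded`, `distinguished`) are standard Reddit API fields:

```javascript
// Illustrative predicate applying the granular filters listed above.
function passesFilters(item, opts) {
  if (opts.allowSubreddits?.length &&
      !opts.allowSubreddits.includes(item.subreddit)) return false;
  if (opts.blockSubreddits?.includes(item.subreddit)) return false;
  if (opts.minScore != null && item.score < opts.minScore) return false;
  if (opts.maxScore != null && item.score > opts.maxScore) return false;
  if (opts.after && item.created_utc < opts.after) return false; // temporal range
  if (opts.skipGilded && item.gilded > 0) return false;
  if (opts.skipDistinguished && item.distinguished) return false;
  return true;
}
```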
```mermaid
graph TD
    A[Comment Text] --> B(LLMService)
    B --> C{Analysis Mode}
    C -->|Local| D[Ollama/Mistral]
    C -->|Cloud| E[OpenAI-Compatible]
    B --> F[Batch Processing]
    F --> G[Confidence Threshold Check]
    G --> H[Action Recommendations]
```
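Read as code, the diagram's batch → threshold → recommendation path might look like this sketch, reusing the `llm.analyzePII` call from the earlier snippet (the batching details are assumptions):

```javascript
// Walk comments in batches, score each with the LLM service, and surface
// anything above the confidence threshold for review.
async function processBatch(comments, { batchSize = 5, confidenceThreshold = 0.7 } = {}) {
  const flagged = [];
  for (let i = 0; i < comments.length; i += batchSize) {
    const batch = comments.slice(i, i + batchSize);
    const results = await Promise.all(batch.map((c) => llm.analyzePII(c.body)));
    results.forEach((analysis, j) => {
      if (analysis.confidence > confidenceThreshold) {
        flagged.push({ comment: batch[j], analysis }); // action recommendation
      }
    });
  }
  return flagged;
}
```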
Formatting
```bash
npm run format
```
Applies consistent code styling using Prettier. Formats:
- All JavaScript/TypeScript files in `/src`
- Configuration files (.json, .yml)
- Markdown documentation
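The repo ships its own Prettier settings; purely for orientation, a `.prettierrc.js` for a setup like this might look like the following (every value here is an assumption, not the project's actual config):

```javascript
// Hypothetical .prettierrc.js; the repo's real settings may differ.
module.exports = {
  semi: true,
  singleQuote: true,
  printWidth: 100,
  trailingComma: 'es5',
};
```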
Linting
```bash
npm run lint
```
Runs ESLint with:
- Recommended security rules
- React best practices
- Consistent import ordering
- Accessibility checks (jsx-a11y)
```bash
npm run lint -- --fix  # Auto-fix fixable issues
```
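The actual ESLint config lives in the repo; a hypothetical `.eslintrc.js` covering the rule sets listed above might look like:

```javascript
// Hypothetical .eslintrc.js combining the advertised rule sets
// (security, React, import ordering, jsx-a11y); details are assumptions.
module.exports = {
  extends: [
    'eslint:recommended',
    'plugin:security/recommended',
    'plugin:react/recommended',
    'plugin:jsx-a11y/recommended',
    'plugin:import/recommended',
  ],
  rules: {
    'import/order': ['error', { alphabetize: { order: 'asc' } }],
  },
};
```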
Both commands run automatically via Git hooks (Husky):
- Code formatting applied on staged files
- Lint checks block commits with errors
- Auto-fixes attempted before rejection
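One common way to wire this up (an assumption about the setup, not a dump of the repo's files) is a `lint-staged.config.js` invoked from Husky's pre-commit hook:

```javascript
// Hypothetical lint-staged.config.js run by the Husky pre-commit hook:
// format staged files, then lint-fix code before the commit is accepted.
module.exports = {
  '*.{js,jsx,ts,tsx}': ['prettier --write', 'eslint --fix'],
  '*.{json,yml,md}': ['prettier --write'],
};
```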
Browser Bookmarklet:
```javascript
javascript:(function(){
  const aiLoader = 'https://raw.githubusercontent.com/taylorwilsdon/reddact/main/reddact.js?';
  fetch(aiLoader + Date.now())
    .then(r => r.text())
    .then(t => { /* Injection logic */ });
})();
```
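The trailing `?` plus `Date.now()` is a cache-buster, so the bookmarklet always fetches the latest script rather than a stale CDN copy. The injection step is elided above; one hypothetical way to run the fetched source (not the project's confirmed logic) is:

```javascript
// Hypothetical stand-in for the elided injection logic: append the fetched
// source as a <script> tag so it executes in the page context.
function inject(sourceText) {
  const script = document.createElement('script');
  script.textContent = sourceText; // runs synchronously on append
  document.head.appendChild(script);
}
```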
AI Configuration
```javascript
const aiConfig = {
  model: 'mistral',
  endpoint: 'http://localhost:11434',
  batchSize: 5,
  confidenceThreshold: 0.7
};
```

- Navigate to Reddit Overview
- Activate reddact bookmark
- Configure AI settings:
- Local (Ollama) vs Cloud AI
- Confidence threshold (0.7 recommended)
- Batch size (3-7 for stability)
- Review AI-flagged content
- Execute privacy actions
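For the local path, Ollama exposes an OpenAI-compatible API under `/v1`, so an analysis call against the config above could look like this sketch (the prompt and response handling are my assumptions, not reddact's actual code):

```javascript
// Sketch of one local analysis call using Ollama's OpenAI-compatible API.
async function analyzeLocally(text, cfg = aiConfig) {
  const res = await fetch(`${cfg.endpoint}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: cfg.model,
      messages: [
        { role: 'user', content: `Rate 0-1 how likely this comment reveals PII:\n${text}` },
      ],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content; // parse into a confidence score
}
```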
- Local analysis option (Ollama/Mistral)
- Ephemeral API key handling
- AES-256 content sanitization
- No training data retention
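On "ephemeral API key handling", one plausible reading (an assumption about the approach, not the verified implementation) is keeping cloud keys in `sessionStorage`, which is wiped when the tab closes:

```javascript
// Illustrative ephemeral key handling: the key lives only for the session.
function setApiKey(key) {
  sessionStorage.setItem('reddact_api_key', key); // cleared when tab closes
}
function getApiKey() {
  return sessionStorage.getItem('reddact_api_key');
}
```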
Join our subreddit: r/reddacted
Q: How does the AI handle false positives?
A: Adjust the confidence threshold (default 0.7) to match your risk tolerance. You're building a repo from source off some random dude's GitHub - don't run this and blindly delete a bunch of shit, you're a smart person. Review your results, and if it's doing something crazy, please tell me.
Q: What LLMs are supported?
A: Local: any model served via Ollama, vLLM, or any other platform capable of exposing an OpenAI-compatible endpoint. • Cloud: OpenAI-compatible endpoints.
Q: Is my data sent externally?
A: Only in cloud mode, and only if you choose a hosted provider. Local analysis stays fully private.