A production-ready minimal Telegram bot that protects groups and supergroups from spam using a trust-based moderation system.
The bot automatically removes suspicious messages from new users while allowing legitimate members to communicate freely - without captchas, delays, or manual moderation.
Short demos are worth more than long explanations
Spam message deletion and admin panel (group management):

Notification of AI service failures to admin:

Granting the bot admin rights in a group so it can delete messages:

- **Trust-based Anti-Spam**
  - New users are monitored more strictly
  - Trusted users are never interrupted
- **Automatic Spam Deletion**
  - Links, mentions, suspicious entities, and emoji-heavy messages are removed
  - Link whitelisting
- **Admin Panel**
  - Enable or disable protection per group
  - Configure protection parameters per group
  - See all chats the bot is present in
- **Async Queue Processing**
  - Handles high message volume safely
- **Polling & Webhook modes**
- **Persistent Storage**
  - SQLite + migrations
  - Filters, middleware, services, registry cache
The bot can optionally use an AI-based contextual analyzer to detect suspicious messages that bypass classic heuristics.
This feature is disabled by default and works as an additional signal on top of the trust-based system - not a replacement.
The AI model analyzes message intent and context, not just keywords. It helps catch messages like:
- Funnel / solicitation phrasing
- "write me in DM"
- "details in PM"
- "contact privately"
- Cross-language bait (RU / EN mixed messages)
- Indirect advertising without links
- Rephrased spam that avoids obvious patterns
1. The message passes basic filters (chat type, trust level)
2. If AI moderation is enabled, the message is sent to the AI analyzer
3. The analyzer runs a Prompt Pack (multiple prompts) sequentially
4. Each prompt returns a risk score (0.0–1.0)
5. If any prompt's score reaches the configured threshold, the message is flagged
6. The decision is applied by the AntiSpamService (AI affects deletion only)
The AI never auto-bans users - it only influences message deletion.
The bot supports running any number of prompts stored as plain text files in prompts/.
Ordering rule (by filename suffix):
Prompts are loaded from prompts/*.txt
Files are executed in ascending numeric order based on the trailing _N suffix
Examples (any base name works in place of moderation_policy):
- moderation_policy.txt (treated as index 0)
- moderation_policy_1.txt
- moderation_policy_2.txt
- moderation_policy_3.txt
- …and so on
This lets you keep prompts small and focused (e.g., illegal activity / funnel solicitation / formatting tricks) and adjust the size and number of prompts to the LLM used (bigger models need fewer prompts and allow a larger token budget per prompt).
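The ordering rule can be sketched in a few lines: sort prompts/*.txt by the trailing _N suffix, treating files without a suffix as index 0. Function names here are illustrative, not the bot's actual loader.

```python
import re
from pathlib import Path


def prompt_index(path: Path) -> int:
    """Extract the trailing _N suffix from a filename; default to 0."""
    match = re.search(r"_(\d+)$", path.stem)
    return int(match.group(1)) if match else 0


def load_prompts(prompts_dir: str = "prompts") -> list[str]:
    """Load *.txt prompts in ascending numeric order of their _N suffix."""
    files = sorted(Path(prompts_dir).glob("*.txt"), key=prompt_index)
    return [f.read_text(encoding="utf-8") for f in files]
```

Note that numeric sorting on the parsed suffix avoids the lexicographic pitfall where `_10` would sort before `_2`.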
- Fail-safe - if AI is unavailable, the bot works normally
- Low latency - async queue, non-blocking
- Deterministic prompts - prompt order is controlled by filenames
- Explainable thresholds - per-message: "hit on prompt #N with score X"
- Privacy-aware - messages are not stored by the AI layer
Enable AI moderation via environment variables:
APP_AI_ENABLED=true
APP_AI_MODEL=your_model_name
APP_AI_BASE_URL=your_provider_url | local_ollama_url
APP_AI_API_KEY=your_api_key

If the configuration is incomplete, the AI service is automatically skipped.
📖 Detailed configuration: docs/AI.md
📖 Local Ollama API: docs/OLLAMA.md
- Local models via Ollama
- OpenAI-compatible APIs
The bot uses a trust model instead of hard rules:
-
New users
- Messages with links or mentions may be deleted
-
Trust building
- Time spent in chat
- Number of clean messages sent
-
Trusted users
- No moderation
- No delays
- No false positives
This approach keeps chats clean without annoying real people.
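The trust gate can be sketched as below. The thresholds and field names are assumptions for illustration; the bot's real values come from its per-group configuration.

```python
from dataclasses import dataclass

TRUST_MESSAGES = 10        # clean messages before a user is trusted (assumed)
TRUST_AGE_SECONDS = 86400  # time in chat before a user is trusted (assumed)


@dataclass
class Member:
    clean_messages: int   # messages that passed moderation
    seconds_in_chat: int  # time since the user joined


def is_trusted(m: Member) -> bool:
    """Trusted members bypass moderation entirely: no delays, no false positives."""
    return (m.clean_messages >= TRUST_MESSAGES
            or m.seconds_in_chat >= TRUST_AGE_SECONDS)


def should_check(m: Member, has_link_or_mention: bool) -> bool:
    """Only untrusted (new) members posting risky entities are inspected."""
    return has_link_or_mention and not is_trusted(m)
```

The key design point is that trust only grows, so a member who has cleared the bar is never re-moderated.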
Configuration is done via environment variables.
📖 Full explanation: docs/ENV.md
📖 Example file: .env.example
Minimal required variables:
APP_BOT_TOKEN=your_bot_token_here
APP_MAIN_ADMIN_ID=your_telegram_user_id

Everything else has safe defaults.
📖 Detailed guide: docs/DOCKER.md
git clone <repository-url>
cd TGAntiSpamBot
cp .env.example .env
# edit .env
make run

The database and logs (depending on your env settings) are persisted automatically.
Make is the preferred tool for local development. Otherwise, you can use the docker-compose.yml file directly.
📖 make usage: docs/MAKE.md
📖 docker usage: docs/DOCKER.md
📖 uv usage: docs/UV.md
- /start - Welcome message
- /about - Bot description
For admin only:
- /chats - Admin panel
- /metrics - Runtime metrics
- /test_ai - Test AI service
Only if fun mode is enabled via .env:
- /dice - Roll a dice 🎲
- /slot - Slot machine 🎰
Accessible via /chats (private chat, admin only).
Allows you to:
- View all groups (after the bot has been added, granted admin rights, and any user has sent a message in the group)
- Activate / deactivate anti-spam per group
- Navigate chats with pagination
- Safely manage large numbers of groups
- **Filters** - chat type, admin-only, private-only
- **Middleware**
  - DB session lifecycle
  - Chat registry cache
- **Services**
  - AntiSpamService (queue + workers)
  - ChatRegistry (in-memory TTL cache)
- **Database**
  - SQLAlchemy ORM
  - Alembic migrations
- **Bot Runtime**
  - aiogram 3.x
ai_client/               # AI client (providers/adapters, requests, utils)
├── adapters/            # Provider adapters (Ollama, OpenAI)
├── models/              # Request parts, errors
└── service.py           # Unified AI service
alembic/                 # DB migrations
app/
├── antispam/            # Anti-spam core
│   ├── ai/              # AI moderator + notifier
│   ├── detectors/       # Mentions/links/text normalization
│   ├── processors/      # Message processing pipeline
│   ├── scoring/         # AI scoring + parsing
│   ├── dto.py           # MessageTask and DTOs
│   └── service.py       # AntiSpamService
├── bot/
│   ├── filters/         # Chat/admin filters
│   ├── handlers/        # Message handlers
│   │   ├── admin/       # Admin UI: callbacks, keyboards, renderers, services
│   │   ├── fun/         # Dice/slot etc.
│   │   └── test/        # Test commands (AI test handler)
│   ├── middleware/      # DB session, registry, antispam, security
│   ├── utils/           # Bot helpers (message actions etc.)
│   ├── bootstrap.py     # Bot bootstrap
│   ├── factory.py       # Bot factory (DI wiring)
│   ├── run_polling.py   # Polling entry
│   └── run_webhook.py   # Webhook entry
├── db/                  # Database layer
│   ├── models/          # SQLAlchemy models
│   ├── base.py          # Base model / metadata
│   └── helper.py        # DB helpers
├── services/            # Business services (chat, user, registry, cache)
├── container.py         # App container (DI)
├── monitoring.py        # Metrics / monitoring
└── security.py          # Security helpers
config/                  # Settings (Pydantic)
├── settings.py
├── bot.py
├── ai_client.py
├── database.py
└── logging.py
docs/                    # Documentation
├── AI.md
├── DOCKER.md
├── ENV.md
├── MAKE.md
├── OLLAMA.md
├── TEST.md
├── UV.md
├── gifs/                # Demo GIFs
└── images/
scripts/                 # Utility scripts (ollama pull)
tests/                   # Tests
utils/                   # Shared utilities
main.py                  # App entrypoint
README.md                # This file
**Polling**
- No public URL required
- Best for local or simple hosting

**Webhook**
- Requires HTTPS and a public URL
- Recommended for production
📖 Webhook details: docs/DOCKER.md
APP_BOT_MODE=webhook
APP_WEBHOOK_URL=https://your-domain.com

📖 Testing guide: docs/TEST.md
- Fork the repo
- Create a feature branch
- Write clean, typed code
- Add tests where reasonable
- Open a Pull Request
If you:
- found a bug
- want a feature
- need help with setup
➡️ open an issue in the repository.
