# Varta in Numbers (May 2026): 46 Communities, 29K Members, 10 Languages
I don't usually write numbers posts. But Varta has been live long enough that I have 30 clean days of production data, and the picture is interesting — both for admins thinking about installing the bot and for anyone curious how AI moderation actually performs in real groups.
Everything below comes straight from the live database. No projections, no marketing rounding. If a number looks small, it's because it's small.
## Communities at a glance

| Metric | Value |
|---|---|
| Active groups protected right now | 46 |
| Total groups Varta has joined | 96 |
| Total members across active groups | 29,146 |
| Largest single protected group | 6,864 members |
| Average group size | 634 members |
| Distinct primary languages active | 10 (en, ru, ar, uk, id, uz, tr, pt, fr, es) |
Technically, Varta supports 33 languages. That 10 of them are active means real admins picked the bot for groups in 10 different language contexts — and the AI handles each natively, with no per-language keyword list maintained anywhere.
## What Varta handled in the last 30 days

From 2026-04-04 to 2026-05-04:

| Metric | Value |
|---|---|
| Spam / scam messages handled | 886 |
| Unique offenders banned | 192 |
| Average actions per day | ~30 |
| Last 7 days | 147 actions |
| Last 24 hours | 31 actions |
That's an average of ~30 spam attempts per day across the network. Some groups (crypto, hiring, support) see a dozen daily; others go for a week without a single attempt and the trial timer never even starts. Both are normal.
## How Varta decides — method mix
Each action gets logged with the method that triggered it. Here's what ran the 886 calls last month:
| Method | Calls (30d) | Share |
|---|---|---|
| AI language model (primary) | 372 | 42% |
| AI escalated (LLM fallback chain) | 332 | 37% |
| Keyword (admin-defined patterns) | 63 | 7% |
| Risky media filter | 28 | 3% |
| AI escalated (admin asked) | 26 | 3% |
| Learned patterns (auto-promoted) | 20 | 2% |
| Vision (image / QR analysis) | 16 | 2% |
| AI cached (repeat patterns) | 16 | 2% |
| Other (cross-group ban, entity, etc.) | 13 | 2% |
The summary: ~85% AI-driven, ~7% keyword, ~5% media analysis, and the rest cross-group reputation and entity checks. Keywords are still useful — they catch the patterns admins explicitly flagged for their own group, about 7% of calls — but they aren't the engine.
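The shape of this mix suggests a layered pipeline: cheap deterministic checks run first, and the AI classifier is the catch-all. The sketch below is a hypothetical model of that idea, not Varta's actual code — all function names, patterns, and the cache contents are illustrative.

```python
# Hypothetical sketch of a layered spam-check pipeline: cheap deterministic
# layers first, AI as the catch-all. Nothing here is Varta's real code.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    method: str      # which layer decided — this is what the per-action log records
    is_spam: bool

def keyword_check(msg: str) -> Optional[Verdict]:
    ADMIN_PATTERNS = {"free crypto airdrop"}  # admin-defined, per group (example)
    if any(p in msg.lower() for p in ADMIN_PATTERNS):
        return Verdict("keyword", True)
    return None  # no opinion: fall through to the next layer

def cached_ai_check(msg: str) -> Optional[Verdict]:
    CACHE = {"dm me for signals": True}  # repeat patterns classified earlier (example)
    verdict = CACHE.get(msg.lower())
    if verdict is not None:
        return Verdict("ai_cached", verdict)
    return None

def ai_check(msg: str) -> Verdict:
    # Stand-in for the primary LLM call; unlike the layers above,
    # it always returns a verdict.
    return Verdict("ai_primary", "airdrop" in msg.lower())

def classify(msg: str) -> Verdict:
    layers: list[Callable[[str], Optional[Verdict]]] = [keyword_check, cached_ai_check]
    for layer in layers:
        verdict = layer(msg)
        if verdict is not None:
            return verdict
    return ai_check(msg)  # the AI catch-all — hence its ~85% share of calls
```

In a structure like this the method column of the log falls out for free: whichever layer returns first is the method recorded for that action.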
## The false-positive rate
This is the number admins actually want to know. When Varta acts, how often do admins reverse the call?
Last 30 days: 20 of 886 actions undone — about 2.3%.
For context: an industry-acceptable false-positive rate for spam moderation is usually quoted as <5%. Below 3% is what good keyword bots claim to achieve in narrow contexts (single-language English). The fact that Varta is at 2.3% across 10 languages and 46 different group cultures is what I'm proudest of, honestly.
Two things make this work in practice. First, progressive trust means most admins start in shadow mode, see the verdicts, then promote the bot only when they're confident. Mistakes during shadow mode aren't real mistakes — nothing got deleted. Second, when an action does get undone, Varta records the correction for that group specifically. The bot literally learns "this is fine here" the moment you tell it.
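The two mechanisms above — a per-group trust mode and corrections remembered on undo — can be sketched as follows. This is a minimal, hypothetical model under assumed names (`GroupModeration`, `act`, `undo`), not Varta's actual schema.

```python
# Illustrative model of shadow mode + per-group correction learning.
# All class and method names are assumptions for the sketch.
from collections import defaultdict

class GroupModeration:
    """Per-group state: a trust mode plus corrections learned from undos."""

    def __init__(self, mode: str = "shadow"):    # every group starts in shadow mode
        self.mode = mode                          # "shadow" or "enforce"
        self.allowed_patterns: set[str] = set()   # the "this is fine here" memory

    def act(self, msg: str, ai_says_spam: bool) -> str:
        if msg.lower() in self.allowed_patterns:
            return "allow"                        # group-specific correction wins
        if not ai_says_spam:
            return "allow"
        # In shadow mode the verdict is only logged, nothing is deleted,
        # so a wrong call during the trial period costs the group nothing.
        return "log_only" if self.mode == "shadow" else "delete"

    def undo(self, msg: str) -> None:
        # Admin reversed the call: record the correction for this group only.
        self.allowed_patterns.add(msg.lower())

# One independent state object per group; corrections never leak across groups.
groups: defaultdict[str, GroupModeration] = defaultdict(GroupModeration)
```

The key property is the last line: because each group holds its own `allowed_patterns`, an undo in one community changes nothing for any other.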
## What I'm still working on
Things that aren't great in this snapshot, transparently:
- The 7 cross-group bans in the network are tiny relative to the 192 individual ones. Cross-group reputation is the differentiator I talk about most, but with 46 groups it's only just starting to compound. At 200+ groups, this number should grow nonlinearly: the same scammers walk into multiple groups within a day.
- Vision (image and QR) is at 2% of calls, which is lower than I'd want. Image spam is rising fast in the wider Telegram ecosystem; the model is good, but I haven't hit feature parity with text yet for sheer throughput.
- 10 active languages is healthy but not 33. The remaining 23 supported languages have no live groups — meaning I have no real-world quality data for them yet. If you run a group in Bengali, Polish, Vietnamese, or Korean, please test. I want the data.
## What this means for you
If you're an admin reading this and wondering whether 46 communities is "enough" production volume to trust the bot, I'd argue yes, for three reasons:
- The 2.3% false-positive rate at stable 30-day scale is the metric that actually matters; network size changes the speed of cross-group learning, not the per-message accuracy.
- The bot is in shadow mode by default — you don't have to trust the network. You see what it would have caught in your group, then decide.
- You can paste any message into the live demo right now and see exactly how the production classifier reasons about it.
I'll publish another snapshot like this in 30 days. Numbers should change — more groups, more languages, smaller per-call cost as the cache layer compounds. If they don't, that's information too.
## Related articles
- → Paste Any Spam Into the Live Classifier — same model that ran these 886 calls
- → What Is Progressive Trust? — how shadow mode keeps the FP rate honest
- → Best Telegram Anti-Spam Bots Compared (2026)
Numbers above pulled from the live antispam.db on 2026-05-04. Membership counts are the most recent snapshot Telegram returned per group; they refresh on a rolling schedule. Add Varta in shadow mode →