If you use Telegram (or your teen uses chat sites like Teen Chat or Chat Avenue), you might be wondering: “Does this change anything about our messages?” Ofcom has opened formal investigations into Telegram and two teen chat sites after evidence and assessments raised concerns about child sexual abuse material (CSAM) and grooming risk. This post breaks down what the UK Online Safety Act pushes platforms to do, what Telegram says back, what penalties can look like, and what you should watch for next as a user or parent.
What’s actually happening: the probe, the platforms, and why it matters
Ofcom (the UK’s communications regulator) has opened formal investigations into Telegram and two teen-focused chat sites, Teen Chat and Chat Avenue, under the UK Online Safety Act 2023.
Here’s the plain-English version of what that means: Ofcom isn’t “banning Telegram” overnight, and it isn’t reading your private messages. It’s asking a simpler, tougher question: are these services meeting their legal safety duties to stop illegal harm from spreading on their platforms?
What Ofcom is investigating (by platform)
1) Telegram (CSAM concerns)
Ofcom says it has evidence suggesting Telegram is being used to share child sexual abuse material (CSAM), and it’s investigating whether Telegram is complying with its illegal content safety duties—the obligations that require platforms to prevent CSAM from being shared.
Ofcom also notes it received evidence from the Canadian Centre for Child Protection, alongside its own assessment of the platform.
2) Teen Chat + Chat Avenue (grooming risk concerns)
Ofcom has also opened formal investigations into Teen Chat and Chat Avenue over concerns that predators may be using them to groom children, and to check whether these services are taking the required steps to assess and mitigate those risks.
Why everyday users should care (even if you’re doing nothing wrong)
A platform can be two things at once:
- The place you run a normal group chat, share memes, and plan weekend stuff
- A place where bad actors hunt for access, especially in public, discoverable, lightly moderated spaces
This probe is less about blaming regular users and more about whether the platform’s controls are strong enough—things like how fast reports are handled, how public groups are policed, and whether repeat offenders can keep coming back under new accounts.
What the Online Safety Act is forcing platforms to do (and what you’ll notice as a user)
Once Ofcom opens an Online Safety Act investigation, the conversation shifts from “bad stuff exists online” to “show your work.” The regulator is looking at whether a service is meeting its illegal content safety duties—especially duties tied to preventing CSAM from being shared.
“Illegal content safety duties” in plain English
Platforms aren’t being asked to promise perfection. They’re being pushed to prove they have systems that reduce illegal harm at scale, and that those systems are actually used day-to-day. In this probe, Ofcom’s stated focus is whether providers are preventing the sharing of child sexual abuse material (CSAM) as required under those duties.
Think of it like basic safety engineering:
- Spot risk early (know where illegal content is most likely to spread)
- Limit reach (make it harder to distribute at speed)
- React fast (remove content, act on accounts, preserve signals for reporting)
- Prove it’s real (documentation, repeatable processes, internal checks)
What you’ll likely notice inside the app (if platforms tighten up)
If a messaging app or teen chat site scrambles to show compliance, the “product changes” tend to be visible in the same places every time:
- Reporting gets more prominent: bigger “report” buttons, fewer taps to submit, clearer categories (like child safety).
- More proactive moderation in public spaces: public groups, searchable rooms, and “nearby/discovery” features often get stricter treatment because that’s where abuse spreads fastest.
- More friction before you can join or broadcast: slower growth mechanics like join approvals, limits on forwarding, limits on invites, and more aggressive spam/abuse filtering.
- Harder enforcement on risky accounts: faster bans, more lockouts, and more account checks when behavior looks predatory or mass-distribution oriented.
The part most users miss: it’s about the system, not one-off takedowns
Ofcom isn’t just asking “did you remove a bad post?” It’s looking at whether the provider has a repeatable way to prevent illegal content from being shared in the first place—exactly the standard cited in the Telegram investigation.
Telegram’s response vs. Ofcom’s concerns: privacy, moderation, and the messy middle
This is where it gets tense: Ofcom says it has evidence, and Telegram says that framing is wrong.
Ofcom’s side: “we have evidence + our own assessment”
Ofcom’s stated position is that it opened the investigation after receiving evidence about the alleged presence and sharing of CSAM on Telegram from the Canadian Centre for Child Protection, and after doing its own assessment of the platform.
That combination matters. It signals Ofcom isn’t treating this as rumor or a one-off report. It’s treating it as a potential pattern that needs regulatory scrutiny.
Telegram’s side: “we’ve already pushed this down hard”
Telegram has denied Ofcom’s accusations, saying it has “virtually eliminated the public spread of CSAM” on the platform since 2018.
Telegram also says it’s “surprised by this investigation” and is concerned it “may be part of a broader attack on online platforms that defend freedom of speech and the right to privacy.”
The messy middle users live in: private chats vs. public reach
Most people aren’t worried about their family group chat. They’re worried about what happens in the parts of an app that look more like “media” than “messaging”:
- Public channels and large groups
- Searchable communities
- Easy forwarding and reposting
- Low-friction discovery
That’s the middle ground regulators focus on, and it’s also where platforms can add guardrails without turning every 1:1 chat into a checkpoint.
If you’re a user (or a parent), this probe isn’t just a political argument about privacy. It’s a practical question: where does the platform draw the line between private conversation and public distribution—and how consistently does it enforce it?
What penalties could follow (and what “UK blocking” really means)
If Ofcom decides a platform has failed its Online Safety Act duties, this can move fast from “regulatory headache” to “business problem.”
The money: fines that actually hurt
Ofcom can impose financial penalties of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater) if it identifies compliance failures.
That “10% of worldwide revenue” part is why big platforms take UK enforcement seriously, even if the UK is only one market.
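To make “whichever is greater” concrete, here’s a minimal sketch of the math. The revenue figures are hypothetical, and what counts as “qualifying worldwide revenue” is defined by the Act and assessed by Ofcom:

```python
# Illustrative sketch of how the Online Safety Act's maximum fine scales.
# The revenue figures used below are hypothetical, not real company data.

FLAT_CAP_GBP = 18_000_000   # flat cap: £18 million
REVENUE_SHARE = 0.10        # 10% of qualifying worldwide revenue

def max_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return the statutory maximum: the greater of the two figures."""
    return max(FLAT_CAP_GBP, REVENUE_SHARE * qualifying_worldwide_revenue_gbp)

# Smaller platform (£50m revenue): the £18m flat cap is the binding limit.
print(f"£{max_fine(50_000_000):,.0f}")      # £18,000,000

# Large platform (£1bn revenue): the 10% share dominates.
print(f"£{max_fine(1_000_000_000):,.0f}")   # £100,000,000
```

The crossover sits at £180 million in qualifying revenue: below that, the flat £18m cap binds; above it, the 10% share takes over. That scaling is exactly why the threat grows with platform size.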
The scary option: “serious cases” and UK blocking
In the most serious non-compliance cases, Ofcom says it can seek a court order to require third parties to take action to disrupt the provider’s business.
This is what people mean by “UK blocking,” and it’s usually not Ofcom flipping a single switch. The pressure can be applied through other companies that make the service work, including:
- Internet Service Providers (ISPs) — to block access to the service in the UK
- Payment providers — to withdraw services (harder to operate or monetize)
- Advertising services — to withdraw services (another way to squeeze operations)
What this means for regular users
If a court order route ever comes into play, the user impact isn’t subtle. It can look like:
- The site/app becomes unreachable on UK networks
- Access becomes unreliable (works on some connections, fails on others)
- Features get limited as platforms rush to reduce risk exposure
Even if it never gets that far, the threat of these penalties is often enough to drive aggressive changes in moderation, discovery, and account enforcement.
Practical takeaways for users and parents: what to watch for and what to do now
When regulators talk about CSAM and grooming risk, it can feel abstract. Don’t treat it that way. These situations usually start small: a “friendly” DM, a private invite, a request to move the chat somewhere else. Ofcom is investigating platforms over these exact kinds of risks.
If you’re a user: handle unsolicited DMs like a pro
Unwanted DMs aren’t just annoying. They’re often the entry point.
- Don’t reply with personal details, even “small” ones (school, city, schedule).
- Don’t click unknown links sent in Telegram groups/channels or teen chat rooms.
- Block fast, report faster if someone pushes sexual content, asks for pictures, or tries to isolate you.
- If someone asks you to “keep this secret,” treat that as a red flag, not a romance plot.
If you’re a parent: grooming patterns to watch for
Grooming rarely looks like an immediate threat. Watch for behavior changes plus chat patterns:
- Sudden secrecy (new passwords, hiding screens, deleting chat history)
- A new “friend” who quickly becomes intense (daily check-ins, emotional pressure)
- Requests to move off-platform (to a “private” chat, another app, or encrypted calls)
- Sexual talk that starts “as a joke,” then escalates
- Any push for photos, video, or meeting in person
Public groups and open chat rooms are higher risk
They’re higher risk not because your teen is “doing something wrong,” but because discoverable spaces let strangers target at scale. That’s also why regulators focus on public spread and platform controls.
What to document and report (when something feels off)
If you ever need to report a user, the details matter.
Capture:
- Usernames/handles, display names, and profile links
- Group/channel/room name and invite link (if available)
- Screenshots of messages (include timestamps)
- Any media, links, or payment requests they sent
Then report inside the platform. If there’s immediate danger or explicit child exploitation, escalate to local law enforcement using your country’s reporting route.
Privacy without being reckless: don’t give strangers your real contact info
A simple rule: keep your “public-facing” contact separate from your real identity.
If you’re joining risky online communities (public Telegram groups, teen chat rooms, gaming chats), tools like Cloaked can help you share an alternate phone number or email instead of your real one, so a bad interaction doesn’t turn into ongoing harassment or doxxing later.