Device-level safety infrastructure

The internet,
made safe
for every child.

Platform bans don't work. Children bypass them in hours. Haluna operates at the device itself — applying the rules governments and parents define, consistently, automatically, in a way that cannot be circumvented.

[Product mock: Haluna parent dashboard showing per-child time, blocks, and wellbeing score; graded alerts (urgent, pattern, algorithm); time limits; and rule controls]
85M
children under 16 in the UK and EU; 91% own mobile phones
77%
of Australian parents supported the social media ban before it passed
VPNs
bypassed Australia's world-first ban within hours of it taking effect
12+
countries advancing similar legislation — each facing the same structural failure
The enforcement gap

Platform bans
fail by design.

Australia's statutory social media ban for under-16s, the world's first, took effect on 10 December 2025. It had 77% public support. Within hours, children were back online. Experts called it whack-a-mole. Every country following Australia's lead will reach the same outcome, because they are solving the wrong problem.

"Many children had already bypassed the ban, with age-assurance tools misclassifying users, and workarounds such as VPNs proving effective." — CNBC, 10 December 2025
01

Platform controls rely on platforms

A child with a VPN, an older sibling's account, or access to an unregulated alternative defeats any platform-side control within minutes. The enforcement point is in the wrong place.

02

Bans create whack-a-mole

When TikTok is restricted, children migrate to Lemon8, Yope, Discord. New platforms are not covered. The list of banned sites grows; the child always finds a way around it.

03

Age varies, access should too

A blanket ban treats a 13-year-old and a 15-year-old identically. A credible system applies rules appropriate to the child's actual age — and adapts automatically as they grow.

What Haluna is

Haluna does not restrict access by default.
It enables governments and parents to define how access should work — and applies those rules consistently at device level.

Not this

An enforcement system
A grooming detector
A surveillance layer
A blunt platform ban

This

Device-level safety infrastructure
Adaptive to law, age, and parental choice
Protection without reading content
VPN-bypass-proof, by design
Three capabilities

What platform regulation
cannot deliver.

These are the problems that existing parental controls, router filters, and platform-level bans all fail to solve. Haluna solves all three — simultaneously, on every device, without reading a single private message.

01

Device-level control

Every packet of data passing through the device is assessed before any app sees it — at OS level, using Apple's Network Extension framework on iOS and VpnService on Android. No app can route around it. No VPN can bypass it.

Content classified in under 100 milliseconds · four possible responses: pass, friction, block, alert · rules follow the child across every device and every network
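The four-response model above can be sketched in a few lines. This is an illustrative mapping only: the category names, severity scale, and thresholds are invented for the sketch, not Haluna's actual classifier.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"          # traffic flows untouched
    FRICTION = "friction"  # delivered after a deliberate pause or nudge
    BLOCK = "block"        # dropped before any app sees it
    ALERT = "alert"        # delivered, but the parent is notified

def classify_flow(category: str, severity: int, child_age: int) -> Verdict:
    """Map a classified traffic flow to one of the four responses.
    Categories and thresholds here are placeholders for the sketch."""
    if category == "priority_illegal":
        return Verdict.BLOCK
    if category == "self_harm":
        return Verdict.BLOCK if child_age < 16 else Verdict.ALERT
    if category == "algorithmic_feed" and severity >= 5:
        return Verdict.FRICTION
    return Verdict.PASS
```

Because the decision runs at OS level, before the destination app receives the flow, the same function applies whether the traffic arrives over Wi-Fi, mobile data, or a tunnel.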
02

Adaptive policy

If a government bans social media for under-14s, Haluna implements it — while children above that age retain age-appropriate access. If a parent restricts further, they can. The system adapts to the law and the child's actual age, automatically.

Age threshold updates automatically as children grow · parents configure above the national floor · jurisdictional rules are entirely independent
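The age-graduated model reduces to two small rules: age is derived from birth month and year alone, and the effective threshold is the stricter of the national floor and the parent's own setting. A minimal sketch, with invented field names:

```python
from datetime import date

def age_in_years(birth_year: int, birth_month: int, today: date) -> int:
    """Age from birth month and year only, the sole fields stored."""
    years = today.year - birth_year
    if today.month < birth_month:
        years -= 1
    return years

def social_media_allowed(birth_year: int, birth_month: int,
                         national_floor: int, parent_floor: int,
                         today: date) -> bool:
    """Access is granted only at or above the stricter of the national
    floor and the parent's (optionally higher) floor."""
    effective = max(national_floor, parent_floor)
    return age_in_years(birth_year, birth_month, today) >= effective
```

Because age is recomputed from the stored birth month each time, the threshold updates automatically as the child grows; no manual reconfiguration is needed.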
03

Behavioural awareness

Beyond content, Haluna tracks patterns — algorithm escalation loops, compulsive session signals, and structural communication patterns inconsistent with normal peer interaction. Parents receive awareness signals, not conclusions. The system flags; the parent decides.

No message content is ever read or stored · structural signals only · cross-network pattern detection available as a government-directed capability
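A structural signal of this kind can be computed from session timestamps alone. The sketch below flags a late-night usage pattern without touching any content; the window hours and night threshold are invented for illustration.

```python
from datetime import datetime

def late_night_pattern(session_starts: list[datetime],
                       window_start: int = 23, window_end: int = 5,
                       min_nights: int = 3) -> bool:
    """Flag a pattern from timestamps alone; no message content is read.
    True if sessions began inside the late-night window on at least
    min_nights distinct dates. All thresholds are sketch placeholders."""
    nights = {
        s.date() for s in session_starts
        if s.hour >= window_start or s.hour < window_end
    }
    return len(nights) >= min_nights
```

The output is an awareness signal, not a conclusion: the parent sees that the pattern exists and decides what, if anything, to do.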
The product

Built for clarity,
not complexity.

The parent experience is designed to surface what matters — not to overwhelm. The child experience is calm, transparent, and non-punishing.

01 — Parent dashboard

Complete visibility.
Nothing unnecessary.

A single view of every child's wellbeing. Alerts graded by severity — urgent, pattern, informational. Time limits, rule controls, and a plain-English wellbeing score that most parents will never need to look beyond.

[Image: Haluna parent dashboard]
[Image: Haluna intervention states]
02 — Child experience

Five states.
All calm.

Normal use, friction nudge, content blocked, bedtime lock, time limit reached. Each designed to be factual and non-confrontational. The system acts; no argument is required.

03 — Approval flow

Parent as
authority.

When a child requests more time, the parent receives full context — usage, wellbeing score, system assessment — and responds in one tap. The device updates in seconds. No shouting across the house required.

[Image: Haluna approval flow]
Regulatory alignment

Law-derived,
not discretionary.

Haluna does not make AI judgement calls about what constitutes harm. It operationalises what legislatures have already decided — traceable to the specific legal provision that justifies every decision.

UK Online Safety Act 2023
Primary design framework for UK deployment. Content categories map to the Act's priority illegal content and priority content harmful to children. Age thresholds follow the Act's definitions directly.
Implemented
EU Digital Services Act
Governs EU deployment. DSA obligations on platforms create a compliance gap that device-level safety infrastructure fills. Article 28 obligations on the protection of minors translate directly into Haluna thresholds.
Implemented
UN Convention on the Rights of the Child
The international rights framework underpinning all design decisions, including the child-facing transparency layer and the principle that children are informed, not simply restricted.
Design principle
UK GDPR / Children's Code
Data minimisation built into architecture: only birth month and year stored. No name, no full date of birth, nothing that can identify a child. Aligned with GDPR Article 25 and ICO Children's Code requirements.
Implemented
COPPA (US pathway)
Architecture is COPPA-compatible from day one. US expansion follows as federal legislative alignment progresses.
Compatible
Technology

Infrastructure-grade
AI architecture.

Haluna is built on a coherent intelligence system designed from first principles — where sub-100ms classification latency, jurisdictional rule variation, and regulatory auditability are hard requirements, not afterthoughts.

Knowledge graph

Child protection law is encoded as machine-readable rules in a structured knowledge graph. Every classification decision is traceable to the specific legal provision that justified it. When law changes, the graph updates — no code deployment required.
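The traceability property amounts to each rule carrying its own legal citation. A minimal sketch of the idea, in which the rule set and the provision strings are illustrative placeholders rather than a real legal mapping:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    rule_id: str
    category: str   # content category the rule governs
    min_age: int    # age below which the rule applies
    provision: str  # legal provision the rule is derived from

# Illustrative rules; provision strings are placeholders, not citations.
RULES = [
    Rule("uk-001", "self_harm", 18, "UK OSA 2023, provision placeholder A"),
    Rule("uk-002", "pornography", 18, "UK OSA 2023, provision placeholder B"),
]

def justify(category: str, child_age: int) -> list[str]:
    """Return the provisions justifying action on this classification,
    making every decision traceable to the law it derives from."""
    return [r.provision for r in RULES
            if r.category == category and child_age < r.min_age]
```

When legislation changes, only the rule set is republished; the classification and enforcement code is untouched, which is what makes updates possible without a code deployment.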

Reasoning engine

A purpose-built reasoning layer combines real-time content classification, behavioural context, and regulatory rules to select a proportionate response. Deterministic and auditable — not a black box. The goal is always minimum intervention that achieves the protection.
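The minimum-intervention principle can be expressed as a deterministic escalation ladder: choose the least intrusive response that still meets the required protection level. The 0-3 level scale, the ladder ordering, and the behavioural-risk bump are assumptions of this sketch, not Haluna's actual policy.

```python
# Responses ordered from least to most intrusive (sketch assumption).
LADDER = ["pass", "friction", "alert", "block"]

def select_response(classification_level: int, behavioural_risk: bool) -> str:
    """Deterministic minimum-intervention selection: behavioural context
    can raise the required level by one step; the least intrusive
    response meeting that level is chosen."""
    required = classification_level + (1 if behavioural_risk else 0)
    return LADDER[max(0, min(required, len(LADDER) - 1))]
```

Because the same inputs always produce the same output, every decision can be replayed and audited after the fact; there is no opaque model in the enforcement path.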

Lakehouse architecture

Real-time classification events stream continuously into a structured historical data layer. This feeds model retraining, pattern calibration, and cross-border threat intelligence — all without touching personal data. Every decision is permanently auditable.

Large language models

LLMs ground three specific, non-real-time functions: regulatory rule parsing (legal text → structured KG rules, human-validated), parent explanation generation (provenance chain → plain English), and pattern evolution analysis (emerging threat detection, specialist-reviewed).

Network intelligence

Every country makes
every country safer.

Threat signatures identified in one jurisdiction strengthen detection across all participating markets. A grooming pattern first detected in Australia is recognised in the UK within hours. No personal data crosses borders — anonymised behavioural signatures only.

Pattern detected in AU → UK threshold raised within hours
Harmful domain flagged in DE → blocked pre-emptively for FR families
Coordinated actor identified → visible across 40 households as a network
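The privacy property of cross-border sharing comes from exchanging only stable hashes of structural features. A minimal sketch of one way such a signature could be formed, assuming an invented feature dictionary:

```python
import hashlib

def signature(pattern_features: dict) -> str:
    """Anonymised behavioural signature: a stable hash of structural
    features only. Feature keys here are sketch placeholders; no
    identifiers or content are ever included."""
    canonical = "|".join(f"{k}={pattern_features[k]}"
                         for k in sorted(pattern_features))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Sorting the keys makes the hash independent of feature ordering, so two jurisdictions observing the same structural pattern derive the same signature and can match it without exchanging any underlying data.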
Get in touch

Two conversations
we are ready to have.

For governments

We are seeking a structured engagement to define the national threshold baseline, mandatory reporting framework, and pilot deployment terms. We are not seeking government funding — we are seeking government as a framework partner.

  • National threshold baseline and age-graduated access model
  • Mandatory reporting framework for serious detections
  • Regulatory audit and oversight structure
  • Pilot deployment scoping
Request government briefing →

For investors

Haluna is raising seed funding to complete OS-level integration, security hardening, and deploy into live markets within 12 months. The architecture is complete. The regulatory tailwind is real. The window is now.

  • Seed investment deck available on request
  • Architecture documentation and technical specification
  • Regulatory framework and market analysis
  • Prototype delivery: 6 months from build commencement
Request investor deck →