Platform bans don't work. Children bypass them in hours. Haluna operates at the device itself — applying the rules governments and parents define, consistently, automatically, in a way that cannot be circumvented.
Australia's statutory social media ban for under-16s, the world's first, took effect on 10 December 2025 with 77% public support. Within hours, children were back online. Experts called it whack-a-mole. Every country following Australia's lead will reach the same outcome, because they are solving the wrong problem.
"Many children had already bypassed the ban, with age-assurance tools misclassifying users, and workarounds such as VPNs proving effective." — CNBC, 10 December 2025
A child with a VPN, an older sibling's account, or access to an unregulated alternative defeats any platform-side control within minutes. The enforcement point is in the wrong place.
When TikTok is restricted, children migrate to Lemon8, Yope, or Discord. New platforms are not covered; the list of banned sites grows, and the child always finds a way around it.
A blanket ban treats a 13-year-old and a 15-year-old identically. A credible system applies rules appropriate to the child's actual age — and adapts automatically as they grow.
Haluna does not restrict access by default. It enables governments and parents to define how access should work, then applies those rules consistently at device level.
These are the problems that existing parental controls, router filters, and platform-level bans all fail to solve. Haluna solves all three — simultaneously, on every device, without reading a single private message.
Every packet of data passing through the device is assessed before any app sees it — at OS level, using Apple's Network Extension framework on iOS and VpnService on Android. No app can route around it. No VPN can bypass it.
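Conceptually, the device-level gate is a decision function applied to each flow before any app receives data. The sketch below is an illustrative Python model, not Haluna's implementation: the `Flow` fields, the hard-coded host list, and the default minimum age are all assumptions standing in for the real OS-level classifier.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"

@dataclass
class Flow:
    """One network flow as seen at the OS extension layer."""
    app_id: str      # e.g. the requesting app's bundle/package id
    host: str        # resolved hostname for the connection
    child_age: int

# Illustrative category membership; a real system resolves this
# from a continuously updated classification service.
SOCIAL_HOSTS = {"tiktok.com", "lemon8-app.com"}

def assess(flow: Flow, minimum_age: int = 14) -> Verdict:
    """Assess a flow before any app receives data.

    In production this decision runs inside the OS-level extension
    (Network Extension on iOS, VpnService on Android), so apps
    cannot route around it.
    """
    if any(flow.host.endswith(h) for h in SOCIAL_HOSTS):
        if flow.child_age < minimum_age:
            return Verdict.BLOCK
    return Verdict.ALLOW
```

Because the check sits below the app layer, the same function covers every app and browser on the device, including ones released after the rule was written.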
If a government bans social media for under-14s, Haluna implements it — while children above that age retain age-appropriate access. If a parent restricts further, they can. The system adapts to the law and the child's actual age, automatically.
Beyond content, Haluna tracks patterns — algorithm escalation loops, compulsive session signals, and structural communication patterns inconsistent with normal peer interaction. Parents receive awareness signals, not conclusions. The system flags; the parent decides.
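One concrete example of a "compulsive session signal" is unusually frequent app reopens inside a sliding window. A minimal sketch, with an illustrative threshold and window rather than calibrated values:

```python
from datetime import datetime, timedelta

def flag_compulsive_reopens(session_starts: list[datetime],
                            window: timedelta = timedelta(hours=1),
                            threshold: int = 6) -> bool:
    """Flag when an app is reopened unusually often within a window.

    This emits an awareness signal only; interpretation is left to
    the parent. Threshold and window are illustrative, not the
    calibrated values a deployed system would use.
    """
    starts = sorted(session_starts)
    for i, start in enumerate(starts):
        # Count sessions beginning inside the sliding window.
        in_window = sum(1 for s in starts[i:] if s - start <= window)
        if in_window >= threshold:
            return True
    return False
```

The output is a boolean flag, not a verdict: it feeds the "system flags; the parent decides" loop described above.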
The parent experience is designed to surface what matters — not to overwhelm. The child experience is calm, transparent, and non-punishing.
A single view of every child's wellbeing. Alerts graded by severity — urgent, pattern, informational. Time limits, rule controls, and a plain-English wellbeing score that most parents will never need to look beyond.
Normal use, friction nudge, content blocked, bedtime lock, time limit reached: each state is designed to be factual and non-confrontational. The system acts; no argument is required.
When a child requests more time, the parent receives full context — usage, wellbeing score, system assessment — and responds in one tap. The device updates in seconds. No shouting across the house required.
Haluna does not make AI judgement calls about what constitutes harm. It operationalises what legislatures have already decided, and nothing more.
Haluna is built on a coherent intelligence system designed from first principles — where sub-100ms classification latency, jurisdictional rule variation, and regulatory auditability are hard requirements, not afterthoughts.
Child protection law is encoded as machine-readable rules in a structured knowledge graph. Every classification decision is traceable to the specific legal provision that justified it. When law changes, the graph updates — no code deployment required.
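A minimal sketch of what a machine-readable rule with legal provenance could look like. The rule ID, field set, and lookup function are assumptions for illustration; the Australian statute cited is the real act behind the under-16 ban.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LegalRule:
    """A machine-readable rule traceable to its legal provision."""
    rule_id: str
    provision: str      # human-readable citation of the source law
    jurisdiction: str
    category: str       # the service category the rule governs
    minimum_age: int

# Illustrative rule entry; a production graph would carry many more
# fields (effective dates, exemptions, review status, etc.).
RULES = [
    LegalRule(
        rule_id="au-smma-2024",
        provision="Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth)",
        jurisdiction="AU",
        category="social_media",
        minimum_age=16,
    ),
]

def applicable_rules(jurisdiction: str, category: str) -> list[LegalRule]:
    """Look up rules for a jurisdiction and category.

    When law changes, RULES (the graph) is updated as data;
    no code deployment is required.
    """
    return [r for r in RULES
            if r.jurisdiction == jurisdiction and r.category == category]
```

Every enforcement decision can then cite the `provision` string of the rule that triggered it, which is what makes the audit trail possible.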
A purpose-built reasoning layer combines real-time content classification, behavioural context, and regulatory rules to select a proportionate response. Deterministic and auditable — not a black box. The goal is always minimum intervention that achieves the protection.
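The "minimum intervention" principle can be illustrated as a deterministic selection over an ordered set of interventions. The thresholds and categories here are invented for the sketch; a real policy would be derived from the rule graph and behavioural context.

```python
from enum import IntEnum

class Intervention(IntEnum):
    """Interventions ordered from least to most intrusive."""
    NONE = 0
    NUDGE = 1
    TIME_LIMIT = 2
    BLOCK = 3

def select_response(content_risk: float,
                    repeated_exposure: bool,
                    legally_prohibited: bool) -> Intervention:
    """Deterministically pick the minimum intervention that protects.

    Same inputs always yield the same output, so every decision is
    reproducible in an audit. Thresholds are illustrative only.
    """
    if legally_prohibited:
        return Intervention.BLOCK       # law leaves no discretion
    if content_risk >= 0.8:
        return Intervention.BLOCK
    if content_risk >= 0.5:
        # Escalate only when the pattern repeats; nudge first.
        return Intervention.TIME_LIMIT if repeated_exposure else Intervention.NUDGE
    return Intervention.NONE
```

Because the function is a plain decision table rather than a model inference, the same inputs can be replayed later to verify exactly why an intervention fired.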
Real-time classification events stream continuously into a structured historical data layer. This feeds model retraining, pattern calibration, and cross-border threat intelligence — all without touching personal data. Every decision is permanently auditable.
LLMs power three specific, non-real-time functions: regulatory rule parsing (legal text → structured KG rules, human-validated), parent explanation generation (provenance chain → plain English), and pattern-evolution analysis (emerging-threat detection, specialist-reviewed).
Threat signatures identified in one jurisdiction strengthen detection across all participating markets. A grooming pattern first detected in Australia is recognised in the UK within hours. No personal data crosses borders — anonymised behavioural signatures only.
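One way such jurisdiction-neutral sharing could work is content-free fingerprinting: hash only the structural features of a pattern, never identities or message content. This is a sketch under that assumption; it matches exact patterns only, and a real system would likely need fuzzier matching.

```python
import hashlib
import json

def behavioural_signature(pattern: dict) -> str:
    """Derive a shareable signature from a behavioural pattern.

    Only structural features (e.g. message cadence, escalation shape)
    go into `pattern`; no identifiers and no content. The feature set
    is illustrative. Canonical JSON keeps the hash key-order stable.
    """
    canonical = json.dumps(pattern, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()
```

Two jurisdictions that independently observe the same structural pattern derive the same signature, so detections can be matched across borders without any personal data leaving the device or the country.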
We are seeking a structured engagement to define the national threshold baseline, mandatory reporting framework, and pilot deployment terms. We are not seeking government funding — we are seeking government as a framework partner.
Haluna is raising seed funding to complete OS-level integration and security hardening, and to deploy into live markets within 12 months. The architecture is complete. The regulatory tailwind is real. The window is now.