Norwegian police are taking a significant step in the fight against online child abuse. The AI tool that gained international attention after helping to identify nearly 200 victims in the so-called Omegle case is now undergoing national rollout and will be used by every police district in the country, according to Digi.no.
The breakthrough in the Omegle case
The Omegle case, which involved child sexual abuse facilitated through the now-defunct chat service of the same name, became an international reference point for how advanced technology can be used to investigate this type of crime. The Norwegian tool played a central role in analyzing vast amounts of digital material and linking it to specific victims, a task that would otherwise have required enormous manual investigative effort.
The fact that nearly 200 victims were identified in a single investigation illustrates the potential this technology carries.

Sweden looks to Norway
According to Digi.no, Swedish police lack comparable tools. This places Norway in an unusual position as a technological pioneer in policing within this field, and it underscores that the Norwegian development community working with Griffeye and related tools has delivered something that makes a difference in practice, not just in theory.
However, it is worth emphasizing that details about exactly which algorithms and datasets underpin the tool are not publicly available. Claims about its performance and accuracy should therefore be assessed against documented results from actual investigations rather than the vendors' own marketing.

A global technological battle against a growing problem
The Norwegian rollout comes amid a global fight against digital abuse material that is becoming increasingly demanding. International players such as Microsoft (PhotoDNA), Thorn, and Google have long used both hash matching and machine-learning classifiers to detect known and previously unseen material.
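To make the first of those two approaches concrete, here is a minimal sketch of how hash matching against a database of registered material works. The hash values, set name, and distance threshold below are invented for illustration; real systems such as PhotoDNA use proprietary perceptual hashes and curated, vetted hash lists.

```python
# Minimal illustration of hash-based matching against a database of
# previously registered material. All values here are placeholders.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex hash strings."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

# Hypothetical database of hashes of known, registered material.
KNOWN_HASHES = {"a3f1c2d4b5e60718", "0f0e0d0c0b0a0908"}

def matches_known_material(image_hash: str, max_distance: int = 4) -> bool:
    """Flag an image whose perceptual hash is near any registered hash.

    Exact cryptographic hashes would require max_distance == 0;
    perceptual hashes tolerate small distances so that re-encoded or
    lightly edited copies of the same image still match.
    """
    return any(
        hamming_distance(image_hash, known) <= max_distance
        for known in KNOWN_HASHES
    )

print(matches_known_material("a3f1c2d4b5e60719"))  # True: 1 bit from a known hash
print(matches_known_material("ffffffffffffffff"))  # False: no nearby known hash
```

The key property is that matching works only against material someone has already found, verified, and hashed; that limitation is exactly what the next section turns on.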
Thorn's solution "Safer" reports an accuracy of around 99 percent for known abuse images. Meta claims that 99 percent of all such material they remove is detected automatically by AI systems. These figures, however, are provided by the companies themselves and should be read with a degree of critical distance.
AI-generated material: A new threat landscape
Perhaps the most serious global challenge is the explosive growth in AI-generated child sexual abuse material (AIG-CSAM). The British organization Internet Watch Foundation (IWF) reported a staggering 3,440 AI-generated abuse videos in 2025 — an increase of over 26,000 percent from the previous year, when only 13 such videos were recorded. Over half were classified in the most severe category.
This is problematic because traditional hash-based tools cannot recognize synthetically generated material that has never been registered in a database. That gap drives the need for new detection methods and makes the Norwegian investment in ML-based tools more relevant than ever.
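A hedged sketch of that two-stage logic, reusing the hypothetical matches_known_material helper from the earlier example: hash lookup handles registered material cheaply and precisely, while a machine-learning classifier is the only stage that can flag novel content. The classifier here is a stub with an invented interface; no specific vendor's model is implied.

```python
# Hypothetical two-stage detection pipeline. Stage 1 catches material
# already registered in a hash database; stage 2 hands everything else
# to an ML classifier, the only stage able to flag novel material
# such as newly generated AIG-CSAM.

from dataclasses import dataclass

@dataclass
class Detection:
    flagged: bool
    reason: str

def classifier_score(image_bytes: bytes) -> float:
    """Stand-in for a trained ML classifier returning P(abusive).

    A real deployment would call a vetted model; this stub only
    shows where such a model sits in the pipeline.
    """
    return 0.0  # placeholder

def detect(image_bytes: bytes, image_hash: str,
           threshold: float = 0.9) -> Detection:
    # Stage 1: known material -- cheap and precise, but blind to
    # anything that was never hashed.
    if matches_known_material(image_hash):
        return Detection(True, "matched registered hash")
    # Stage 2: unknown material -- the classifier scores content
    # regardless of whether it has ever been seen before.
    if classifier_score(image_bytes) >= threshold:
        return Detection(True, "classifier above threshold")
    return Detection(False, "no match, classifier below threshold")
```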
What happens next?
With the national rollout across all police districts, the question now is how the tool will be integrated into daily investigative practice and what training personnel will receive. The psychological burden on investigators who handle such material is a well-documented international problem, and Australian criminology researchers have pointed to AI-assisted sorting and analysis as a key way to reduce it: humans no longer have to see everything.
Norwegian police have not commented publicly on the specific implementation plan, but the rollout itself signals that the technology is considered viable and reliable enough for operational use nationwide.
